Blackbox

In 2016, ProPublica investigated COMPAS, an algorithm used in US courts to predict recidivism. The black box returned a "risk score." ProPublica found it was almost twice as likely to falsely label Black defendants as future criminals as it was to mislabel white defendants. Northpointe, the company that made the algorithm, denied the bias. Because the box was black, both sides could claim the math supported them.
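ProPublica's headline finding was a gap in false positive rates: the share of people who never reoffended but were branded high risk anyway. Here is a minimal sketch of that kind of audit, with illustrative counts chosen to echo the roughly 45% versus 23% rates ProPublica reported:

```python
# A minimal sketch of a disparate-impact audit like ProPublica's.
# The counts are illustrative, picked to echo the roughly 45% vs. 23%
# false positive rates the real Broward County analysis reported.

# For each group: (labeled high risk but did NOT reoffend,
#                  total who did NOT reoffend)
audit = {
    "Black defendants": (45, 100),  # hypothetical counts
    "white defendants": (23, 100),
}

for group, (false_alarms, total_non_reoffenders) in audit.items():
    fpr = false_alarms / total_non_reoffenders
    print(f"{group}: false positive rate = {fpr:.0%}")

# When one group's rate is roughly double the other's, the scores hand
# that group far more wrongful "future criminal" labels.
```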

Ironically, aviation calls its flight recorder the "black box" even though it is painted bright orange. It is the ultimate witness. It swallows a storm of inputs—airspeed, altitude, button presses, screams—and produces a perfectly linear story of cause and effect.

Our new black boxes offer no such testimony. Consider a famous case. Researchers trained an AI on hospital records to predict which pneumonia patients were most likely to die, and the model reached a startling conclusion: asthmatics were low-risk. Doctors were baffled. Asthma is a major risk factor for pneumonia complications. Why would the AI do this?

Because the engineers couldn't ask the AI directly, they had to reverse-engineer the data. They discovered that the hospital had a protocol: all asthmatics with pneumonia are immediately sent to the ICU. Those patients therefore received aggressive, life-saving care from the moment they arrived. The AI, seeing only the outcome (asthmatics rarely died), concluded that asthma was protective.

The machine was brilliant. It was also dangerously stupid. And no one could see the error until a person almost died.
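To see how a model can learn exactly the wrong lesson from honest data, here is a toy simulation of the failure mode. Every number in it is invented for illustration; `simulate_patient` and its probabilities are hypothetical, not the hospital's actual figures:

```python
# Toy reconstruction of the asthma/pneumonia failure mode: the model
# never sees the hospital's ICU protocol, only the outcomes it creates.
# All probabilities are invented for illustration.
import random

random.seed(0)

def simulate_patient():
    """One synthetic patient record: (has asthma, died)."""
    asthmatic = random.random() < 0.15
    # The hidden protocol: every asthmatic goes straight to the ICU.
    icu = asthmatic
    # Invented risks: asthma raises baseline mortality; ICU care slashes it.
    base_risk = 0.25 if asthmatic else 0.10
    risk = base_risk * (0.2 if icu else 1.0)
    return asthmatic, random.random() < risk

patients = [simulate_patient() for _ in range(100_000)]

def mortality(has_asthma):
    deaths = [died for asthmatic, died in patients if asthmatic == has_asthma]
    return sum(deaths) / len(deaths)

print(f"asthmatics:     {mortality(True):.1%} mortality")   # ~5%
print(f"non-asthmatics: {mortality(False):.1%} mortality")  # ~10%

# Seeing only these outcomes, a model concludes that asthma lowers the
# risk of death. The ICU protocol, invisible in the data, is doing the
# protecting.
```

Strip the ICU column from the records, as the hospital's data effectively did, and no amount of predictive accuracy can recover the true causal story.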

We are building minds made of silicon. But because they are black boxes, we are like Zeus watching the forge of Hephaestus: we see the raw ore go in and the thunderbolt come out, but we have no idea how the fire works. And we cannot go back; the black box is too powerful. Self-driving cars see things humans miss. Medical AI spots tumors in MRIs that radiologists overlook.

This opacity is crashing headlong into Western jurisprudence. The 14th Amendment guarantees "equal protection under the law." But what happens when a judge uses a black-box algorithm to set bail? If the algorithm is biased against a zip code, how can a defense attorney cross-examine it?