Briefly Noted

Things Read, Seen or Heard Elsewhere

No one really knows how AI works. Too bad it's going to control more and more of our lives.

An ongoing concern has mutated into a problem: developers and scientists are having a hard time figuring out how the black-box AI systems they're creating actually work. Or, more specifically, how those systems reach the conclusions they do.

Via Vice:

The people who develop AI are increasingly having problems explaining how it works and determining why it has the outputs it has. Deep neural networks (DNN)—made up of layers and layers of processing systems trained on human-created data to mimic the neural networks of our brains—often seem to mirror not just human intelligence but also human inexplicability.

Most AI systems are black box models, which are systems that are viewed only in terms of their inputs and outputs. Scientists do not attempt to decipher the “black box,” or the opaque processes that the system undertakes, as long as they receive the outputs they are looking for. For example, if I gave a black box AI model data about every single ice cream flavor, and demographic data about economic, social, and lifestyle factors for millions of people, it could probably guess what your favorite ice cream flavor is or where your favorite ice cream store is, even if it wasn’t programmed with that intention.

These types of AI systems notoriously have issues because the data they are trained on are often inherently biased, mimicking the racial and gender biases that exist within our society. The haphazard deployment of them leads to situations where, to use just one example, Black people are disproportionately misidentified by facial recognition technology. It becomes difficult to fix these systems in part because their developers often cannot fully explain how they work, which makes accountability difficult. As AI systems become more complex and humans become less able to understand them, AI experts and researchers are warning developers to take a step back and focus more on how and why a system produces certain results than the fact that the system can accurately and rapidly produce them.
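To make the "black box" framing concrete, here's a minimal sketch in Python. Everything in it is invented for illustration (synthetic data, a made-up flavor-prediction task, scikit-learn as the stand-in library): the point is that we interact with the model only through inputs and outputs, and the fitted internals are inspectable as numbers but not as reasons.

```python
# A contrived "black box" demo. All data is synthetic, and the
# feature/flavor setup is invented purely to mirror the article's example.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Inputs: made-up demographic features (age, income, urban-ness, ...)
X = rng.normal(size=(1000, 8))
# Outputs: a favorite-flavor label (0=vanilla, 1=chocolate, 2=pistachio)
y = rng.integers(0, 3, size=1000)

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300)
model.fit(X, y)                # we observe: data goes in
print(model.predict(X[:5]))    # we observe: predictions come out

# Everything in between is weight matrices. They are fully visible
# as numbers, but they don't decompose into auditable reasons
# ("income mattered because ...").
print([w.shape for w in model.coefs_])
```

The developer can dump every weight in the network, yet still be unable to answer "why did it predict pistachio for this person?" in any form a regulator, or the person affected, could check.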

As AI is inserted into more areas of our lives where nuanced decision-making should be paramount, we run into serious issues.

Here’s Cory Doctorow with a recent take:

I think that the problems of AI are not its ability to do things well but its ability to do things badly, and our reliance on it nevertheless. So the problem isn’t that AI is going to displace all of our truck drivers. The fact that we’re using AI decision-making at scale to do things like lending, and deciding who is picked for child-protective services, and deciding where police patrols go, and deciding whether or not to use a drone strike to kill someone, because we think they’re a probable terrorist based on a machine-learning algorithm—the fact that AI algorithms don’t work doesn’t make that not dangerous. In fact, it arguably makes it more dangerous. The reason we stick AI in there is not just to lower our wage bill so that, rather than having child-protective-services workers go out and check on all the children who are thought to be in danger, you lay them all off and replace them with an algorithm. That’s part of the impetus. The other impetus is to do it faster—to do it so fast that there isn’t time to have a human in the loop. With no humans in the loop, then you have these systems that are often perceived to be neutral and empirical.

Patrick Ball is a statistician who does good statistical work on human-rights abuses. He’s got a nonprofit called the Human Rights Data Analysis Group. And he calls this “empiricism-washing”—where you take something that is a purely subjective, deeply troubling process, and just encode it in math and declare it to be empirical. If you are someone who wants to discriminate against dark-complexioned people, you can write an algorithm that looks for dark skin. It is math, but it’s practicing racial discrimination.

I think the risk is that we are accelerating the rate at which decision support systems and automated decision systems are operating. We are doing it in a way that obviates any possibility of having humans in the loop. And we are doing it as we are promulgating a narrative that these judgments are more trustworthy than human judgments.

This idea of "empiricism-washing" is important. Life is hard. Choices are difficult. People, generally, don't like conflict. If math provides answers, no matter how problematic, we get to wash our hands of our decisions and never feel their repercussions. You didn't get that loan? Sorry, the algorithm says you're not worthy. A drone bombed your innocent village? Not our fault, it was the algorithm.
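A contrived sketch of how this works in practice (every weight, threshold, and feature name below is invented, not taken from any real lending system): the arithmetic is exact, but each number in it is a human judgment call, including the choice to use a zip-code-derived score as an input, which can act as a proxy for race and class.

```python
# An invented "credit score" to illustrate empiricism-washing.
# Every weight and threshold below is an arbitrary human choice,
# not an empirical fact -- the math just makes it look neutral.

WEIGHTS = {                    # who picked these numbers? somebody did.
    "income": 0.4,
    "years_at_address": 0.25,
    "zip_code_score": 0.35,    # zip code can proxy for race and class
}
APPROVAL_THRESHOLD = 0.6       # also a judgment call

def loan_score(applicant: dict) -> float:
    """Weighted sum: precise arithmetic over subjective inputs."""
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def decide(applicant: dict) -> str:
    # The rejection letter can now say "the algorithm decided,"
    # even though the outcome was baked in by the chosen weights.
    return "approved" if loan_score(applicant) >= APPROVAL_THRESHOLD else "denied"

# Same income, same tenure; only the zip-code score differs.
print(decide({"income": 0.9, "years_at_address": 0.8, "zip_code_score": 0.1}))  # denied
print(decide({"income": 0.9, "years_at_address": 0.8, "zip_code_score": 0.9}))  # approved
```

Two identical applicants, one flipped decision, and the paper trail just says "score below threshold." That is the subjective process encoded in math and declared empirical.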

Thoughts? Ideas? Comments?
Send me a note or reach out on Mastodon.