Notice: Originally published May 31, 2016, in The Los Angeles Review of Books.
One day in late March, Microsoft made a chatbot named Tay. Tay began the day tweeting love to everyone. A few hours later, Tay was quoting Adolf Hitler and offering filthy sex on demand. To borrow a phrase from John Green, Tay fell in love with Hitler and filthy sex the way you fall asleep: slowly, and then all at once. That’s because Tay learned from humans, and humans are awful.
Machine-learning algorithms try to make sense of human activity from the data we generate. Usually these algorithms are invisible to us. We see their output as recommendations about what we should do, or about what should be done to us. Netflix suggests your next TV show. Your car reminds you it’s time for an oil change. Siri tells you about a nearby restaurant. That loan you wanted? You’re approved!
In a sense, you’re making these recommendations yourself. Machine-learning algorithms monitor information about what you do, find patterns in that data, and make informed guesses about what you want to do next. Without you, there’s no data, and there’s nothing for machine learning to learn. But when you provide your data, and when the guesses are correct, machine learning operates invisibly, leaving you to experience life as an endless stream of tiny, satisfying surprises.
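To make that loop concrete, here is a deliberately toy sketch of the pattern: log what someone does, tally which activity tends to follow which, and guess the next one. Everything in it, from the activity names to the log itself, is invented for illustration; it stands in for, and greatly simplifies, what any real recommendation system does.

```python
# A tiny, hypothetical sketch of the loop described above: watch what someone
# does, find a pattern, guess what they will do next. All names and data are
# invented; no real product works exactly this way.
from collections import Counter, defaultdict

# The "data you generate": one person's activities, in order.
activity_log = [
    "wake", "coffee", "email", "news", "coffee", "email",
    "lunch", "email", "news", "coffee", "email", "sleep",
]

# "Find patterns": count which activity tends to follow which.
transitions = defaultdict(Counter)
for current, nxt in zip(activity_log, activity_log[1:]):
    transitions[current][nxt] += 1

def recommend(current_activity):
    """Make an informed guess about what this person wants to do next."""
    followers = transitions.get(current_activity)
    if not followers:
        return None  # no data, nothing for machine learning to learn
    guess, _count = followers.most_common(1)[0]
    return guess

print(recommend("coffee"))  # -> "email": the tiny, satisfying surprise
```

Real systems chew through far more data and far subtler patterns, but the loop is the same: your behavior goes in, a guess about your next behavior comes out.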
Or at least that’s how things could be, according to computer scientist Pedro Domingos. In The Master Algorithm, Domingos envisions an individually optimized future in which our digital better halves learn everything about us, then go out into the world and act for us, thereby freeing us to be our best non-digital selves. In this vision, machine-learning algorithms replace tedious human activities like online shopping, legal filing, and scientific hypothesis testing. Humans feed data to algorithms, and algorithms produce a better world for humans.
It sounds like science fiction. And it is, notably in Charles Stross’s novel Accelerando. But is this future possible?
If you’re skeptical, maybe it’s because you think we’re not capable of creating good enough machine-learning algorithms. Maybe you got a bad Netflix recommendation. Maybe Siri can’t understand your instructions. The technology, you might think, just isn’t very good.
The Master Algorithm seeks to prove you wrong. Over the course of several chapters on the current state of machine-learning research, Domingos explains that we are close to creating a single, universal learning algorithm that can discover all knowledge, if given enough data. And he should know. In a research field dominated by competition, Domingos has long championed a synthetic approach to machine learning: take working components from competing solutions, find a clever way to connect them, then use the resulting algorithm to solve bigger and harder problems. The algorithms are good enough, or soon will be.
No, the problem isn’t the technology. But there are good reasons to be skeptical. The algorithmic future Domingos describes is already here. And frankly, that future is not going very well for most of us.
Take the economy, for example. If Domingos is right, then introducing machine learning into our economic lives should empower each of us to improve our economic standing. All we have to do is feed more data to the machines, and our best choices will be made available to us.
But this has already happened, and economic mobility is actually getting worse. How could this be? It turns out that the institutions already shaping our economic choices use machine learning to keep shaping them, to their benefit rather than ours. Giving them more and better data about us merely makes them faster and better at it.
In an article published in Accounting, Organizations and Society, sociologists Marion Fourcade and Kieran Healy use the example of credit scoring. Previously you could get credit based on being a member of a large, low-risk group, such as “management employee” or “Ivy League graduate.” Individually, you could have a bad year, fail to receive a promotion, or express an unpopular political opinion, yet still access credit that would allow you to, say, invest in real estate, just like the most successful person in your low-risk group.
That is no longer true. Now private corporations ingest ever greater amounts of individual data, including not only our financial transactions, but also our Facebook and Twitter posts, our friends and followers, and our voting histories. These data then feed machine-learning algorithms that classify us based on individual risk profiles, empowering corporations to customize personal financial offerings that extract the most profit from each of us. The good news is you’re still able to obtain the same credit as the most successful person in your group. The bad news is that person is you.
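The mechanics are easier to see in caricature. In the sketch below, the features, weights, and pricing rule are all invented; no actual lender's model is described. The point is only the shape of the thing: each person's data trail becomes an individual risk score, and the score sets the price that person, and that person alone, is offered.

```python
# A caricature of individual risk pricing. Features, weights, and the pricing
# rule are made up for illustration; nothing here describes a real lender.
import math

def risk_score(profile):
    """Turn one person's data trail into an individual default-risk estimate."""
    weights = {                          # hypothetical "learned" weights
        "late_payments": 0.9,
        "posts_about_money_trouble": 0.6,
        "risky_friends": 0.4,
        "steady_income": -1.2,
    }
    z = sum(weights[k] * profile.get(k, 0) for k in weights)
    return 1 / (1 + math.exp(-z))        # squash into a 0-1 "probability"

def personalized_offer(profile, base_rate=0.05):
    """Price credit per person: the riskier you look, the more you pay."""
    return base_rate + 0.20 * risk_score(profile)

you = {"late_payments": 1, "posts_about_money_trouble": 1, "steady_income": 1}
print(f"Your rate: {personalized_offer(you):.1%}")  # a risk group of one: you
```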
More economic options are available than ever before, and to more people. But you’ll never see most of them: the algorithms decide which offers each of us is shown, and corporations have better information about everyone than we have about ourselves. Rather than improving individual life-chances, machine learning provides more of us with more ways than ever to make our economic lives worse.
So the future, which is the present, isn’t looking good for humans. What is to be done?
Domingos’s answer is, approximately, learn more about machine learning. The Master Algorithm insists on a politics of data in which hypervigilant data citizens actively manipulate the algorithms that might otherwise constrain them. Since machine learning depends on data, Domingos argues, “your job in a world of intelligent machines is to keep making sure they do what you want, both at the input (setting the goals) and at the output (checking that you got what you asked for).”
This ideal of an informed, active citizen probably sounds familiar. It’s been shopped around for a long time under names like “personal responsibility.” But this ideal is not reality. In reality, we mostly don’t want to know about complex, technical, and consequential processes at all, much less do anything about them.
Take government, for example. In their book Stealth Democracy, political scientists John R. Hibbing and Elizabeth Theiss-Morse report that what most Americans want from government has little to do with parties, policies, or influence. What Americans want is government that makes no demands on them whatsoever. The same pattern holds for Wikipedia, the single most-consulted knowledge resource in the world. Millions of people read Wikipedia articles every day. But only a tiny percentage ever writes or edits those articles. Or consider Open Source software, which powers the vast majority of computer servers and no small number of personal computers, phones, and tablets. Anyone with the necessary technical knowledge and programming skill can review, edit, and add to Open Source software. Few ever do.
Even when they produce outcomes we don’t particularly like, most of us want processes that work without our intervention. We just put up with those outcomes. Sure, Wikipedia articles sometimes get the facts wrong. Okay, software bugs crash our computers from time to time. Fine, yes, occasionally an algorithmically driven chatbot becomes a Hitler-quoting sex enthusiast. But how often does that happen, really? Not often enough to do anything about it.
By focusing on individual responsibility, Domingos never fully acknowledges the bigger problem here: even if we wanted to do something, it’s not obvious that we could. It’s true, as Domingos claims, that machine-learning algorithms can be manipulated if you control the data. Human miscreants trained Tay to be a dirty, dirty chatbot. But then what? Microsoft turned off the algorithm. They can bring it back or not, in a different form or not. And they can block every single person who made their bot naughty from ever participating again.
For those in power, machine learning offers all of the benefits of human knowledge without the attendant dangers of organized human resistance. The Master Algorithm describes a world in which individuals can be managed faster than they can respond. What does this future really look like? It looks like the world we already have, but without recourse.
The only way that Domingos’s vision of the future makes sense is if we press a reset button on the world we have now, eliminating all of the current arrangements and advantages, and starting over. There is precedent. Leviticus 25:8-13 describes a special celebration of liberty called the Year of Jubilee. Every 50 years or so, all property was returned to its original owners, all slaves and prisoners freed, and, at least in theory, all debts forgiven. People could start over. They could try again.
So why not a data jubilee? Every few years, delete the data. Start over. Make the machines learn everything anew. Even if we’re not sure what’s gone wrong, we can at least try turning it off and on again.
Michael S. Evans