In a famous cartoon from a 1993 issue of the New Yorker, two dogs are chatting, one sitting on a chair in front of a computer terminal.
“On the internet, nobody knows you’re a dog,” he tells his canine companion.
The idea, of course, is that all internet users have an equal voice and equal access to information.
But in her excellent “Weapons of Math Destruction”, data scientist and former hedge fund quant Cathy O’Neil argues that the reverse is now the case. We are far from equal. We are ranked, categorised and scored in hundreds of models by powerful algorithms, in ways we may struggle to imagine, let alone understand.
Often the output of such programmes seems little more than an inconvenience. If we search for a new pair of shoes or a holiday, ads promoting similar products start to appear on the websites we visit. There’s also the urban myth that airline websites raise prices when you make a repeat visit—which, if true, is more annoying.
But what O’Neil describes in her book is a far more pervasive and destructive use of algorithms.
The growth of for-profit colleges in the US, which she describes as “diploma mills underwritten by government-financed loans”, has been driven, she shows, by predatory targeted advertising, aided and abetted by the internet giants.
Corinthian Colleges, a US company which filed for bankruptcy last year, spent $120 million annually on marketing, much of it to generate and pursue 2.4 million potential leads. This yielded 60,000 new students and $600 million in annual revenue, O’Neil notes.
Via ads driven by secret algorithms, companies like Google and Facebook helped the colleges target people in particular geographical locations highlighted as low-income. Those who had taken out high-interest loans or fought abroad (in the US, war veterans can easily obtain government funding for tuition) moved even higher up the list of potential victims.
Sometimes “lead generators” working on commission would even post fictitious job ads on the net, purely for the purpose of harvesting personal details. The details could then be sold to the colleges’ recruitment teams, who would pay $85 for each potential target. Some of those unwittingly snared in this way would then receive up to 120 cold calls a month.
Disgraceful? Yes. Illegal? Not yet, at least in the US. But O’Neil makes a convincing case that the misuse of algorithms is reinforcing social inequalities and, worse, subverting democracy on a worldwide basis.
Far from fighting political campaigns based on a single published manifesto, parties or candidates seeking election now place targeted messages with particular groups of voters, often hiding them from the broader public.
Ted Cruz, then a candidate for the Republican party’s presidential nomination, did this last year, writes O’Neil, showing web-based ads to attendees of a Republican Jewish Coalition meeting at a hotel in Las Vegas. The ads, which reiterated Cruz’s commitment to Israel and its defence, were visible only to those located inside the hotel. Weren’t the rest of the US public entitled to know what he was promising?
O’Neil paints an alarming picture of how social media news feeds could be manipulated to game the political system by playing with voters’ emotions on election day. Facebook, for example, has enormous power to affect what we learn, how we feel and whether we vote, she writes.
The vicious cycles created by algorithms extend to the US judicial system (in certain states, computer-generated scores drive longer sentences for those classified as at a higher risk of reoffending), the job market (anyone with a past mental health issue may be blackballed) and insurance (microtargeting means a move away from the principle of risk sharing, to the benefit of the company but not the consumer, who may face massive rises in premiums).
All this adds up to an inversion of the American national motto, “E Pluribus Unum” (“Out of Many, One”), argues O’Neil.
“Weapons of math destruction reverse the equation,” she writes. “Working in darkness, they carve one into many, while hiding from us the harms they inflict upon our neighbours near and far.”
What’s the solution? Greater openness for data models, a moral code for programmers, a shift towards the European model of opt-ins for the reuse of personal data: all these may have some impact, the author suggests.
A more robust policy—making sure that social media companies open up their algorithms to third-party audits to detect potential biases—has so far been resisted by the web giants. Given the firms’ power, regulators have their hands full.
The internet, computers, social media and big data are not going away: they are an ever more integral part of our lives. But Cathy O’Neil makes a convincing case that algorithms have run amok in secrecy, and that the code with which they operate needs to be tamed.