The Technological Frontiers of Social Justice

Daniel Shearer discusses how technology can discriminate and calls for more policy work to prevent it from happening in the future.

Once the preserve of science fiction, the idea of machines discriminating against us is now a reality. It may seem counterintuitive that machines could be capable of exhibiting traits such as racism and sexism – things we have come to think of as distinctly human flaws. Yet advances in machine learning have now reached a tipping point where algorithms have become so complex – and prevalent – that algorithmic discrimination is not only possible, it is rife.

Decisions about our lives are constantly being automated. When you apply for a job, swipe on a dating app or use a web browser, the odds are that somewhere, an algorithm has made a decision about you. Often these decisions are trivial; sometimes they have more serious consequences. In mid-2019, for example, the UK Home Office came under fire for using an algorithm thought to be racially profiling visa applicants to the UK. Similarly, in late 2019, US financial regulators opened an investigation into Apple's credit card application process: men applying for the Apple Card were being granted credit limits 20 times larger than women with similar or better credit histories.

So what exactly is going wrong? Although developers ensure that fields such as “gender” or “race” don’t explicitly feature in the decision-making process, that doesn’t necessarily mean they aren’t factors in it. During development, algorithms are trained on a data set, and that training data must reflect the real world. Herein lies the problem: the real world manifests its biases in ways we would not always immediately anticipate. For example, a CV-sifting algorithm was found to score gaps in CVs negatively – this might make intuitive sense, until you realise it meant biasing decisions against women who had taken maternity leave. This is just one example; many more like it are going unnoticed.
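The CV-gap mechanism can be sketched in a few lines of Python. This is a toy illustration, not any real hiring system: the data, the scoring rule and the threshold are all invented. The point is simply that a rule which never sees "gender" as an input can still produce a gendered outcome, because the feature it penalises (a career gap) is correlated with gender in the data.

```python
# Hypothetical applicants as (gender, has_cv_gap) pairs. In this
# invented data set, career gaps are more common among women
# (e.g. due to maternity leave).
applicants = (
    [("F", True)] * 40 + [("F", False)] * 60
    + [("M", True)] * 10 + [("M", False)] * 90
)

def score(has_cv_gap: bool) -> int:
    """Naive sifting rule: gender is not an input, only the CV gap."""
    return 50 if has_cv_gap else 80

THRESHOLD = 60  # applicants scoring below this are rejected

def pass_rate(gender: str) -> float:
    """Fraction of a gender group that passes the sift."""
    group = [a for a in applicants if a[0] == gender]
    passed = [a for a in group if score(a[1]) >= THRESHOLD]
    return len(passed) / len(group)

print(f"Women passing sift: {pass_rate('F'):.0%}")  # 60%
print(f"Men passing sift:   {pass_rate('M'):.0%}")  # 90%
```

Even though the rule is "gender-blind", it filters out women at three times the rate it filters out men – the bias enters entirely through the correlated proxy feature.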

If algorithms are picking up biases where we cannot see them, perhaps our society is more structurally biased than we like to believe. As it stands, we are only seeing the tip of the iceberg. We can expect further cases to emerge over the coming years, and for algorithmic discrimination to be a policy battleground of the 2020s.
