AI algorithms are becoming an ever greater part of our daily professional and personal lives, so tech companies and search engines have a responsibility to ensure their results reflect different kinds of people fairly and without bias. Artificial intelligence has recently been criticized for amplifying sexist and racist biases from the real world – when these algorithms discriminate, they cross a line, and we need to be able to hold those who create them to account. If AI is going to take an ever more important role in society, we need to find a way to trust it.
Google ‘unprofessional hairstyles’
I recently Googled ‘unprofessional hairstyles’ and was shocked to find that the top image results were pictures of black women with natural hair. When I then searched ‘professional hairstyles’, the results were of white women. The pressure on black women in particular to straighten their hair in the working world is immense, and Google seems to reinforce that pressure. The “Good Hair Study” found that black women’s natural hair was rated as “less attractive” and “less professional” in the workplace than straightened hair. These Western-centric views are imposed not just in the working world but on the internet too, which raises questions about the role algorithms play in how we use the web, and about how readily we trust algorithmic judgment without questioning it.
It’s the same when you search for the word “man”, “woman”, “relationships” or “marriage” – you almost exclusively get images of white men and women. The web doesn’t seem to reflect the fact that most of the global population is non-white. Our algorithms seem to treat the white race and white beauty standards as superior, and they dominate our search results.
Computer program says black people more likely to reoffend
A computer program used by US courts for risk assessment was found to be biased against black prisoners. The program, Compas, wrongly flagged black defendants as likely to reoffend at almost twice the rate of white defendants (45% versus 24%). Compas and programs like it were in use in hundreds of courts across the US, potentially informing the decisions of judges and other officials by telling them that black men were more likely to reoffend. The US justice system had turned to technology for help, only to find that the algorithms carried a racial bias too.
CEO roles are for men
A 2016 study found that Google’s online advertising system showed adverts for high-income jobs to men far more often than to women. Many questioned whether Google’s algorithms could have decided on their own that men are more suited to executive positions, having learned from the behaviour of users: if the only people seeing, and therefore clicking on, adverts for high-paying jobs are men, the algorithm will learn to show those adverts only to men, perpetuating its conviction that only men are interested – a sort of AI confirmation bias.
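This kind of confirmation bias can be sketched with a toy simulation (the groups, click rates and counts below are entirely hypothetical, chosen only to illustrate the mechanism, not to describe Google's actual system): a greedy ad server that always shows an advert to whichever group has the higher observed click-through rate will lock onto one group after a single skewed early data point, even when both groups are genuinely equally interested.

```python
import random

random.seed(0)

# Both groups are, in truth, equally interested in the advert.
TRUE_CTR = {"men": 0.10, "women": 0.10}

# Observed history starts skewed by one early click from one group.
shown = {"men": 1, "women": 1}
clicked = {"men": 1, "women": 0}

for _ in range(10_000):
    # Greedy policy: serve the ad to the group with the best observed
    # click-through rate so far.
    group = max(shown, key=lambda g: clicked[g] / shown[g])
    shown[group] += 1
    if random.random() < TRUE_CTR[group]:
        clicked[group] += 1

print(shown)  # {'men': 10001, 'women': 1}
```

After 10,000 rounds the group with the unlucky start is never shown the advert again: because it never gets another impression, the system never collects the data that would correct its belief, which is exactly the feedback loop described above.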
Uber accepts Adam and rejects Darnell
Arvind Narayanan, a computer scientist at Princeton University, discovered that female names were more closely associated with nurturing, cooking, staying at home and skills in the arts, while male names were linked to leadership and careers, with their strong skillsets placed in complex subjects such as mathematics and engineering. But it wasn’t just sexism: European names (Adam, William, Greg) ranked as more “pleasant” than traditionally African or Middle Eastern names (Darnell, Yolanda, Mohammed), which were ranked as “unpleasant”. Narayanan’s findings were backed up by another study, published by The National Bureau of Economic Research, which found that Uber drivers are twice as likely to cancel on black customers. Black customers wait “significantly longer” for their Ubers and experience double the cancellation rates of white passengers. Over 581 trips, the researchers found that their four black riders waited longer to have trip requests accepted by a driver. Once accepted, black riders on Uber also waited 30% longer to be picked up. The researchers concluded this indicated discrimination, because the estimated wait times Uber provided to all riders were similar.
So what do we do with this knowledge, when AI and algorithms are being developed and adopted at such a pace?
We can’t ignore the very real financial and commercial benefits that companies can derive from using algorithms. Advertising space is as expensive as ever, and if companies can use such systems to target their adverts only at those who will be interested, then market forces will ensure their continued use and it’s hard to argue with cold, hard cash.
But that said, everyone – especially the companies that rely on these systems and those who write the programs – needs to be aware of their limitations, and specifically of the fact that algorithms can and often do get it wrong. Tech companies and businesses need to take steps to remove biased data, and organisations need to take responsibility and accept that prejudice is well represented in their software. If organisations don’t get rid of it, we will be relying on biased algorithms that create a feedback loop: decisions made on biased data generate more biased data, which the algorithms then analyse and use in the future, perpetuating and deepening the bias.
We also have to consider the impact that biased algorithms will have on us, as members of society exposed to them – how might they feed into and help fuel our own pre-existing biases? Consciously or unconsciously, are people being told that success looks like a white person with ‘salon-ready hair’, while a lack of success comes in the form of a black person with an afro? Or that only men can reach the top of the professional ladder, while women need to set their sights on lower rungs? What messages are we and our children absorbing when Google and other companies are reinforcing – albeit inadvertently – the biases and prejudices so many of us still hold, and which we are trying as a society to move away from?