AI can play chess – surely it can play fair too

Opinion: AI is having a moment. It seems as if every time I turn on a device it’s got more AI capability.
But it’s not all sunshine and roses. Just last week there were questions about the use of AI to write New Zealand Herald editorials.
It’s not surprising many of us are not comfortable with AI’s rapid ascent into every facet of our lives. In a poll last year, Internet NZ found 42 percent of New Zealanders who ‘know at least a little bit about AI’ said they were more concerned than excited by it. They worry that AI can be used for malicious purposes, that it is currently unregulated, and that it carries unintended consequences. That AI produces inaccurate information and can violate our privacy also worries us.
Similar Australian research found that women were less trusting of generative AI and less likely to use these tools than men: 70 percent of male workers claimed to trust generative AI, but only 43 percent of female workers.
Such gendered distrust is well-founded.
A Berkeley study analysed 133 AI systems from different industries since 1988 and found 44 percent of them showed gender bias and 25 percent demonstrated both gender and racial bias. That is because the biases of those who built them, and biases in the data they use, are baked into these systems, too often perpetuating gender inequality. If the data contains gender stereotypes, the AI may generate biased content, reinforcing those stereotypes rather than challenging them. It may suggest a nurse is “she” and a doctor is “he”.
There are many examples of how such biases play out. Virtual assistants such as Siri or Alexa sometimes struggle to understand female voices because many of these systems are primarily trained on male voices, leading to subpar performance for women.
AI is widely used to screen job applications and even to conduct initial interviews. However, if an AI has been trained on data that includes historical biases (for example, men being favoured for tech roles), it may continue to favour male candidates.
While such gendered distrust may be well-founded, the AI gender gap has wider ramifications. So much so that it was the focus of a meeting at the United Nations last year, and UN Women issued a weighty document about ensuring AI isn’t the new face of gender inequality.
Of concern is how new technologies can be used for gender-based violence, sexist hate speech, and gendered disinformation and misinformation. UN Women cites a 2020 survey of female journalists that found more than 70 percent had experienced online violence at work.
Female politicians and public figures such as Associate Professor Siouxsie Wiles are examples close to home.
Not engaging with AI has economic ramifications for women. Many future jobs will be AI-enabled and most employment will require some tech literacy. Women are still under-represented in the tech sector, which is not only a problem for women’s careers and income potential but a risk to all: a homogeneous group of developers is more likely to overlook certain biases simply because its members are not exposed to them.
We all need to think hard about this and quickly – regulators, developers, employers, and consumers. AI has the potential to make vast improvements as well as the potential to exacerbate inequality.
Currently AI is playing favourites because we taught it to. But if we can teach AI to play chess, surely we can teach it to play fair too.