From left to right: Rachel Coldicutt, David Leslie, Rumman Chowdhury, Noura Al Moubayed and Wendy Hall
Royal Society/Debbie Rowe
It’s day two of the Women and the future of science conference at the Royal Society in London, but I’m finding it increasingly hard to concentrate on the speakers because my AI transcription software – which is supposed to make my life easier – keeps insisting on mistyping someone’s name. For every mention of a Julie, it types out Julian. The irony isn’t lost on me: this is the session about artificial intelligence, and specifically about how women are being erased from the latest AI technologies.
This is much bigger than the now-familiar idea that AI algorithms carry the biases of the datasets they are trained on, including gender bias.
Instead, the conference session, chaired by computer scientist Wendy Hall, seeks to address a more fundamental issue: the fact that new AI technologies, which will have a transformative effect on all of society, are being designed almost exclusively by men.
Technology has always been an overwhelmingly male sector. In the UK, only 25 per cent of those studying computer science are women. But in recent years – and while generative AI has blossomed – Silicon Valley has become increasingly hostile to women.
“In the past two years, there’s been a regress,” says David Leslie, who is in charge of ethics and responsible innovation research at the Alan Turing Institute. “The question of whether the Trump administration has caused intergenerational damage to women in the sciences is indisputable. We are living through a time of backwards thinking.”
Last year, US President Donald Trump issued an executive order targeting so-called woke AI, and recommended that the US National Institute of Standards and Technology revise its AI risk-management framework to “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change”.
One panellist, Rumman Chowdhury, a data scientist and former US science envoy for artificial intelligence, was in charge of ethics and accountability at Twitter before Elon Musk took over and fired her team. She points out that the concept of woke AI was born from misogynistic attitudes within Silicon Valley before Trump’s order.
Asked by Hall to describe AI without women, several panellists argue that we are already there. “I am in the world of frontier AI, and that is the world of AI without women,” says Chowdhury. This is a sentiment echoed by Rachel Coldicutt, who researches the social impacts of new and emerging technologies. “If we think about what the world looks like without women in AI, I think that’s what we have at the moment. It’s not fantasy at all.”
It should go without saying: this matters. There is a long history of technologies being developed for men’s bodies and needs, from crash test dummies to office air conditioning, astronauts’ space suits and the vast majority of medical research. This is known as the gender data gap, and the impacts can range from annoying to life-threatening.
AI will impact everything from the jobs we do to the way we educate our children and the diseases we can treat. But currently only 2 per cent of venture capital funding goes to women, Chowdhury points out. Meanwhile, less than 1 per cent of healthcare research and innovation goes towards women’s health conditions. “We need to make tech work for 8 billion people, not eight billionaires,” says Coldicutt.
What’s to be done? With hundreds of years of biased data baked into current AI models, Coldicutt doesn’t believe it will be possible to correct them. “We need alternative models,” she says. This is also a chance to shift the focus of what those models do. “It’s about cultivating models… that prioritise care for people, for the planet.”
Chowdhury, who has co-founded a non-profit called Humane Intelligence, which helps companies to make AI systems more accountable and fair, thinks part of the problem is that many of the current AI developments are built around a false sense of urgency, with a focus on the existential risk AI poses to jobs or even to humanity. If the narrative is that your house is on fire, “you’re not like, ‘What happened to my mother’s jewellery?’,” she says. If people feel they have no time, they will drop anything that seems extraneous, including diversity.
As for the next generation, we need to address the economic and political framework through which AI is developed if we are going to encourage young people to develop AI for the social good, says Leslie: “We need to start with the basics, start with transforming the incentives.”
Ultimately, we may also need to rethink our very definition of intelligence in the context of AI to include broader, more diverse ways of thinking. Much of the original thinking on AI, including how to define it, originated at an influential meeting in the 1950s at Dartmouth College in New Hampshire. “That definition of intelligence comes out of the Dartmouth conference,” says Hall. “Which, by the way, was all men.”