Most of us use artificial intelligence every day without even realizing it: when Google predicts your search phrase, when you issue a command to Siri, or when you scroll through ads and articles on your Facebook feed.
And that, says AI technologist Kriti Sharma, is dangerous.
“Despite the common public perception that algorithms aren’t biased like humans, in reality, they are learning racist and sexist behavior from existing data and the bias of their creators.
“AI is even reinforcing human stereotypes.”
This story is part of a series about the intersection of gender and language, “From ‘Mx.’ to ‘hen’: When ‘masculine’ and ‘feminine’ words aren’t enough,” a collaboration between Across Women’s Lives and the World in Words podcast.
London-based Sharma, who was recently named to the Forbes “30 Under 30 Europe” list, says voice assistants Siri, Alexa and Cortana exemplify the systemic sexism in AI.
“They have all been given obedient, servile, female personalities. They are turning our lights on and off, ordering our shopping. Whereas for more high-powered tasks, such as making business decisions, AI is often given a male personality — take IBM’s Watson or Salesforce’s Einstein.”
As AI becomes more ingrained in our daily lives, Sharma is determined to ensure technology is not building a prejudiced future.
“A child growing up in an AI-powered world today could be learning to bark orders at a female voice assistant — I think we all would agree this is dangerous.”
One of the biggest issues, she says, is that the workforce in technology fields, and particularly in AI, is male-dominated. In the US, only 5 percent of tech startups are owned by women; women account for just a quarter of information technology employees; and only 6.4 percent of Fortune 500 CEOs are women.
“I strongly believe if we had more diverse technology teams, these issues would have been detected and acted upon much earlier — possibly they would never have even happened.”
Sharma cites an example from Boston University, where researchers trained a word-embedding model on Google News text. When the model was asked to complete the analogy “Man is to computer programmer as woman is to X,” its answer was “homemaker.”
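That analogy probe is simple vector arithmetic over word embeddings: because the vectors are fit to word co-occurrence patterns in news text, they absorb whatever associations the text contains. Here is a minimal sketch of such a probe, assuming the pretrained Google News word2vec model available through gensim’s downloader; it illustrates the technique rather than reproducing the study’s code:

```python
# A minimal sketch of the analogy probe described above, assuming the
# pretrained Google News word2vec vectors shipped via gensim's downloader.
import gensim.downloader as api

# Roughly a 1.6 GB download on first use; cached locally afterward.
model = api.load("word2vec-google-news-300")

# "Man is to computer programmer as woman is to X" becomes vector
# arithmetic: vec("computer_programmer") - vec("man") + vec("woman"),
# followed by a nearest-neighbor search over the vocabulary.
results = model.most_similar(
    positive=["woman", "computer_programmer"],
    negative=["man"],
    topn=3,
)
for word, score in results:
    print(f"{word}: {score:.3f}")
# The study reported "homemaker" as the top completion.
```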
There are also economic ramifications of biased AI. A 2015 study found that online advertising on career websites appeared to show bias toward men, serving ads for high-paying jobs disproportionately to male audiences: an ad for high-paying executive positions appeared 1,816 times to men and just 311 times to women.
“The problem is, there isn’t a single worst offender,” Sharma says. “It’s endemic to the tech industry. The biggest challenge we’re facing is that we will end up creating more inequality in the fourth industrial revolution by perpetuating the bias that already exists, rather than use technology to solve issues of gender, race and age inequality.”
According to a survey released last year, 80 percent of enterprises are already investing in AI, and 1 in 3 business leaders believe their companies will need to boost that investment in the next several years to stay competitive. So it’s becoming increasingly important for businesses to address the issues in AI, and sooner rather than later. Sharma says they “all need to think about the potential discriminatory impact of AI on society.”
Sharma created the world’s first personal chatbot for business finance at British company Sage, where she is vice president of AI, and hired the company’s first “conversation designer,” a role dedicated to analyzing the voice tones and personalities of AI assistants.
When Sharma first started developing AI at Sage, she proposed a gender-neutral personality for the company’s new assistant — named Pegg.
“Pegg is proud of being a bot and does not pretend to be human,” she explains. “Initially, there was a lack of awareness, within the company and the outside world, of stereotypes in AI, but I found it encouraging that I got a very welcoming response to my effort.”
Sharma also drove Sage’s decision to publish “The Ethics of Code: Developing AI for Business with Five Core Principles,” a set of design principles to help other businesses build ethical AI, as well as the company’s decision to build AI services that tackle social issues, such as a service in South Africa intended to help the 1 in 3 women there who face domestic abuse and lack access to advice and legal resources.
Another aim of Sharma’s work is to make AI more transparent to the public.
“As consumers, we are not informed of why the machine recommended a service or product to us. In the last 18 months, due to major sociopolitical events, such as the influence of AI-powered voter targeting and the spread of misinformation in the recent US elections, this is becoming a more recognized issue. It has also been brought to the forefront of public conversations by the awareness of phenomena like fake news.”
Although Sharma says it’s disappointing that there have been no industrywide policies or initiatives to confront the problem head-on, she’s continuing to call on CEOs and business boards to take action and develop inclusive, ethical AI.
“By 2020, it’s been predicted we’ll spend more time talking to machines than our own families,” Sharma adds. “The good news is the design of AI — whether it’s gender-neutral, unbiased and nondiscriminatory — is entirely within our control.”
Correction: An earlier version of this story incorrectly stated that Kriti Sharma is working for the Obama Foundation.