The future of ethical AI
27 April 2021 at 8:31 am
Pro Bono News sits down with artificial intelligence researcher Thao Phan to discuss the ethics of race and gender in AI
Artificial intelligence (AI) is everywhere. It’s unlocking iPhones with a simple face scan, driving dating apps, identifying people in a crowded room, turning on lights and, for some people, even driving their car. Convenient? Yes. Always ethical? Well, that’s up for debate.
The ethics of AI, and its continued place in our society, is an ongoing global conversation, and one Melbourne Knowledge Week is set to continue with its More Human Than Human panel.
Thao Phan, one of six panelists offering their views on AI’s future and its ethical implications, will be talking about AI from the perspective of gender and race.
Phan recently finished her PhD, in which she examined the gendered history of AI: key moments in the field’s development and how it evolved within very gendered cultures.
She told Pro Bono News that the birthplace of AI is considered to be a 1956 workshop at Dartmouth College in the US, attended by six men.
“It was a closed meeting, poorly recorded and with no women in attendance,” she explained. “The collaborations these men put together were very narrow ideas around what AI can be. So there was a lot of focus on mathematics, map reading and chess – very gendered ideas of intelligence.”
Because the majority of funding for AI research originally came through government grants and the university sector, research tended to focus on more philosophical questions. These days, AI research is almost exclusively commercially funded and, therefore, outcomes-driven.
All of which plays a part in encouraging diversity.
“The gender side of AI research is interesting because, historically, universities are very gendered institutions, particularly the engineering, mathematics and science departments,” Phan said.
“So, ironically, even though we think of Silicon Valley as being a closed space, filled with a particular kind of person, the work being done there is in a more diverse environment than the traditional university.”
Phan said that the research component of AI is starting to shift.
“As businesses start to acknowledge the cultural and social importance of AI you get more forms of expertise into what counts as AI research. That means more humanities-based researchers or social scientists, who have historically been women.”
The problem with AI
When Phan discusses the problem with AI, she is quick to point out that the issue isn’t the AI itself. Instead, it lies with the people who are in charge of it, the people who are selling it, and their collective agenda.
“The agency doesn’t lie with the AI, instead it sits with the Googles, Microsofts, Amazons, and Apples of the world,” she said.
“Take something like Robodebt. The issue isn’t the program or the algorithm per se but a government that has a punitive approach to the poor.
“You know, it wouldn’t have mattered how much you tweaked that system. It was always designed to undermine the rights of, and humiliate, the working poor. If that system had been designed by an all-female, all person of colour team, it would’ve still done the same job.”
Phan said one thing she often thinks about is that AI programmers and designers don’t have as much power as we assume they do.
“They’re just doing their job. In some ways, they’re just another bureaucrat fulfilling the needs of whatever’s being asked of them. It’s a bit more complicated than just trying to get more diversity and more inclusion. Diversity and inclusion into what?” she said.
Who’s the servant and who’s the master?
Phan’s ethical studies of AI concentrate on both gender and race, and reference the work of Joy Buolamwini at MIT as well as Timnit Gebru, formerly co-lead of Google’s ethical AI team.
“These women have pioneered research in facial recognition and found it to be most accurate when it came to identifying white men’s faces,” she said.
“Followed by white women, then non-white men, and then least accurate for non-white women. Buolamwini and Gebru tested this across a number of systems from Microsoft to IBM. Since that research, a lot of US states have moved to completely ban facial recognition.”
While ethics in AI is a huge topic being played out across governments, industries, and large organisations all over the world, it’s also something for anyone with a voice-activated device to consider.
Aside from questions of race and gender, there are also class implications.
“If you look at devices like Siri, Amazon Alexa, and Google Home they are all figured on this idea of servitude, which comes with gendered expectations. Historically it would be the middle and upper classes who would have servants as a sign of privilege,” Phan said.
“These companies are capitalising on this image of an assistant, which is quite misleading because in a historical setting the assistant works for the master. Yet when it comes to a company like Amazon, as soon as Alexa enters your home you’re the one that enters the data, you’re the one that feeds it stuff, you’re the one who inputs all this information so Amazon can make a profile of you that they can then capitalise upon. So, really, who’s the servant and who’s the master?”
The solution
When asked what she thinks the solution to making AI more ethical and inclusive might be, Phan points out that it’s complicated.
“The whole system is so difficult to understand because you need some specialist knowledge of the problem. Not just technical knowledge of what’s going on but also a social-political knowledge as well, and so few people have that,” she said.
“My most optimistic answer is that it starts with a public agenda. It starts with everyday people taking an interest and putting it on the agenda. Then we can support, and fund, more precise and more complex agendas to put accountability on the table for our lawmakers and for our policymakers.”
More Human Than Human is running on Thursday 29 April, with in-person and online tickets available through Melbourne Knowledge Week.