At tech companies around the world, engineers and scientists are racing to develop the next AI product. But in that scramble, issues of equity and potential sociocultural harms are often an afterthought or ignored altogether.
According to the University of Waterloo’s Lai-Tze Fan, if you don’t ask key questions at the design phase — like “who does this benefit?”, “who’s not being represented?” and “will this harm anyone?” — biases can get baked into your product.
That’s why Fan established the U&AI Lab (where “U” refers both to users of AI and unseen elements of the technology). “The entire lab is asking questions about equal experience and design, such that end users have an experience of AI that is as fair as possible,” says Fan, who is Canada Research Chair in Technology and Social Change.
Her four students bring backgrounds in sociology, legal studies, English and systems design engineering. Fan herself combines an interdisciplinary PhD in communication and culture with previous degrees in literature, and experience developing apps as a postdoctoral fellow.
Together, they’re using high-powered computers, specialized cameras and other CFI-funded equipment to investigate responsible artificial intelligence and find ways to embed equity, diversity and inclusion into the design of AI technologies.
Revealing AI biases and other hidden harms
Take the example of facial recognition technology. According to Fan, it comes with a slew of biases and other issues: cameras that aren’t optimized to photograph dark-skinned faces, making it harder for software to analyze the images; algorithms trained on skewed datasets, producing software that’s more likely to misidentify certain groups of people; and image datasets, including images of children, scraped from social media without consent.
“Can you make these technologies racially diverse? Yes, you can,” Fan says. But given that training data can be taken without consent, she adds: “Can you do it ethically? That’s not something we’ve fixed yet. So that’s the project I’m currently exploring.”
Another area she’s focusing on is the gendering of language-based AI assistants like Siri and Alexa that schedule your appointments, provide reminders and order your groceries. Does giving these tools female names and voices, she wonders, reinforce stereotypes?
She also muses about the kinds of behaviours AI assistants can encourage. Will swearing at Alexa for misunderstanding your command bleed into human interactions the next time a barista gets your coffee order wrong?
“It creates some weird socialization training, because we don’t have the same rules of common courtesy with an AI as with a human,” Fan explains.
Finally, her lab is researching the often-unseen environmental impacts of AI. Analyzing millions of datapoints in the blink of an eye requires massive computing power. And as AI technologies become more widespread, we’re going to need more and more energy-hungry server farms and advanced hardware to operate them.
More perspectives produce better technologies
Fan’s efforts aren’t limited to identifying problems. Ultimately, she aims to produce toolkits for designing equitable AI assistant software, along with resources to help developers create racially diverse facial recognition systems.
She’d also love the opportunity to sit down with companies willing to embrace fairness, accountability, transparency and ethics in the design process. “I’m hopeful that we can work together,” Fan says. “Because I think that’s the only way to create any sort of change.”
The CFI-funded infrastructure has allowed much-needed critical and creative approaches to AI technologies, contributing sociocultural perspectives to AI issues such as racial bias and gender bias.
— Lai-Tze Fan, University of Waterloo
The research project featured in this story also benefits from funding from the Canada Research Chairs Program and the Social Sciences and Humanities Research Council.