With artificial intelligence (AI) set to be an inevitable part of the future, Telus reports that Canadians want ethics to develop alongside it.
The company’s inaugural report on the future of AI states that decision makers can address many of the known risks and challenges through collaboration and dialogue. However, such dialogue, especially with the community, has been lacking in the development of AI.
The report’s figures are based on responses from 4,909 Canadians with different backgrounds, communities, and perspectives on AI. The respondents are part of the Angus Reid Forum, and the company collected additional context through interviews, focus groups, and discussion events. Responses were collected between October 20th and 27th, 2023.
Oversight and regulation
The results show Canadians consider human oversight essential. More than 90 percent of respondents strongly agree that ethical principles must be part of the development of AI. Nearly half think community consultation should be part of AI governance to ensure diverse perspectives are included.
Meanwhile, 78 percent believe Canada should regulate AI, with a majority stating the government should take the lead. The study also shows that two in three Canadians believe professionals in data ethics, law, and academia should be part of the conversation.
Canada doesn’t currently have regulations on AI systems. The federal government did, however, launch a voluntary code of conduct in September. The code is partially based on a consultation process from August.
Ongoing concerns
Canadians believe AI is acceptable in some industries, but not all. Research and online shopping, for example, were seen as acceptable fields for the technology. Banking and social media, however, received less support. “There is a concern about the governance of data and the output of AI, including how it will be used by social media platforms,” the study states.
Another concern surrounding AI is discrimination, with one in five respondents stating they’ve been personally discriminated against by AI technology, including through misrepresentation and stereotyping.
Some of the underrepresented groups included in Telus’s study are those who identify as physically disabled, members of racialized groups, and Indigenous Peoples.
Of the respondents who identify as physically disabled, 15 percent say their disability made them unable to use an AI-powered device. Common issues this group raised include inaccurate predictive text and AI’s inability to recognize their speech patterns.
“While trying to interact with an AI phone service, I get disconnected when I can’t answer fast enough because of my disability,” one respondent said. “I have to get a friend or family member to assist me.”
Of the respondents who identify as being part of a racialized group, 42 percent said AI is biased against them and their peers. There’s also a strong belief that AI could be used for mass surveillance.
Among respondents who identified as Indigenous, 35 percent felt that AI is biased against them. There was a strong sense of mistrust and a belief that the technology could be used as a tool of colonialism, making current inequities worse. There was also a consensus that Elders and Knowledge Keepers should be part of the consultation process to ensure Indigenous perspectives are part of the AI ecosystem.
Telus lists several recommendations to address the issues raised, including closing educational gaps so that diverse communities can better understand and participate in how AI is developed and used.
“It’s essential that AI doesn’t become a tool for amplifying historical inequities or further polarizing diverse groups,” Telus said in its conclusion.
Image credit: Shutterstock
The full report is available on Telus’ website.