
U of T's Citizen Lab, international human rights program explore dangers of using AI in Canada's immigration system

Petra Molnar, a researcher and co-author of a report on the use of AI in Canada's immigration system, in the international human rights program office (photo by Romi Levine)

Canada is fast becoming a leader in artificial intelligence, with innovators across the country finding new ways of using automation for everything from cancer detection to self-driving cars.

According to a joint report by the international human rights program (IHRP) in the Faculty of Law and the Citizen Lab, based at the Munk School of Global Affairs & Public Policy, the Canadian government is beginning to embrace automation too, but used irresponsibly, automation can trample on human rights.

The report looks at the ways the Canadian government is considering using automated decision-making in the immigration and refugee system, and the dangers of using AI as a solution for rooting out inefficiencies.

"The idea with this project is to get ahead of some of these issues and present ideas and policy recommendations and best practices in terms of, if you're going to be using these technologies, how they need to accord to basic human rights principles so they do good and not harm," says Petra Molnar, one of the authors of the report and a technology and human rights researcher at IHRP.

Molnar, along with co-author Lex Gill, who was a Citizen Lab research fellow at the time, found that the Canadian government is already developing automated systems to screen immigrant and visitor applications, particularly those that are considered high risk or fraudulent.

"But a lot of this is being talked about without definitions," says Molnar. "So what does high risk mean? We can all imagine which groups of travellers would be caught up under that. Or fraudulent: how are they going to determine whether a marriage is fraudulent or not, or if this child is really your child? There are no parameters."


The report includes a taxonomy of immigration decisions. "We take the reader through what it would look like if you are applying to enter Canada: all the different considerations you have to think about," says Molnar. "Each section is broken down by applications and questions of how these technologies might actually be impinging on human rights." (Illustration by Jenny Kim / Ryookyung Kim Design)

Finding concrete information about government practices has proven tough. Molnar says the research team filed 27 access-to-information requests but was still awaiting responses when the report was written.

The problem at the core of automation, says Molnar, is that algorithms are not truly neutral.

"They take on the biases and characteristics of the person who inputs the data and where the algorithm learns from," she says. "The worry is it's going to replicate the biases and discriminatory ways of thinking the system is already rife with."

The authors also looked at case studies from around the world of governments using AI for immigration-related decision-making.

"No one has done a human rights analysis of these technologies, which to me is kind of bonkers," says Molnar. "How are these technologies actually going to impact people's daily reality? That's where we come in."

The report highlights international cases of algorithms failing to protect the rights of people affected by immigration decisions. These include U.S. Immigration and Customs Enforcement (ICE) setting an algorithm to justify the detention of 100 per cent of migrants at the border, and the U.K. government wrongfully deporting over 7,000 students whom it claimed had cheated on English language equivalency tests administered using voice recognition software. In many of those cases, the automated voice analysis proved incorrect when compared with human analysis.

"There are all these ways the algorithmic decision-making tool can be just as faulty, but we view them with perfection, so without realizing, we risk deploying them irresponsibly and ending up where possibly we were better off with human decision-makers," says Cynthia Khoo, Google policy fellow at Citizen Lab and one of the reviewers of the report.


(Illustration by Jenny Kim / Ryookyung Kim Design)

The report offers a list of recommendations the authors hope the Canadian government will adopt, including establishing an oversight body to monitor algorithmic decision-making and informing the public about which AI technologies will be used.

The research team hopes to update the report once the access-to-information documents are received and to continue its work on automation by looking at other uses of AI by the federal government, including in the criminal justice system, says Molnar.

For both IHRP and Citizen Lab, the nature of this report is unusual: it focuses on potential harm rather than existing violations of human rights, says Samer Muscati, director of IHRP.

"This gives us great opportunities to have impact right from the start before these systems are finalized," he says. "Once they're in place, it's much harder to change a system than when it's actually designed."

Ahead of its publication, IHRP and Citizen Lab met with government officials in Ottawa to present the report.

Muscati hopes that, beyond these meetings, the report can make a real difference.

"It's when we see practices and policies being changed, that's when we know we're having some impact."

 
