‘Making uncertainty visible’: U of T researcher says AI could help avoid improper denial of refugee claims

""
Avi Goldfarb is a professor at the University of Toronto's Rotman School of Management and a faculty affiliate at the Schwartz Reisman Institute for Technology and Society (photo courtesy of the Rotman School of Management)

Avi Goldfarb is an economist and data scientist specializing in marketing. So how is it that he came to publish a paper on reducing false denials of refugee claims through artificial intelligence?

Goldfarb, a professor at the Rotman School of Management at the University of Toronto and a faculty affiliate at the Schwartz Reisman Institute for Technology and Society, read Refugee Law’s Fact-Finding Crisis: Truth, Risk, and the Wrong Mistake, a 2019 book by Hilary Evans Cameron, a U of T alumna and assistant professor at the Ryerson University Faculty of Law.

He found some remarkable overlaps with his own work, particularly the methodology he employs in his 2018 book, Prediction Machines: The Simple Economics of Artificial Intelligence.

It just so happened that Evans Cameron had read Goldfarb’s book, too.

“It turned out we effectively had the same classic decision theoretic framework,” says Goldfarb, “although hers applied to refugee law and problems with fact-finding in the Canadian refugee system, and mine applied to implementing AI in business.”

Decision theory is a methodology often used in economics and some corners of philosophy, in particular the branch of philosophy known as formal epistemology. Its concern is figuring out how and why an “agent” (usually a person) evaluates and makes certain choices.
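To make that abstract definition concrete, here is a toy sketch of the framework in Python. It is an illustration, not taken from either book: the agent holds probabilistic beliefs about uncertain states of the world and picks the action with the highest expected utility. The states, actions and utility numbers are all hypothetical.

```python
# Toy decision theory: pick the action whose utility, averaged over
# the agent's beliefs about uncertain states, is highest.
# All states, actions and numbers below are hypothetical.

beliefs = {"rain": 0.3, "sun": 0.7}  # agent's probabilities over states

utility = {
    ("umbrella", "rain"): 5, ("umbrella", "sun"): 2,
    ("no umbrella", "rain"): -10, ("no umbrella", "sun"): 4,
}

def expected_utility(action: str) -> float:
    """Average the action's utility over the agent's beliefs."""
    return sum(p * utility[(action, state)] for state, p in beliefs.items())

best = max(("umbrella", "no umbrella"), key=expected_utility)
print(best)  # "umbrella": 0.3*5 + 0.7*2 = 2.9 beats 0.3*(-10) + 0.7*4 = -0.2
```

The same skeleton fits both researchers’ applications: swap refugee-claim outcomes (Evans Cameron) or business decisions (Goldfarb) in for the states and actions.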

The main idea around which Evans Cameron’s and Goldfarb’s thoughts coalesced was this: Human decision-makers who approve or deny refugee claims are, as Goldfarb noted in his research presentation at the Schwartz Reisman weekly seminar on Oct. 7, “often unjustifiably certain in their beliefs.”

In other words: people who make decisions about claimants seeking refugee status are more confident about the accuracy of their decisions than they should be.

Why? Because “refugee claims are inherently uncertain,” says Goldfarb. “If you’re a decision-maker in a refugee case, you have no real way of knowing whether your decision was the right one.”

If a refugee claim is denied and the claimant is sent back to a home country where they may face persecution, that outcome is often never monitored or recorded.

Goldfarb was particularly struck by the opening lines of Evans Cameron’s book: “Which mistake is worse?” That is, denying a legitimate refugee claim or approving an unjustified one?

In Goldfarb’s view, the answer is clear: sending a legitimate refugee back to their home country is a much greater harm than granting refugee status to someone who may not be eligible for it. This is what Goldfarb refers to as “the wrong mistake.”
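Expected-loss arithmetic makes the asymmetry concrete. A minimal sketch with entirely hypothetical numbers: if wrongly denying a genuine claim is treated as ten times as costly as wrongly granting a non-genuine one, a decision-maker minimizing expected loss should deny only when a claim is very unlikely to be genuine.

```python
# Hypothetical asymmetric losses: a false denial ("the wrong mistake")
# is treated as ten times worse than a false grant.
LOSS_FALSE_DENIAL = 10.0  # deny a claimant who is a genuine refugee
LOSS_FALSE_GRANT = 1.0    # grant status to a non-genuine claimant

def decide(p_genuine: float) -> str:
    """Grant or deny by minimizing expected loss."""
    loss_if_deny = p_genuine * LOSS_FALSE_DENIAL
    loss_if_grant = (1.0 - p_genuine) * LOSS_FALSE_GRANT
    return "deny" if loss_if_deny < loss_if_grant else "grant"

# The break-even probability is 1/11, not 1/2: with these costs, even a
# claim judged only 20 per cent likely to be genuine should be granted.
print(decide(0.05))  # deny
print(decide(0.20))  # grant
```

The numbers are invented; the point is only that asymmetric costs move the decision threshold, which is the intuition behind “the wrong mistake.”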

So Goldfarb, an economist and data scientist specializing in machine learning (ML), a type of artificial intelligence, started to wonder: Could ML’s well-known ability to reduce uncertainty help reduce incidences of “the wrong mistake”?

Goldfarb’s collaboration with Evans Cameron reflects the intersections between the four “conversations” that guide the Schwartz Reisman Institute’s mission and vision. Their work asks not only how information is generated, but also whom it benefits, and to what extent it aligns, or fails to align, with human norms and values.

“ML has the ability to make uncertainty visible,” says Goldfarb. “Human refugee claim adjudicators may think they know the right answer, but if you can communicate the level of uncertainty [to them], it might reduce their overconfidence.”
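What might communicating that uncertainty look like? A minimal sketch, assuming a hypothetical ensemble of prediction models: instead of collapsing the evidence into a confident-looking yes/no answer, the tool reports the spread of the models’ predicted probabilities, so an adjudicator can see how weakly the answer is constrained. The models and numbers below are invented for illustration.

```python
from statistics import mean, stdev

# Hypothetical predicted probabilities that a claim is genuine, from
# an ensemble of models trained on different subsets of past cases.
ensemble_predictions = [0.55, 0.70, 0.40, 0.65, 0.50]

p = mean(ensemble_predictions)        # point estimate
spread = stdev(ensemble_predictions)  # disagreement across models

# Surface the uncertainty alongside the estimate rather than hiding it.
print(f"Estimated probability the claim is genuine: {p:.2f}")
print(f"Model disagreement (std. dev.): {spread:.2f}")
if spread > 0.10:
    print("High uncertainty: the evidence only weakly constrains "
          "the answer, so confidence in either outcome is unwarranted.")
```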

Refugee law expert Hilary Evans Cameron is a U of T alumna and an assistant professor at Ryerson University’s Faculty of Law (photo courtesy of Ryerson University)

Goldfarb is careful to note that shedding light on “the wrong mistake” is only part of the battle. “Using AI to reduce confidence would only work in the way described if accompanied by the changes to the law and legal reasoning that Evans Cameron recommends,” he says.

“When uncertainty is large, that does not excuse being callous, nor does it excuse you from making a decision at all. Uncertainty should help you make a better-informed decision by helping you recognize that all sorts of things could happen as a result.”

So, what can AI do to help people recognize the vast and varied consequences of their decisions, reduce their overconfidence and ultimately decide better?

“AI prediction technology already provides decision support in all sorts of applications, from health to entertainment,” says Goldfarb. But he’s careful to outline AI’s limitations: It lacks transparency and can introduce and perpetuate bias, among other things.

Goldfarb and Evans Cameron advocate for AI to play an assistive role, one in which the statistical predictions necessary for evaluating refugee claims could be improved.

“Fundamentally, this is a point about data science and stats. Yes, we’re talking about AI, but really the point is that statistical prediction tools can give us the ability to recognize uncertainty, reduce human overconfidence and increase protection of vulnerable populations.”

So, how would AI work in this context? Goldfarb is careful to specify that this doesn’t mean an individual decision-maker would immediately be informed whether they made a poor decision, and given the chance to reverse it. That level of precision and individual-level insight is not possible, he says. So, while we may not solve “the wrong mistake” overnight, he says AI could at least help us understand what shortfalls and data gaps we’re working with.

There are many challenges to implementing the researchers’ ideas. It would involve designing an effective user interface, changing legal infrastructure to conform with the information these new tools produce, ensuring accurate data-gathering and processing, and firing up the political mechanisms necessary for incorporating these processes into existing refugee claim assessment frameworks.

While we may be far from implementing AI to reduce incidences of “the wrong mistake” in refugee claim decisions, Goldfarb highlights the interdisciplinary collaboration with Evans Cameron as a promising start to exploring what the future could bring.

“It was really a fun process to work with someone in another field,” he says. “That’s something the Schwartz Reisman Institute is really working hard to facilitate between academic disciplines, and which will be crucial for solving the kinds of complex and tough problems we face in today’s world.”
