'Built for this moment': U of T researcher helps develop ethics of AI handbook

Markus Dubber, the head of U of T's Centre for Ethics, is co-editing an Oxford Handbook on the Ethics of AI and will be holding a two-day workshop this week on the handbook's progress (photo courtesy of U of T Faculty of Law)

The University of Toronto’s prowess in artificial intelligence research is widely recognized, attracting a who’s who of technology companies to Canada’s largest city. Less well known, however, is the work being done by people like Markus Dubber to ensure the potentially transformative technology will be developed responsibly.

The head of 鶹Ƶ’s Centre for Ethics has spent the past two years facilitating an interdisciplinary conversation on AI and ethics by bringing together computer scientists and philosophers, engineers and doctors from across the university and beyond.

Now, he’s also co-editing a “soup to nuts” Oxford Handbook on the Ethics of AI with Sunit Das, an associate professor in 鶹Ƶ’s Faculty of Medicine, and Frank Pasquale of the University of Maryland.

“I think the issue of figuring out how to make and use AI ethically is one of the central normative challenges of our time,” says Dubber, who is also a professor at U of T’s Faculty of Law. “It’s also one of the most exciting opportunities for meaningful interdisciplinary and international exchange among academics and for engagement with broader publics.

“Public universities were built for this moment.”

Beginning tomorrow, the centre will host a two-day workshop on the progress of the handbook, which is due out later this year. It will feature talks from contributors representing a number of institutions – from the Massachusetts Institute of Technology and Cornell University to the University of Connecticut.

U of T News caught up with Dubber this week to find out more about the project and its importance for AI development.


What is the Oxford Handbook of Ethics of AI?

The Oxford Handbook of Ethics of AI tries to frame the academic and public conversation about the ethical dimensions of AI in its various shapes and sizes. It reflects our conviction that generating and maintaining an ethics of AI will require a broadly international and interdisciplinary, all-hands-on-deck, effort. It grew out of the Ethics of AI in Context initiative at the Centre for Ethics here at U of T, which has been about creating a forum for a truly open interdisciplinary exchange, based neither in computer science and engineering, on one side, nor in the humanities and social sciences, on the other. It started with a workshop series and a graduate seminar.

How did you become involved?

I’m a big fan of handbooks – they’re a great way to capture and also to shape a field through the choice of topics and contributors, and even by the decision to assemble a handbook for a particular subject in the first place. Since I edited a bunch of handbooks for Oxford University Press on other subjects over the years, it struck me that putting together a handbook of ethics of AI might be a distinctive and perhaps even useful contribution the Centre for Ethics could make to the emerging academic and public conversation about the ethics of AI.

The handbook, which I’m co-editing with my fellow Ethics of AI Lab member Sunit Das and Frank Pasquale of the University of Maryland, will come out in late 2019, knock on wood, and will include 50 or so chapters written by a diverse interdisciplinary and international lineup of authors, covering ethics of AI from soup to nuts – from introductory overviews of the project of ethics of AI as a whole, key concepts and issues like bias, autonomy, transparency, as well as various perspectives and approaches to several applications like law, medicine, autonomous vehicles and so forth.

How big an issue is ethics in AI – at U of T and more broadly?

This may well be the Kool-Aid talking, but I think the issue of figuring out how to make and use AI ethically is one of the central normative challenges of our time. It’s also one of the most exciting opportunities for meaningful interdisciplinary and international exchange among academics and for engagement with broader publics. Public universities were built for this moment.

U of T – and the Centre for Ethics – is an obvious place to take the lead on this issue. U of T is closely associated with, and invested in, AI research and has a well-deserved reputation for strength across a wide range of disciplines. At the same time, the Centre for Ethics has a uniquely interdisciplinary, university-wide, and public-facing mission to explore the ethical dimensions of individual, social and political life. We think the university has a terrific opportunity to make a signal contribution to a vexing academic and public issue of local, domestic and global significance. We hope U of T will seize that opportunity.

What will be the focus of the upcoming workshop?

The workshop captures a snapshot of the evolution of the handbook. It brings together an interdisciplinary group of contributors to the handbook, drawn from each part of the volume, who’ll share their work-in-progress on their chapter. Disciplines represented include computer science, economics, engineering, English, industrial and labour relations, law, philosophy and urban studies and planning. Speakers’ affiliations include Cornell, MIT, Northeastern University, Rutgers University-Camden, University of Connecticut, and University of Virginia, as well as the University of Ottawa and the University of Toronto – the department of English’s Avery Slater, who is another Ethics of AI Lab member – and the enterprise software-maker SAP. Topics include: “Ethics of AI in Context: Society,” “Private Sector AI: Ethics and Incentives,” “The Rights of Artificial Intelligences,” as well as perspectives on ethics of AI from economics, engineering, the humanities, and political economy, and the ethical dimensions of applications of AI in public law and policy, smart cities and the future of work.

How does the handbook fit in with what you’re trying to do at 鶹Ƶ’s Centre for Ethics?

The handbook plays a key role in our attempt to build an Ethics of AI Lab at the Centre for Ethics. They’re both about launching and sustaining a from-the-ground-up interdisciplinary and international dialogue about the ethical dimensions of AI as it spreads through all aspects of private, public, and political life, here, there, and everywhere. The handbook is meant as a canon-defining work that shapes the broad and diverse conversation about ethics of AI we all need to have, and to have now, within and beyond the university. U of T’s Ethics of AI Lab has started to make that conversation happen, and will continue to nourish and expand it in years to come.