Geoffrey Hinton and Fei-Fei Li draw thousands to talk about responsible AI development
After their research lit the fuse on artificial intelligence’s “Big Bang” more than a decade ago, AI luminaries Geoffrey Hinton and Fei-Fei Li are now hoping to solve a new problem: developing the revolutionary technology in a safe and responsible way.
A professor emeritus at the University of Toronto who has been called the “godfather of AI,” Hinton has spent the past six months warning about the existential risks posed by AI – not to mention nearer-term risks such as joblessness, fake news and “battle robots.”
Li agrees that AI poses serious risks. The professor at Stanford University and co-director of the school’s Human-Centered AI Institute emphasizes the need to invest in public institutions to help guide the technology’s future. Still, she is hopeful about what lies ahead.
“If we do the right thing, we have a chance – we have a fighting chance of creating a future that's better,” said Li during the conversation, which was hosted by U of T at the MaRS Discovery District and livestreamed to thousands of people online.
“So, what I really feel is not delusional optimism at this point – it’s actually a sense of urgency of responsibility.”
Organized by Toronto venture capital firm Radical Ventures in partnership with U of T, Stanford and other organizations, the Hinton-Li talk was part AI history lesson, part call to action – and served to kick off a four-week program designed to teach AI researchers how to build AI companies.
“It’s already clear that artificial intelligence and machine learning are driving innovation and value creation across the economy. They are also transforming research in fields such as drug discovery, medical diagnostics and the search for advanced materials,” U of T President Meric Gertler said during his introductory remarks. “Of course, at the same time, there are growing concerns about the role AI will play in shaping humanity’s future – so today’s conversation certainly addresses a timely and important topic.”
Li and Hinton recounted how, in 2012, Hinton’s grad students demonstrated the potential of deep learning neural networks on the ImageNet database built by Li and her team to test object recognition software. Discussion moderator Jordan Jacobs, a co-founder of Radical Ventures and the Vector Institute, referred to it as AI’s “Big Bang moment.”
While Hinton said he remains concerned about the capacity of today’s AI systems to devour oceans of data and instantly share what they learn with one another – a trait he says could one day yield superior intelligence – he noted his message of caution is getting through.
“I’m quite optimistic that people are listening,” he said.
The wide-ranging discussion prompted a flurry of questions from the in-person and online audience – from entrepreneurs eager to implement responsible AI development at their startups, to students who wondered about the technology’s impact on teaching and education.
Melanie Woodin, dean of U of T’s Faculty of Arts & Science, called the conversation both “profound” and “unparalleled” in her closing remarks.
At a watch party organized by U of T’s department of computer science, Steve Engels, a professor, teaching stream, said Hinton’s appeal for more research on mitigating AI risks resonated with students in the room.
“It's nice when they get to see some of the people who are working on the technology also call people to action in order to try to respond to it,” he said. “There isn't opposition between the people making the technology and the people who are trying to regulate it and protect us from it.”
Arielle Zhang, a third-year student majoring in machine intelligence in U of T’s Faculty of Applied Science & Engineering, left the talk feeling optimistic about the future and her role in it.
“The conversation was pretty inspiring,” she said, adding that it helped convince her to pursue another degree in academia – a place where topics such as AI privacy and fairness can be more easily explored.
“Those are the issues the new generation is facing.”
With files from Adina Bresge