
Making an impact: U of T undergrad co-authors important machine learning study at Google

U of T's Aidan Gomez co-authored research at Google that turned a single neural network loose on eight problems simultaneously (photo by Nina Haikara)

Aidan Gomez has yet to finish his undergraduate degree, but he’s already doing research at Google that could lead to dramatic improvements in machine learning – a thriving subfield of artificial intelligence.

Gomez, who took a semester off from his University of Toronto studies in computer science and math to work at Google Brain in Silicon Valley, recently co-authored a paper on multitask learning by a single neural network with lead author and senior Google researcher Lukasz Kaiser.

While most existing machine learning techniques focus on a single task – identifying objects in images, interpreting natural language or recognizing audio – Kaiser and Gomez showed that a single neural network could successfully be turned loose on multiple tasks simultaneously.

The paper, titled "One Model To Learn Them All," even showed evidence of overall improved performance.

“Lukasz and I basically stepped back and asked: Why shouldn’t one particular class of models be able to solve all these problems at the same time?” Gomez says. He compared the approach to the way humans carry cognitive tools acquired through previous experience.

“We've shown that this network does precisely that – not only does it apply these tools, it makes performance on new tasks significantly better.”

Kaiser and Gomez trained their model to solve eight problems at the same time. That included the ImageNet classification contest that Geoffrey Hinton, a Google engineering fellow and U of T University Professor Emeritus, and his graduate students, Alex Krizhevsky and Ilya Sutskever, won in 2012 with deep learning neural networks.

Sutskever is now leading artificial intelligence research at a $1 billion non-profit, OpenAI, backed by tech entrepreneur Elon Musk. 

Gomez calls the ImageNet task “very difficult” because it involves 1,000 categories and more than one million images. 

“We were frightened, initially, that the model simply wouldn't have the capacity to learn ImageNet along with anything else – that it would dedicate all its resources to ImageNet, but it turns out not to be the case,” he says. “There appears to be symbiosis within the tasks, each feeding to the other and improving the overall performance considerably.

“These neural networks are in fact capable of learning an array of different tasks, at the same time, on the same parameters, similar to how you or I approach a new task.”

Gomez says their work primarily addresses "transfer learning," the term given to the re-application of learned knowledge to new tasks. Their model also solved simultaneous problems in language translation, image captioning, English audio transcription and grammar parsing – breaking sentences into their grammatical tree.

“Even if the tasks are seemingly unrelated, like grammar parsing and image classification, it will get notably better performance in both, by training them together, as opposed to separately.”

Gomez expects this method could help improve performance where data is limited.

“Lack of data can be a devastating hurdle when training models,” he says. “And so with this work we demonstrate that a good source of more data is more tasks – the work suggests throwing more tasks into it, seemingly regardless of how closely related these tasks are, will make performance better.”
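To make the idea of training several tasks on the same parameters concrete, here is a minimal, hypothetical sketch in PyTorch: a single shared body of weights feeds small task-specific heads, and each optimization step sums the losses from every task so the shared parameters learn from all of them at once. This is not the architecture from the paper (the MultiModel described there is far more elaborate); the toy tasks, layer sizes and training loop below are illustrative assumptions only.

```python
# A minimal sketch of multitask learning with shared parameters (PyTorch).
# Not the paper's MultiModel - only the general "one body, many heads" idea.
# All tasks, dimensions and hyperparameters here are arbitrary toy choices.
import torch
import torch.nn as nn

class SharedBody(nn.Module):
    """Parameters shared across every task."""
    def __init__(self, in_dim=32, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class MultiTaskModel(nn.Module):
    """One shared body plus a small task-specific head per task."""
    def __init__(self, task_out_dims, in_dim=32, hidden=64):
        super().__init__()
        self.body = SharedBody(in_dim, hidden)
        self.heads = nn.ModuleDict({name: nn.Linear(hidden, out)
                                    for name, out in task_out_dims.items()})
    def forward(self, x, task):
        return self.heads[task](self.body(x))

# Two toy classification "tasks" with different label spaces.
tasks = {"task_a": 10, "task_b": 3}
model = MultiTaskModel(tasks)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    opt.zero_grad()
    total = 0.0
    # Draw a random batch for each task and sum the losses, so one
    # gradient step updates the shared body using every task at once.
    for name, n_classes in tasks.items():
        x = torch.randn(16, 32)
        y = torch.randint(0, n_classes, (16,))
        total = total + loss_fn(model(x, name), y)
    total.backward()
    opt.step()
```

In a setup like this, a task with little data still benefits because the shared body keeps receiving gradient signal from the other tasks, which is the intuition behind Gomez's point that "a good source of more data is more tasks."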

Gomez is one of more than 50 students taking part in the department of computer science’s undergraduate summer research program.

While the Google paper is of broad interest, Gomez says he and his research supervisor, Assistant Professor Roger Grosse, a co-founder of the Vector Institute for Artificial Intelligence, will later be releasing work that should also be of significance to machine learning researchers. 

Thanks to the pioneering work of Hinton and others, U of T has emerged as a global centre for research into artificial intelligence and deep learning in particular. Such technologies are expected to have a profound impact on a wide range of industries, improving everything from cancer detection to the way lawyers litigate cases.

Uber, for one, said earlier this year it was launching a research centre for self-driving cars in Toronto that will be headed by U of T Associate Professor Raquel Urtasun.

“Undergrad is fantastic, but I’m ready to move on to my PhD,” says Gomez. “I really don’t think I would have pursued this to the extent that I have, without going to U of T.

"The inspiration that comes with going to a school that has been instrumental in defining my field has really propelled me forward.”  
