U of T computer engineer leads network to optimize hardware for AI
As machine learning algorithms – such as those that enable Siri and Alexa to recognize voice commands – grow more sophisticated, so must the hardware required to run them. Andreas Moshovos, a professor of computer engineering in the Faculty of Applied Science & Engineering, is leading a national research network that aims to create the next generation of computing engines optimized for artificial intelligence.
The NSERC Strategic Partnership Network in Computer Hardware for Emerging Sensory Applications (COHESA) brings together researchers from academia and industry to develop hardware that can deliver faster speeds and better performance for machine learning applications, from image recognition to autonomous vehicles.
Since the 1970s, the number of components that can be crammed into integrated circuits has doubled roughly every two years, a phenomenon known as Moore’s Law. The associated increases in performance have helped drive the current explosion in artificial intelligence. But these components can be arranged in many ways, and computer chip architecture can have a big impact on processing speed.
“The processors that are in your laptop or smartphone are general-purpose devices,” says Moshovos. “They are designed to execute many different kinds of algorithms. They do this reasonably well, but they are not going to be the fastest or the best at any one of them.”
By contrast, some processors are optimized for particular tasks. Moshovos points to the graphics accelerator chips found in most computers and smartphones. These speed up the repetitive calculations involved in generating graphics by completing thousands of them in parallel. The result is smoother, more fluid videos and faster gaming.
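As a rough software analogy (not how graphics hardware is actually programmed, and using invented array and function names), the Python sketch below contrasts a one-pixel-at-a-time loop with a single bulk operation over an entire frame; an accelerator's advantage comes from carrying out that bulk work across thousands of parallel units.

import numpy as np

# A hypothetical 1080p video frame with RGB values in [0, 1].
frame = np.random.rand(1080, 1920, 3)

def brighten_sequential(img, factor=1.2):
    # General-purpose style: visit each pixel one after another.
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.minimum(img[i, j] * factor, 1.0)
    return out

def brighten_bulk(img, factor=1.2):
    # Accelerator style: one data-parallel operation applied to every pixel at once.
    return np.minimum(img * factor, 1.0)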
“You can’t optimize your hardware for every application, because it’s going to be too expensive,” says Moshovos. “On the other hand, where it is applicable, it’s been demonstrated that specialization can offer speeds from 10 to 1,000 times faster than general-purpose processors.”
This increased speed could be especially advantageous for machine learning. Imagine a self-driving car with a chip optimized to process visual information from road signs – in this application, even a few extra milliseconds could result in better decisions and safer operation.
By analyzing the types of calculations carried out by machine learning algorithms, Moshovos and his collaborators are looking for ways to reduce the number of mathematical operations they require.
“For many of the multiplications done in these algorithms, one of the numbers is either zero, or low enough that for all practical purposes it can be treated as zero,” says Moshovos. “When you multiply by zero, you always get zero, so these operations do nothing useful. This behaviour is more pronounced if we look inside the numbers at the bit level where most of them are zero. We’re trying to see how we can take some of them out and do something else useful in their place.”
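To illustrate the value-level part of that idea in software (a minimal sketch, assuming a ReLU-style layer whose activations are often exactly zero; the function and variable names are invented for this example), the dot product below simply skips any multiplication whose activation operand is zero or negligibly small:

import numpy as np

def dot_skip_ineffectual(activations, weights, threshold=1e-3):
    # Skip multiplications whose activation operand is (effectively) zero;
    # they would contribute nothing to the result. Hardware accelerators
    # aim to eliminate such "ineffectual" work in circuitry rather than software.
    total = 0.0
    skipped = 0
    for a, w in zip(activations, weights):
        if abs(a) <= threshold:
            skipped += 1
            continue
        total += a * w
    return total, skipped

rng = np.random.default_rng(0)
acts = np.maximum(rng.standard_normal(4096), 0.0)  # ReLU leaves roughly half the values at zero
wts = rng.standard_normal(4096)
result, skipped = dot_skip_ineffectual(acts, wts)
print(f"Skipped {skipped} of {acts.size} multiplications")

The bit-level observation is analogous: a multiplication can be decomposed into shift-and-add steps over the non-zero bits of one operand, so the many zero bits represent work that a suitably designed circuit never has to perform.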
COHESA builds on a strong track record of machine learning and artificial intelligence expertise at the University of Toronto. Last year the university launched the Vector Institute for Artificial Intelligence, which includes researchers such as University Professor Emeritus Geoffrey Hinton, known for pioneering the artificial intelligence technique called deep learning. Professor Brendan Frey of the Faculty of Applied Science & Engineering, who studied with Hinton, leads the research spinoff Deep Genomics, a company that aims to revolutionize medicine by combining expertise in machine learning and genomic science.
The NSERC COHESA network brings together researchers from a number of Canadian universities, including Yoshua Bengio at Université de Montréal and Raquel Urtasun at U of T. It also includes several industrial partners, from chip manufacturers such as AMD and Intel to large technology firms such as Google, Huawei, Microsoft and Qualcomm.
NSERC COHESA held its first annual general meeting at the University of Toronto on July 5-6, where more than 150 researchers from across academia and industry heard a keynote talk by Bengio and participated in sessions on the network’s three major themes: intelligent sensing, hardware and system software.
“We’re focusing mostly on fundamental techniques that will improve the hardware, but the hardware itself isn’t the final goal, it’s a means to an end,” says Moshovos. “The fact that we also have some of the world leaders in machine learning is very unique. It will be very exciting to see what kinds of applications they come up with.”