
Autonomous vehicles: U of T researchers make advances with new algorithm

Photo of Raquel Urtasun and two other researchers
From left, U of T researchers Wenjie Luo, Associate Professor Raquel Urtasun, and Bin Yang at Uber's Advanced Technologies Group (ATG) Toronto (photo by Ryan Perez)

A self-driving vehicle has to detect objects, track them over time, and predict where they will be in the future in order to plan a safe manoeuvre. These tasks are typically trained independently from one another, which could result in disasters should any one task fail.

Researchers at the University of Toronto's department of computer science and Uber's Advanced Technologies Group (ATG) in Toronto have developed an algorithm that jointly reasons about all of these tasks, the first to bring them all together. Importantly, their solution takes as little as 30 milliseconds per frame.

"We try to optimize as a whole so we can correct mistakes between each of the tasks themselves," says Wenjie Luo, a PhD student in computer science. "When done jointly, uncertainty can be propagated and computation shared."
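The idea of sharing one model across tasks can be illustrated with a small sketch. The PyTorch code below is a hypothetical illustration, not the team's actual network: a single backbone processes a bird's-eye-view grid of LiDAR data, and three lightweight heads read the same shared features for detection, tracking and motion forecasting, so a single joint loss trains them together.

```python
# A minimal sketch (not the authors' model) of sharing one backbone across
# detection, tracking and motion forecasting, so the tasks are trained
# jointly and computation is shared.
import torch
import torch.nn as nn

class JointPerceptionNet(nn.Module):
    def __init__(self, in_channels: int = 32):
        super().__init__()
        # Shared feature extractor over a bird's-eye-view grid of LiDAR data.
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
        )
        # One lightweight head per task, all reading the same features.
        # Channel counts below are illustrative placeholders.
        self.detect_head = nn.Conv2d(128, 7, kernel_size=1)        # box parameters per cell
        self.track_head = nn.Conv2d(128, 2, kernel_size=1)         # frame-to-frame offsets
        self.forecast_head = nn.Conv2d(128, 2 * 5, kernel_size=1)  # e.g. 5 future (x, y) waypoints

    def forward(self, bev):
        feats = self.backbone(bev)
        return self.detect_head(feats), self.track_head(feats), self.forecast_head(feats)

# Training on one joint loss lets errors in one task inform the others.
net = JointPerceptionNet()
det, trk, fcst = net(torch.randn(1, 32, 128, 128))
loss = det.abs().mean() + trk.abs().mean() + fcst.abs().mean()  # placeholder losses
loss.backward()
```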

Luo and Bin Yang, a PhD student in computer science, along with their graduate supervisor, Raquel Urtasun, an associate professor of computer science and head of Uber ATG Toronto, will present their paper, Fast and Furious: Real Time End-to-End 3D Detection, Tracking and Motion Forecasting with a Single Convolutional Net, at this week's Computer Vision and Pattern Recognition (CVPR) conference in Salt Lake City, the premier annual computer vision event.

To start, Uber collected a large-scale dataset covering several North American cities using roof-mounted LiDAR scanners, which emit laser beams to measure distances. The dataset includes more than a million frames collected from 6,500 different scenes.

Urtasun says the output of the LiDAR is a point cloud in three-dimensional space that needs to be understood by an artificial intelligence (AI) system. This data is unstructured, and thus considerably different from the structured data, such as images, typically fed into AI systems.

"If the task is detecting objects, you can try to detect objects everywhere but there's too much free space, so a lot of computation is done for nothing. In bird's-eye view, the objects we try to recognize sit on the ground and thus it's very efficient to reason about where things are," says Urtasun.
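One common way to hand LiDAR to a grid-based network is to rasterize the point cloud into a bird's-eye-view occupancy grid. The sketch below shows that general idea with made-up ranges and resolution, not the values used in the paper.

```python
# A hedged sketch of bird's-eye-view rasterization: LiDAR points (x, y, z)
# are dropped into a top-down grid, with the height dimension discretized
# into a few channels. Extents, cell size and bin count are illustrative.
import numpy as np

def points_to_bev(points: np.ndarray,
                  x_range=(-40.0, 40.0), y_range=(-40.0, 40.0),
                  z_range=(-2.0, 4.0), cell=0.2, z_bins=6) -> np.ndarray:
    """points: (N, 3) LiDAR coordinates; returns a (z_bins, H, W) occupancy grid."""
    H = int((y_range[1] - y_range[0]) / cell)
    W = int((x_range[1] - x_range[0]) / cell)
    bev = np.zeros((z_bins, H, W), dtype=np.float32)

    # Keep only points inside the region of interest.
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]) &
         (points[:, 2] >= z_range[0]) & (points[:, 2] < z_range[1]))
    p = points[m]

    # Discretize x, y into grid cells and z into height slices.
    xi = ((p[:, 0] - x_range[0]) / cell).astype(int)
    yi = ((p[:, 1] - y_range[0]) / cell).astype(int)
    zi = ((p[:, 2] - z_range[0]) / (z_range[1] - z_range[0]) * z_bins).astype(int)
    zi = np.clip(zi, 0, z_bins - 1)
    bev[zi, yi, xi] = 1.0  # mark occupied cells
    return bev

bev = points_to_bev(np.random.uniform(-40, 40, size=(10000, 3)))
print(bev.shape)  # (6, 400, 400)
```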

To deal with large amounts of unstructured data, PhD student Shenlong Wang and researchers from Uber ATG developed a new deep learning approach that operates directly on the scattered points.

"A picture is a 2-D grid. A 3-D model is a bunch of 3-D meshes. But here, what we capture [with LiDAR] is just a bunch of points, and they are scattered in that space, which for traditional AI is very difficult to deal with," says Wang.

Urtasun explains there's a reason AI works so well on images. Images are rectangular, made up of tiny rectangular pixels, so algorithms are good at analyzing such grid-like structures. LiDAR data, by contrast, lacks any regular structure, making it difficult for AI systems to learn from.

Their approach to processing scattered points directly is not limited to self-driving; it applies to any domain with unstructured data, including chemistry and social networks.
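The general flavour of learning directly on scattered points can be sketched as follows: each point aggregates features from its nearest neighbours, with weights produced by a small network applied to the relative offsets. This is a hypothetical illustrative layer, not the implementation from Wang and colleagues.

```python
# A hedged sketch of a point-based layer: weights for each neighbour are
# generated from its 3-D offset, so the operation works on irregular points
# rather than a fixed grid. Illustration only, not the paper's method.
import torch
import torch.nn as nn

class PointConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, k: int = 8):
        super().__init__()
        self.k, self.in_dim, self.out_dim = k, in_dim, out_dim
        # Maps a 3-D offset to one weight per (input, output) channel pair.
        self.weight_net = nn.Sequential(
            nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, in_dim * out_dim)
        )

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) point coordinates, feats: (N, C_in) point features.
        dists = torch.cdist(xyz, xyz)                    # (N, N) pairwise distances
        idx = dists.topk(self.k, largest=False).indices  # (N, k) nearest neighbours
        offsets = xyz[idx] - xyz.unsqueeze(1)            # (N, k, 3) relative positions
        w = self.weight_net(offsets).view(-1, self.k, self.in_dim, self.out_dim)
        neigh = feats[idx]                               # (N, k, C_in) neighbour features
        # Weighted sum over neighbours and input channels.
        return torch.einsum('nki,nkio->no', neigh, w) / self.k

layer = PointConv(in_dim=4, out_dim=16)
out = layer(torch.randn(100, 3), torch.randn(100, 4))
print(out.shape)  # torch.Size([100, 16])
```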

Nine papers from Urtasun's lab will be presented at CVPR. Seeking faster computation, Mengye Ren, a PhD student in computer science, Andrei Pokrovsky, a staff software engineer at Uber ATG, Yang and Urtasun developed a method that concentrates computation on the important regions of a scene.

"We want the network to be as fast as possible so that it can detect and make decisions in real time, based on the current situation," says Ren. "For example, humans look at certain regions we feel are important to perceive, so we apply this to self-driving."

U of T computer science PhD student Mengye Ren at Uber ATG Toronto (photo by Ryan Perez)

To increase the speed of the whole computation, says Ren, they've devised a sparse computation based on what regions are important. As a result, their algorithm proved up to 10 times faster than existing methods.

"The car sees everything, but it focuses most of its computation on what's important, saving computation," says Urtasun.

"So when there's a lot of cars [on the road], the computation doesn't become too sparse, so we don't miss any vehicles. But when it's sparse, it will adaptively change the computation," says Ren.
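The gist of region-focused sparse computation can be sketched as follows: a cheap importance mask marks blocks of the feature map, and the expensive convolution runs only inside those blocks. The block size, mask rule and lack of cross-block context below are simplifications for illustration, not the group's released method.

```python
# A hedged sketch of block-sparse computation: gather "important" blocks,
# run the heavy convolution only there, and write results back.
import torch
import torch.nn as nn
import torch.nn.functional as F

def block_sparse_conv(x, mask, conv, block=16):
    """x: (1, C, H, W) features; mask: (1, 1, H, W) importance scores in [0, 1]."""
    _, C, H, W = x.shape
    out = torch.zeros(1, conv.out_channels, H, W, device=x.device)
    # Pool the mask to decide which blocks contain anything important.
    active = F.max_pool2d(mask, block).squeeze() > 0.5  # (H/block, W/block) boolean grid
    for by, bx in active.nonzero().tolist():
        ys, xs = by * block, bx * block
        # Heavy convolution applied only inside active blocks (no halo handling here).
        out[:, :, ys:ys + block, xs:xs + block] = conv(
            x[:, :, ys:ys + block, xs:xs + block])
    return out

conv = nn.Conv2d(32, 64, kernel_size=3, padding=1)
x = torch.randn(1, 32, 128, 128)
mask = (torch.rand(1, 1, 128, 128) > 0.9).float()  # pretend "important" regions
y = block_sparse_conv(x, mask, conv)
print(y.shape)  # torch.Size([1, 64, 128, 128])
```

The fewer blocks the mask activates, the less work the convolution does, which is the intuition behind the reported speed-up.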

The researchers note the technique is widely useful beyond self-driving, improving processing on small devices, including smartphones.

Urtasun says the overall impact of her group's research has increased significantly now that their algorithms are implemented in Uber's self-driving fleet, rather than residing solely in academic papers.

"We're trying to solve self-driving," says Urtasun, "which is one of the fundamental problems of this century."
