This U of T alum is leading AI research at a $1 billion non-profit backed by Elon Musk

Ilya Sutskever on leaving Google, working with Geoffrey Hinton, and more
U of T alumnus Ilya Sutskever is now the research director for OpenAI

Computer science and mathematics graduate Ilya Sutskever no longer remembers the name of the California restaurant where the idea of a non-profit artificial intelligence research company first emerged.

But a dinner conversation between Sutskever, billionaire tech entrepreneur Elon Musk, Sam Altman, president of Y Combinator, the largest U.S. seed fund, and Greg Brockman, the former chief technology officer of Stripe, among others, put forward a plan for OpenAI.

Just five years ago, Sutskever, fellow PhD student Alex Krizhevsky and world-renowned computer science professor Geoffrey Hinton developed an image recognition algorithm that dramatically outperformed all existing algorithms by training a very large neural network on a dataset of 1 million images. The trio founded DNNresearch Inc., named after the deep neural networks – or deep learning – used by their system, and within a few months their startup was sold to Google Inc.

Both Krizhevsky and Sutskever joined Google as research scientists. Hinton also joined the company and is now a vice-president and engineering fellow at Google – as well as a University Professor Emeritus at the University of Toronto.

Today Sutskever is the research director for OpenAI. He talked with U of T's Nina Haikara about his time at the university and at Google with Hinton and Krizhevsky, today's artificial intelligence renaissance – and that first dinner conversation at a restaurant somewhere along Sand Hill Road in Menlo Park, where many venture capital firms reside.

“It was an extremely difficult decision,” says Sutskever. “I was very happy on the Google Brain team. It was moving forward on an extremely strong trajectory – I would be taking no risk by staying and would have much to gain.”


What piqued your interest in artificial intelligence?

I learned computer programming at a relatively young age, and of all the things a computer could do, artificial intelligence seemed like the most exciting by far. I was surprised when I found out that computers can learn to some extent because, in my mind, learning was not something computers could do at the time.

What led you to study with Hinton? 

I had just finished my second year of undergraduate math and went to the office of Diane Horton, the associate chair of undergraduate studies at the time, to see if there were any research opportunities that involved machine learning. I was very interested in the concept of machine learning and was hoping to learn more about it. Diane suggested that I speak to Geoff Hinton, which I did.

Geoff gave me a few papers to read, which I managed to do, after which we began working on a research project. This was in 2003.

My first project was to improve the SNE (Stochastic Neighbor Embedding) algorithm. A paper did eventually come out of this work, a few years later.
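For readers unfamiliar with SNE: it converts the pairwise distances between high-dimensional points into probabilities that one point would pick another as its neighbour, then searches for low-dimensional points whose neighbour probabilities match. Below is a minimal Python sketch of that first step – an illustrative reconstruction only, not the actual project code, with a function name of our own choosing.

    import numpy as np

    def sne_probabilities(X, sigma=1.0):
        """Turn pairwise distances between rows of X into conditional
        probabilities p(j|i) that point i picks point j as a neighbour."""
        sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        logits = -sq_dists / (2.0 * sigma ** 2)
        np.fill_diagonal(logits, -np.inf)            # a point never picks itself
        logits -= logits.max(axis=1, keepdims=True)  # for numerical stability
        P = np.exp(logits)
        return P / P.sum(axis=1, keepdims=True)

SNE then computes an analogous distribution from candidate low-dimensional points and adjusts those points by gradient descent to minimize the Kullback-Leibler divergence between the two distributions.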

How has working with Hinton affected your career? 

It was absolutely critical in many respects. Thanks to working with Geoff, I had the opportunity to work on some of the most important scientific problems of our time and to pursue ideas that were largely unappreciated by most scientists yet turned out to be utterly correct. These ideas gave rise to what is now known as deep learning. As a result, I was able to contribute to and develop some of the most important scientific advances in machine learning in recent history.

What were the ‘a-ha’ moments?

Working with Geoff, I gained a very good, very solid understanding of what learning is, how learning should be approached, how to think about it, and what its limitations and capabilities are.

An important realization that I had before most people was that if you have a big and deep neural network that you train on a lot of data, you can literally solve any pattern recognition problem.

And that turned out to be true. It was a powerful realization because there were many important pattern recognition problems that were unsolved, and the methodology of a big deep neural network could really solve these problems, or at least, improve them substantially. The truth of this claim is the driving force behind the very AI renaissance we’re experiencing right now. 

How is machine learning related to artificial intelligence? 

Machine learning is a subfield of artificial intelligence. Right now it is the biggest and the most influential subfield of AI. 

In fact, the way things were, AI was a big field with lots of people pursuing lots of different approaches, and nothing was really working. Machine learning was one of the topics people had been working on within AI. But then, over time, very basic machine learning algorithms started to be useful behind the scenes in large companies – in ad prediction, search and product recommendation. In the last few years, machine learning has become more broadly useful – speech recognition on smartphones, machine translation, computer vision and self-driving cars – and as an area of AI, it’s enjoying a lot of attention.

How far advanced are today’s artificial intelligence systems?

Right now artificial intelligence can do certain things, and the things it can do, it’s pretty good at. Namely, we are extremely good at tasks that can be expressed as a mapping from inputs to outputs, where we can collect a very large number of representative, high-quality input-output examples. This is a very broad range of tasks that includes most computer vision tasks, speech recognition, machine translation, spam detection, product recommendation, text summarization, and this list is highly incomplete.  We can almost completely solve any problem for which we can collect a large representative collection of the desired input-output behaviour.
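The input-to-output recipe Sutskever describes here is standard supervised learning. The sketch below shows the shape of it with a toy PyTorch classifier; the data, labels and network are hypothetical stand-ins for illustration, not an OpenAI system.

    import torch
    import torch.nn as nn

    # Hypothetical stand-ins: in practice the inputs would be images, audio,
    # etc., and the targets would be human-provided labels.
    inputs = torch.randn(10_000, 32)
    targets = (inputs.sum(dim=1) > 0).long()

    # A small deep network mapping each input to one of two output classes.
    model = nn.Sequential(
        nn.Linear(32, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, 2),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Fit the network to the input-output examples by gradient descent.
    for epoch in range(5):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")

The same loop, scaled up in data, model size and compute, is the pattern behind the vision, speech and translation successes he lists.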

It is also still quite limited in its overall abilities. Our systems are narrow in that they can only learn from explicitly labelled input-output examples and are unable to learn from indirect experience the way humans do. They are still unable to discover abstract concepts, and they cannot learn over the course of their lifetimes as humans can. They are still limited in their ability to understand language and are still weak on creativity. However, it seems likely there will come a day, quite possibly in our lifetimes, when we will build an AI system that is as cognitively capable as a human being in every meaningful dimension. When that happens, such systems – because there will almost surely be many of them – will have an incredible impact on society.

What is OpenAI currently researching?

We have a number of research goals. Our first main goal is to successfully combine machine learning with robotics. Despite intensive research efforts, the most exciting machine learning algorithms have not yet been successfully integrated with robotics. A successful execution of this goal will result in robots that can carry out useful tasks, such as cleaning a room or cooking a meal, in any house – and it is the “in any house” part that makes the goal hard.

Our other goal has to do with developing a system with general problem-solving strategies and common sense that will allow it to achieve a very wide variety of goals quickly. For example, if successful, we will have a system that can learn to interact with almost any piece of software as quickly as a human can. In particular, it should be able to learn to play hard computer games as quickly and as well as a human.  

To accomplish this, we have developed Universe, a research platform that makes it possible for our machine learning agents to interact with any computer program. In doing so, Universe makes it possible to measure the general problem-solving ability of an AI system – and once we can measure a quantity, we can optimize it. We are currently pursuing promising research ideas that could lead to the construction of such systems.
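Universe exposed its environments through OpenAI Gym's Python interface. The snippet below follows the style of Universe's public examples at the time; the specific game environment and the Docker-backed remotes setting are as documented then and may no longer work today.

    import gym
    import universe  # importing universe registers its environments with gym

    # Create a Flash-game environment; Universe streams its pixels over VNC.
    env = gym.make('flashgames.DuskDrive-v0')
    env.configure(remotes=1)  # launch one remote, Docker-backed environment
    observation_n = env.reset()

    while True:
        # A trivial agent: hold down the up-arrow key in every environment.
        action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
        observation_n, reward_n, done_n, info = env.step(action_n)
        env.render()

Because any program running in the container can be wrapped this way, the same agent interface works across games, websites and applications, which is what lets Universe serve as a common yardstick for general problem-solving ability.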

Slightly longer-term, we are also looking to address important aspects of AI safety and security.

Is there a particular focus on deep learning? 

Yes, because deep learning and deep reinforcement learning are, to date, among the very few methods that reliably produce really exciting results in AI. So we want to build on them and make them the foundation of our future work.

But deep learning cannot yet perform all the tasks that we wish computers to perform. We need machine learning algorithms that can infer abstract concepts – ones far more general and applicable across many situations – from less experience. We need to build systems that never stop learning, and that can learn indirectly: by observing other humans act, by watching video and by reading text. And finally, our current machine learning systems are not very creative yet.

However, the field of deep learning has the useful property that the performance of deep learning systems is rapidly improving on every metric, even though the number of fundamentally new ideas behind those gains is rather small. This suggests that the “learning stuff” – the building blocks of deep learning systems – is in some ways the right set of components for building high-performing intelligent systems, and we should expect further acceleration as genuinely new ideas are consolidated and as the paradigms are refined.

Why was it important that OpenAI be available to partner with multiple institutions and companies?

The world is so large, and there are so many researchers working on so many interesting problems, that it would be really great if what other people are doing could help us, and if we could help them as well, rather than being insulated within an inward-facing organization. By working with multiple institutions, we are able to accelerate our progress and the progress of the field by enabling cross-pollination between a greater number of researchers.

How will OpenAI share its research results? 

In the short-term, we will share all our results and all our code. In the long-term, as we start to develop systems of truly great capability, we will share them in the way that will provide the greatest benefit to the world, whatever that may be.
