Strachey Lecture – “Artificial Intelligence and the Future” by Dr. Demis Hassabis

For this week’s group meeting, some of us had the pleasure of attending a very interesting lecture by Dr. Demis Hassabis, co-founder of DeepMind. Personally, I found the lecture quite thought-provoking and left the venue with a plethora of ideas sizzling in my brain. Since one of the best ways to quiet that mental sizzling is to write things down, I volunteered to write this week’s blog post and say my piece about yesterday’s Strachey Lecture.

Dr. Hassabis began by listing some very audacious goals: “To solve intelligence” and “To use it to make a better world”. At the end of his talk, someone in the audience asked him whether he thought these goals were achievable (“to fully replicate the brain”), to which he responded with a simple “there is nothing that tells us that we can’t”.

After his bold introductory statement, Dr. Hassabis pressed on. For the first part of his lecture, he engaged the audience with videos and concepts from a reinforcement learning agent trained to learn and play several Atari games. I was particularly impressed by the fact that the same agent could reach a professional level of play on 49 different games. Some of the videos are quite impressive and can be seen here or here. Suffice it to say that their algorithm is far better at playing Atari than I’ll ever be. It was also rather impressive to learn that all the algorithm received as input was the game’s score and the pixels on the screen.
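For readers who (like me) wanted a more concrete picture of what “learning from the score and the pixels” means, here is a toy sketch of the underlying reinforcement learning idea. To be clear, this is not DeepMind’s DQN (which trains a deep convolutional network on raw pixel input); it is a minimal tabular Q-learning loop on a made-up “corridor” game, and every name in it is illustrative. The point is that the learner is only ever given a state and a score, so the same code works unchanged for any game exposing that interface.

```python
import random
from collections import defaultdict

class Corridor:
    """Toy game: walk from position 0 to n-1; +1 score at the goal, small penalty per step."""
    def __init__(self, n=5):
        self.n = n
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):                        # action: -1 (left) or +1 (right)
        self.pos = max(0, min(self.n - 1, self.pos + action))
        done = self.pos == self.n - 1
        return self.pos, (1.0 if done else -0.01), done

def q_learning(game, actions, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    q = defaultdict(float)                         # q[(state, action)] ~ expected future score
    for _ in range(episodes):
        state = game.reset()
        for _ in range(200):                       # cap episode length
            # Epsilon-greedy: mostly exploit current estimates, occasionally explore.
            action = (random.choice(actions) if random.random() < eps
                      else max(actions, key=lambda a: q[(state, a)]))
            next_state, reward, done = game.step(action)
            # Nudge the estimate towards the observed score plus the best future estimate.
            best_next = max(q[(next_state, a)] for a in actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
            if done:
                break
    return q

q_values = q_learning(Corridor(), actions=[-1, +1])   # driven purely by the game's score
```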

Dr. Hassabis mentioned in his lecture that games provide the ideal training ground for any form of AI. He presented several reasons for this, but the one that stuck with me was that games quite often come with a simple, clear score. Your goal in a game is usually very well defined: you help the frog cross the road, or you defeat some aliens for points. What I perceive to be the greatest challenge for AI, however, is that real-world problems do not come with such a clear-cut, incremental score.
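To make that contrast concrete, here is a deliberately silly illustration (all names and numbers are invented): the score of a game fits in a few unambiguous lines of code, while the “score” of a real-world problem usually cannot even be written down.

```python
def frogger_reward(frog_row, reached_goal, was_hit):
    """A game score is easy to state: a few unambiguous lines."""
    if was_hit:
        return -1.0             # squashed by a car: immediate, obvious penalty
    if reached_goal:
        return 10.0             # crossed the road: immediate, obvious payoff
    return 0.1 * frog_row       # incremental credit for every row of progress

def real_world_reward(candidate_solution):
    """The real-world analogue: what should this return, and when would we know?"""
    raise NotImplementedError("no clear-cut, incremental score exists")
```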

For instance, let us relate this back to my particular scientific question: protein structure prediction. It has been suggested that much simpler algorithms, such as simulated annealing, are able to model protein structures as long as we have a perfect scoring system [Yang and Zhou, 2015]. The issue is that, currently, the only way we have to define a perfect score is to use the very structure we are trying to predict (which kind of takes the “prediction” out of the story).
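To show just how much of the weight falls on the scoring function, here is a minimal simulated annealing sketch (a generic toy, not the protocol from Yang and Zhou; the toy usage at the bottom and all names are mine). The algorithm itself is almost embarrassingly simple; everything interesting is hidden inside the score() it is handed, which for structure prediction is exactly the part we cannot write down without already knowing the answer.

```python
import math
import random

def simulated_annealing(initial, score, perturb, steps=10000, t_start=1.0, t_end=1e-3):
    """Generic minimisation loop: all the domain knowledge lives in score() and perturb()."""
    current, current_score = initial, score(initial)
    best, best_score = current, current_score
    for i in range(steps):
        t = t_start * (t_end / t_start) ** (i / steps)   # geometric cooling schedule
        candidate = perturb(current)
        candidate_score = score(candidate)
        delta = candidate_score - current_score
        # Always accept improvements; accept worse moves with probability exp(-delta / t).
        if delta < 0 or random.random() < math.exp(-delta / t):
            current, current_score = candidate, candidate_score
            if current_score < best_score:
                best, best_score = current, current_score
    return best, best_score

# Toy usage: minimise a 1-D "energy". For structure prediction, `initial` would be a starting
# conformation, `perturb` a move in conformational space, and `score` the (unknown) perfect score.
best, energy = simulated_annealing(
    initial=5.0,
    score=lambda x: (x - 2.0) ** 2,
    perturb=lambda x: x + random.uniform(-0.5, 0.5),
)
```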

Real-world problems are hard. I am sure this is no news to anyone, including the scientists at DeepMind.

During the second part of his talk, Dr. Hassabis focused on AlphaGo, DeepMind’s effort at mastering the ancient game of Go. What appealed to me in this part of the talk is that Go has such a vast number of possible configurations that devising an incremental score is no simple task (sound familiar?). Yet, somehow, DeepMind’s scientists were able to train their algorithm to the point where it defeated a professional Go player.
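To put some rough numbers on “vast”: a back-of-the-envelope calculation using commonly quoted average branching factors and game lengths (figures I did not get from the lecture, so treat them as ballpark) gives a sense of why brute-force search is hopeless for Go.

```python
from math import log10

# Rough, commonly quoted averages: branching factor b, typical game length d (in moves).
games = {
    "chess": (35, 80),
    "go":    (250, 150),
}
for name, (b, d) in games.items():
    print(f"{name}: roughly 10^{d * log10(b):.0f} possible game continuations (b**d)")
# chess comes out around 10^123, Go around 10^360 -- far beyond exhaustive search.
```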

Their next challenge? In two weeks, AlphaGo will face the professional Go player with the highest number of titles in the last decade (the best player in the world?). This reminds me of when Garry Kasparov faced Deep Blue. After the talk, my OPIG colleagues also seemed pretty excited about the outcome of the match (man vs. machine).

Dr. Hassabis finished by saying that his career goal is to develop AI capable of helping scientists tackle the big problems. From what I gather (and from my extremely biased, protein-structure-prediction point of view), AI will only achieve this once it can come up with its own scores for the games we present it with (hence developing some form of impetus). Regardless of how far we are from that, at least we have a reason to cheer for AlphaGo in a couple of weeks (because hey, if you are trying to make our lives easier with clever AI, I am all for it).
