Demis Hassabis doesn’t look like a man who’s about to destroy life as we know it. He’s slight and mild-mannered, with black-rimmed glasses and a smart jacket. But the stated ambition of his company, DeepMind, is nothing less than to “solve intelligence”. That makes it central to a heated debate about the potential dangers of technology. Critics including Stephen Hawking have issued warnings about the long-term consequences of artificial intelligence (AI) research. Are we building machines that will ultimately take over the world?
Hassabis defines AI as “the science of making machines smart”. Actually, it seems to be more about making machines that can make themselves smart. Hassabis draws a distinction between “narrow” AI – programs that do one thing well – and the kind he’s interested in: “general-purpose learning machines” that can move from one task to another.
A winning part of Hassabis’s talk to a joint Royal Television Society/Institution of Engineering and Technology audience this week was a series of short videos showing how his early AI programs improved at various old Atari video games. The programs were set the task of maximising their scores on each game, and were given only the visual information of the game on-screen to work with. Amazingly, on games including Space Invaders and Breakout, the AI ‘learnt’ how to do well, and before long could do much better than the average human player. In one case, the program discovered a winning strategy that the scientists themselves didn’t know about until they saw the AI program demonstrating it.
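The core idea behind those Atari results – improve purely through trial, error, and the score signal – can be sketched in miniature. The toy below is not DeepMind’s actual deep Q-network; it is a tabular Q-learning agent on an invented five-position mini-game (all names and parameters here are illustrative), shown only to make the “maximise your score by playing” loop concrete:

```python
import random

# Toy stand-in for an Atari game: the "screen" is just a position 0..4,
# and the agent scores only by reaching position 4. This is NOT DeepMind's
# DQN -- just a minimal tabular Q-learning sketch of the same principle:
# the program improves using nothing but the score as feedback.

N_STATES = 5
ACTIONS = (+1, -1)  # move right or left

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Q-table: estimated long-term score for each (state, action) pair
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy: mostly exploit the table, occasionally explore
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0  # score only at the goal
            # Q-learning update: nudge toward reward + discounted future value
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

def greedy_policy(q):
    """The strategy the agent has 'learnt': best action in each state."""
    return [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)]
```

After training, the greedy policy moves right in every non-terminal state – a strategy no one programmed in explicitly, which is the point of the Atari demonstrations, just at a vastly larger scale.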
It was clearly a brilliant piece of work but was it really the start of what Hassabis likes to call an “Apollo programme for AI”, in which “solving intelligence” is the first step in a mission to “solve everything else”? It’s easy to be sceptical, but in the tech business, there’s a history of brainy founders with simple-to-understand but seemingly impossible missions who have gone on to pretty much achieve them. Think of Bill Gates and his “a computer on every desk and in every home”, Page and Brin’s “organising the world’s information and making it universally accessible and useful” and Mark Zuckerberg’s “making the world more open and connected”.
But if Hassabis, with his “solving intelligence”, is ever to join that illustrious club – and he admits his own catchphrase is ambiguous – he’ll be different from the others in at least one important way (apart from not being American). He sold his business to Google last year, so the established trajectory – idea leading to users, leading to profits, leading to the creation of a massive new business – won’t apply. He’s already used Google’s money to hire what he claims are a hundred of the best AI scientists in the world. DeepMind is clearly not a business struggling to turn the corner to profitability: it sounds more like an extraordinarily well-funded university department.
So what are they all doing? Well, they’re taking inspiration from Hassabis’s previous research as a neuroscientist (yes, a man of many talents), to work on “neuroscience-inspired AI”. That means looking at things like memory, attention, concepts, planning, navigation and imagination. Hassabis talked about how he’d researched imagination in both humans and rats – the latter in an ingenious experiment that involved tapping into rats’ brains while they slept to find out what they were thinking or dreaming about after they’d been exposed to a deliberately frustrating experience. The idea was that the rats were seen to be imagining what they’d like to have done. The equivalent question in AI is how a program could be built to plan, or imagine, a strategy and to evaluate it before carrying it out. Was it my imagination, or was there a nervous shuffling among the TV professionals as Hassabis suggested that machines would soon be able to come up with their own ideas? Perhaps sensing that, he assured the audience that “we’re a long way from machines being truly creative”.
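That question – how a program could “imagine” a strategy and evaluate it before acting – has a simple mechanical reading: give the agent an internal model of its world, roll candidate plans forward inside that model, and commit only to the best one. The sketch below is an invented toy (the game, rewards, and horizon are all assumptions for illustration, not DeepMind’s research), but it shows the shape of the idea:

```python
from itertools import product

# Toy "imagine before you act" loop. The agent's world is a position on a
# line: reaching +3 is a win, falling to -2 is a disaster, and every move
# costs a little. Nothing here is DeepMind's actual method -- it is the
# simplest possible planner: mentally try every short plan, keep the best.

def model(state, action):
    """The agent's internal model: predicts the next state and reward."""
    nxt = state + action
    if nxt == 3:
        return nxt, 10.0   # imagined win
    if nxt == -2:
        return nxt, -10.0  # imagined disaster
    return nxt, -1.0       # small cost per move

def imagine(plan, state=0):
    """Roll a candidate plan forward in the model, summing predicted reward."""
    total = 0.0
    for a in plan:
        state, r = model(state, a)
        total += r
    return total

def best_plan(horizon=4):
    """Enumerate every left/right sequence and pick the most promising."""
    plans = product((-1, +1), repeat=horizon)
    return max(plans, key=imagine)
```

The planner never touches the “real” world while deliberating; it evaluates strategies entirely in its model, which is the rough computational analogue of the dreaming rats replaying what they’d like to have done.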
The subject came up again in questions afterwards, and again he insisted “I think most people’s jobs in here are safe for a very long time.” (He obviously doesn’t know the British TV business.)
But any idea that DeepMind is going to be some kind of high-tech ivory tower was dispelled when someone asked about how AI might be used to improve online recommendation systems. Hassabis said he was already interested in the idea and that current systems are “not good enough”. It sounded like a rather modest start to the revolution, but Hassabis also talked about how there was another media-related area he was working on that was “pretty promising” – and that was music composition and analysis. But he was most enthusiastic about the application of AI to healthcare, which he said his business was focusing on, since medicine was still organised according to a “19th Century model”.
Hassabis appeared genuinely keen to make the world a better place, claiming that AI “could be the greatest thing for humanity”. His company has its own ethics committee to debate the kind of issues its critics are worried about – though he said most of the critics don’t actually work in AI themselves, the implication being that they don’t quite know what they’re talking about. No doubt he wouldn’t include Stephen Hawking in that. He said he’d had a good long chat with Hawking, after which he thought Hawking had been reassured.
As for media workers, well, intelligence isn’t all they need, he said; aesthetic judgement is required too. The good news is that aesthetic judgement isn’t even on Hassabis’s current list of mental processes to investigate.
Courtesy Cyber Security Intelligence.