Today’s AI is not much better at solving real world problems than its ancestors

By ZIA CHISHTI, Chairman and Chief Executive of Afiniti

‘Blade Runner 2049’ was released 35 years after its predecessor predicted the development of sentient robots — a prophecy that remains a fiction

Expert systems, neural networks, and predictions of generalised intelligence are everywhere. Venture capitalists are rushing to fund start-ups with the magic letters “AI” grafted on to their PowerPoint presentations. Consultants and chief executives endlessly proclaim the need to embrace artificial intelligence and its sibling “big data”.

We have seen this movie before. Fifty years ago, “chatterbots” such as Eliza and Parry were heralded as victories on the march to generalised intelligence. Complex neural networks were considered primitive but reasonable models of human brain functions. And on the back of intense interest in AI, Ridley Scott’s 1982 film Blade Runner projected a dystopian future in which artificial lifeforms exceeded human intelligence and strength. Surely humanity was about to give birth to a digital Adam and Eve?

Of course, it did not happen. We have not moved a byte forward in understanding human intelligence. We have much faster computers, thanks to Moore’s law, but the underlying algorithms are mostly identical to those that powered machines 40 years ago.

Instead, we have creatively rebranded those algorithms. Good old-fashioned “data” has suddenly become “big”. And 1970s-vintage neural networks have started to provide the mysterious phenomenon of “deep learning”.

But didn’t IBM’s Watson beat human contestants on the television show Jeopardy? Surely that’s revolutionary? More to the point, however, is Watson’s commercial record. Stat reported that internal IBM documents indicated Watson had recommended “unsafe and incorrect” cancer treatments. It turns out that going from Jeopardy to cancer diagnosis is a bit harder than IBM thought.

Another contender for the AI crown is Google with DeepMind. In a salute to IBM’s strategy of throwing hundreds of engineers at flashy victories in deterministic games, DeepMind trumpeted the triumph of its AlphaGo program over the world’s top player of the ancient strategy game Go. This was somehow supposed to be an even greater victory than those of Deep Blue, the chess-playing computer that beat Garry Kasparov in 1997, or Watson.

But let’s consider a case where DeepMind has attempted to extract commercial value. With much fanfare, the UK’s National Health Service employed DeepMind in 2015 to improve patient outcomes and reduce costs. Three years later, an audit of Streams, DeepMind’s project for detecting acute kidney injury, conducted by the law firm Linklaters, concluded: “Without intending any disrespect to DeepMind, we do not think the concepts underpinning Streams are particularly groundbreaking.” In short, DeepMind is providing nifty graphical interfaces to existing NHS algorithms.

The fact is that the latest incarnations of AI are not much better at solving complex real-world problems than their ancestors were three decades ago. Heavily tuned systems can win at narrow challenges such as Go, chess, or Jeopardy. But diagnosing a disease from the myriad and conflicting symptoms a human being can exhibit is currently beyond them.

So what is AI good for? What it has always been good for: the identification of patterns in complex data. Medical image anomaly detection, hydrocarbon detection, consumer behavioural prediction and fraud detection have all benefited from advances in computational capacity. These all share two things: large volumes of well-structured input data and well-defined endpoints.
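To see what “well-structured input data and well-defined endpoints” look like in practice, here is a minimal, purely illustrative sketch (not from the article; the feature names, numbers, and use of scikit-learn are all assumptions) of a fraud-detection-style classifier trained on synthetic tabular data with a single yes/no outcome:

```python
# Illustrative sketch only: synthetic "fraud detection" with structured inputs
# (amount, hour, recent transaction count) and a well-defined binary endpoint.
# All feature names and coefficients are invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Well-structured inputs: one row per transaction, fixed numeric columns
amount = rng.lognormal(mean=3.0, sigma=1.0, size=n)   # transaction size
hour = rng.integers(0, 24, size=n)                    # hour of day
recent_count = rng.poisson(2.0, size=n)               # recent activity burst

# Well-defined endpoint: fraud (1) or not (0), simulated from the inputs
logit = -4.0 + 0.002 * amount + 1.5 * (hour >= 22) + 0.4 * recent_count
fraud = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = np.column_stack([amount, hour, recent_count])
X_train, X_test, y_train, y_test = train_test_split(
    X, fraud, test_size=0.3, random_state=0
)

# A plain logistic regression suffices when the data are structured and the
# target is crisply defined; this is the "pattern identification" the article
# says AI has always been good at.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

By contrast, the open-ended diagnostic problems discussed above lack exactly these two ingredients, which is why the same toolkit struggles there.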

How could companies benefit?

First, we should stop investing in generic AI and big data projects. Consultants love these, but the vast majority of such investment is a waste. The question we ask ourselves at my technology company, Afiniti, is this: “Can the CEO understand in five minutes or less how we add value?” If there is not an exceptionally clear business case for the pitch, then it is almost certainly not worth pursuing.

Second, chief executives must personally drive AI projects. Delegation is a recipe for failure.

Similar principles apply to investing in AI. Investors should stop funding companies that simply tack AI on to their pitches. Many companies merely rebrand pre-existing algorithms as AI or else just purchase off-the-shelf AI “tools”. Investors should focus on fundamental questions. What problem does this use of AI solve? How do you measure outcomes? How does the AI provider capture its share of value? These questions must be answered convincingly for an investment to succeed.

Blade Runner 2049, a sequel to the 1982 film, was released last year amid intense hype around AI. Here again is a movie which conjures, as its predecessor did, a dystopian future in which humans are supplanted by sentient robots.

But we might recall that the term “AI winter”, signifying a dramatic slowdown in interest and investment, was coined by the American Association for Artificial Intelligence in 1984 — just two years after we saw the original “blade runner”, Deckard, fall in love with an artificial lifeform named Rachael.

The writer is chairman and chief executive of Afiniti

Source: Financial Times