Rethinking AI: What if the future is not robotic?

The hype around artificial intelligence has now reached monumental proportions. Worries about killer robots, jobs lost to automation and other concerns have reached a fever pitch within governments, corporations and universities. On 10 April 2019, Associate Professor Hallam Stevens, from the School of Humanities at Nanyang Technological University, drew on the history of AI to argue that the hype may not be much more than just that: hype.

He began by noting the great enthusiasm for AI, evident in almost every sector of society. Dr Stevens argued that two main narratives have emerged: a doom-and-gloom narrative, in which AI is a threat; and a singularity narrative, in which the AI we invent will solve all of our problems.

Dr Stevens pushed back against such narratives, saying that these could end up misleading us, and suggested that there are other issues we should focus on instead. In the midst of the hype, we seem to have either collectively forgotten how events have played out in the past or believe that things will turn out differently this time.

In a 2016 survey of AI experts, half of the participants believed that Human Level Machine Intelligence would be achieved within 50 years. A closer look at the results, however, showed such a wide spread of opinions that we may not really have any good idea of when this will happen, if it happens at all.

Many of the themes and narratives of AI are not new, and some even pre-date modern computers. For example, the play Rossum’s Universal Robots, first performed in 1921, depicts robots rebelling against humans and taking over the world.

Hype for AI, and worries about the implications of new technology, are also nothing new. Hugely inflated hopes and expectations were pinned on AI long ago. These led to the “AI winters”: overpromised but ultimately underdelivered results discouraged interest in, and funding for, AI research.

The philosopher Hubert Dreyfus argued that AI was overhyped and overpromised, asserting that its fundamental assumption, that the human brain works like a symbol-processing machine, is wrong. He also believed that the world and our knowledge of it cannot necessarily be encoded in symbols a machine can process. To him, such assumptions would lead to AI’s failure. Subsequent AI winters showed his predictions to be largely accurate.

Though some may believe that things will turn out differently this time, Dr Stevens is less optimistic. Such hype cycles are driven by economics: expectations may be inflated by those with an implicit or explicit financial stake in AI’s success. This is especially relevant today, given the many technology companies operating in the modern economy.

There are also “warning signs”, small failures of AI systems that may be enough to burst this hype bubble, such as the recent crashes of two Boeing 737 MAX planes. Such a failure need not be purely technological; it could also take the form of lost public trust or withdrawn government and corporate support.

Of course, past failures do not mean history is doomed to repeat itself. But we are still unable to predict the future of AI: while things will indeed be different this time, we do not know how they will be different.

Dr Stevens raised the important questions of what we are really aiming at and what “artificial intelligence” actually means. These lead to further questions: what is intelligence? What kind of intelligence do we want from AI? And how would we know when we have achieved it? Measuring AI against our own minds may therefore be misleading.

This leads to the biggest reason why we may never get AI: in a sense, we already have it. The moment a machine can do a task, the goalposts shift and that machine is no longer considered AI. The AI bubble may therefore burst again, because “true” AI always seems out of reach, or too slow in coming. As such, it is crucial to focus instead on the problems we are facing now.

However, not all AI projects are failures; we might simply end up somewhere radically different from where we expected to be. Because AI cannot experience the world as we do, it cannot “embody” or “know” how to perform tasks in the way required to replicate them fully.

Dr Stevens concluded the talk by addressing questions from the audience, which included the regulation of algorithms, science fiction and the political aspect of AI.