The greatest advances in AI: the experts’ view
Five AI researchers weigh in on the most significant advances in the field – and what is yet to come
By Sweitze Roffel and Ian Evans | July 9, 2018

In recent years, the discussion around artificial intelligence has gathered momentum. At Elsevier, forms of machine learning are used to help researchers, engineers and clinicians do their work more efficiently. These tools can help them find funding, bring them the right information when they need it, and support the treatments they give patients. For example, HPCC technology, developed by our sister company LexisNexis, enables us to process massive amounts of data extremely quickly to provide answers to research, engineering and patient care questions.

For this story, we asked some of the leading researchers in artificial intelligence research what they considered to be the most significant advances in the field – and what is yet to come.

Prof. Dame Wendy Hall, University of Southampton: “Whilst trying to solve the … question of whether we can create machines that think and act like humans – we’ve created some very smart tools.”

Dame Wendy Hall is Professor of Computer Science at the University of Southampton, Executive Director of the university’s Web Science Institute and Managing Director of the Web Science Trust.

Prof. Dame Wendy Hall, PhD (Photo by The Web Science Trust CC BY 3.0 via Wikimedia Commons)

“One of the interesting things about advances in AI is that whilst trying to solve the ‘general AI’ problem – which is the much bigger question of whether we can create machines that think and act like humans – we’ve created some very smart tools. So we’ve seen a lot of advances around things like facial recognition, speech translation, automation of services. Machines have become much better at handling data and learning from it than we are. Things like face recognition have come such a long way in the last 30 years. That kind of incremental development in an in-depth application has been an amazing development.”

Knowledge representation

When a person considers a problem, they can reason on it without having to take action. They can consider the steps in changing a bicycle tyre without having a bicycle to hand, with the concepts of wheels, tyres and inner tubes standing in for the actual physical objects. Knowledge representation and reasoning allows computers to reason in a similar way.

This field of artificial intelligence is dedicated to representing information about the world in a form that a computer system can utilize to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language.
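As a rough illustration of the idea, the hypothetical Python sketch below encodes a few facts and if–then rules about changing a bicycle tyre and uses simple forward chaining to derive conclusions. The facts, rule names and structure are invented for this example rather than taken from any particular system.

```python
# A minimal sketch of symbolic knowledge representation and reasoning:
# facts and if-then rules about changing a bicycle tyre, plus a simple
# forward-chaining loop that derives new conclusions without touching a real bicycle.

facts = {"tyre_is_flat", "have_spare_inner_tube", "have_tyre_levers"}

# Each rule: if all premises hold, the conclusion can be added to the facts.
rules = [
    ({"tyre_is_flat", "have_tyre_levers"}, "can_remove_tyre"),
    ({"can_remove_tyre", "have_spare_inner_tube"}, "can_fit_new_tube"),
    ({"can_fit_new_tube"}, "can_reinflate_and_ride"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# The program "reasons" that the rider can reinflate the tyre and ride,
# using symbols for wheels and tubes rather than the physical objects themselves.
```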

Prof. Virginia Dignum, Delft University of Technology: “The greatest advances are probably the ones we haven’t done yet.”

Dr. Virginia Dignum is Associate Professor of Social Artificial Intelligence at Delft University of Technology in the Netherlands. She writes:

“The greatest advances are probably the ones we haven’t done yet. At the moment we depend too much on the stochastic/probabilistic approach to AI. We’re taking correlation to be infinite, and that is not the way we’re going to create intelligence. People need to use causality abstraction and other kinds of mechanisms we don’t yet have in a way that’s scalable and usable in a big way. That’s the next big thing.”

“In terms of what’s already happened, I’ve been in AI for more than 25 years, and the big advances are in the knowledge representation area – which we’re now not using. We’re going back to the sub-symbolic and probabilistic approach. We’re ignoring a lot of the work in semiotics and knowledge representation – we should probably go back to that. If you look at another area of AI – the semantic web, which is where knowledge representation meets the web – that’s hardly used in industry and is a missed opportunity.”

Prof. Gary Marcus, NYU: “It’s like asking me in 1600 what the greatest advance in chemistry was. …”

Dr. Gary Marcus is Professor of Psychology and Neural Science at New York University and former CEO of the machine learning startup Geometric Intelligence, which was acquired by Uber in 2017. He writes:

“A lot of the best progress was made early on, when people figured out some of the foundational stuff. People figured out the basic logic of doing symbol manipulation, which is fundamental to doing search, for example. The neural network stuff that was figured out in the 80s, but which has a longer history, is clearly very useful for a whole bunch of problems such as categorization.

“But there are a whole raft of advances that we haven’t done yet. To put it in perspective, it’s like asking me in 1600 what the greatest advance in chemistry was. I don’t know – in many ways in AI we’re still trying to do alchemy.”

Neural networks

An artificial neural network is a group of machine learning algorithms that model data using graphs of artificial neurons, loosely inspired by the neural networks of the brain. These systems improve their performance by considering examples. In image recognition, they might learn to identify images that contain faces by analyzing millions of example images that have been manually labeled as “face” or “not face” and using common characteristics (such as light levels around eyes, skin tones, recurring shapes) to identify faces in other images. They do not need existing information about the subject; instead, they develop their own set of relevant characteristics from the example sets they process.
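The hypothetical Python sketch below illustrates this learning-from-examples idea on a toy scale: a tiny one-hidden-layer network trained by gradient descent to separate two clusters of synthetic labelled points. The data, network size and training settings are invented for illustration and stand in for the far larger image datasets and networks described above.

```python
# A toy neural network that learns to separate two classes purely from
# labelled examples, with no hand-written rules. The "images" here are
# just synthetic 2-D points; labels play the role of "face" / "not face".
import numpy as np

rng = np.random.default_rng(0)

# Synthetic labelled examples: class 1 clustered around (1, 1), class 0 around (-1, -1).
X = np.vstack([rng.normal( 1.0, 0.5, size=(200, 2)),
               rng.normal(-1.0, 0.5, size=(200, 2))])
y = np.hstack([np.ones(200), np.zeros(200)])

# One hidden layer with 8 units, sigmoid output.
W1 = rng.normal(0, 0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(500):
    # Forward pass
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()         # predicted probability of class 1

    # Backward pass: gradients of the cross-entropy loss
    d_out = (p - y)[:, None] / len(y)
    dW2 = h.T @ d_out;  db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)      # back through tanh
    dW1 = X.T @ d_h;    db1 = d_h.sum(axis=0)

    # Gradient descent step
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The network is never told what distinguishes the two classes; it extracts the relevant characteristics from the labelled examples alone, which is the point the explainer above makes about face recognition at a much larger scale.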

Prof. Stuart Russell, UC Berkeley: “We need a more organic form of combining perception and reasoning.”

Dr. Stuart Russell is Professor of Electrical Engineering and Computer Sciences at UC Berkeley and Adjunct Professor of Neurological Surgery at UC San Francisco. He writes:

“The biggest contribution that AI has made is this notion of knowledge-based systems that have internally-represented knowledge and programs that operate on that knowledge to do reasoning.”

“That’s occurred in many different forms in logic and probability theory, but it’s pretty unique to AI. The current interest in machine learning is exciting, but there’s a core question: What’s the purpose in asking a computer to identify Buicks and daisies and German shepherds if it can’t reason with that knowledge and say, ‘That’s a Buick; maybe it’s a police car’? There’s almost no point in learning to recognise objects unless you know what’s in front of you and you can reason on that basis. Knowledge-based systems allow that.”

“The hope when we were working on neural networks in the 80s was that we could connect reasoning systems to cameras and speech so that they could perceive the world naturally. That hasn’t happened – our systems are very good at seeing, but they don’t perceive, although you see it in very primitive forms. The first generation of Google cars had perception based on deep learning – it treated the output of that perception as an incontrovertible truth to feed a 1970s-style rule-based system saying, ‘You should go into the left lane now.’ You need billions of rules before you get robust performance. We need a more organic form of combining perception and reasoning. People have a feedback loop between eyes and brain – the brain doesn’t just react to what the eyes see, it also governs what we perceive, what we recognise has just happened and is going to happen, what we can ignore, what we pay special attention to. We don’t really have that right now in AI systems.”

Knowledge-based systems

A knowledge-based system is a computer program that reasons and uses a knowledge base to solve complex problems. To do that, it has an extrinsic worldview built into the system that allows it to infer conclusions from data. For example, a marketing company will want to target different advertising to people depending on which stage of life they’re at. If a user uploads a video of themselves on their wedding day, image identification can recognise a wedding dress. However, a knowledge-based system would have the extrinsic knowledge to infer that a wedding dress means a wedding, which is a life-changing event that alters people’s buying habits, and can change the advertising that that person sees accordingly.
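The hypothetical Python sketch below illustrates that inference chain in miniature: a detected label (“wedding dress”) is passed through a small, explicitly supplied knowledge base to reach the higher-level conclusions that could drive an advertising decision. The labels, links and ad segments are invented for illustration, not drawn from any real system.

```python
# A sketch of the inference chain described above: image recognition supplies
# a low-level label, and a small knowledge base lets the system draw
# higher-level conclusions from it.

# Background knowledge: "X implies Y" links given to the system explicitly.
knowledge_base = {
    "wedding_dress": "wedding",
    "wedding": "major_life_event",
    "major_life_event": "changed_buying_habits",
}

# Ad segments attached to inferred facts (illustrative values only).
ad_segments = {"changed_buying_habits": "home, insurance and travel offers"}

def infer(label, kb):
    """Follow implication links from a detected label to everything it entails."""
    conclusions = []
    while label in kb:
        label = kb[label]
        conclusions.append(label)
    return conclusions

detected = "wedding_dress"          # output of the image-recognition step
facts = infer(detected, knowledge_base)
print("inferred:", facts)
for fact in facts:
    if fact in ad_segments:
        print("target with:", ad_segments[fact])
```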

Elizabeth Ling, Elsevier: “AI is already used in many systems in society. … They just don’t look like people expect.”

Elizabeth Ling, SVP of Web Analytics at Elsevier, writes:

“Deep learning, as everyone knows, is a hot topic. We’re using it quite a lot at Elsevier. What’s been exciting in the last 10 years has been the way the research on patterns has progressed and how that’s affected computer vision. That’s probably the area where deep learning has had the biggest impact. You see it in self-driving cars, but in medical imaging, that same process can be used to identify whether you have certain types of cancer with much greater accuracy. Taking that kind of image extraction and linking it to natural language processing and then applying it to a health problem is something I find very interesting.”