The Future of Abstract Artificial Intelligence

This blog post was originally published by Bitfury. It is reprinted here with the permission of Bitfury.

AI is having a significant and incredible impact on the world around us. Recently, a research center applied artificial intelligence to lung X-rays to detect cancer and COVID-19 earlier; serious strides are being made in fully autonomous driving systems; and several pharmaceutical companies are using AI to accelerate drug and vaccine development. From these examples alone, it is clear that AI is improving our lives faster than any previous technology.

However, if you look more closely at the state of AI right now, there are two major obstacles to its broader adoption:

  • Scope: AI applications are very narrow at the moment. They are optimized to solve specific tasks in specific scenarios and cannot be applied to general tasks.
  • Learning: Today’s AI systems need massive amounts of data to be trained, and in many cases must be supervised to ensure that they are performing correctly.

For these reasons, we are very far away from widespread and holistic use of artificial intelligence. Unfortunately, these systems are just not able to complete generalized tasks, and in many ways, humans still outpace AI in being able to solve complex puzzles that require common sense and abstraction. In fact, by changing just a few pixels in a photo or inserting “noise” into the image, we can completely stump an AI application.
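
To make the “few pixels” point concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one standard way such perturbations are crafted. The classifier, tensors and epsilon value are illustrative placeholders, not details from any specific study mentioned above.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Nudge every pixel of `image` slightly in the direction that increases the loss.

    model:   any differentiable classifier
    image:   a (1, C, H, W) float tensor with values in [0, 1]
    label:   a (1,) long tensor holding the correct class index
    epsilon: bounds the per-pixel change, so the edit is nearly invisible to a
             human but is often enough to flip the model's prediction.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()   # the "noise" mentioned above
    return adversarial.clamp(0.0, 1.0).detach()
```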

In general, most of the techniques developed for AI come from observing the human brain. The neural networks used in AI are a simplified model of our mind. And yet, AI applications cannot work the way our brains can. Researchers Yann LeCun and Yoshua Bengio, co-winners of the Turing Award with Geoffrey Hinton for their contributions to deep learning, believe that we need to observe how our brains work at higher levels of abstraction. They hypothesize that there may be a few simple rules that enable intelligence, just as the rules of physics define the physical world around us. If we could discover these rules, we could use them to unleash the full power of AI to tackle abstract tasks.

Abstract AI

Bengio is exploring the creation of a “System 2” for AI. System 2 is based on the idea that our brain is split into two sub-systems: an autonomous system and a conscious system. This theory (popularized by Daniel Kahneman in his book “Thinking, Fast and Slow,” which builds on his long collaboration with Amos Tversky) says that System 1 helps us walk, move, drive and react to the world in fast decisions, while System 2 helps us communicate, plan and reason in “slower” decision-making. If we could design AI and neural networks along these lines, it is possible they could “learn” concepts, generalize them, transfer them to other applications and even evolve themselves to be more effective.
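
As a purely illustrative toy, and not Bengio’s actual proposal, the split can be pictured as a dispatcher that answers familiar inputs with a fast, reflexive policy and falls back to a slower, deliberative planner only when the fast answer is uncertain. The function names and the confidence threshold below are invented for the sketch.

```python
from typing import Any, Callable, Tuple

def dual_process_decision(
    observation: Any,
    fast_policy: Callable[[Any], Tuple[Any, float]],   # "System 1": returns (action, confidence)
    slow_planner: Callable[[Any], Any],                 # "System 2": deliberate search or reasoning
    confidence_threshold: float = 0.9,
) -> Any:
    """Answer with the fast, habitual response when it is confident enough;
    otherwise hand the problem to the slower, deliberative system."""
    action, confidence = fast_policy(observation)
    if confidence >= confidence_threshold:
        return action                 # fast, automatic decision (walking, driving, reacting)
    return slow_planner(observation)  # slow, effortful decision (planning, reasoning, communicating)
```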

For this to be possible, we need to build a better model of the world around us and upgrade the language we use. Today’s AI applications recognize a chair by studying labeled pictures of chairs; but what if we could teach the AI that a chair is anything one can sit on? This means we would have to embed in the AI the capability to understand context and the larger scope of the action and meaning behind the concept. This kind of design would help our AI systems develop what we call “common sense,” the high level of abstraction that we develop throughout our lives. At this level of abstraction, we can disentangle basic facts from variations and specific situations, and use all of this information at once to answer multiple problems in similar but different contexts.
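
For contrast, here is a minimal sketch of how today’s systems typically learn “chair”: fit a classifier to labeled images, with no representation of what a chair is for. The tiny network, dataset shapes and training step are placeholders.

```python
import torch
import torch.nn as nn

class TinyChairClassifier(nn.Module):
    """A deliberately small image classifier: it can only map pixels to a label."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_classes),
        )

    def forward(self, x):
        return self.net(x)

model = TinyChairClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def training_step(images, labels):
    """images: (N, 3, H, W) pixels; labels: 0 = "not chair", 1 = "chair".
    The only supervision is the label; nothing encodes what a chair is used for."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```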

A good example of this is a child burning a hand on the stove. Upon touching the hot stove, the child immediately realizes, or “abstracts,” that the cause of the pain is not the stove itself but the heat it generates. From that single moment, the child learns not to touch anything hot, transferring the lesson to new contexts.

To build this kind of thinking into AI, we need to design better models for learning and memory development, and we need to build up AI consciousness.

Better Data and Memory

AI will need to be able to explore the world around it spatially and evaluate actions temporally, just as we do when we explore the world we live in. While exploring the world, AI needs to develop the capability to focus only on specific elements of a larger set of data and ignore other information. Right now, the models available to us are extremely narrow, dependent on their training data and unable to do this.
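
One widely used mechanism for this kind of selective focus is attention. Below is a minimal sketch of scaled dot-product attention, which weights a large set of elements so that only the few relevant ones influence the result; the shapes and random tensors are purely illustrative.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(query, keys, values):
    """query: (d,); keys, values: (n, d). Returns a weighted summary of `values`.

    The softmax puts most of its mass on the keys that match the query, which is
    the "focus on a few elements, ignore the rest" behaviour described above."""
    d = query.shape[-1]
    scores = keys @ query / d ** 0.5      # similarity of each element to the query
    weights = F.softmax(scores, dim=-1)   # near-zero weight for irrelevant elements
    return weights @ values, weights

# Example: five candidate elements with 8-dimensional features.
keys = torch.randn(5, 8)
values = torch.randn(5, 8)
query = keys[2] + 0.1 * torch.randn(8)    # the query resembles element 2
summary, weights = scaled_dot_product_attention(query, keys, values)
print(weights)                            # most of the weight lands on index 2
```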

We also need to develop a good model of associative and episodic memories that we can then replicate in AI, but this has not been successful so far.
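
As a toy illustration only, and not an established model of human memory, an episodic, associative store can be sketched as a list of (situation, outcome) episodes with similarity-based recall. The vectors and episode descriptions below are invented, echoing the stove example above.

```python
import numpy as np

class EpisodicMemory:
    """Store (situation, outcome) episodes; recall by similarity, not exact match."""
    def __init__(self):
        self.keys = []      # vector descriptions of past situations
        self.values = []    # what happened in each of them

    def store(self, situation: np.ndarray, outcome: str) -> None:
        self.keys.append(situation)
        self.values.append(outcome)

    def recall(self, situation: np.ndarray):
        """Associative lookup: return the outcome of the most similar stored episode."""
        if not self.keys:
            return None
        sims = [float(k @ situation / (np.linalg.norm(k) * np.linalg.norm(situation) + 1e-8))
                for k in self.keys]
        return self.values[int(np.argmax(sims))]

memory = EpisodicMemory()
memory.store(np.array([1.0, 0.0, 0.9]), "hot surface -> pain")   # the stove episode
memory.store(np.array([0.0, 1.0, 0.1]), "soft seat -> comfort")
print(memory.recall(np.array([0.9, 0.1, 0.8])))                  # recalls the stove episode
```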

Consciousness

The other critical aspect here is consciousness. Consciousness is what gives humans the ability to decompose concepts, understand complex topics and apply selective attention. Consciousness gives us the ability to sequentially focus on different aspects and be aware of the right things at precisely the right moments, eliminating all the other “noise.”

If we break down consciousness into three main computational aspects, we see that:

  • Our conscious attention allows us to focus on the specific elements that condition our planning, the actions we take and our imagination;
  • Our self-consciousness conditions the decisions we make;
  • And our subjective perception, the focus of our conscious attention, allows us to be present in the high-level abstraction space developed from our experiences, goals and emotions.

The implementation of these facets inside AI systems should help us develop a conscious system with a high level of abstraction that is good at generalization and fast at learning new concepts. It would allow AI to “decide on the fly” and define sequences of actions that are dynamically constructed, all based on the attention triggers the AI receives and the memory it can access.

Next Steps

For us to build these models for AI to learn from, we first have to drastically simplify the representations we make of our world. We need to decompose knowledge into small pieces, and then bring it back together so an AI can analyze it and use it to achieve goals. Key to this, according to Bengio, will be:

  • Further development of meta-learning, which learns from metadata, starting at a higher abstraction level and allowing faster adaptation to new scenarios (a brief sketch follows this list)
  • Implementation of interaction mechanisms at a higher level of abstraction
  • Introduction of high-level causal variables and their dependencies
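
As a concrete, heavily simplified sketch of the meta-learning idea in the first bullet, here is the Reptile algorithm, a simple first-order meta-learning method: a shared initialization is repeatedly nudged toward weights that were adapted to individual tasks, so a new scenario can be learned in only a few gradient steps. The model, tasks and learning rates are placeholders, and this is not Bengio’s specific proposal.

```python
import copy
import torch
import torch.nn as nn

def reptile_meta_step(model, task_batch, inner_lr=0.01, meta_lr=0.1, inner_steps=5):
    """One meta-update: adapt a copy of the weights to each task with a few
    gradient steps, then move the shared initialization toward each adapted
    solution, so that future tasks can be learned quickly."""
    meta_weights = copy.deepcopy(model.state_dict())
    for inputs, targets in task_batch:                     # each task is an (inputs, targets) pair
        model.load_state_dict(meta_weights)                # start from the shared initialization
        optimizer = torch.optim.SGD(model.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                       # fast adaptation to this one task
            optimizer.zero_grad()
            loss = nn.functional.mse_loss(model(inputs), targets)
            loss.backward()
            optimizer.step()
        adapted = model.state_dict()
        with torch.no_grad():                              # interpolate toward the adapted weights
            for name in meta_weights:
                meta_weights[name] += meta_lr * (adapted[name] - meta_weights[name])
    model.load_state_dict(meta_weights)

# Hypothetical usage: model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
# and task_batch = [(x_1, y_1), (x_2, y_2), ...], one small regression task per pair
# (for example, sine waves with different phases and amplitudes).
```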

Each of these will help us reduce the complexity of how we represent the world around us, which (in theory) should help us unleash a far more powerful AI that can think abstractly and handle generalized tasks. This will lead to a brand new wave of discoveries that will shape our world for the next decade and beyond.

Machines will become more intelligent as we find a way to integrate both System 1 problem-solving and System 2 consciousness, reasoning and abstraction. They will have broader goals and will expand their fields of action, helping the human race improve the world around us. It is now up to governments and institutions to ensure that artificial intelligence is fair, accessible to everyone and beneficial to every human being. Real progress happens when AI can be used to benefit the lives of everyone.


References & Learning More

About Abstract Learning:

YouTube: These Natural Images Fool Neural Networks

YouTube: Tesla on Autopilot Crash: important to understand how we’ll investigate causes of crashes like these in the future

About the Researchers:

Talks by Yann LeCun (his public Google Drive)

Presentations by Yoshua Bengio

Terms:

Metadata For Dummies

Definition of Metadata (Wikipedia)

About Meta Learning

Fabrizio Del Maffeo
Head of Artificial Intelligence, Bitfury
