So, why this title? Simply because artificial intelligence raises questions at every level of society, for experts and laypeople alike.

Will robots take our jobs? Will AI destroy humanity and dominate the Earth? And also: how do we overcome the current limits of Deep Learning, and how do we explain the decisions made by AI systems, decisions coming from a so-called "Black Box" that does not inspire much confidence…

All these questions reveal an anxiety (quite legitimate: human beings, and living species in general, are not fond of novelty and the uncertainty it brings) about the advent of a new digital era, with Artificial Intelligence (and Big Data) as its engine.

This article is intended to shed some light on the questions posed above.

Will robots take our jobs?

The raw answer is pretty simple: YES! But this answer obviously needs some (very important!) nuance, in the sense that we have to reason in the short, medium, and long term, a principle that applies to all major innovations that have transformed human history.

Short term (2 years): AI will create new jobs while destroying far fewer.

Medium term (5 years): jobs destroyed and jobs created will roughly balance out.

Long term: AI will eliminate more jobs than it creates once the new ecosystem is in place. But this will not be catastrophic (quite the contrary!) if, and only if, during this decade our entire economic, social, and educational system adapts to make this ecosystem sustainable.

Will AI destroy humanity and dominate the Earth?

Scary question… woooo!… but actually a lot of fun!

We really have to forget the science-fiction movies (even if I love them, oh yes 😉), Terminators and the like… We are currently far from an AGI (Artificial General Intelligence) that could surpass us and perhaps wipe us off the face of the Earth to satisfy its thirst for domination. So, even if AGI is for now more fantasy than reality, will robots destroy us one day? To that question I will answer with another: will aliens one day come to invade and liquidate us?… Anything is possible, of course, but for now we can still sleep soundly! 🙂

How to go beyond the current limits of Deep Learning?

Even if the current techniques used in Machine Learning, and more particularly Deep Neural Networks (the heart of Deep Learning), already deliver incredible results in many areas of our daily lives, sometimes surpassing human performance in fields as diverse as medicine, marketing, finance, and industry, several well-known barriers are still holding back the development of AI. Among these obstacles, the most important are:

– Explaining the decisions made by AI systems designed as "Black Boxes";

– A better understanding of the brain (human or animal), so that collaboration between neuroscientists and AI experts can inspire new algorithms.

      The Black Box

Because of their structure, neural networks constitute a kind of "black box". To simplify: these deep learning systems can perform up to millions of billions of operations to reach a result (for example, the Convolutional Neural Networks that process images and video, essential for autonomous vehicles among others). It is therefore easy to see why the procedure used to reach the result is so hard to explain! For some of the most prominent AI experts, including Yann LeCun, not being able to explain an AI system's decision is not a problem. Why? Simply because the human brain also works as a black box, and that does not prevent us from respecting the decisions humans make. The only thing that matters, then, is the quality of the result: if a machine is recognized as more accurate than human specialists in a field, why doubt its decisions?

This point of view is certainly defensible. But one can object that humans have been used to dealing and working with other humans and their "human black box" since the beginning of humanity, whereas working with an "artificial black box" is still quite recent and not yet part of our habits.

Thus, for example, a doctor will have a hard time accepting an AI's decision without knowing how it was reached! Likewise for the denial of a loan, or any other case where explaining an important decision is crucial for it to be accepted.

Among the techniques that can be used to crack open the black box, we can cite reverse engineering (starting from the result and working backwards to dissect the steps that produced it); other possibilities will be examined in my next articles.
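To give an intuition of how one can probe a black box from the outside, here is a minimal sketch of a perturbation-based approach: nudge each input slightly and observe how the output moves, revealing which inputs drive the decision. The `black_box` model and the `sensitivity` helper below are hypothetical toys for illustration, not an actual production explainability tool.

```python
def black_box(features):
    # Toy stand-in for an opaque model: a weighted sum whose
    # internals we pretend we cannot inspect.
    weights = [0.7, 0.1, 0.2]
    return sum(w * f for w, f in zip(weights, features))


def sensitivity(model, features, eps=1e-4):
    """Estimate each feature's influence by perturbing it slightly
    and measuring how much the model's output changes."""
    base = model(features)
    scores = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] += eps  # nudge one feature, keep the others fixed
        scores.append((model(perturbed) - base) / eps)
    return scores


scores = sensitivity(black_box, [1.0, 2.0, 3.0])
print(scores)  # the first feature dominates the model's decision
```

The same idea, applied more systematically, underlies several real explainability methods that fit local surrogate models around a prediction by sampling perturbed inputs.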

Much research is currently underway in this direction. IBM, for example, is making great efforts to explain the decisions made by its AI systems, including IBM Watson for Health Tech.

      Collaboration

Collaboration between neuroscientists and machine learning specialists is now essential to gradually create new algorithms that will be easier to explain and closer to the functioning of "living" neural networks. Blockchain could then be very effective for sharing this knowledge globally, so that it does not remain the property of a limited number of people.

In the end… the human being will remain the one and only master of his destiny… for better or for worse! Everyone will choose their camp 😉

Pierre Pinna
IPFConline Digital Innovations CEO & Speaker. Artificial Intelligence Engineer (Natural Language Processing specialist). Economics of Innovation. And most of all: responsible AI must be the norm, and I'll be there to advocate it!
