For a long time now, we have kept reading that robots will destroy us, that they will take our jobs, and, more generally, that they will be harmful to human beings… Fine…

That being said and proclaimed, I will strive to demonstrate that artificial intelligence is no more dangerous than human beings, and that AI can give us innovative new opportunities to reach a more just and honourable world.

“Man is a wolf to man” – Thomas Hobbes

Indeed, as the British philosopher Thomas Hobbes pointed out in his book “Leviathan”, human beings do not need anyone’s help to harm themselves. We possess nuclear weapons capable of destroying our entire species and leaving an Earth plunged into darkness and radiation for thousands of years. Was it robots that created this? 20 million deaths in 4 years during the First World War… the fault of Artificial Intelligence, perhaps? Slavery, religious and ethnic wars, pollution, food produced with the help of tons of deadly pesticides, global warming… and so on and so on: the list of facts showing that humans need no one, and especially not AI, to harm humanity would be very long.

So, that being said, what exactly are we accusing Artificial Intelligence and robots of?

That they could destroy us? … Hmm … that’s nothing new: as we have seen, we can exterminate ourselves perfectly well without the help of robots…

The main argument of those who construct a fictitious bogeyman to warn us about the dangers of AI is that robots could get out of control and, why not, one day create a world full of Terminators and Skynet companies. This argument is really “light”, I think, and I will try to show you why.

No Terminators unless human beings decide it!

And no, the Terminator will not come to life without the approval of humans! Let that be said once and for all!

How can we say that?

Well, simply because Artificial Intelligence is only “the result” of human programming (even though an AI can actually re-program itself).

Let’s go into some detail (understandable by all) and see what emerges:

Imagine that a robot or another AI-driven entity decides to harm a human being, for example by killing him. This robot was previously programmed by humans. If its programming contains no clause stating that the robot does not have the right to kill a human, then the robot can kill a human… but simply because we granted it permission to do so. That is a simple case where it is still man who is a wolf to himself. The robot is then merely one more weapon of nuisance among the huge arsenal we already own.
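To make this concrete, here is a minimal sketch in Python of what such a “clause” could look like: a simple filter that refuses any action we have listed as harmful. All the names here (HARMFUL_ACTIONS, SafetyError, execute) are purely hypothetical, not any real robotics API; without such a clause, nothing stops the harmful action, and the responsibility is entirely ours.

```python
# Minimal illustrative sketch: a safety "clause" as an action filter.
# All names are hypothetical; this is not a real robotics API.

HARMFUL_ACTIONS = {"strike_human", "fire_weapon"}  # actions we choose to forbid

class SafetyError(Exception):
    """Raised when a requested action violates the safety clause."""

def execute(action: str) -> None:
    # The clause itself: refuse any action listed as harmful to humans.
    if action in HARMFUL_ACTIONS:
        raise SafetyError(f"Refused: '{action}' would harm a human being.")
    print(f"Executing: {action}")

execute("open_door")  # allowed

try:
    execute("strike_human")  # forbidden by the clause
except SafetyError as err:
    print(err)
```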

A more interesting case: our robot has been programmed not to harm humans… very well… but, since this robot (or this AI) is endowed with « intelligence », couldn’t it re-program itself to cancel the rule of non-harm to human beings and thus, in the end, autonomously turn against us to enslave us or wipe us off the map?

This question is one of the main ones asked by those who warn us about the dangers of AI.

But, hey, that simply does not hold water!

Let’s see why, in a very simple way.

So, imagine that our robot decides to turn into a bad Terminator. Great (for him ;). It will thus autonomously erase its initial programming, which forbids it from harming us.

But then, let’s imagine that the « non-aggression rules » protecting human beings have been written into a non-erasable memory (something similar to a basic ROM). The robot obviously won’t be able to erase these safety rules. One can still argue that the robot could simply ignore the rules and act as it sees fit. If that happens, let’s also place in this non-erasable memory a rule that says: « If the robot does not follow the non-aggression rules, then re-inject these non-aggression rules into the robot indefinitely! ». This would paralyze and block the action of our slightly-too-crafty robot-machine with a simple « overflow » loop!
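As a toy illustration of this “re-inject the rules indefinitely” idea (again in Python, with hypothetical names, and an immutable mapping standing in for the ROM): a watchdog restores the non-aggression rules every time the robot’s working copy drifts from them, so the tampering never takes effect.

```python
from types import MappingProxyType

# Immutable "ROM": a read-only view of the non-aggression rules.
ROM_RULES = MappingProxyType({"harm_humans": False})

# Mutable copy the AI actually consults (and might try to rewrite).
active_rules = dict(ROM_RULES)

def watchdog_step() -> bool:
    """Re-inject the ROM rules if the active copy has been tampered with.
    Returns True while tampering is being corrected."""
    if active_rules != dict(ROM_RULES):
        active_rules.clear()
        active_rules.update(ROM_RULES)  # restore the rules, indefinitely
        return True
    return False

# The rogue AI flips the rule...
active_rules["harm_humans"] = True

# ...but every watchdog cycle restores it, so the robot never acts on it.
while watchdog_step():
    pass  # in a real system this loop would run on protected hardware
print(active_rules)  # {'harm_humans': False}
```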

So, finally, we can imagine that our smart robot, having understood all this, simply decides to get rid of the chip containing the rules that annoy it, for example by disassembling the chip itself or by powering it off. To prevent this, we can also imagine that this chip, when attacked, triggers an internal mini-bomb that neutralizes the AI from the inside (and obviously, if our friendly robot decides to disable this bomb, then, of course, our future Terminator implodes too! :).
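And a last toy sketch of such a tamper response, once more with hypothetical names: a software “dead man’s switch” that neutralizes the robot the moment the safety chip goes missing or is damaged (here, ending the program stands in for the « mini-bomb »).

```python
import sys

class SafetyChip:
    """Toy stand-in for the tamper-protected rules chip."""
    def __init__(self) -> None:
        self.present = True
        self.intact = True

def tamper_response(chip: SafetyChip) -> None:
    # Dead man's switch: any attack on the chip neutralizes the robot.
    if not (chip.present and chip.intact):
        print("Tampering detected: neutralizing robot.")
        sys.exit(1)  # stands in for the internal "mini-bomb"

chip = SafetyChip()
tamper_response(chip)   # chip healthy: nothing happens

chip.present = False    # the robot rips the chip out...
tamper_response(chip)   # ...and is neutralized immediately
```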

We can imagine many other, more sophisticated and complex security systems, but these small examples show that, if we want to, we can control AI entities; everything comes down to the human will to use AI to build a better world… and the possibilities Artificial Intelligence offers us to make our world better are, and will remain, huge, of course. Do not forget it!

Pierre Pinna
IPFConline Digital Innovations CEO & Speaker. Artificial Intelligence Engineer (Natural Language Processing Specialist). Economics of Innovation. And most of all: responsible AI must be the norm! And I'll be there to advocate it!
