“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

This open letter, signed by many AI experts, may seem very commendable at first glance.

Yes, at first glance…

But if we take a closer look at this initiative, is this open letter really a relevant step towards responsible AI?

First of all, this open letter was published by a non-profit organization ā€œaiming to steer transformative technology towards the benefit of life and away from large-scale extreme risksā€. Again, at first glance, this organization seems to be working for the public good. But by digging a little deeper… it turns out it was financed from the beginning by our favorite blue-bird guy – everyone will recognize him šŸ˜‰ – who is still one of the organization’s influential advisors and who also signed this open letter. A very nice man, recognized by everyone for his unimpeachable ethics concerning AI, isn’t he? (Am I being ironic here? … No, no… um… just a little…)

So, already, this open letter is off to a poor start as far as ā€œethicsā€ in artificial intelligence research is concerned…

Then… the letter asks for a pause in research on models more powerful than GPT-4. OK. Why not…

Um, OK, now let’s dig a little deeper into this topic…

What is the problem with GPT-4 (and, I would add, with the other models known as ā€œgenerative AIā€)?

Clearly, there are many problems, and they are worrying ones!

For example:

– the creation of misleading or malicious content, fake news, and political propaganda;

– the perpetuation and amplification of all kinds of biases (social, ethnic, religious, gender, …), rooted in the very nature of these Deep Learning models and in the opacity of what is commonly called their ā€œblack boxā€ (a simple way to quantify such a bias is sketched just after this list);

– the risk that humans will eventually ā€œthink and create lessā€ by over-relying on these generative AI models;

– and other concerns such as literary, academic, or artistic plagiarism.
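To make the bias point a little more concrete, here is a minimal, hypothetical sketch of one standard way auditors quantify such a bias: the ā€œdemographic parityā€ gap between groups. The predictions, group labels, and numbers below are invented purely for illustration; a real audit would of course use a real model’s outputs on real populations.

```python
# Minimal, hypothetical sketch: measuring the "demographic parity" gap,
# i.e., the difference in favorable-outcome rates between two groups.
# The predictions and group labels are invented for illustration only.

def demographic_parity_gap(predictions, groups):
    """Return (gap, per-group rates) of positive outcomes."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    values = sorted(rates.values())
    return values[-1] - values[0], rates

# Toy data: 1 = favorable decision (e.g., loan approved), 0 = unfavorable.
predictions = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(predictions, groups)
print(f"Positive rates per group: {rates}")   # A: 0.80, B: 0.20
print(f"Demographic parity gap: {gap:.2f}")   # 0.60
```

A large gap like this does not prove discrimination by itself, but it is exactly the kind of signal that the opacity of a ā€œblack boxā€ model makes hard to trace back to a cause.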

So, obviously, the pause requested here seems quite relevant.

Except that…

Several legitimate questions arise here…

First of all, let’s assume this request for a pause is heeded. What would its effects be?

Let’s imagine a pause of six months, or even a year, during which tools for controlling and regulating models such as generative AI systems trained on huge amounts of data could be developed. OK. Fine.

But at the end of this break, what will happen?

Research will continue just as it does today, i.e., controlled by a small group of powerful tech companies and universities whose goal will always be to produce as many papers as possible in order to obtain as much funding as possible. And the regulatory tools will inevitably be circumvented, and then… we take another six-month break? A break every year, every two years?

That doesn’t sound very serious, does it?

But then, what can we do, since this open letter obviously raises real issues?

What we need is a call, an open letter, for a change of philosophy concerning ALL research around AI (and not only the overly narrow field of ā€œAI systems more powerful than GPT-4ā€!). What could this radical change of philosophy, which seems inevitable if we want to move towards responsible AI systems, consist of?

Well, it would be good to ask ourselves from the very beginning of a research project: ā€œWhat will this research be used for? Will it lead to models that serve the public good or not?ā€

More precisely: before any research begins, let’s define one or more concrete objectives for improving living conditions in our societies (social and ecological criteria, for example) and examine whether the research meets these criteria. If it does, fine; if not, let’s move on!

To summarize: let’s build together one or more independent ethics committees for all AI research – multi-domain committees, not composed solely of AI technicians, and diverse in social background, gender, and community – modeled on international organizations such as the International Atomic Energy Agency (IAEA), which regulates and controls nuclear energy.

These committees, applying criteria of social and ecological progress, coupled with robust interpretability of AI research models (an absolutely essential point for avoiding discriminatory bias), would give the green light, or not, to research projects.

In this, I fully subscribe, for example, to the approaches and visions of responsible women such as Timnit Gebru and Alex Hanna of the DAIR Institute, or Mia Shah-Dand, who have been advocating a ā€œslowing downā€ of AI for a long time now!

In the end: a pause in AI research? No, that’s not enough – do more! A genuine change of direction, happening quickly? Yes, totally yes!

______________________________________
Image by Freepik
Pierre Pinna
IPFConline Digital Innovations CEO & Speaker | Artificial Intelligence Engineer (Natural Language Processing Specialist) | Economics of Innovation | And most of all: responsible AI must be the norm! And I’ll be there to advocate for it!
