Artificial Intelligence

A New Concept in AI: Transparency That Isn’t Transparent – Amazing, isn’t it?

Ask ChatGPT how it “thinks”, how its new models reason (models specifically designed to improve the transparency and interpretability of their results)… Well, none of that here! Thank you, but there’s nothing to see! You’re banned now!


I read this article in Wired yesterday:

“OpenAI threatens to ban users interested in its ‘Strawberry’ AI models. If you try to understand how OpenAI’s o1 models solve problems, you may get a nasty message.” https://www.wired.com/story/openai-threatens-bans-as-users-probe-o1-model/

And while it may seem a bit “anecdotal”, it isn’t at all: this is a textbook case for understanding some of the most important issues in #AI today.

OpenAI has warned users of its latest AI model, “Strawberry” (the o1 family), not to explore its internal mechanisms, on pain of sanctions up to and including a ban from ChatGPT and its successors. Reportedly, just typing the words “reason” or “reasoning” can trigger an alert. Scary, isn’t it?!? ;))
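To see just how blunt such a trigger would be, here is a minimal, purely hypothetical Python sketch of a keyword-based filter of the kind users describe. OpenAI has not disclosed its actual detection mechanism; the term list and matching logic below are assumptions for illustration only.

# Hypothetical illustration only: a naive keyword trigger of the kind
# users report. OpenAI's real mechanism is undisclosed; this term list
# and matching logic are assumptions, not its actual implementation.

FLAGGED_TERMS = {"reason", "reasoning", "chain of thought"}

def is_flagged(prompt: str) -> bool:
    """Return True if the prompt contains any flagged term (case-insensitive)."""
    text = prompt.lower()
    return any(term in text for term in FLAGGED_TERMS)

# A filter this crude flags perfectly innocent questions:
print(is_flagged("Show me your chain of thought."))       # True
print(is_flagged("What is the reason for the seasons?"))  # True (false positive)
print(is_flagged("What is the capital of France?"))       # False

Even this toy version makes the point: a blanket keyword trigger punishes ordinary curiosity just as readily as genuine probing.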

Curiosity killed the cat, as they say…
So what can we learn from this (admittedly rather… “obscure” ;)) move by OpenAI?

Opacity and strict process control: a (big) step backwards for transparency in Artificial Intelligence.

OpenAI has long been a fervent advocate of transparency in the development of AI models (their name says it all: OPENai…). Those days are definitely over for them!

It’s a good illustration of the struggle between advocates of open and accessible AI (often only more or less “open” in reality…) and those who, in the name of model security, defend a degree of design opacity that prevents the very independent external security audits their models need: the snake biting its own tail, as it were.

OpenAI’s evolution towards a more controlled approach sheds light on the present and near future of AI development. But this pursuit of profitability through the “protection” of “know-how” could hold back other stakeholders and limit the scope for improving AI in general. It also hinders progress towards more responsible systems and models by restricting the collaboration needed to advance the state of the art, as has always been the case in every science.

The AI community seems as divided as ever on whether closed AI promotes safety or stifles the progress of AI models. The debate on AI transparency has only just begun. But one thing is certain: by closing off its models and thereby stifling collaboration, we will never manage to build an ecosystem of #ResponsibleAI, only a “semblance” of ethical AI. Just a semblance.

Collaboration (not inward-looking attitudes; not only the sciences but also past and contemporary human history provide abundant examples of this…) has always been, and will no doubt long remain, the key to progress for the public good.

… COLLABORATION is simply the Key!

Collaboration, open/closed models

Pierre Pinna

CEO & Speaker, IPFConline Digital Innovations
Computer Science Engineer (AI / Natural Language Processing Specialist)
Doctorate-level degree in Innovation & Management of New Technologies