
… About Biases in Artificial Intelligence…

AI “neophytes” often ask me questions like these: “Pierre, you’re a specialist (OK! :)))), so: what are the biases in artificial intelligence? Why are there biases, and hallucinations, in ChatGPT for example? Where do these biases come from, and how do we get rid of them?”

The last time I was asked these questions was just today! And here (in a nutshell, of course) was my answer:

AI systems such as ChatGPT, Large Language Models in general and any other Machine Learning-based model are all built on statistical models. They explore a set of human-generated data (a very vast one in the case of LLMs) and derive their output from that data. There is no reasoning here, only a probability that the answer provided is right: the model simply produces the word statistically most likely to come next in the sentence, as learned from that vast dataset. So, if the human data is biased, the output of an AI trained on it can only be biased; that is just logic. Hence the importance of having “quality” data, unbiased or at least as unbiased as possible. As for ChatGPT, it now runs on GPT-4o, the updated version of the underlying large language model; OpenAI has not fully disclosed its training data, but its predecessor GPT-3 was already trained on a massive corpus of around 570 GB of filtered text, including web pages, books and other sources, which is colossal, and human biases are obviously plentiful in such datasets.
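To make the “no reasoning, just probability” point concrete, here is a minimal sketch in Python, using nothing beyond the standard library. The toy corpus and the bigram counting are invented purely for illustration; real LLMs are neural networks trained on vastly more data, but the principle of predicting the statistically most likely next word is the same.

```python
from collections import Counter, defaultdict

# Toy "human data": four invented sentences, deliberately skewed so that
# "said" is followed by "he" three times out of four.
corpus = [
    "the doctor said he was busy",
    "the doctor said he was late",
    "the doctor said he was tired",
    "the doctor said she was busy",
]

# Count word-to-next-word transitions: a bigram model, the simplest
# possible statistical model of language.
transitions = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        transitions[current][nxt] += 1

def most_likely_next(word):
    """Return the most probable next word: no reasoning, just counting."""
    counts = transitions[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

word, prob = most_likely_next("said")
print(f"After 'said' the model predicts '{word}' (p = {prob:.0%})")
# -> After 'said' the model predicts 'he' (p = 75%)
# The model simply mirrors the skew of its training data.
```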

AI biases, then, are just human biases. And there are two ways of eliminating, or more realistically at least minimizing, the biases present in AI models. Either we work on the data to “debias” it as much as possible, as sketched below; this takes a lot of time and therefore a lot of financial resources, and it depends only on the human will to take this path towards a more responsible AI… which is unfortunately not the path Big Tech is taking at the moment… Or we eliminate human biases themselves… and there… you already have the answer… It should also be noted that AI models tend to accentuate the human biases they encounter, hence the even greater need to put in the means, and the will, to build AI systems that are transparent, reliable and interpretable: what we might call “Responsible AI”.
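To illustrate that first path, working on the data, here is a continuation of the same toy sketch: the corpus is rebalanced before training so the skewed completion no longer dominates. Plain duplication of under-represented sentences is the crudest conceivable technique, shown only to make the idea tangible; real debiasing pipelines are far more involved (and far more expensive, as noted above).

```python
from collections import Counter

# Same invented toy corpus as in the previous sketch.
corpus = [
    "the doctor said he was busy",
    "the doctor said he was late",
    "the doctor said he was tired",
    "the doctor said she was busy",
]

def completions_after(word, sentences):
    """Count which words follow `word` across the given sentences."""
    counts = Counter()
    for s in sentences:
        words = s.split()
        for current, nxt in zip(words, words[1:]):
            if current == word:
                counts[nxt] += 1
    return counts

print("before:", dict(completions_after("said", corpus)))
# -> before: {'he': 3, 'she': 1}

# "Debias" the data itself: oversample the under-represented sentences
# until both completions of "said" are equally frequent.
she_sents = [s for s in corpus if "said she" in s]
he_sents = [s for s in corpus if "said he" in s]
balanced = corpus + she_sents * (len(he_sents) - len(she_sents))

print("after: ", dict(completions_after("said", balanced)))
# -> after: {'he': 3, 'she': 3}
# A bigram model retrained on `balanced` now predicts 'he' and 'she'
# after 'said' with equal probability: the bias came from the data,
# and it is removed in the data.
```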

And finally, one story that has just come to light and which shows how persistent human bias is in our societies… Breaking news today: the artist Marjane Satrapi has refused the French nation’s highest distinction, the Légion d’Honneur, because of “France’s hypocritical attitude towards Iran”. The Franco-Iranian cartoonist and film director says she doesn’t understand “France’s policy towards Iran”, particularly when it comes to granting visas, and describes her refusal as a “mark of solidarity with Iranians, especially with women and with Iranian youth”.

And she adds:

“Supporting the women’s revolution in Iran cannot be summed up by photos with victims or celebrities at commemorations for the death of Mahsa Amini”, a young woman arrested for breaking the Islamic Republic’s strict dress code, whose death sparked a vast protest movement in 2022.

“Iranians don’t need communication, we need concrete action”, argues Marjane Satrapi, who wishes “that France remains true to itself”, faithful to the values it is known for: defending human rights, equality and justice.

“For some time now, I’ve found it really hard to understand France’s policy towards Iran”, she continues, lamenting the fact that “young freedom-loving Iranians, dissidents and artists are denied visas”, including tourist visas, while the children of “Iranian oligarchs” “stroll around Paris and Saint-Tropez with no problem whatsoever”…

… a typical example of the bias of humans and of our societies… sadly one among many others…

Pierre Pinna

IPFConline Digital Innovations CEO & Speaker
Computer Science Engineer (AI-Natural Language Processing Specialist)
Doctorate-level degree in Innovation & Management of New Technologies
