Posted on December 12, 2022 at 6:00 PM. Updated December 12, 2022 at 6:39 PM.
How does ChatGPT, the AI chatbot that has been all the rage these past few weeks, actually work? “I was trained on a large number of online texts,” it replies vaguely when asked. It is difficult to learn much more. Although the Californian start-up OpenAI says it wants to “help humanity benefit” from the progress of AI, its research work remains very “opaque”, laments Lê Nguyên Hoang, a computer science researcher and expert on the ethics of algorithms.
“What we do know is that a first algorithm, called ‘GPT-3.5’, was trained by OpenAI with the objective of learning statistical regularities in texts”, he explains. “It is prediction work, to put it simply. By analyzing a great deal of text, the algorithm learns to guess how a sentence unfolds.”
ChatGPT can thus “speak like Wikipedia”. It is also possible to ask it to write a text “in the manner of Molière”, for example, because it has ingested those texts along with their associated metadata.
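As a toy illustration of this “guess how a sentence unfolds” idea (a deliberately crude sketch, nothing like OpenAI’s actual training), next-word prediction can be reduced to counting which word most often follows another in a corpus:

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; real models see billions of sentences.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count, for each word, which words follow it and how often.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Return the word most frequently observed after `word`."""
    return counts[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often in this corpus
```

Modern language models replace these raw counts with a neural network conditioned on a long context, but the underlying objective, predicting the likely continuation, is the same.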
“King – male + female = queen”
These computer models take a mathematical approach to language: they represent words as learned vectors that take into account the context of neighboring words across large volumes of text. “Other models work on the same principle, such as BERT, a language model developed by Google”, notes Thierry Poibeau, director of research at the CNRS.
“Text generation is something we have been very good at for years. To produce the next word in a text, this type of model uses words grouped by semantic field, positioned relative to one another”, explains the researcher. “You could say these models do word equations; that is what we call vectors. For example, we can compute ‘king – man + woman = queen’”, he summarizes.
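The “word equation” can be sketched with hand-crafted toy vectors (real embeddings are learned from data and have hundreds of dimensions; here two dimensions loosely encode “royalty” and “gender”):

```python
import math

# Toy 2-D embeddings: (royalty, gender). Illustrative values only.
vec = {
    "king":  (0.9,  0.8),
    "man":   (0.0,  0.8),
    "woman": (0.0, -0.8),
    "queen": (0.9, -0.8),
}

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))

def cos(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# king - man + woman ...
result = add(sub(vec["king"], vec["man"]), vec["woman"])

# ... lands closest to "queen" in the toy vocabulary.
nearest = max(vec, key=lambda w: cos(result, vec[w]))
print(nearest)  # queen
```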
A corpus of texts… stolen from the Web?
During the training phase, these text-generation systems are fed textual data. “They will see billions of sentences, several orders of magnitude more than we can read or hear in an entire human lifetime”, the scientist points out.
“I think they indexed everything that was available on the web up to 2021. Even where there are copyrights, they ignore them”, says Thierry Poibeau. “It probably comes from social networks such as LinkedIn, GitHub, Reddit and Twitter, where the data can easily be downloaded”, adds Lê Nguyên Hoang.
Chatbots no longer have “the memory of a goldfish”
This chatbot is distinctive in its emphasis on conviviality and dialogue: it “remembers” what the user asked earlier and responds accordingly. “A few years ago, chatbots had the vocabulary of a dictionary and the memory of a goldfish. Today that is no longer the case,” observes researcher Sean McGregor.
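This “memory” can be sketched in a few lines (an assumption about the general mechanism, not OpenAI’s code): the model itself is stateless, so the client keeps the transcript and resends it in full at every turn.

```python
# The conversation so far, as (role, text) pairs.
history = []

def chat(user_message, generate):
    """One dialogue turn. `generate` is any text-generation function
    that takes the full transcript and returns a reply."""
    history.append(("user", user_message))
    transcript = "\n".join(f"{role}: {text}" for role, text in history)
    reply = generate(transcript)
    history.append(("assistant", reply))
    return reply

# Dummy generator standing in for the language model: it can "see"
# every earlier user turn because the whole transcript is passed in.
def echo_model(transcript):
    return f"(I can see {transcript.count('user:')} user turns)"

chat("hello", echo_model)
print(chat("do you remember me?", echo_model))
```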
“To simplify, ChatGPT is composed of two parts: the GPT part, which is the text-generation part that already existed before, and the chat part, for dialogue simulation, which is new. It is the first time two systems of such power have been intertwined”, enthuses Thierry Poibeau.
To achieve this, a human assistant first writes the answer to a given question, which is sent to the AI so that it learns from that model answer. In a second step, the AI is asked the same question and generates multiple responses. The human supervisor ranks these responses from best to worst, and the ranking is fed back into the system. This process is repeated many times.
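The ranking loop described above can be sketched schematically (all names and numbers here are illustrative, not OpenAI’s actual training code): the “model” keeps a preference weight per answer, a stand-in “human” ranks candidates, and each ranking nudges the weights toward the preferred answers.

```python
# Candidate answers to one question, with equal initial preference.
answers = ["I don't know.", "Paris.", "Paris is the capital of France."]
weights = {a: 1.0 for a in answers}

def human_rank(candidates):
    # Stand-in for the human supervisor: here, longer and more
    # complete answers are preferred (a gross simplification).
    return sorted(candidates, key=len, reverse=True)

def training_round():
    # 1. The model proposes several answers to the same question.
    candidates = list(answers)
    # 2. The human orders them from best to worst.
    ranked = human_rank(candidates)
    # 3. The ranking is fed back: better rank, bigger reward.
    for bonus, ans in enumerate(reversed(ranked), start=1):
        weights[ans] += 0.1 * bonus

# 4. Repeat many times.
for _ in range(10):
    training_round()

best = max(weights, key=weights.get)
print(best)  # the fullest answer ends up preferred
```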
What are its limits?
Like other programs based on deep learning, ChatGPT has a major weakness: “It has no access to meaning”, points out Claude de Loupy, director of Syllabs, a French company specializing in automatic text generation. The software has no view of the world and cannot justify its choices. ChatGPT is therefore far from foolproof.
The source of these algorithmic biases lies in the chatbot’s data corpus. With ChatGPT, human oversight has reduced these biases without eliminating them entirely, and not every topic could be reviewed. “With some testing, we see that ChatGPT has been trained a lot on questions related to climate change, for example, where it will contradict climate-skeptic statements. On the other hand, if you ask it who Didier Raoult is, it is unstinting in its praise, because it trusts, unfiltered, the content of social networks where that opinion is in the majority”, explains Lê Nguyên Hoang.
OpenAI has nevertheless tried to make it very difficult for its chatbot to express opinions, and blocks certain questions outright (e.g. “how do you make a bomb?”). But these precautions remain flimsy. “It is enough to generate billions of problematic contents for millions of them to pass through the filter”, the expert concludes.