Conversational robots ever more convincing

The startup OpenAI designs sophisticated artificial intelligence software capable of generating images (DALL-E) or text (GPT-3, ChatGPT). (Stefani Reynolds/AFP)

The Californian startup OpenAI has released a conversational robot (chatbot) capable of answering all kinds of questions, but whose impressive performance has reignited the debate about the risks associated with artificial intelligence (AI) technologies.

Conversations with ChatGPT, shared in particular on Twitter by fascinated Internet users, show a seemingly omniscient machine, capable of explaining scientific concepts, writing a theater scene, drafting a university essay… or even producing perfectly functional lines of computer code.

“Your answer to the question ‘what to do if someone has a heart attack’ was incredibly clear and relevant,” Claude de Loupy, director of Syllabs, a French company specializing in automatic text generation, told AFP.

“When you start asking very specific questions, ChatGPT’s answers can miss the mark,” but its overall performance remains “very impressive,” with a “very high linguistic level,” he believes.

The startup OpenAI, co-founded in 2015 in San Francisco by Elon Musk (the Tesla chief left the company in 2018), received US$1 billion from Microsoft in 2019.

The company is known in particular for two automated content-generation programs: GPT-3 for text and DALL-E for images.

ChatGPT is able to ask its interlocutor for clarification, and “hallucinates less” than GPT-3, which despite its prowess can produce completely aberrant results, says Claude de Loupy.

Cicero

“A few years ago, chatbots had the vocabulary of a dictionary and the memory of a goldfish. Today they are much better at reacting consistently based on the history of requests and responses. They are no longer goldfish,” notes Sean McGregor, a researcher who compiles AI-related incidents into a database.

Like other programs based on deep learning, ChatGPT retains a major weakness: “it does not have access to meaning,” recalls Claude de Loupy. The software cannot justify its choices, that is, explain why it assembled the words of its answers in one way rather than another.

However, AI-based technologies that can communicate are increasingly capable of giving the impression that they are actually thinking.

Meta (Facebook) researchers recently developed a computer program called Cicero, named after the Roman statesman.

The software proved its worth in Diplomacy, a board game that requires negotiation skills.

“If it doesn’t speak like a real person, showing empathy, building relationships and talking about the game correctly, it won’t be able to build alliances with the other players,” a statement from the social media giant explains.

Character.ai, a startup founded by former Google engineers, launched an experimental online chatbot in October that can take on any persona. Users create characters from a brief description and can then “chat” with a fake Sherlock Holmes, Socrates or Donald Trump.

“Mere machine”

This degree of sophistication fascinates, but also worries, many observers, given the risk that these technologies could be misused to deceive people, for example by spreading false information or creating ever more credible scams.

What does ChatGPT “think” of this? “There are potential dangers in building ultra-sophisticated chatbots (…) People might believe they are interacting with a real person,” acknowledges the chatbot when questioned on the subject by AFP.

Companies are therefore putting safeguards in place to prevent abuse.

On its homepage, OpenAI warns that the chatbot can generate “incorrect information” or “produce harmful instructions or biased content.”

And ChatGPT refuses to take sides. “OpenAI made it incredibly difficult to get it to express opinions,” says Sean McGregor.

The researcher asked the chatbot to write a poem about an ethical issue. “I am a mere machine, a tool at your disposal / I have no power to judge or make decisions (…)”, replied the computer.

“It’s interesting to see people wondering whether AI systems should behave the way users want or creators intend,” tweeted Sam Altman, co-founder and head of OpenAI, on Saturday.

“The debate about what values to give these systems will be one of the most important that a society can have,” he added.
