A real question, after the enthusiasm aroused by the launch of ChatGPT, a generative AI capable of writing essays indistinguishable from those written by a human.
On the front page of every media outlet, the OpenAI chatbot cannot have failed to capture the attention of pupils, students and teachers. Undetectable by popular anti-plagiarism software, it does not generate extracts lifted from the Web, but makes its own associations of words and ideas based on probability calculations.
A common reflex of schools faced with new digital tools is to ban access to them in the classroom.
But one secondary school in Germany, the Evangelisch Stiftische Gymnasium in Gütersloh, took the opposite approach: in a test a few months ago, students were asked to use AI writing software during the exam.
Students requested arguments from the GPT-3 Playground platform (the OpenAI model that preceded ChatGPT). They were subject to a simple rule: passages of text coming from the AI had to be annotated as such, and the students had to justify why they chose to include them in their work.
There were also known limitations of the system to consider: the AI sometimes reflects social stereotypes and conservative views, and relies on information that may be out of date, since the training of the large language models behind GPT-3 stopped in 2019 (the training of ChatGPT's GPT-3.5 stopped later, at the end of 2021).
Most of the time, the AI served to sharpen the students' reasoning, as it confronted them with new arguments.
Students had to do more web searches to check facts, opinions, studies or quotes generated by the AI.
The conclusion of this experiment: none of them blindly trusted the AI texts.
Stuart Selber, who teaches English at Pennsylvania State University and was interviewed by Business Insider, says for his part that he is no more worried about ChatGPT than about any other new development: “You can go back a few decades and find similar conversations about Word, Wikipedia and the Internet in general.”
Not to mention the famous fear that went around the globe, launched by Nicholas Carr in The Atlantic in 2008: “Is Google Making Us Stupid?” Well, fourteen years later, if that is the case, we tolerate it and willingly, openly depend on software like Grammarly that analyzes and improves our writing.
What is certain is that schools and universities will quickly need to talk to their professors about the writing process and the value they place on critical thinking. They will certainly have to update their academic integrity policies: the current language does not explicitly prohibit the use of these platforms, since this is not plagiarism in the strict sense.
OpenAI has announced that it is working on a “digital watermark” that would be incorporated into GPT-3-generated responses, as image banks do, to label AI-generated content, a mark that, in theory at least, should be difficult to remove.
But there are dozens of other AI writing tools aimed at students, and while ChatGPT has benefited from enormous media coverage, it is not the most effective. It was designed with many safeguards to be as politically correct as possible, and rarely generates conflicting opinions when asked several times about the same issue.
That is not the case with Jasper.ai, which is far more interesting. It is one of the biggest players in this field, handling 26 languages thanks to an integration with the DeepL machine translation tool. Launched just 18 months ago, it is one of the fastest-growing software startups of all time.
The next version of the model, GPT-4, which should be released in a few months, will be based on 100 trillion parameters, about 500 times more than GPT-3.
It will take your breath away.
Sources: The Decoder, Business Insider, The Conversation
Use of ChatGPT by students worries teachers
How do you evaluate an essay if it is not clear who did the work, the student or the AI?
Emily Turrettini, journalist specializing in new technologies