How to know if a text was written by AI or bot?

ChatGPT sparked a debate over how to tell whether a text was written by AI. Both MIT and OpenAI are already working on solutions to the impasse.

Ever since artificial intelligence was incorporated into GPT-2, many have been surprised by the tool's ability to generate text. The tool, created by OpenAI, even produced a fictional story describing the discovery of unicorns by scientists. But the widespread use of bot-written texts ended up raising a controversy: how can you tell whether a text was really written by AI?

Aiming to build an algorithm capable of checking what was written by AI, a team of scientists from the Massachusetts Institute of Technology (MIT) developed a system called GLTR. But MIT is not alone in this race.

Also seeking a yardstick to determine what is or is not produced by intelligent algorithms, OpenAI launched a new feature for its ChatGPT system on January 31st. Now, anyone using the automated text generator can select the classifier tool option to check texts for AI authorship.

Emergence of ChatGPT: How automated texts came to light

To understand where ChatGPT came from, you need to know the organization behind the app. Founded in 2015, OpenAI started building the algorithm and today has Sam Altman as CEO. The company's goal is to carry out advanced research in artificial intelligence. ChatGPT was released on November 30, 2022, and is designed to answer a wide range of user questions.

“[ChatGPT is designed to] answer questions, recognize errors, identify incorrect assumptions, and reject inappropriate requests.”

OpenAI, on how the chatbot works with artificial intelligence
Founded in 2015, OpenAI offers several technological solutions, including AI-generated texts. Photo: South China Morning Post/Reproduction.

OpenAI has created other similar automations. Examples are InstructGPT, which aims to follow instructions and provide answers to users, and DALL·E, which can generate images on request.

Aware that the automated text creation tool could increase the production of misinformation, the company decided to make a change. In addition to announcing the classifier tool, OpenAI warned that ChatGPT could become a threat if used to create fake news.

How the evaluation of texts written by AI works

The GPTZero project was built to verify whether texts were authored by AI. Basically, the algorithm evaluates whether ChatGPT wrote the analyzed texts. The application also measures, through calculations, whether the text shows complexity and variation between sentences. The application's inventor is Edward Tian, a 22-year-old student at Princeton University, in the USA. In addition to developing the calculation methods, he built the algorithm in Python, using the Streamlit library.
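The sentence-variation check described above is often called "burstiness". The toy sketch below illustrates the idea with a simple standard-deviation calculation; it is an assumption-laden simplification, not GPTZero's actual code, which also scores text complexity with a language model.

```python
import re
import statistics

def sentence_lengths(text: str) -> list[int]:
    """Split text into sentences and return the word count of each."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths: higher values mean
    more variation between sentences, which GPTZero associates
    with human writing."""
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "The cat sat down. The dog sat down. The bird sat down."
varied = ("Stop. The storm rolled in faster than anyone at the "
          "harbor had expected that evening. We ran.")

print(burstiness(uniform) < burstiness(varied))  # True: varied text scores higher
```

Uniform, machine-like prose keeps sentence lengths close together and scores near zero, while the varied sample alternates short and long sentences and scores higher.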

Tian managed to create an application capable of telling whether a text was written by AI. Photo: Analytics Insight/Reproduction.

In an interview with The Guardian, Tian noted that, without the interest of large technology companies, it is difficult to stop the use of ChatGPT for plagiarism. As for his motivation, he said the idea was to prevent the software from being used to elude plagiarism detectors.

OpenAI's classifier tool, in turn, has some limitations. It is still undergoing tests, and its creation involved machine learning: texts written by artificial intelligence were compared with human-written productions on the same subject.

AI-written text detection in GLTR

The tool created by MIT verifies robotic authorship through a simple technique: it follows the same path GPT-2 uses to produce texts, checking which words are most likely to come right after each other. In this way, GLTR checks whether what was written was actually produced by a machine.

GLTR shows users whether a text was written by artificial intelligence through a simple signal. If an algorithm like GPT-2 likely produced a word, it appears in green. Other colors, such as purple, red, and yellow, indicate that the text is more likely to have been written by a human.
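The color bands correspond to how highly the model ranked each word in its predicted next-word distribution. The sketch below assumes token ranks have already been computed by a model such as GPT-2 (obtaining real ranks requires running the model itself); the thresholds follow GLTR's published top-10/top-100/top-1000 bands.

```python
def gltr_color(rank: int) -> str:
    """Map a token's rank in the model's predicted next-word
    distribution to GLTR's highlight colors."""
    if rank <= 10:
        return "green"    # among the model's top guesses: typical of AI text
    if rank <= 100:
        return "yellow"
    if rank <= 1000:
        return "red"
    return "purple"       # very unpredictable choice: typical of human text

def fraction_green(ranks: list[int]) -> float:
    """Share of tokens in the 'green' band; a high share suggests
    the text follows the model's predictions closely."""
    colors = [gltr_color(r) for r in ranks]
    return colors.count("green") / len(colors)

# Hypothetical rank sequences; in GLTR they come from running GPT-2
ai_like = [1, 2, 1, 5, 3, 8, 1, 2]       # model predicted almost every word
human_like = [4, 250, 37, 1800, 90, 12]  # frequent unpredictable choices

print(fraction_green(ai_like))    # 1.0
print(fraction_green(human_like)) # ~0.17
```

A page that lights up almost entirely green is the tell-tale sign of machine authorship, while human writing scatters yellow, red, and purple throughout.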

Some colors can be used to verify texts made by AI or bots. Photo: Dan Robitzki (Futurism)/Reproduction.

Predicting which words can come one after another is the crucial factor in whether a text passes as written by a human being. The scientists who created GLTR point out that it is precisely this unpredictability that keeps AI-written text from going unnoticed.

“Ordinary writing, in fact, often selects unpredictable words that make more sense with the theme of the text. This means that we can check whether a text actually seems more likely to come from a human being.”

Scientists responsible for the GLTR algorithm, to the Futurism website

However, the GLTR technology still needs improvement. Sometimes a text written by a human is wrongly flagged by the algorithm as made by a bot. Checking for AI-written texts will therefore be a task that requires more tools.

OpenAI's classifier tool in ChatGPT

The classifier tool is limited to verifying the automated or human authorship of texts of at least one thousand characters. The solution, created through machine learning, still needs some adjustments, because it still confuses what was written by AI with what was produced by a human.

“In our evaluations on a set of English texts, our classifier correctly identifies 26% of AI-written texts as ‘likely AI-written’ (true positives), while incorrectly identifying human-written texts as AI-written 9% of the time (false positives).”

OpenAI
OpenAI itself is in the race to create solutions that identify texts written by artificial intelligence (Image: OpenAI/Reproduction).
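To see what those reported rates imply in practice, Bayes' rule gives the chance that a flagged text really is AI-written. This is a back-of-the-envelope sketch: the base rate (the share of AI texts in the pool being checked) is an assumption here, not a figure from OpenAI.

```python
def precision(tpr: float, fpr: float, base_rate: float) -> float:
    """Probability that a text flagged as AI-written really is
    AI-written, given the classifier's true positive rate (tpr),
    false positive rate (fpr), and the share of AI texts (base_rate)."""
    flagged_ai = tpr * base_rate              # AI texts correctly flagged
    flagged_human = fpr * (1 - base_rate)     # human texts wrongly flagged
    return flagged_ai / (flagged_ai + flagged_human)

# OpenAI's reported rates: 26% true positives, 9% false positives.
# The base rates below are illustrative assumptions.
print(round(precision(0.26, 0.09, 0.5), 3))   # 0.743
print(round(precision(0.26, 0.09, 0.1), 3))   # 0.243
```

The drop between the two scenarios shows why OpenAI warns against relying on the classifier alone: when AI texts are rare in the pool, most flagged texts are actually human-written.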

An extra danger is on the AI company's radar: students could eventually study automated texts to find patterns in AI-generated content and learn how to outwit the detectors. For now, OpenAI itself says the new classifier feature cannot yet determine definitively whether a production comes from a bot. The expectation is that the company will make further advances on this challenge in the near future.

See also: ChatGPT passes MBA, Law, and Medical exams in the US.

Sources: The Guardian | Digital Look | PC Mag | Futurism | hitch.

Reviewed by Glaucous

