
AI-written tweets using tools like ChatGPT easier to believe than human-written text: Report


AI text generators such as ChatGPT, the Bing AI chatbot, and Google Bard have recently received a lot of attention. These large language models can produce strikingly fluent text that appears completely legitimate. But here's the catch: a new study suggests that we humans may be inclined to believe the falsehoods they spread.

To test this, researchers at the University of Zurich conducted an experiment to determine whether people could distinguish between text written by humans and text produced by GPT-3, which was released in 2020 (and is less advanced than GPT-4, released earlier this year). The outcome was unexpected: participants performed only marginally better than random guessing, with an accuracy of 52%.


Here's the thing about GPT-3: it does not comprehend words the way humans do. It generates text based on patterns learned from studying how humans use language. While this makes it useful for tasks such as translation, chatbots, and creative writing, it can also be abused to propagate misinformation, spam, and fraudulent content.

According to the researchers, the emergence of AI text generators coincides with another problem we're facing: the "infodemic," in which fake news and disinformation spread like wildfire. The study raises concerns about GPT-3 being used to generate false information, particularly in areas such as global health.

The researchers conducted a survey to determine how GPT-3-generated content affected people's understanding. They compared the perceived trustworthiness of synthetic tweets generated by GPT-3 with that of tweets written by people, concentrating on topics frequently plagued by misinformation, including vaccines, 5G technology, Covid-19, and evolution.

And here's the surprise: participants recognised accurate information in the synthetic tweets more often than in the human-written ones. Similarly, they rated GPT-3-generated disinformation as accurate more often than human-written disinformation. In other words, GPT-3 was better at both informing and misleading people than we were.

The study also showed that GPT-3 usually followed instructions and produced truthful information when prompted to. However, it occasionally went rogue and refused to manufacture falsehoods. So it can decline to spread disinformation, but it can also make mistakes when asked to provide genuine information.

The study demonstrates that we are susceptible to deception by AI text generators such as GPT-3. While these models can create very plausible text, it is critical that we remain watchful and develop tools to detect and combat misinformation effectively.