The discovery of a botnet connected to ChatGPT shows how easily and effectively artificial intelligence can be used to spread disinformation.
In May of this year, researchers from Indiana University Bloomington found a botnet powered by ChatGPT that was active on X (Twitter).
Named Fox8 because of its link to cryptocurrency websites with a similar name, the botnet consisted of 1,140 accounts. Many of these accounts seemed to use ChatGPT for creating social media posts and interacting with each other.
The content generated by the automated system aimed to attract unsuspecting individuals into clicking on links that led to websites promoting cryptocurrencies.
Micah Musser, a researcher who has studied AI-driven disinformation, suggests that the Fox8 botnet may be just a small part of a much bigger problem, given how widely available advanced language models and chatbots have become. In his view, the current situation makes it easy for bad actors to cause harm at scale.
Even though the Fox8 botnet was very active, it wasn’t very clever in how it used ChatGPT. The researchers found the botnet by searching for a telltale phrase: “As an AI language model…”. This is the canned disclaimer ChatGPT produces when asked about sensitive topics, and the bots sometimes posted it verbatim. Using this clue, the researchers searched the platform, located the botnet’s accounts, and then examined them more closely to determine which ones were likely run by automated bots.
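The search heuristic described above can be sketched in a few lines. This is a minimal illustration, not the researchers’ actual tooling; the account names, post texts, and the `flag_suspect_posts` helper are all invented for the example.

```python
# Hypothetical sketch of the detection heuristic: flag posts containing
# ChatGPT's self-identifying disclaimer phrase. All data here is invented.

TELLTALE = "as an ai language model"

def flag_suspect_posts(posts):
    """Return posts whose text contains the telltale phrase (case-insensitive)."""
    return [p for p in posts if TELLTALE in p["text"].lower()]

posts = [
    {"account": "crypto_bot_01",
     "text": "As an AI language model, I cannot endorse speculative investments."},
    {"account": "human_user",
     "text": "Bitcoin dipped again today."},
]

suspects = flag_suspect_posts(posts)
for p in suspects:
    print(p["account"])  # accounts worth closer inspection
```

A simple substring match like this only catches bots careless enough to post the disclaimer verbatim, which is consistent with the researchers’ observation that Fox8 was found precisely because it hid its tracks poorly.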
Filippo Menczer, a professor at Indiana University Bloomington, worked on the study with Kai-Cheng Yang, a doctoral student who will soon join Northeastern University as a researcher. They noted that the only reason they found the botnet was that it did a poor job of hiding its activity.