Researchers at Indiana University Bloomington uncovered a botnet on X (formerly Twitter). Dubbed Fox8 after the similarly named cryptocurrency websites it promoted, the botnet comprised 1,140 accounts that predominantly used ChatGPT to generate social media posts and replies. The content appeared strategically designed to entice unsuspecting users into clicking links to cryptocurrency-promoting websites.
According to Micah Musser, a researcher who studies AI-driven disinformation, Fox8 may be just the tip of the iceberg, given the widespread popularity of large language models and chatbots. And despite the botnet's size, its use of ChatGPT was not sophisticated.
Finding the Botnet
The researchers found the botnet by combing through the platform for the phrase "As an AI language model…," which ChatGPT emits when responding to prompts on sensitive topics. They then scrutinized the flagged accounts to pinpoint those that appeared to be controlled by automated bots.
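The self-revealing-phrase heuristic described above can be sketched in a few lines. This is a hypothetical illustration, not the researchers' actual code; the sample posts and the `is_likely_bot` helper are invented for the example.

```python
# Flag posts containing boilerplate phrases ChatGPT emits verbatim when it
# refuses or qualifies a response. Phrases and posts below are illustrative.

SELF_REVEALING_PHRASES = [
    "as an ai language model",
    "i'm sorry, but i cannot",
]

def is_likely_bot(post_text: str) -> bool:
    """Return True if the post contains a tell-tale ChatGPT phrase."""
    lowered = post_text.lower()
    return any(phrase in lowered for phrase in SELF_REVEALING_PHRASES)

posts = [
    "As an AI language model, I cannot express opinions on politics.",
    "Check out this new DeFi project!",
]

flagged = [p for p in posts if is_likely_bot(p)]
# flagged contains only the first post
```

In practice this is only a first filter: as the researchers note, it catches sloppy operators who paste ChatGPT output unedited, so the flagged accounts still need manual or statistical vetting.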
The study was conducted by Indiana University Bloomington professor Filippo Menczer and Kai-Cheng Yang, a student who will join Northeastern University as a postdoctoral researcher in the next academic year. They emphasized that they were able to detect this particular botnet only because of its sloppiness.
Despite this slip, the botnet still disseminated many persuasive messages endorsing cryptocurrency websites.
OpenAI’s Response
OpenAI did not respond to inquiries about the botnet. The usage policy for its AI models explicitly prohibits their use for fraudulent activities or the dissemination of misleading information.
ChatGPT and other chatbots are powered by large language models that generate text based on given prompts. Armed with abundant training data, substantial computational resources, and feedback from human evaluators, bots like ChatGPT can craft impressively sophisticated responses to a wide range of inputs. At the same time, they can also produce offensive content, reflect societal biases, and fabricate information.
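The core loop behind such text generation can be shown with a toy sketch. A real large language model predicts each next token with a huge neural network trained on vast data; here a tiny hand-written probability table stands in for the model, purely as an assumption for illustration.

```python
# Toy next-token generation loop. NEXT_TOKEN is an invented stand-in for a
# trained language model: it maps the last token to next-token probabilities.

import random

NEXT_TOKEN = {
    "buy":    {"crypto": 0.9, "now": 0.1},
    "crypto": {"now": 1.0},
    "now":    {"<end>": 1.0},
}

def generate(prompt: str, max_tokens: int = 10, seed: int = 0) -> str:
    """Extend the prompt one sampled token at a time until <end> or a dead end."""
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(max_tokens):
        choices = NEXT_TOKEN.get(tokens[-1])
        if not choices:
            break
        nxt = rng.choices(list(choices), weights=list(choices.values()))[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("crypto"))  # -> "crypto now"
```

The sampling step is why the same prompt can yield different outputs, and why fluent text carries no guarantee of truth: the model only continues the prompt plausibly.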
A more carefully configured ChatGPT-based botnet would be harder to detect, better at deceiving users, and more effective at manipulating the algorithms that decide which content social media platforms prioritize.