
Cybersecurity Firm Discovers AI Tool Designed for Criminals

July 17, 2023
in AI News, GPT News

Cybersecurity company SlashNext recently detected WormGPT, a new AI tool built for criminals, and attackers are already adopting it for malicious activity. Separately, security firm Mithril Security has built a generative AI tool named PoisonGPT, designed to show how such models can be used to deliberately propagate false news online.

Criminal AI Tools

These tools are the latest examples of criminals harnessing the power of generative AI. They have emerged at a time when authorities are expressing growing concern about the misuse of the technology, particularly since the launch of OpenAI’s ChatGPT.

OpenAI has implemented usage policies that strictly prohibit the use of its language models for illegal purposes. Among other restrictions, the policies also prohibit creating AI content that exploits children.

How WormGPT Works

SlashNext revealed that WormGPT is a black-hat tool explicitly intended for carrying out malicious activities. It is built on GPT-J, an open-source language model released in 2021, and offers several notable features, including support for unlimited characters, chat memory retention, and code formatting.
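For context on why such derivatives are feasible, the underlying GPT-J model is openly downloadable, so reproducing its core text-generation capability takes only a few lines. The sketch below uses the Hugging Face transformers library and the public GPT-J checkpoint; nothing here reflects WormGPT’s actual internals, which have not been published.

    # Minimal sketch: loading the public, open-source GPT-J model.
    # This is the ordinary published checkpoint, not WormGPT itself.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "EleutherAI/gpt-j-6B"  # public GPT-J checkpoint on Hugging Face

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Generate a short continuation from a harmless prompt.
    prompt = "The quick brown fox"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The familiar ChatGPT guardrails live in the hosted service’s policies and fine-tuning, not in open checkpoints like this one, which is why derivative tools can simply omit them.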

SlashNext used WormGPT to draft an email designed to pressure an account manager into paying a fraudulent invoice, and it deemed the results disturbing. Essentially, WormGPT works just like ChatGPT but without the ethical boundaries and limitations that enforce responsible use.

How PoisonGPT Works

Mithril Security conducted a test to explore how GPT-J could be used to spread false information online. The firm created PoisonGPT, a deliberately manipulated version of the model, and uploaded it to Hugging Face, which promptly removed it from the platform.

Mithril Security has highlighted the risk of criminals manipulating large language models and distributing them through platforms such as Hugging Face. Unsuspecting individuals could unknowingly use these tainted models, discovering the harmful effects of the tampering only later. However, it is important to recognize that AI is not the only tool criminals employ.
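One common safeguard against this kind of supply-chain tampering, offered here as general practice rather than anything Mithril Security prescribes, is to pin model downloads to an exact repository commit rather than pulling whatever revision is latest. A minimal sketch, again assuming the Hugging Face transformers library; the commit hash shown is a placeholder, not a real audited revision:

    # Sketch of pinning a model to an exact commit so a later, tampered
    # upload cannot silently replace the weights you audited.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "EleutherAI/gpt-j-6B"
    # Hypothetical commit SHA of a revision you have already verified:
    pinned_revision = "0123456789abcdef0123456789abcdef01234567"

    tokenizer = AutoTokenizer.from_pretrained(model_id, revision=pinned_revision)
    model = AutoModelForCausalLM.from_pretrained(model_id, revision=pinned_revision)

Pinning does not prove the pinned weights are trustworthy, but it does guarantee that every download fetches the same bytes that were originally reviewed.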

The featured image is from venturebeat.com

Tags: AI, GPT