ChatGPT accused of breaking data protection rules
An Italian investigation into privacy concerns has given ChatGPT 30 days to defend itself.
People using LLMs for bug bounty hunts are wasting developers’ time, argues the lead developer of cURL. And he’s probably right.
Credit to Author: Brian Krebs | Date: Tue, 08 Aug 2023 17:37:23 +0000
WormGPT, a private new chatbot service advertised as a way to use Artificial Intelligence (AI) to help write malicious software without all the pesky prohibitions on such activity enforced by ChatGPT and Google Bard, has started adding restrictions on how the service can be used. Faced with customers trying to use WormGPT to create ransomware and phishing scams, the 23-year-old Portuguese programmer who created the project now says his service is slowly morphing into “a more controlled environment.” The large language models (LLMs) made by ChatGPT parent OpenAI, Google, and Microsoft all have various safety measures designed to prevent people from abusing them for nefarious purposes, such as creating malware or hate speech. In contrast, WormGPT has promoted itself as a new LLM created specifically for cybercrime activities.
Subject matter experts at Europol were asked to explore how criminals can abuse LLMs such as ChatGPT, as well as how such models may assist investigators in their daily work.
The post ChatGPT helps both criminals and law enforcement, says Europol report appeared first on Malwarebytes Labs.