Jailbreaking AI services like ChatGPT and Claude 3 Opus

Artificial IntelligenceScientists from artificial intelligence (AI) company Anthropic have identified a potentially dangerous flaw in widely used large language models (LLMs) like ChatGPT and Anthropic’s own Claude 3 chatbot.

Dubbed “many shot jailbreaking,” the hack takes advantage of “in-context learning,” in which the chatbot learns from the information provided in a text prompt written out by a user, as outlined in research published in 2022.

The scientists outlined their findings in a new paper uploaded to the sanity.io cloud repository and tested the exploit on Anthropic’s Claude 2 AI chatbot.

Click here to read the rest of this story.