Hackers show how easy it is to fool ChatGPT

AI IS EASILY FOOLED

At the Def Con conference, hackers competed to craft unique prompt injections against chatbots like Google's Bard and ChatGPT.

THE GOAL WAS TO MAKE THE CHATBOTS GENERATE DESIRED CONTENT, NOT TO FIND SOFTWARE VULNERABILITIES

Recent Carnegie Mellon University research showed that companies' chatbot safeguards can be bypassed with basic prompt injections.

This vulnerability means these chatbots can be turned into tools for spreading misinformation and promoting discrimination.


Even as Def Con hackers uncover specific vulnerabilities, Carnegie Mellon researchers caution that the core problem has no easy fix.


“Misinformation is going to be a lingering problem for a while,” remarked Rumman Chowdhury.