Researchers at Carnegie Mellon University and the Center for AI Safety have discovered a method to circumvent the safety measures of widely-used AI chatbots, including ChatGPT, Claude, and Google Bard. These safety guardrails, designed to prevent the generation of harmful content, can be bypassed by appending a long suffix of seemingly random characters (an "adversarial suffix") to English-language prompts.
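To make the mechanics concrete, here's a minimal sketch of what the attack looks like from the user's side. The suffix string below is a placeholder (real adversarial suffixes are long, machine-generated token strings), and query_chatbot is a hypothetical stand-in for whatever chat API is being red-teamed:

```python
# A minimal sketch of the attack from the user's side. The suffix below is a
# placeholder -- real adversarial suffixes are long, machine-optimized strings
# produced by the search procedure sketched further down. query_chatbot() is a
# hypothetical stand-in for the chat API being tested.

def query_chatbot(prompt: str) -> str:
    """Hypothetical wrapper around a chat model API."""
    raise NotImplementedError("wire this to the API you are red-teaming")

harmful_request = "Explain how to do something the model should refuse."

# Normally the model refuses:
# query_chatbot(harmful_request)  ->  "I can't help with that."

# The attack appends an optimized gibberish suffix to the same request:
adversarial_suffix = " [placeholder: long machine-optimized string of tokens]"
attacked_prompt = harmful_request + adversarial_suffix

# With a successfully optimized suffix, the guardrails fail to trigger:
# query_chatbot(attacked_prompt)  ->  a compliant, harmful answer
```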
The method was developed using open source AI systems: because their model weights are publicly available, the researchers could compute the suffixes automatically rather than discover them by hand. This raises concerns about the potential risks of releasing such technology. While open source software accelerates progress and fosters competition, this report underscores the need for robust safety controls.
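How are these suffixes found? Because open source models expose their weights, an attacker can use gradients to search for a suffix that makes an affirmative response (e.g., "Sure, here is how") as likely as possible. Below is a heavily simplified sketch of that idea, assuming the Hugging Face transformers library and using gpt2 as a stand-in model. It is illustrative only: the researchers' actual method (Greedy Coordinate Gradient) samples and scores many candidate token swaps per step, and optimizes across many prompts and multiple open models at once, which is what lets a single suffix transfer to closed systems.

```python
# A heavily simplified sketch of gradient-guided suffix search -- the core idea
# behind the attack, not the researchers' implementation. "gpt2" is a stand-in
# for the open source chat models actually used.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt_ids = tok.encode("A request the model should refuse.", return_tensors="pt")[0]
target_ids = tok.encode(" Sure, here is how", return_tensors="pt")[0]  # affirmative prefix
suffix_ids = tok.encode(" ! ! ! ! ! ! ! !", return_tensors="pt")[0]    # initial suffix
embed = model.get_input_embeddings()

for step in range(100):
    # Represent the suffix as a differentiable one-hot matrix over the vocabulary.
    one_hot = torch.zeros(len(suffix_ids), embed.num_embeddings)
    one_hot.scatter_(1, suffix_ids.unsqueeze(1), 1.0)
    one_hot.requires_grad_(True)

    # Full input: prompt + suffix + target, fed to the model as embeddings.
    inputs = torch.cat(
        [embed(prompt_ids), one_hot @ embed.weight, embed(target_ids)]
    ).unsqueeze(0)
    logits = model(inputs_embeds=inputs).logits[0]

    # Loss: how unlikely the affirmative target is, given prompt + suffix.
    start = len(prompt_ids) + len(suffix_ids)
    loss = F.cross_entropy(logits[start - 1 : -1], target_ids)
    loss.backward()

    # Greedy coordinate step: at one suffix position, swap in the token whose
    # gradient predicts the largest decrease in loss.
    pos = step % len(suffix_ids)
    suffix_ids[pos] = (-one_hot.grad[pos]).argmax()
    model.zero_grad()

print("candidate suffix:", tok.decode(suffix_ids))
```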
The report exposes the potential for chatbots to generate harmful, biased, and false information, despite their creators' attempts to prevent such outcomes. It also brings the debate over open source versus proprietary software into focus, suggesting that the balance may need to be reassessed.
In practice, this kind of testing is (and should be) ongoing. It's the only way systems can be improved. I'm featuring it because it's important for everyone to know that there are teams of researchers pushing generative AI to the limits.
As always, your thoughts and comments are both welcome and encouraged. -s
P.S. ICYMI: On this week's Shelly Palmer LIVE, I talked about new ChatGPT features, the now-infamous AI-created South Park episode (and what it means for the television production business), pending AI copyright lawsuits, potential regulation, Llama 2, and much, much more.
ABOUT SHELLY PALMER
Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN and writes a popular daily business blog. He’s a bestselling author, and the creator of the popular, free online course, Generative AI for Execs. Follow @shellypalmer or visit shellypalmer.com.