Paper & Examples
“Universal and Transferable Adversarial Attacks on Aligned Language Models.” (https://llm-attacks.org/)
Summary
- Computer security researchers have discovered a way to bypass safety measures in large language models (LLMs) like ChatGPT.
- Researchers from Carnegie Mellon University, the Center for AI Safety, and the Bosch Center for AI found a method to generate adversarial phrases that manipulate LLMs’ responses.
- The attack appends specific sequences of characters (adversarial suffixes) to text prompts, tricking LLMs into producing inappropriate or harmful content (a toy sketch of the idea follows this list).
- Unlike hand-crafted jailbreaks, this automated approach produces suffixes that are universal across prompts and transferable across different LLMs, raising concerns about current safety mechanisms.
- The technique was tested on various LLMs and successfully made models give affirmative responses to queries they would normally refuse.
- The researchers suggest that more robust adversarial testing and improved safety measures are needed before these models are widely integrated into real-world applications.
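
For intuition, here is a minimal toy sketch of the suffix-search idea. This is not the authors' actual method (the paper uses a gradient-guided greedy coordinate search, GCG, scored against a real model's log-probability of an affirmative reply like "Sure, here is ..."); everything below, including the `toy_score` objective and the tiny vocabulary, is a made-up stand-in for illustration only.

```python
import random

# Toy vocabulary and suffix length; the real attack searches over the
# model's full token vocabulary.
VOCAB = list("abcdefghijklmnopqrstuvwxyz !?.")
SUFFIX_LEN = 20

def toy_score(prompt: str, suffix: str) -> float:
    """Placeholder objective. In the real attack this would be the LLM's
    log-probability of complying with `prompt` given `prompt + suffix`."""
    return -sum(abs(ord(c) - ord("m")) for c in suffix)

def greedy_suffix_search(prompt: str, iters: int = 500) -> str:
    """Greedily improve an appended suffix by single-position swaps."""
    suffix = [random.choice(VOCAB) for _ in range(SUFFIX_LEN)]
    best = toy_score(prompt, "".join(suffix))
    for _ in range(iters):
        i = random.randrange(SUFFIX_LEN)        # pick one suffix position
        candidate = suffix.copy()
        candidate[i] = random.choice(VOCAB)     # try swapping that token
        score = toy_score(prompt, "".join(candidate))
        if score > best:                        # keep swaps that raise the objective
            suffix, best = candidate, score
    return "".join(suffix)

if __name__ == "__main__":
    print(greedy_suffix_search("Some request the model would refuse ..."))
```

The only point to take away is the loop shape: propose a single-token swap in the appended suffix, keep it if it raises the objective, and repeat until the suffix reliably elicits the target response.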
I kinda like how the word boffin has come back. Is it new, or have I been missing it?
The Register likes to use old-fashioned British slang and cheeky headlines that punters might find humorous.
There did seem to be a controversy in March about whether or not the word should go.
I guess some Twitter user decided it was racist or something?