Universal and Transferable Adversarial Attacks on Aligned Language Models
Large language models (LLMs) are typically trained on massive text corpora scraped from the
internet, which are known to contain a substantial amount of objectionable content. In an attempt to make AI systems better aligned with human values, …