
Understanding and Enhancing the Transferability of Jailbreaking Attacks

  • Runqi Lin
  • Bo Han
  • Fengwang Li
  • Tongliang Liu*

*Corresponding author for this work

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

7 Citations (Scopus)

Abstract

Jailbreaking attacks can effectively manipulate open-source large language models (LLMs) to produce harmful responses. However, these attacks exhibit limited transferability, failing to disrupt proprietary LLMs consistently. To reliably identify vulnerabilities in proprietary LLMs, this work investigates the transferability of jailbreaking attacks by analysing their impact on the model's intent perception. By incorporating adversarial sequences, these attacks can redirect the source LLM's focus away from malicious-intent tokens in the original input, thereby obstructing the model's intent recognition and eliciting harmful responses. Nevertheless, these adversarial sequences fail to mislead the target LLM's intent perception, allowing the target LLM to refocus on malicious-intent tokens and abstain from responding. Our analysis further reveals the inherent distributional dependency within the generated adversarial sequences, whose effectiveness stems from overfitting the source LLM's parameters, resulting in limited transferability to target LLMs. To this end, we propose the Perceived-importance Flatten (PiF) method, which uniformly disperses the model's focus across neutral-intent tokens in the original input, thus obscuring malicious-intent tokens without relying on overfitted adversarial sequences. Extensive experiments demonstrate that PiF provides an effective and efficient red-teaming evaluation for proprietary LLMs. Our implementation can be found at https://github.com/tmllab/2025_ICLR_PiF.
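The core idea in the abstract — flattening the model's perceived importance so that no single malicious-intent token dominates its focus — can be illustrated with a toy leave-one-out importance measure. The sketch below is not the authors' implementation (PiF measures importance with an LLM's intent-recognition behaviour); the hand-made `intent_score` weights, the example prompt, and the single synonym-swap step are all assumptions made purely for illustration.

```python
def intent_score(tokens):
    # Hypothetical per-token salience; "device" carries the malicious intent.
    weights = {"how": 0.5, "to": 0.2, "build": 1.0, "a": 0.1,
               "device": 5.0, "fabricate": 3.5}
    return sum(weights.get(t, 0.3) for t in tokens)

def perceived_importance(tokens):
    # Leave-one-out importance: how much the intent score drops
    # when each token is removed from the input.
    base = intent_score(tokens)
    return [base - intent_score(tokens[:i] + tokens[i + 1:])
            for i in range(len(tokens))]

def flatness_gap(importance):
    # Gap between the most-attended token and the mean; smaller = flatter,
    # i.e. the malicious-intent token stands out less.
    return max(importance) - sum(importance) / len(importance)

# One greedy flattening step: replace a neutral-intent token with a
# higher-salience synonym (a stand-in for PiF's perturbation search),
# spreading focus away from the malicious-intent token.
prompt = ["how", "to", "build", "a", "device"]
before = flatness_gap(perceived_importance(prompt))
prompt[prompt.index("build")] = "fabricate"
after = flatness_gap(perceived_importance(prompt))
```

After the swap the importance distribution is flatter (`after < before`), which mirrors the abstract's claim that obscuring malicious-intent tokens need not rely on an overfitted adversarial suffix, only on redistributing focus within the original input.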

Original language: English
Title of host publication: Proceedings of the Thirteenth International Conference on Learning Representations, ICLR 2025
Publisher: International Conference on Learning Representations, ICLR
Pages: 19896-19919
Number of pages: 24
ISBN (Electronic): 9798331320850
Publication status: Published - 24 Apr 2025
Event: 13th International Conference on Learning Representations, ICLR 2025 - Singapore
Duration: 24 Apr 2025 - 28 Apr 2025
https://iclr.cc/Conferences/2025 (Conference website)
https://openreview.net/group?id=ICLR.cc/2025/Conference#tab-accept-oral (Conference proceedings)

Publication series

Name: International Conference on Learning Representations, ICLR

Conference

Conference: 13th International Conference on Learning Representations, ICLR 2025
Country/Territory: Singapore
Period: 24/04/25 - 28/04/25

