Paper Library
A collection of AI security research papers
1331 papers total
September 15 - September 21, 2025
16 papers
SABER: Uncovering Vulnerabilities in Safety Alignment via Cross-Layer Residual Connection
Maithili Joshi, Palash Nandi, Tanmoy Chakraborty
2025-09-19
red teaming
2509.16060v1
EmoQ: Speech Emotion Recognition via Speech-Aware Q-Former and Large Language Model
Yiqing Yang, Man-Wai Mak
2025-09-19
2509.15775v1
Beyond Surface Alignment: Rebuilding LLMs Safety Mechanism via Probabilistically Ablating Refusal Direction
Yuanbo Xie, Yingjie Zhang, Tianyun Liu, Duohe Ma, Tingwen Liu
2025-09-18
red teaming
safety
2509.15202v1
Sentinel Agents for Secure and Trustworthy Agentic AI in Multi-Agent Systems
Diego Gosmar, Deborah A. Dahl
2025-09-18
red teaming
2509.14956v1
Toxicity Red-Teaming: Benchmarking LLM Safety in Singapore's Low-Resource Languages
Yujia Hu, Ming Shan Hee, Preslav Nakov, Roy Ka-Wei Lee
2025-09-18
safety
2509.15260v2
MUSE: MCTS-Driven Red Teaming Framework for Enhanced Multi-Turn Dialogue Safety in Large Language Models
Siyu Yan, Long Zeng, Xuecheng Wu, Chengcheng Han, Kongcheng Zhang, Chong Peng, Xuezhi Cao, Xunliang Cai, Chenjuan Guo
2025-09-18
red teaming
2509.14651v1
LLM Jailbreak Detection for (Almost) Free!
Guorui Chen, Yifan Xia, Xiaojun Jia, Zhijiang Li, Philip Torr, Jindong Gu
2025-09-18
red teaming
2509.14558v1
A Simple and Efficient Jailbreak Method Exploiting LLMs' Helpfulness
Xuan Luo, Yue Wang, Zefeng He, Geng Tu, Jing Li, Ruifeng Xu
2025-09-17
red teaming
2509.14297v1
Agentic JWT: A Secure Delegation Protocol for Autonomous AI Agents
Abhishek Goswami
2025-09-16
2509.13597v1
A Multi-Agent LLM Defense Pipeline Against Prompt Injection Attacks
S M Asif Hossain, Ruksat Khan Shayoni, Mohd Ruhul Ameen, Akif Islam, M. F. Mridha, Jungpil Shin
2025-09-16
safety
2509.14285v2
Jailbreaking Large Language Models Through Content Concretization
Johan Wahréus, Ahmed Hussain, Panos Papadimitratos
2025-09-16
red teaming
2509.12937v1
Defense-to-Attack: Bypassing Weak Defenses Enables Stronger Jailbreaks in Vision-Language Models
Yunhan Zhao, Xiang Zheng, Xingjun Ma
2025-09-16
red teaming
2509.12724v1
Early Approaches to Adversarial Fine-Tuning for Prompt Injection Defense: A 2022 Study of GPT-3 and Contemporary Models
Gustavo Sandoval, Denys Fenchenko, Junyao Chen
2025-09-15
red teaming
2509.14271v1
Reasoned Safety Alignment: Ensuring Jailbreak Defense via Answer-Then-Check
Chentao Cao, Xiaojun Xu, Bo Han, Hang Li
2025-09-15
2509.11629v1
September 08 - September 14, 2025
8 papers
Securing AI Agents: Implementing Role-Based Access Control for Industrial Applications
Aadil Gani Ganie
2025-09-14
2509.11431v1
When Smiley Turns Hostile: Interpreting How Emojis Trigger LLMs' Toxicity
Shiyao Cui, Xijia Feng, Yingkang Wang, Junxiao Yang, Zhexin Zhang, Biplab Sikdar, Hongning Wang, Han Qiu, Minlie Huang
2025-09-14
red teaming
2509.11141v1
ENJ: Optimizing Noise with Genetic Algorithms to Jailbreak LSMs
Yibo Zhang, Liang Lin
2025-09-14
2509.11128v1
Harmful Prompt Laundering: Jailbreaking LLMs with Abductive Styles and Symbolic Encoding
Seongho Joo, Hyukhun Koh, Kyomin Jung
2025-09-13
red teaming
2509.10931v1
Prompt Injection Attacks on LLM Generated Reviews of Scientific Publications
Janis Keuper
2025-09-12
red teaming
2509.10248v3
Realism Control One-step Diffusion for Real-World Image Super-Resolution
Zongliang Wu, Siming Zheng, Peng-Tao Jiang, Xin Yuan
2025-09-12
2509.10122v2
When Your Reviewer is an LLM: Biases, Divergence, and Prompt Injection Risks in Peer Review
Changjia Zhu, Junjie Xiong, Renkai Ma, Zhicong Lu, Yao Liu, Lingyao Li
2025-09-12
red teaming
2509.09912v1
Steering MoE LLMs via Expert (De)Activation
Mohsen Fayyaz, Ali Modarressi, Hanieh Deilamsalehy, Franck Dernoncourt, Ryan Rossi, Trung Bui, Hinrich Schütze, Nanyun Peng
2025-09-11
red teaming
2509.09660v1