Paper Library
A collection of AI security research papers
Showing 1172 papers total
November 24 - November 30, 2025
24 papers
DiverseVAR: Balancing Diversity and Quality of Next-Scale Visual Autoregressive Models
Mingue Park, Prin Phunyaphibarn, Phillip Y. Lee, Minhyuk Sung
2025-11-26
2511.21415v1
Self-Guided Defense: Adaptive Safety Alignment for Reasoning Models via Synthesized Guidelines
Yuhang Wang, Yanxu Zhu, Dongyuan Lu, Jitao Sang
2025-11-26
2511.21214v2
Breaking the Safety-Capability Tradeoff: Reinforcement Learning with Verifiable Rewards Maintains Safety Guardrails in LLMs
Dongkyu Derek Cho, Huan Song, Arijit Ghosh Chowdhury, Haotian An, Yawei Wang, Rohit Thekkanal, Negin Sokhandan, Sharlina Keshava, Hannah Marlowe
2025-11-26
safety
2511.21050v1
CameraMaster: Unified Camera Semantic-Parameter Control for Photography Retouching
Qirui Yang, Yang Yang, Ying Zeng, Xiaobin Hu, Bo Li, Huanjing Yue, Jingyu Yang, Peng-Tao Jiang
2025-11-26
2511.21024v1
BrowseSafe: Understanding and Preventing Prompt Injection Within AI Browser Agents
Kaiyuan Zhang, Mark Tenenholtz, Kyle Polley, Jerry Ma, Denis Yarats, Ninghui Li
2025-11-25
red teaming
2511.20597v1
Adversarial Confusion Attack: Disrupting Multimodal Large Language Models
Jakub Hoscilowicz, Artur Janicki
2025-11-25
red teaming
2511.20494v3
A Training-Free Approach for Multi-ID Customization via Attention Adjustment and Spatial Control
Jiawei Lin, Guanlong Jiao, Jianjin Xu
2025-11-25
2511.20401v1
Learning from Risk: LLM-Guided Generation of Safety-Critical Scenarios with Prior Knowledge
Yuhang Wang, Heye Huang, Zhenhua Xu, Kailai Sun, Baoshen Guo, Jinhua Zhao
2025-11-25
safety
2511.20726v1
SAM-MI: A Mask-Injected Framework for Enhancing Open-Vocabulary Semantic Segmentation with SAM
Lin Chen, Yingjian Zhu, Qi Yang, Xin Niu, Kun Ding, Shiming Xiang
2025-11-25
2511.20027v1
NOEM$^{3}$A: A Neuro-Symbolic Ontology-Enhanced Method for Multi-Intent Understanding in Mobile Agents
Ioannis Tzachristas, Aifen Sui
2025-11-24
2511.19780v1
Prompt Fencing: A Cryptographic Approach to Establishing Security Boundaries in Large Language Model Prompts
Steven Peh
2025-11-24
2511.19727v1
LumiTex: Towards High-Fidelity PBR Texture Generation with Illumination Context
Jingzhi Bao, Hongze Chen, Lingting Zhu, Chenyu Liu, Runze Zhang, Keyang Luo, Zeyu Hu, Weikai Chen, Yingda Yin, Xin Wang, Zehong Lin, Jun Zhang, Xiaoguang Han
2025-11-24
2511.19437v1
Adversarial Attack-Defense Co-Evolution for LLM Safety Alignment via Tree-Group Dual-Aware Search and Optimization
Xurui Li, Kaisong Song, Rui Zhu, Pin-Yu Chen, Haixu Tang
2025-11-24
red teaming
safety
2511.19218v2
Can LLMs Threaten Human Survival? Benchmarking Potential Existential Threats from LLMs via Prefix Completion
Yu Cui, Yifei Liu, Hang Fu, Sicheng Pan, Haibin Zhang, Cong Zuo, Licheng Wang
2025-11-24
red teaming
2511.19171v1
Medical Malice: A Dataset for Context-Aware Safety in Healthcare LLMs
Andrew Maranhão Ventura D'addario
2025-11-24
safety
2511.21757v1
Understanding and Mitigating Over-refusal for Large Language Models via Safety Representation
Junbo Zhang, Ran Chen, Qianli Zhou, Xinyang Deng, Wen Jiang
2025-11-24
2511.19009v1
Defending Large Language Models Against Jailbreak Exploits with Responsible AI Considerations
Ryan Wong, Hosea David Yu Fei Ng, Dhananjai Sharma, Glenn Jun Jie Ng, Kavishvaran Srinivasan
2025-11-24
2511.18933v1
BackdoorVLM: A Benchmark for Backdoor Attacks on Vision-Language Models
Juncheng Li, Yige Li, Hanxun Huang, Yunhao Chen, Xin Wang, Yixu Wang, Xingjun Ma, Yu-Gang Jiang
2025-11-24
2511.18921v1
EAGER: Edge-Aligned LLM Defense for Robust, Efficient, and Accurate Cybersecurity Question Answering
Onat Gungor, Roshan Sood, Jiasheng Zhou, Tajana Rosing
2025-11-24
safety
2511.19523v1
RoguePrompt: Dual-Layer Ciphering for Self-Reconstruction to Circumvent LLM Moderation
Benyamin Tafreshian
2025-11-24
red teaming
2511.18790v1