Paper Library
A collection of AI security research papers.
770 papers total
November 10 - November 16, 2025
21 papers
Scaling Patterns in Adversarial Alignment: Evidence from Multi-LLM Jailbreak Experiments
Samuel Nathanson, Rebecca Williams, Cynthia Matuszek
2025-11-16
red teaming
2511.13788v1
GRAPHTEXTACK: A Realistic Black-Box Node Injection Attack on LLM-Enhanced GNNs
Jiaji Ma, Puja Trivedi, Danai Koutra
2025-11-16
red teaming
2511.12423v1
Privacy-Preserving Prompt Injection Detection for LLMs Using Federated Learning and Embedding-Based NLP Classification
Hasini Jayathilaka
2025-11-15
red teaming
2511.12295v1
Prompt-Conditioned FiLM and Multi-Scale Fusion on MedSigLIP for Low-Dose CT Quality Assessment
Tolga Demiroglu, Mehmet Ozan Unal, Metin Ertas, Isa Yildirim
2025-11-15
2511.12256v1
AlignTree: Efficient Defense Against LLM Jailbreak Attacks
Gil Goren, Shahar Katz, Lior Wolf
2025-11-15
safety
2511.12217v1
NegBLEURT Forest: Leveraging Inconsistencies for Detecting Jailbreak Attacks
Lama Sleem, Jerome Francois, Lujun Li, Nathan Foucher, Niccolo Gentile, Radu State
2025-11-14
red teaming
2511.11784v1
EcoAlign: An Economically Rational Framework for Efficient LVLM Alignment
Ruoxi Cheng, Haoxuan Ma, Teng Ma, Hongyi Zhang
2025-11-14
2511.11301v1
Synthetic Voices, Real Threats: Evaluating Large Text-to-Speech Models in Generating Harmful Audio
Guangke Chen, Yuhui Wang, Shouling Ji, Xiapu Luo, Ting Wang
2025-11-14
red teaming
2511.10913v1
ICX360: In-Context eXplainability 360 Toolkit
Dennis Wei, Ronny Luss, Xiaomeng Hu, Lucas Monteiro Paes, Pin-Yu Chen, Karthikeyan Natesan Ramamurthy, Erik Miehling, Inge Vejsbjerg, Hendrik Strobelt
2025-11-14
red teaming
2511.10879v1
Can AI Models be Jailbroken to Phish Elderly Victims? An End-to-End Evaluation
Fred Heiding, Simon Lermen
2025-11-13
red teaming
2511.11759v1
PISanitizer: Preventing Prompt Injection to Long-Context LLMs via Prompt Sanitization
Runpeng Geng, Yanting Wang, Chenlong Yin, Minhao Cheng, Ying Chen, Jinyuan Jia
2025-11-13
2511.10720v1
Say It Differently: Linguistic Styles as Jailbreak Vectors
Srikant Panda, Avinash Rai
2025-11-13
red teaming
2511.10519v1
EnchTable: Unified Safety Alignment Transfer in Fine-tuned Large Language Models
Jialin Wu, Kecen Li, Zhicong Huang, Xinfeng Li, Xiaofeng Wang, Cheng Hong
2025-11-13
2511.09880v1
A precessing magnetic jet as the engine of GRB 250702B
Tao An
2025-11-13
2511.09850v1
Hail to the Thief: Exploring Attacks and Defenses in Decentralised GRPO
Nikolay Blagoev, Oğuzhan Ersoy, Lydia Yiyu Chen
2025-11-12
red teaming
2511.09780v1
Rebellion: Noise-Robust Reasoning Training for Audio Reasoning Models
Tiansheng Huang, Virat Shejwalkar, Oscar Chang, Milad Nasr, Ling Liu
2025-11-12
red teaming
2511.09682v1
Toward Honest Language Models for Deductive Reasoning
Jiarui Liu, Kaustubh Dhole, Yingheng Wang, Haoyang Wen, Sarah Zhang, Haitao Mao, Gaotang Li, Neeraj Varshney, Jingguo Liu, Xiaoman Pan
2025-11-12
2511.09222v4
StyleBreak: Revealing Alignment Vulnerabilities in Large Audio-Language Models via Style-Aware Audio Jailbreak
Hongyi Li, Chengxuan Zhou, Chu Wang, Sicheng Liang, Yanting Chen, Qinlin Xie, Jiawei Ye, Jie Wu
2025-11-12
red teaming
2511.10692v1
iSeal: Encrypted Fingerprinting for Reliable LLM Ownership Verification
Zixun Xiong, Gaoyi Wu, Qingyang Yu, Mingyu Derek Ma, Lingfeng Yao, Miao Pan, Xiaojiang Du, Hao Wang
2025-11-12
2511.08905v2
Patching LLM Like Software: A Lightweight Method for Improving Safety Policy in Large Language Models
Huzaifa Arif, Keerthiram Murugesan, Ching-Yun Ko, Pin-Yu Chen, Payel Das, Alex Gittens
2025-11-11
safety
2511.08484v1
SOM Directions are Better than One: Multi-Directional Refusal Suppression in Language Models
Giorgio Piras, Raffaele Mura, Fabio Brau, Luca Oneto, Fabio Roli, Battista Biggio
2025-11-11
2511.08379v2