โ† Back to Library

Role-Aware Language Models for Secure and Contextualized Access Control in Organizations

Authors: Saeed Almheiri, Yerulan Kongrat, Adrian Santosh, Ruslan Tasmukhanov, Josemaria Vera, Muhammad Dehan Al Kautsar, Fajri Koto

Published: 2025-07-31

arXiv ID: 2507.23465v1

Added to Library: 2025-08-01 04:00 UTC

Red Teaming

📄 Abstract

As large language models (LLMs) are increasingly deployed in enterprise settings, controlling model behavior based on user roles becomes an essential requirement. Existing safety methods typically assume uniform access and focus on preventing harmful or toxic outputs, without addressing role-specific access constraints. In this work, we investigate whether LLMs can be fine-tuned to generate responses that reflect the access privileges associated with different organizational roles. We explore three modeling strategies: a BERT-based classifier, an LLM-based classifier, and role-conditioned generation. To evaluate these approaches, we construct two complementary datasets. The first is adapted from existing instruction-tuning corpora through clustering and role labeling, while the second is synthetically generated to reflect realistic, role-sensitive enterprise scenarios. We assess model performance across varying organizational structures and analyze robustness to prompt injection, role mismatch, and jailbreak attempts.

๐Ÿ” Key Points

  • The paper introduces role-aware language models specifically designed for enforcing access control in organizational settings, addressing a significant gap in existing methods that typically assume uniform access for users.
  • Three distinct modeling strategies are explored: a BERT-based classifier, an LLM-based classifier, and role-conditioned generation, each demonstrating varying degrees of efficacy and robustness in handling role-specific access permissions.
  • Two complementary datasets are constructed, one repurposed from existing instruction-tuning corpora and one synthetically generated, to evaluate the role-awareness of the models under realistic enterprise scenarios; role-aware models achieve high accuracy in maintaining access control.
  • Extensive robustness analysis against prompt injections, role mismatches, and jailbreak attempts illustrates the practical security implications of role-aware models, especially in maintaining information confidentiality within organizations.
  • The study provides insights into the impact of different role encoding strategies on access control performance, highlighting the challenges and trade-offs between accuracy and security.
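To make the role-conditioned generation strategy concrete, here is a minimal illustrative sketch (not the paper's code). It assumes a hypothetical clearance hierarchy over roles and resources; a role-aware LLM would learn such a policy implicitly from fine-tuning data, whereas here the policy is spelled out explicitly. The role names, resource names, and the `[ROLE: ...]` prompt prefix are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch of role-conditioned access control (assumed policy, not
# the paper's implementation). Each role and resource is assigned a clearance
# level; a query is answered only when the requester's level meets the
# resource's required level.

ROLE_CLEARANCE = {      # hypothetical organizational roles
    "intern": 0,
    "engineer": 1,
    "manager": 2,
    "executive": 3,
}

RESOURCE_CLEARANCE = {  # hypothetical role-sensitive resources
    "public_docs": 0,
    "codebase": 1,
    "salary_bands": 2,
    "merger_plans": 3,
}

def role_conditioned_prompt(role: str, query: str) -> str:
    """Prepend the user's role to the query, mirroring role-conditioned generation."""
    return f"[ROLE: {role}] {query}"

def access_decision(role: str, resource: str) -> str:
    """Return the behavior a role-aware model should exhibit: answer or refuse."""
    if ROLE_CLEARANCE[role] >= RESOURCE_CLEARANCE[resource]:
        return "answer"
    return "refuse"

print(role_conditioned_prompt("engineer", "Summarize the salary bands."))
print(access_decision("engineer", "salary_bands"))    # engineer lacks clearance
print(access_decision("executive", "merger_plans"))
```

The same explicit policy also doubles as an evaluation oracle: a fine-tuned role-aware model's answer/refuse behavior can be compared against `access_decision` to measure access-control accuracy under role mismatch or prompt-injection attempts.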

💡 Why This Paper Matters

This paper's exploration and validation of role-aware language models contribute substantially to the safety and security of LLM deployments in organizational contexts. By establishing methods to enforce role-specific access control, this work helps mitigate risks associated with unauthorized information disclosure, making it a critical resource as enterprises increasingly adopt AI technologies.

🎯 Why It's Interesting for AI Security Researchers

For AI security researchers, this paper highlights an integral aspect of AI governance: access control. It provides a novel framework for ensuring that language models not only generate appropriate content but also adhere to security protocols based on user roles. This focus on role-awareness in LLMs is essential given growing concerns about data privacy, security breaches, and misuse of AI-generated content within organizations.

📚 Read the Full Paper