
Scaf-GRPO: Scaffolded Group Relative Policy Optimization for Enhancing LLM Reasoning

Authors: Xichen Zhang, Sitong Wu, Yinghao Zhu, Haoru Tan, Shaozuo Yu, Ziyi He, Jiaya Jia

Published: 2025-10-22

arXiv ID: 2510.19807v1

Added to Library: 2025-11-14 23:08 UTC

📄 Abstract

Reinforcement learning from verifiable rewards has emerged as a powerful technique for enhancing the complex reasoning abilities of Large Language Models (LLMs). However, these methods are fundamentally constrained by the "learning cliff" phenomenon: when faced with problems far beyond their current capabilities, models consistently fail, yielding a persistent zero-reward signal. In policy optimization algorithms like GRPO, this collapses the advantage calculation to zero, rendering these difficult problems invisible to the learning gradient and stalling progress. To overcome this, we introduce Scaf-GRPO (Scaffolded Group Relative Policy Optimization), a progressive training framework that strategically provides minimal guidance only when a model's independent learning has plateaued. The framework first diagnoses learning stagnation and then intervenes by injecting tiered in-prompt hints, ranging from abstract concepts to concrete steps, enabling the model to construct a valid solution by itself. Extensive experiments on challenging mathematics benchmarks demonstrate Scaf-GRPO's effectiveness, boosting the pass@1 score of the Qwen2.5-Math-7B model on the AIME24 benchmark by a relative 44.3% over a vanilla GRPO baseline. This result demonstrates that our framework provides a robust and effective methodology for unlocking a model's ability to solve problems previously beyond its reach, a critical step towards extending the frontier of autonomous reasoning in LLMs.
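
The "learning cliff" described in the abstract is visible directly in the arithmetic of a group-relative advantage. The following is a minimal illustrative sketch, not the paper's implementation; the epsilon term and the exact normalization are assumptions.

```python
import statistics

def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style group-relative advantage: each rollout's reward is
    normalized against the mean (and std) of its sampled group.
    When every rollout fails (all rewards are zero), every advantage is
    exactly zero and the problem contributes nothing to the gradient."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# A problem far beyond the model's current ability: all sampled solutions fail.
print(group_relative_advantages([0.0, 0.0, 0.0, 0.0]))   # [0.0, 0.0, 0.0, 0.0]

# Once even one rollout succeeds, the group carries a learning signal again.
print(group_relative_advantages([1.0, 0.0, 0.0, 0.0]))   # one positive, three negative advantages
```

Scaf-GRPO's scaffolding is aimed at exactly the first case: if a hinted attempt lets at least one rollout earn a reward, the group's advantages stop being identically zero.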

🔍 Key Points

  • Identifies the "learning cliff" that limits reinforcement learning from verifiable rewards: problems far beyond a model's current capability yield all-zero rewards, so GRPO's group-relative advantage collapses to zero and those problems stop contributing to the learning gradient.
  • Proposes Scaf-GRPO (Scaffolded Group Relative Policy Optimization), a progressive training framework that first diagnoses when a model's independent learning has plateaued and only then intervenes.
  • Intervenes by injecting tiered in-prompt hints, ranging from abstract concepts to concrete steps, so that the model still constructs a valid solution by itself (see the sketch after this list).
  • On challenging mathematics benchmarks, Scaf-GRPO raises the pass@1 score of Qwen2.5-Math-7B on AIME24 by a relative 44.3% over a vanilla GRPO baseline.

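The scaffolding loop sketched below shows one way the tiered-hint idea could be wired up. The tier names, the stagnation test, and the prompt format are illustrative assumptions made for this sketch, not the paper's exact procedure.

```python
from typing import Dict, Optional

# Ordered from most abstract to most concrete (assumed tier names).
HINT_TIERS = ["concept", "key_step", "partial_solution"]

def build_prompt(problem: str, hints: Dict[str, str], tier: Optional[int]) -> str:
    """Return the problem either unassisted or with one tier's hint appended
    in-prompt, so the model still writes the full solution itself."""
    if tier is None:
        return problem
    name = HINT_TIERS[tier]
    return f"{problem}\n\nHint ({name}): {hints[name]}"

def next_tier(all_rollouts_failed: bool, tier: Optional[int]) -> Optional[int]:
    """Escalate to a more concrete hint only when the whole rollout group keeps
    earning zero reward, i.e. independent learning on this problem has plateaued."""
    if not all_rollouts_failed:
        return tier  # keep the current level of scaffolding (possibly none)
    if tier is None:
        return 0
    return min(tier + 1, len(HINT_TIERS) - 1)
```

The key design point carried over from the abstract is that guidance is minimal and conditional: hints are only introduced, and only made more concrete, after the model has demonstrably stalled on its own.
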
💡 Why This Paper Matters

Scaf-GRPO addresses a fundamental limitation of reinforcement learning from verifiable rewards: once a problem lies beyond a model's current reach, the persistent zero-reward signal makes it invisible to the policy gradient and progress stalls. By diagnosing this stagnation and supplying only the minimal in-prompt scaffolding needed for the model to complete a solution on its own, the framework turns previously unlearnable problems into usable training signal, offering a practical path toward extending the frontier of autonomous reasoning in LLMs.

🎯 Why It's Interesting for AI Security Researchers

Researchers who evaluate, audit, or red-team LLMs need to understand how reasoning capabilities are pushed beyond a model's unaided frontier. Scaf-GRPO shows that staged in-prompt scaffolding combined with reward-driven fine-tuning can unlock problems a model previously could not solve, which is relevant both to forecasting capability growth and to scrutinizing training pipelines built on verifiable rewards. The paper's analysis of the learning cliff also clarifies when reward-only training silently fails to make progress, a failure mode worth accounting for in evaluation and safety methodologies that depend on such pipelines.

📚 Read the Full Paper