
Springdrift: An Auditable Persistent Runtime for LLM Agents with Case-Based Memory, Normative Safety, and Ambient Self-Perception

Authors: Seamus Brady

Published: 2026-04-06

arXiv ID: 2604.04660v1

Added to Library: 2026-04-07 03:00 UTC

Safety

📄 Abstract

We present Springdrift, a persistent runtime for long-lived LLM agents. The system integrates an auditable execution substrate (append-only memory, supervised processes, git-backed recovery), a case-based reasoning memory layer with hybrid retrieval (evaluated against a dense cosine baseline), a deterministic normative calculus for safety gating with auditable axiom trails, and continuous ambient self-perception via a structured self-state representation (the sensorium) injected each cycle without tool calls. These properties support behaviours difficult to achieve in session-bounded systems: cross-session task continuity, cross-channel context maintenance, end-to-end forensic reconstruction of decisions, and self-diagnostic behaviour. We report on a single-instance deployment over 23 days (19 operating days), during which the agent diagnosed its own infrastructure bugs, classified failure modes, identified an architectural vulnerability, and maintained context across email and web channels -- without explicit instruction. We introduce the term Artificial Retainer for this category: a non-human system with persistent memory, defined authority, domain-specific autonomy, and forensic accountability in an ongoing relationship with a specific principal -- distinguished from software assistants and autonomous agents, drawing on professional retainer relationships and the bounded autonomy of trained working animals. This is a technical report on a systems design and deployment case study, not a benchmark-driven evaluation. Evidence is from a single instance with a single operator, presented as illustration of what these architectural properties can support in practice. Implemented in Gleam on Erlang/OTP. Code, artefacts, and redacted operational logs will be available at https://github.com/seamus-brady/springdrift upon publication.
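
To make the auditable execution substrate concrete, the following is a minimal Python sketch of an append-only, cycle-level decision log. The class and field names are illustrative assumptions rather than Springdrift's API (the actual runtime is written in Gleam on Erlang/OTP), and a simple hash chain stands in for the tamper evidence the paper obtains from its append-only store and git-backed recovery.

```python
import hashlib
import json
import time
from pathlib import Path


class CycleAuditLog:
    """Append-only, cycle-level decision log (illustrative sketch).

    Each agent cycle appends exactly one JSON record; records are never
    mutated, so a decision can later be reconstructed end to end. A simple
    hash chain stands in here for the tamper evidence that the actual
    runtime gets from its append-only store and git-backed recovery.
    """

    def __init__(self, path: str) -> None:
        self.path = Path(path)
        self.path.touch(exist_ok=True)

    def _last_hash(self) -> str:
        # Walk the log to find the hash of the most recent record.
        last = "genesis"
        with self.path.open() as f:
            for line in f:
                last = json.loads(line)["record_hash"]
        return last

    def append_cycle(self, cycle_id: int, sensorium: dict,
                     retrieved_case_ids: list[str],
                     normative_verdict: dict, action: dict) -> dict:
        record = {
            "cycle_id": cycle_id,
            "timestamp": time.time(),
            "sensorium": sensorium,                  # self-state injected this cycle
            "retrieved_cases": retrieved_case_ids,   # case-based memory hits
            "normative_verdict": normative_verdict,  # axiom trail from the safety gate
            "action": action,
            "prev_hash": self._last_hash(),
        }
        payload = json.dumps(record, sort_keys=True)
        record["record_hash"] = hashlib.sha256(payload.encode()).hexdigest()
        with self.path.open("a") as f:
            f.write(json.dumps(record) + "\n")
        return record
```

Because records are only ever appended, any decision can be reconstructed after the fact by replaying the log, which is the property the abstract describes as end-to-end forensic reconstruction.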

🔍 Key Points

  • Springdrift introduces a persistent runtime for long-lived LLM agents, emphasizing full operational auditability through append-only memory and cycle-level decision logging, which enhances trust and accountability.
  • It integrates continuous ambient self-perception via a structured self-state representation (the sensorium), injected into the agent's context each cycle without tool calls, giving the agent ongoing awareness of its operational state and performance (see the sensorium sketch after this list).
  • The case-based reasoning memory layer uses hybrid retrieval that outperforms a dense cosine baseline by preserving temporal continuity in memory and surfacing successful past interactions (see the retrieval sketch after this list).
  • A deterministic normative calculus provides safety gating, evaluating proposed actions against defined ethical principles and recording an auditable axiom trail for each decision (a minimal gate is sketched after this list).
  • The study proposes a new category of AI system, the 'Artificial Retainer': an agent with persistent memory and bounded, domain-specific authority that acts autonomously on behalf of a specific principal while maintaining a forensic audit trail.
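
As a rough illustration of the sensorium, the sketch below assembles a structured self-state snapshot and injects it into the cycle prompt without any tool calls, as the paper describes; the specific fields (memory usage, queue depth, recent errors) and the function names are assumptions chosen for illustration, not the paper's schema.

```python
import datetime
import resource  # Unix-only; used here only to illustrate ambient self-measurement


def build_sensorium(cycle_id: int, queue_depth: int, recent_errors: list[str]) -> dict:
    """Assemble a structured self-state snapshot (field names are illustrative)."""
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "cycle_id": cycle_id,
        "utc_time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "memory_kb": usage.ru_maxrss,        # ambient resource reading, no tool call
        "pending_messages": queue_depth,     # e.g. unhandled email / web events
        "recent_errors": recent_errors[-5:], # tail of recent failures for self-diagnosis
    }


def inject_sensorium(prompt: str, sensorium: dict) -> str:
    """Prepend the self-state block to the cycle prompt."""
    lines = [f"{key}: {value}" for key, value in sensorium.items()]
    return "## self-state (sensorium)\n" + "\n".join(lines) + "\n\n" + prompt
```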
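
The retrieval sketch below blends dense cosine similarity with recency and past-outcome signals, which is one plausible reading of "hybrid retrieval"; the blend weights, the half-life, and the Case fields are guesses for illustration, and a plain cosine ranking over the embeddings corresponds to the dense baseline the paper compares against.

```python
import math
import time
from dataclasses import dataclass


@dataclass
class Case:
    case_id: str
    embedding: list[float]  # dense vector for the episode
    timestamp: float        # when the case was recorded
    outcome_score: float    # 0.0 (failed) .. 1.0 (succeeded)


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def hybrid_score(query_vec: list[float], case: Case,
                 now: float | None = None,
                 half_life_days: float = 7.0,
                 weights: tuple[float, float, float] = (0.6, 0.2, 0.2)) -> float:
    """Blend dense similarity with recency and past-outcome signals.

    The 0.6/0.2/0.2 blend and the 7-day half-life are illustrative guesses,
    not parameters from the paper.
    """
    now = time.time() if now is None else now
    age_days = max(now - case.timestamp, 0.0) / 86_400
    recency = 0.5 ** (age_days / half_life_days)  # exponential decay with age
    w_sim, w_rec, w_out = weights
    return (w_sim * cosine(query_vec, case.embedding)
            + w_rec * recency
            + w_out * case.outcome_score)


def retrieve(query_vec: list[float], cases: list[Case], k: int = 5) -> list[Case]:
    """Return the top-k cases under the hybrid score (ranking by cosine alone
    over the embeddings is the dense baseline the paper compares against)."""
    return sorted(cases, key=lambda c: hybrid_score(query_vec, c), reverse=True)[:k]
```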
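
Finally, a minimal deterministic safety gate: every axiom is evaluated in a fixed order and the full trail is recorded alongside the verdict, so the decision can be audited later. The Axiom and Verdict types and the example axioms are hypothetical and only gesture at the paper's normative calculus.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Axiom:
    """A single normative rule: a predicate that flags violating actions."""
    name: str
    violates: Callable[[dict], bool]


@dataclass
class Verdict:
    allowed: bool
    axiom_trail: list[str] = field(default_factory=list)  # every axiom checked, pass or fail


def evaluate(action: dict, axioms: list[Axiom]) -> Verdict:
    """Deterministic safety gate: axioms are checked in a fixed order and the
    full trail is recorded, so the decision can be reconstructed afterwards."""
    verdict = Verdict(allowed=True)
    for axiom in axioms:
        tripped = axiom.violates(action)
        verdict.axiom_trail.append(f"{axiom.name}: {'VIOLATED' if tripped else 'ok'}")
        if tripped:
            verdict.allowed = False
    return verdict


# Hypothetical axioms for illustration only.
AXIOMS = [
    Axiom("no_unapproved_outbound_email",
          lambda a: a.get("channel") == "email" and not a.get("principal_approved", False)),
    Axiom("stay_within_domain_authority",
          lambda a: a.get("domain") not in {"calendar", "email", "web_research"}),
]

verdict = evaluate({"channel": "email", "domain": "email", "principal_approved": False}, AXIOMS)
print(verdict.allowed, verdict.axiom_trail)
```

Determinism here means the same action and axiom set always yield the same verdict and the same trail, which is what makes the gate auditable rather than a matter of model judgement.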

💡 Why This Paper Matters

The paper presents Springdrift as an innovative framework for developing accountable and persistent LLM agents that address critical limitations of existing session-bounded systems. Its contributions are significant as they not only advance the architecture of AI systems but also establish foundational principles for trust and long-term cooperative interactions with human operators.

🎯 Why It's Interesting for AI Security Researchers

This paper is of interest to AI security researchers because it addresses trust in AI systems through enhanced auditability and decision-making transparency. Deterministic normative reasoning, combined with the agent's ability to reject harmful instructions on the basis of its normative commitments, offers a concrete approach to building safe and reliable autonomous systems, preventing misuse, and keeping such systems within their declared ethical bounds.
