← Back to Library

An Adaptive Multi Agent Bitcoin Trading System

Authors: Aadi Singhi

Published: 2025-10-09

arXiv ID: 2510.08068v2

Added to Library: 2025-11-17 03:01 UTC

📄 Abstract

This paper presents a Multi Agent Bitcoin Trading system that utilizes Large Language Models (LLMs) for alpha generation and portfolio management in the cryptocurrency market. Unlike equities, cryptocurrencies exhibit extreme volatility and are heavily influenced by rapidly shifting market sentiment and regulatory announcements, making them difficult to model with static regression models or neural networks trained solely on historical data. The proposed framework overcomes this by structuring LLMs into specialised agents for technical analysis, sentiment evaluation, decision-making, and performance reflection. The agents improve over time via a novel verbal feedback mechanism in which a Reflect agent provides daily and weekly natural-language critiques of trading decisions. These textual evaluations are then injected into the agents' future prompts, allowing them to adjust allocation logic without weight updates or fine-tuning. Back-testing on Bitcoin price data from July 2024 to April 2025 shows consistent outperformance across market regimes: the Quantitative agent delivered over 30% higher returns in bullish phases and 15% overall gains versus buy-and-hold, while the sentiment-driven agent turned sideways markets from a small loss into a gain of over 100%. Adding weekly feedback further improved total performance by 31% and reduced bearish losses by 10%. The results demonstrate that verbal feedback represents a new, scalable, and low-cost approach to tuning LLMs for financial goals.
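The verbal feedback mechanism described above can be sketched as a small prompt-construction loop: critiques from the Reflect agent are stored as plain text and prepended to the trading agent's next prompt, so behaviour changes without any weight updates. This is a minimal illustration, not the paper's implementation; the class and method names (`ReflectMemory`, `build_prompt`) are hypothetical, and the actual LLM call is omitted.

```python
from collections import deque


class ReflectMemory:
    """Stores natural-language critiques and injects them into future prompts.

    Hypothetical sketch of the paper's verbal feedback idea: daily and
    weekly critiques are kept as text and prepended to the next prompt,
    so the trading agent adapts without fine-tuning.
    """

    def __init__(self, max_daily: int = 5, max_weekly: int = 4):
        # Bounded deques keep the prompt from growing without limit.
        self.daily = deque(maxlen=max_daily)
        self.weekly = deque(maxlen=max_weekly)

    def add_daily(self, critique: str) -> None:
        self.daily.append(critique)

    def add_weekly(self, critique: str) -> None:
        self.weekly.append(critique)

    def build_prompt(self, market_context: str) -> str:
        # Prepend prior critiques so the decision-making agent can adjust
        # its allocation logic purely through in-context text.
        sections = []
        if self.weekly:
            sections.append("Weekly critiques:\n"
                            + "\n".join(f"- {c}" for c in self.weekly))
        if self.daily:
            sections.append("Daily critiques:\n"
                            + "\n".join(f"- {c}" for c in self.daily))
        sections.append("Market context:\n" + market_context)
        sections.append("Decide today's BTC allocation (0-100%).")
        return "\n\n".join(sections)
```

In use, the returned string would be sent to the LLM as (part of) its prompt each trading day, with new critiques appended after each daily or weekly review.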

🔍 Key Points

  • Structures LLMs into specialised agents for technical analysis, sentiment evaluation, decision-making, and performance reflection in Bitcoin trading.
  • Introduces a novel verbal feedback mechanism: a Reflect agent issues daily and weekly natural-language critiques that are injected into future prompts, adjusting allocation logic without weight updates or fine-tuning.
  • Back-testing on Bitcoin price data from July 2024 to April 2025 shows consistent outperformance across market regimes.
  • The Quantitative agent delivered over 30% higher returns in bullish phases and 15% overall gains versus buy-and-hold, while the sentiment-driven agent turned sideways markets from a small loss into a gain of over 100%.
  • Adding weekly feedback further improved total performance by 31% and reduced bearish losses by 10%.

💡 Why This Paper Matters

The paper is relevant because it demonstrates a scalable, low-cost alternative to fine-tuning: LLM agents improve at a financial task purely through natural-language critiques injected into their prompts. In a domain like cryptocurrency, where extreme volatility and rapidly shifting sentiment defeat static models trained on historical data, this prompt-level adaptation let the agents outperform buy-and-hold across bullish, bearish, and sideways regimes.

🎯 Why It's Interesting for AI Security Researchers

This research is of interest to AI security researchers because it shows how substantially an LLM agent's behaviour can be steered by text injected into its prompts: the same verbal feedback channel that improves trading performance here illustrates the power of prompt-level control over multi-agent systems. Understanding how injected critiques reshape agent decisions, without any weight updates, informs both the design of adaptive agent pipelines and the assessment of their robustness when prompt contents cannot be fully trusted.

📚 Read the Full Paper