Source: backend/agents/notes/TECHNICAL_AGENT_CONCEPT.md

AUTHOR: Tony Mudau

Technical Analysis Agent Concept

This document explains how the technical_agent works at a conceptual level: what inputs it needs, how it decides, and how it uses memory over time.

Purpose

The agent produces a trade hypothesis (buy or sell) from OHLCV market data using:

  • technical indicator context (RSI, MACD, EMA, ATR)
  • deterministic entry/exit rules
  • lightweight historical memory (pattern win rates)
  • a short user-facing explanation

If market conditions do not satisfy rule gates, it returns None (no trade).

End-to-End Flow

  1. Build market context

    • Requires at least 220 bars with high, low, and close columns.
    • Computes indicators and derives:
      • trend from EMA50 vs EMA200 (bullish, bearish, neutral)
      • rsi_state (oversold, neutral, overbought)
      • macd_signal (bullish, bearish)
      • volatility bucket from ATR/price ratio (low, medium, high)
      • recent range (high, low, lookback bars)
  2. Load memory

    • Reads compact files from backend/agents/memory:
      • <SYMBOL>_trades.json
      • <SYMBOL>_patterns.json
    • If trades exist but patterns are missing, patterns are regenerated from trades.
  3. Select relevant patterns

    • Patterns are scored by context match quality plus stored win rate.
    • Top-scoring patterns are passed into hypothesis generation.
  4. Generate trade hypothesis

    • Entry direction is rule-based:
      • buy: bullish trend + bullish MACD (or bearish reversal case), while not overbought
      • sell: bearish trend + bearish MACD, while not oversold
    • If no valid direction, returns None.
  5. Risk levels and confidence

    • Stop distance = max(ATR * 1.5, price * 0.001).
    • Take profit uses 2:1 reward-to-risk.
    • Confidence combines:
      • historical pattern win rate
      • current indicator alignment with direction
      • trend strength (EMA50-EMA200 separation)
  6. Explain and log

    • Produces a deterministic explanation, optionally refined by an LLM if configured.
    • Logs the accepted hypothesis to notes and to the SQLite decisions table.
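The context classification in step 1 can be sketched as follows. This is an illustrative sketch only: the function name and the exact thresholds (RSI 30/70 bounds, ATR/price bucket cutoffs) are assumptions, not the agent's actual values.

```python
# Illustrative sketch of the market-context classification described in step 1.
# Thresholds here (RSI 30/70, ATR/price cutoffs) are assumed, not confirmed.

def classify_context(ema50, ema200, rsi, macd_hist, atr, price):
    """Map raw indicator values to the categorical context fields."""
    # Trend: relative position of the fast vs slow EMA.
    if ema50 > ema200:
        trend = "bullish"
    elif ema50 < ema200:
        trend = "bearish"
    else:
        trend = "neutral"

    # RSI state with conventional 30/70 bounds.
    if rsi < 30:
        rsi_state = "oversold"
    elif rsi > 70:
        rsi_state = "overbought"
    else:
        rsi_state = "neutral"

    # MACD signal from the histogram sign.
    macd_signal = "bullish" if macd_hist > 0 else "bearish"

    # Volatility bucket from the ATR-to-price ratio (assumed cutoffs).
    ratio = atr / price
    if ratio < 0.005:
        volatility = "low"
    elif ratio < 0.015:
        volatility = "medium"
    else:
        volatility = "high"

    return {
        "trend": trend,
        "rsi_state": rsi_state,
        "macd_signal": macd_signal,
        "volatility": volatility,
    }
```

The returned dict mirrors the context fields that later feed pattern matching and memory.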
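Steps 4 and 5 can be sketched together. The stop and take-profit math follows the formulas above exactly; the bearish-reversal buy case is simplified here to "oversold RSI in a bearish trend," which is an assumption about how that gate works.

```python
# Sketch of the rule gate (step 4) and risk levels (step 5).
# The reversal-buy condition is an assumed simplification.

def decide_direction(ctx):
    """Return 'buy', 'sell', or None when no rule gate passes."""
    if (ctx["trend"] == "bullish" and ctx["macd_signal"] == "bullish"
            and ctx["rsi_state"] != "overbought"):
        return "buy"
    if ctx["trend"] == "bearish" and ctx["rsi_state"] == "oversold":
        return "buy"  # assumed bearish-reversal case
    if (ctx["trend"] == "bearish" and ctx["macd_signal"] == "bearish"
            and ctx["rsi_state"] != "oversold"):
        return "sell"
    return None  # no valid direction: no trade


def risk_levels(direction, price, atr):
    """Stop distance = max(ATR * 1.5, price * 0.001); take profit at 2:1."""
    stop_distance = max(atr * 1.5, price * 0.001)
    if direction == "buy":
        return price - stop_distance, price + 2 * stop_distance
    return price + stop_distance, price - 2 * stop_distance
```

For example, a buy at price 100 with ATR 2.0 yields a stop distance of 3.0, so a stop at 97 and a take profit at 106.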

Memory Model

The agent keeps memory intentionally small and task-focused.

Trade memory (<SYMBOL>_trades.json)

Each stored row is minimal:

  • t (timestamp)
  • direction
  • trend
  • rsi_state
  • macd_signal
  • volatility
  • won (boolean outcome)

Only the most recent N records are kept (default cap: 300).
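A stored row and the size cap can be illustrated as follows; the field values are examples, not real data.

```python
import json

# One illustrative trade-memory row; values are examples only.
trade = {
    "t": "2024-05-01T14:30:00Z",
    "direction": "buy",
    "trend": "bullish",
    "rsi_state": "neutral",
    "macd_signal": "bullish",
    "volatility": "low",
    "won": True,
}

# The cap keeps only the newest N rows (default 300).
MAX_TRADES = 300
trades = [trade] * 350
trades = trades[-MAX_TRADES:]  # oldest rows are dropped first
```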

Pattern memory (<SYMBOL>_patterns.json)

Patterns are aggregated buckets from trade memory by:

  • direction + trend + rsi_state + macd_signal + volatility

Each pattern stores:

  • name (example: sell_bearish_neutral_bearish_low)
  • direction
  • context fields above
  • sample_size
  • win_rate
  • updated_at

This gives the next analysis exactly what it needs without storing large raw histories.
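The aggregation step can be sketched like this; the function name is assumed, but the bucket key and stored fields follow the description above.

```python
from collections import defaultdict
from datetime import datetime, timezone

# Sketch of rebuilding pattern memory from trade rows.
# The function name is an assumption; the real module may differ.
def build_patterns(trades):
    buckets = defaultdict(list)
    for t in trades:
        key = (t["direction"], t["trend"], t["rsi_state"],
               t["macd_signal"], t["volatility"])
        buckets[key].append(t["won"])

    patterns = []
    for key, outcomes in buckets.items():
        direction, trend, rsi_state, macd_signal, volatility = key
        patterns.append({
            "name": "_".join(key),  # e.g. sell_bearish_neutral_bearish_low
            "direction": direction,
            "trend": trend,
            "rsi_state": rsi_state,
            "macd_signal": macd_signal,
            "volatility": volatility,
            "sample_size": len(outcomes),
            "win_rate": sum(outcomes) / len(outcomes),
            "updated_at": datetime.now(timezone.utc).isoformat(),
        })
    return patterns
```

Two sell trades sharing the same context, one won and one lost, collapse into a single pattern with sample_size 2 and win_rate 0.5.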

Why memory_not_available appears

pattern_used becomes memory_not_available when no relevant patterns are loaded (for example, first run before outcomes are recorded).

After recording outcomes and refreshing patterns, pattern_used should reflect a learned pattern name.

How memory is updated

Use record_trade_outcome(symbol, context, direction, won=...) after a trade closes.

That function:

  1. writes a compact trade outcome
  2. enforces memory size cap
  3. refreshes aggregated pattern memory

So each subsequent analyze() call can leverage current historical performance.
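The three steps above can be sketched as follows. The file layout matches this document, but the signature details and the pattern-refresh helper are assumptions; the real implementation may differ.

```python
import json
from pathlib import Path

# Illustrative sketch of the update path after a trade closes.
MAX_TRADES = 300

def record_trade_outcome(symbol, context, direction, won,
                         memory_dir="backend/agents/memory"):
    path = Path(memory_dir) / f"{symbol}_trades.json"
    trades = json.loads(path.read_text()) if path.exists() else []

    # 1. write a compact trade outcome
    trades.append({**context, "direction": direction, "won": won})
    # 2. enforce the memory size cap (keep the newest rows)
    trades = trades[-MAX_TRADES:]
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(trades))
    # 3. refresh aggregated pattern memory (helper name assumed,
    #    left abstract here):
    # rebuild_patterns(symbol, trades)
    return len(trades)
```

Calling this after each closed trade keeps <SYMBOL>_trades.json capped and <SYMBOL>_patterns.json current for the next analyze() call.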

Design Principles

  • Deterministic first: core decision is rule-based and reproducible.
  • Memory as calibration: history influences confidence, not core signal gating.
  • Small footprint: store only fields needed for future inference.
  • Resilience: memory read/write errors do not crash inference; agent still returns best-effort output.