AUTHOR: Tony Mudau
# Technical Analysis Agent Concept

This document explains how the `technical_agent` works at a conceptual level: what inputs it needs, how it decides, and how it uses memory over time.
## Purpose
The agent produces a trade hypothesis (buy or sell) from OHLCV market data using:
- technical indicator context (RSI, MACD, EMA, ATR)
- deterministic entry/exit rules
- lightweight historical memory (pattern win rates)
- a short user-facing explanation
If market conditions do not satisfy the rule gates, it returns `None` (no trade).
## End-to-End Flow
1. Build market context
   - Requires at least 220 bars and `high`, `low`, `close` columns.
   - Computes indicators and derives:
     - `trend` from EMA50 vs EMA200 (`bullish`, `bearish`, `neutral`)
     - `rsi_state` (`oversold`, `neutral`, `overbought`)
     - `macd_signal` (`bullish`/`bearish`)
     - `volatility` bucket from ATR/price ratio (`low`, `medium`, `high`)
     - recent range (`high`, `low`, lookback bars)
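The context-derivation step can be sketched as a mapping from raw indicator values onto the discrete fields above. This is an illustrative assumption, not the agent's actual code: the 30/70 RSI bounds and the ATR-ratio cutoffs for the volatility buckets are placeholder thresholds.

```python
def build_context(ema50: float, ema200: float, rsi: float,
                  macd_hist: float, atr: float, price: float) -> dict:
    """Map raw indicator values onto the discrete context fields.

    Thresholds here are illustrative assumptions; only the field names
    and bucket labels come from the document.
    """
    if ema50 > ema200:
        trend = "bullish"
    elif ema50 < ema200:
        trend = "bearish"
    else:
        trend = "neutral"

    if rsi < 30:
        rsi_state = "oversold"
    elif rsi > 70:
        rsi_state = "overbought"
    else:
        rsi_state = "neutral"

    macd_signal = "bullish" if macd_hist > 0 else "bearish"

    vol_ratio = atr / price           # ATR/price ratio drives the bucket
    if vol_ratio < 0.01:
        volatility = "low"
    elif vol_ratio < 0.03:
        volatility = "medium"
    else:
        volatility = "high"

    return {"trend": trend, "rsi_state": rsi_state,
            "macd_signal": macd_signal, "volatility": volatility}
```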
2. Load memory
   - Reads compact files from `backend/agents/memory`: `<SYMBOL>_trades.json` and `<SYMBOL>_patterns.json`.
   - If trades exist but patterns are missing, patterns are regenerated from trades.
3. Select relevant patterns
   - Patterns are scored by context match quality plus stored win rate.
   - Top-scoring patterns are passed into hypothesis generation.
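One way the scoring step could look, as a sketch: a weighted sum of context-match quality and stored win rate. The 0.6/0.4 weights and the `top_n` default are assumptions for illustration.

```python
# Context fields used for matching, as listed in the document.
CONTEXT_FIELDS = ("trend", "rsi_state", "macd_signal", "volatility")

def score_pattern(pattern: dict, context: dict) -> float:
    """Score = match quality plus stored win rate (weights are assumed)."""
    matches = sum(1 for f in CONTEXT_FIELDS if pattern.get(f) == context.get(f))
    match_quality = matches / len(CONTEXT_FIELDS)   # 0.0 .. 1.0
    return 0.6 * match_quality + 0.4 * pattern.get("win_rate", 0.0)

def select_patterns(patterns: list, context: dict, top_n: int = 3) -> list:
    """Return the top-scoring patterns for this market context."""
    ranked = sorted(patterns, key=lambda p: score_pattern(p, context),
                    reverse=True)
    return ranked[:top_n]
```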
4. Generate trade hypothesis
   - Entry direction is rule-based:
     - buy: bullish trend + bullish MACD (or the bearish-reversal case), while not overbought
     - sell: bearish trend + bearish MACD, while not oversold
   - If no valid direction, returns `None`.
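The entry gate above can be sketched as a small pure function. The exact shape of the bearish-reversal branch (here: oversold in a bearish trend) is an assumption; the overbought/oversold guards and the `None` fallthrough follow the rules as stated.

```python
from typing import Optional

def entry_direction(trend: str, rsi_state: str,
                    macd_signal: str) -> Optional[str]:
    """Rule-based entry direction; returns None when no gate is satisfied."""
    # buy: bullish trend + bullish MACD, or an assumed bearish-reversal
    # case (oversold within a bearish trend), while not overbought
    if macd_signal == "bullish" and rsi_state != "overbought":
        if trend == "bullish" or (trend == "bearish" and rsi_state == "oversold"):
            return "buy"
    # sell: bearish trend + bearish MACD, while not oversold
    if trend == "bearish" and macd_signal == "bearish" and rsi_state != "oversold":
        return "sell"
    return None  # no valid direction -> no trade
```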
5. Risk levels and confidence
   - Stop distance = `max(ATR * 1.5, price * 0.001)`.
   - Take profit uses a 2:1 reward-to-risk ratio.
   - Confidence combines:
     - historical pattern win rate
     - current indicator alignment with the chosen direction
     - trend strength (EMA50-EMA200 separation)
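The stop and take-profit arithmetic follows directly from the two formulas above; this sketch applies them for both directions.

```python
def risk_levels(price: float, atr: float, direction: str) -> tuple:
    """Compute stop and take-profit from the stated formulas."""
    # Stop distance = max(ATR * 1.5, price * 0.001)
    stop_dist = max(atr * 1.5, price * 0.001)
    if direction == "buy":
        stop = price - stop_dist
        take_profit = price + 2 * stop_dist   # 2:1 reward-to-risk
    else:
        stop = price + stop_dist
        take_profit = price - 2 * stop_dist
    return stop, take_profit
```

For example, a buy at price 100 with ATR 1.0 gives a stop distance of 1.5, so the stop sits at 98.5 and the take profit at 103.0.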
6. Explain and log
   - Produces a deterministic explanation, optionally refined by an LLM if configured.
   - Logs the accepted hypothesis to notes and the SQLite `decisions` table.
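The logging step might look like the following, using the standard-library `sqlite3` module. The document only names the `decisions` table, so the schema (an id, symbol, and JSON payload) is an assumption.

```python
import json
import sqlite3

def log_decision(db_path: str, symbol: str, hypothesis: dict) -> None:
    """Append an accepted hypothesis to the SQLite decisions table.

    The table schema here is an illustrative assumption.
    """
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS decisions (
                   id INTEGER PRIMARY KEY AUTOINCREMENT,
                   symbol TEXT NOT NULL,
                   payload TEXT NOT NULL
               )"""
        )
        conn.execute(
            "INSERT INTO decisions (symbol, payload) VALUES (?, ?)",
            (symbol, json.dumps(hypothesis)),
        )
```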
## Memory Model
The agent keeps memory intentionally small and task-focused.
### Trade memory (`<SYMBOL>_trades.json`)
Each stored row is minimal:
- `t` (timestamp)
- `direction`
- `trend`
- `rsi_state`
- `macd_signal`
- `volatility`
- `won` (boolean outcome)
Only the most recent N records are kept (default cap: 300).
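A sketch of one stored row and the cap enforcement, assuming a plain list of dicts; field names follow the document and the cap matches the stated default of 300.

```python
MAX_TRADES = 300  # default cap from the document

def append_trade(trades: list, record: dict) -> list:
    """Append a compact outcome row, keeping only the newest MAX_TRADES."""
    trades.append(record)
    return trades[-MAX_TRADES:]

# Illustrative example of one compact row (values are made up):
row = {
    "t": "2024-05-01T14:30:00Z",   # timestamp
    "direction": "buy",
    "trend": "bullish",
    "rsi_state": "neutral",
    "macd_signal": "bullish",
    "volatility": "low",
    "won": True,                   # boolean outcome
}
```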
### Pattern memory (`<SYMBOL>_patterns.json`)
Patterns are aggregated buckets from trade memory by:
- `direction` + `trend` + `rsi_state` + `macd_signal` + `volatility`
Each pattern stores:
- `name` (example: `sell_bearish_neutral_bearish_low`)
- `direction`
- the context fields above
- `sample_size`
- `win_rate`
- `updated_at`
This gives the next analysis exactly what it needs without storing large raw histories.
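The aggregation can be sketched as grouping trade rows by the five bucket fields and computing a per-bucket win rate. The `name` format matches the example above; everything else (dict shapes, helper name) is illustrative.

```python
from collections import defaultdict

# Bucket key fields, in the order used by the example pattern name.
KEY_FIELDS = ("direction", "trend", "rsi_state", "macd_signal", "volatility")

def build_patterns(trades: list) -> list:
    """Aggregate compact trade rows into pattern buckets with win rates."""
    buckets = defaultdict(lambda: {"wins": 0, "total": 0})
    for tr in trades:
        key = tuple(tr[f] for f in KEY_FIELDS)
        buckets[key]["total"] += 1
        buckets[key]["wins"] += 1 if tr["won"] else 0

    patterns = []
    for key, stats in buckets.items():
        pattern = dict(zip(KEY_FIELDS, key))
        pattern["name"] = "_".join(key)          # e.g. sell_bearish_neutral_bearish_low
        pattern["sample_size"] = stats["total"]
        pattern["win_rate"] = stats["wins"] / stats["total"]
        patterns.append(pattern)
    return patterns
```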
## Why `memory_not_available` appears
`pattern_used` becomes `memory_not_available` when no relevant patterns are loaded (for example, on the first run, before any outcomes are recorded).
After outcomes are recorded and patterns are refreshed, `pattern_used` should reflect a learned pattern name.
## How memory is updated
Use `record_trade_outcome(symbol, context, direction, won=...)` after a trade closes.
That function:
- writes a compact trade outcome
- enforces memory size cap
- refreshes aggregated pattern memory
So each subsequent `analyze()` call can leverage current historical performance.
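As a minimal, self-contained sketch of this update path (not the actual implementation): persist the compact row, enforce the size cap, then rebuild the aggregated pattern file. The function name, file names, directory, and cap come from the document; the `memory_dir` parameter and all internals are assumptions.

```python
import json
from pathlib import Path

MEMORY_DIR = Path("backend/agents/memory")  # location from the document
MAX_TRADES = 300                            # stated default cap

def record_trade_outcome(symbol: str, context: dict, direction: str,
                         won: bool, memory_dir=MEMORY_DIR) -> None:
    """Write a compact outcome, enforce the cap, refresh pattern memory."""
    memory_dir = Path(memory_dir)
    memory_dir.mkdir(parents=True, exist_ok=True)
    trades_path = memory_dir / f"{symbol}_trades.json"

    # 1. Write a compact trade outcome, 2. enforce the memory size cap.
    trades = json.loads(trades_path.read_text()) if trades_path.exists() else []
    trades.append({**context, "direction": direction, "won": won})
    trades = trades[-MAX_TRADES:]
    trades_path.write_text(json.dumps(trades))

    # 3. Refresh aggregated pattern memory (win rate per bucket).
    buckets = {}
    for tr in trades:
        name = "_".join(str(tr[k]) for k in
                        ("direction", "trend", "rsi_state",
                         "macd_signal", "volatility"))
        b = buckets.setdefault(name, {"name": name, "sample_size": 0, "wins": 0})
        b["sample_size"] += 1
        b["wins"] += 1 if tr["won"] else 0
    patterns = [{"name": b["name"], "sample_size": b["sample_size"],
                 "win_rate": b["wins"] / b["sample_size"]}
                for b in buckets.values()]
    (memory_dir / f"{symbol}_patterns.json").write_text(json.dumps(patterns))
```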
## Design Principles
- Deterministic first: core decision is rule-based and reproducible.
- Memory as calibration: history influences confidence, not core signal gating.
- Small footprint: store only fields needed for future inference.
- Resilience: memory read/write errors do not crash inference; agent still returns best-effort output.