AUTHOR: Tony Mudau
Lightweight Candidate Checks (Detailed Notes)
This note explains the lightweight candidate-check system used before full trade execution logic.
It documents the concepts, formulas, adaptive mechanisms, and telemetry feedback loop currently implemented in the orchestration layer.
1) Why lightweight candidate checks exist
In a multi-symbol run cycle, evaluating every symbol with full analysis is expensive and can produce low-quality entries when market conditions are temporarily poor (wide spread, dead tape, noisy micro-volatility).
The lightweight candidate-check stage solves this by:
- quickly rejecting clearly bad symbols early,
- reserving full analysis for plausible candidates,
- preserving speed under multi-symbol scans,
- improving overall quality of selected symbols.
This is a ranking and filtering stage, not a final trade-direction engine.
2) Pipeline position in run cycle
For each symbol, candidate evaluation follows this flow:
- Resolve broker symbol and read fast tick.
- Run fast prefilter checks (spread and short-term market aliveness).
- Fetch full entry/H1/D1 bars only if prefilter passes.
- Build technical + market contexts.
- Apply quality gates (confidence, EV, validation, macro conflict).
- Compute combined candidate score.
- Enforce score floor and mark candidate/reject.
The best surviving candidate is selected for downstream portfolio/risk/execution stages.
3) Core concepts in fast prefiltering
3.1 Fast spread sanity
The system computes:
- price_fast from ask/bid
- spread_fast (ask minus bid)
- spread_ratio_fast = spread_fast / price_fast
Rather than fixed global thresholds only, spread checks now use adaptive limits:
- spread_cap (soft concern region)
- spread_hard_reject (hard reject region)
These are adapted by symbol profile, session, and quick volatility baseline.
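The spread check above can be sketched as follows. This is a minimal illustration, not the production code: the mid-price definition and the two-region verdict logic follow the note, while the specific threshold values used in the usage example are placeholders.

```python
def fast_spread_check(bid: float, ask: float,
                      spread_cap: float, spread_hard_reject: float):
    """Return (spread_ratio_fast, verdict); verdict is 'ok', 'concern', or 'reject'."""
    price_fast = (bid + ask) / 2.0           # mid price from ask/bid
    spread_fast = ask - bid
    spread_ratio_fast = spread_fast / price_fast
    if spread_ratio_fast >= spread_hard_reject:
        return spread_ratio_fast, "reject"   # hard reject region
    if spread_ratio_fast >= spread_cap:
        return spread_ratio_fast, "concern"  # soft concern region
    return spread_ratio_fast, "ok"
```

In a live system spread_cap and spread_hard_reject would come from the adaptive limits described below, not from constants.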
3.2 Quick market aliveness check
A small OHLCV window (quick bars) is used to measure whether the market is structurally active enough:
- quick range ratio: mean of (high - low) / price over the window
Instead of relying on one fixed dead-market cutoff, the system uses:
- alive_floor (soft region)
- alive_hard_reject (hard reject)
Higher-timeframe quick context (H1 quick bars) is used to avoid false rejection when short-term bars look quiet but structure exists on higher timeframe.
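A hypothetical sketch of the aliveness check, including the H1 fallback described above. The bar representation and return labels are assumptions for illustration; the soft/hard regions and the higher-timeframe rescue follow the note.

```python
def market_aliveness(quick_bars, h1_bars, alive_floor, alive_hard_reject):
    """quick_bars / h1_bars: lists of (high, low, close) tuples."""
    def range_ratio(bars):
        # mean of (high - low) / price over the window
        return sum((h - l) / c for h, l, c in bars) / len(bars)

    r = range_ratio(quick_bars)
    if r >= alive_floor:
        return "alive"
    # Short-term bars look quiet; consult higher-timeframe structure
    # before rejecting.
    if h1_bars and range_ratio(h1_bars) >= alive_floor:
        return "alive_htf"
    return "reject" if r < alive_hard_reject else "soft_concern"
```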
4) Adaptive thresholding logic (anti-brittle design)
The earlier brittleness came from hard-coded constants; these have been replaced with adaptive, bounded limits.
Inputs used for adaptation
- Symbol trade profile (profile_for_symbol)
- Session label (asia, london, new_york, off_hours)
- Quick volatility from the entry timeframe
- Quick volatility from H1
- Recent rolling reject telemetry (symbol/session)
Practical behavior
- During difficult regimes (high reject-rate), thresholds relax slightly to avoid over-filtering.
- During very permissive regimes (low reject-rate), thresholds tighten slightly to avoid low-quality entries.
- All adaptive values are bounded so system behavior remains stable.
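The relax/tighten-with-clamps behavior can be expressed as a single bounded scaling rule. The step size and the treatment of 0.5 as a neutral reject rate are illustrative assumptions; the key property from the note is that the output is always clamped to a stable range.

```python
def adapt_threshold(base: float, reject_rate: float,
                    lo: float, hi: float, step: float = 0.2) -> float:
    """Scale a base threshold by the recent reject rate, clamped to [lo, hi].

    reject_rate is in [0, 1]; 0.5 is treated as neutral. High reject rates
    relax (raise) the threshold, low reject rates tighten (lower) it.
    """
    adjusted = base * (1.0 + step * (reject_rate - 0.5) * 2.0)
    return max(lo, min(hi, adjusted))
```

The clamp is what keeps adaptation bounded: even a pathological telemetry window can only move the threshold within [lo, hi].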
5) Precheck scoring concept
After passing fast prefilter, the symbol gets a precheck score from market quality factors.
Historically this score used:
- spread quality,
- volatility suitability,
- tick-volume liquidity proxy.
Tick-volume reliability correction
Spot FX tick volume quality varies by broker/feed.
To reduce noise, the system now computes a liquidity reliability estimate from the relationship between:
- tick volume,
- price-range dynamics.
The liquidity contribution is then downweighted when reliability is weak, so the precheck score stays driven primarily by the more robust spread and volatility signals.
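One plausible way to implement this, sketched below under stated assumptions: reliability is taken as the absolute Pearson correlation between tick volume and bar range over a window, and the liquidity weight in the precheck score scales with it. The correlation choice and the specific weights are illustrative, not the production formula.

```python
def liquidity_reliability(tick_volumes, bar_ranges):
    """Absolute Pearson correlation between tick volume and price range."""
    n = len(tick_volumes)
    mv = sum(tick_volumes) / n
    mr = sum(bar_ranges) / n
    cov = sum((v - mv) * (r - mr) for v, r in zip(tick_volumes, bar_ranges))
    var_v = sum((v - mv) ** 2 for v in tick_volumes)
    var_r = sum((r - mr) ** 2 for r in bar_ranges)
    if var_v == 0 or var_r == 0:
        return 0.0  # degenerate window: treat volume as unreliable
    return abs(cov / (var_v ** 0.5 * var_r ** 0.5))

def precheck_score(spread_q, vol_q, liq_q, reliability):
    """Blend quality factors, downweighting liquidity when reliability is weak."""
    w_liq = 0.3 * reliability            # illustrative max liquidity weight
    w_rest = (1.0 - w_liq) / 2.0         # remainder split over spread/vol
    return w_rest * spread_q + w_rest * vol_q + w_liq * liq_q
```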
6) Combined candidate score concept
A symbol that passes quality checks receives a combined score built from:
- technical confidence (or EV-blended effective confidence),
- market/macro confidence blend,
- precheck score,
- MTF alignment score,
- expected-value component (from win-rate and R:R),
- macro alignment/conflict filter adjustment.
This score is used for:
- filtering low-quality symbols,
- ranking the remaining symbols,
- selecting the best candidate for the cycle.
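A sketch of the combined-score blend, assuming a weighted sum of the components listed above. The weights are placeholders, and the expected-value helper uses the standard per-unit-risk EV formula (p * R - (1 - p)) with an illustrative mapping into [0, 1]; the production blend may differ.

```python
def expected_value_component(win_rate: float, rr: float) -> float:
    """EV per unit risk from win-rate and R:R, mapped into [0, 1] (illustrative)."""
    ev = win_rate * rr - (1.0 - win_rate)
    return max(0.0, min(1.0, 0.5 + ev / (2.0 * rr)))

def combined_score(confidence, macro_blend, precheck, mtf_alignment,
                   expected_value, macro_adjustment=0.0):
    weights = {  # assumed weights, normalised to sum to 1.0
        "confidence": 0.30, "macro": 0.15, "precheck": 0.20,
        "mtf": 0.15, "ev": 0.20,
    }
    score = (weights["confidence"] * confidence
             + weights["macro"] * macro_blend
             + weights["precheck"] * precheck
             + weights["mtf"] * mtf_alignment
             + weights["ev"] * expected_value)
    # Macro alignment/conflict shifts the score; clamp to keep it in [0, 1].
    return max(0.0, min(1.0, score + macro_adjustment))
```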
7) Dynamic score floor (replacing fixed 0.6 gate)
Instead of always enforcing combined_score >= 0.6, the system computes a bounded adaptive floor:
- stricter in riskier contexts,
- softer in strong trend-compatible contexts,
- influenced by session and precheck quality,
- adjusted by rolling reject telemetry.
This reduces both:
- false rejects when conditions are structurally tradable,
- false accepts when environment is noisy.
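The adaptive floor can be sketched as the fixed 0.6 base plus small bounded adjustments from the influences listed above. All inputs, step sizes, and clamp bounds here are assumptions chosen to mirror the note's described behavior.

```python
def dynamic_score_floor(base=0.6, risk_bias=0.0, trend_support=0.0,
                        precheck_quality=0.5, reject_rate=0.5,
                        lo=0.50, hi=0.72):
    """Bounded adaptive replacement for the fixed combined_score >= 0.6 gate."""
    floor = base
    floor += 0.05 * risk_bias                  # stricter in riskier contexts
    floor -= 0.05 * trend_support              # softer with strong trend support
    floor -= 0.04 * (precheck_quality - 0.5)   # good precheck -> slightly softer
    floor -= 0.06 * (reject_rate - 0.5)        # high recent rejects -> relax
    return max(lo, min(hi, floor))             # clamp keeps behavior stable
```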
8) Telemetry feedback loop (self-tuning)
To make adaptation data-driven, each symbol evaluation writes a telemetry row:
- symbol
- session label
- outcome (candidate, rejected_prefilter, rejected_quality, rejected_score, error)
- rejection reason (if any)
- precheck score
- combined score
- timestamp
Rolling reject statistics
During new evaluations, rolling reject rates are computed for:
- symbol+session window,
- session fallback window.
These rates are then fed into:
- adaptive prefilter limits,
- adaptive combined-score floor.
This creates a lightweight self-tuning control loop without requiring heavy model retraining.
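A minimal sketch of the rolling telemetry store, assuming a bounded window per (symbol, session) key with a session-wide fallback when the per-symbol window is too thin. The window size, fallback key, and minimum-row threshold are illustrative choices.

```python
from collections import defaultdict, deque

class RejectTelemetry:
    def __init__(self, window: int = 200):
        # Bounded deques: old rows fall off automatically (rolling window).
        self.rows = defaultdict(lambda: deque(maxlen=window))

    def record(self, symbol: str, session: str, outcome: str):
        rejected = outcome.startswith("rejected")
        self.rows[(symbol, session)].append(rejected)
        self.rows[("*", session)].append(rejected)   # session fallback window

    def reject_rate(self, symbol: str, session: str, min_rows: int = 20):
        rows = self.rows[(symbol, session)]
        if len(rows) < min_rows:                     # thin sample: fall back
            rows = self.rows[("*", session)]
        if not rows:
            return None                              # no data yet
        return sum(rows) / len(rows)
```

The returned rate would then feed adapt-style functions for the prefilter limits and the combined-score floor.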
9) Candidate-check rejection categories
Conceptually, rejects are grouped by stage:
- prefilter rejects: obvious market-quality issues (extreme spread, dead market)
- quality rejects: low confidence, no direction, invalid SL/TP, negative EV, news conflict
- score rejects: a candidate exists, but its total quality/ranking score is below the dynamic floor
Each category has different implications and should be monitored separately.
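For per-category monitoring, the telemetry outcomes map naturally onto the three stages. A small illustrative grouping (the outcome labels come from the telemetry row schema above; the counter itself is a hypothetical helper):

```python
STAGE_BY_OUTCOME = {
    "rejected_prefilter": "prefilter",
    "rejected_quality": "quality",
    "rejected_score": "score",
}

def reject_counts(outcomes):
    """Tally rejects by stage; non-reject outcomes (candidate, error) are skipped."""
    counts = {"prefilter": 0, "quality": 0, "score": 0}
    for outcome in outcomes:
        stage = STAGE_BY_OUTCOME.get(outcome)
        if stage:
            counts[stage] += 1
    return counts
```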
10) Why this architecture is efficient
- Uses cheap checks first, expensive checks later.
- Reduces full-analysis load across symbol universe scans.
- Separates market-quality gating from trade-direction logic.
- Learns from rolling outcomes through telemetry rather than static constants.
- Keeps adaptation bounded to avoid unstable behavior shifts.
11) Known limitations and guardrails
Even with adaptive checks, limitations remain:
- Tick-level microstructure noise can still cause occasional false rejects/accepts.
- Session transitions can produce temporary spread anomalies.
- Telemetry reacts to recent history; very sudden regime shifts may lag.
- Over-adaptation risk is mitigated by clamped threshold ranges and fallback defaults.
The design goal is robust improvement, not perfect filtering.
12) Suggested operational monitoring
For healthy operation, track:
- reject-rate by symbol/session,
- candidate pass-rate over time,
- score distribution before/after adaptive floor,
- downstream execution quality of selected candidates.
If reject-rates drift too high or too low persistently, tune:
- telemetry lookback window,
- adaptation step sizes,
- min/max clamps for floors and prefilter thresholds.
13) Summary
The lightweight candidate-check system now uses:
- fast market sanity checks,
- adaptive thresholding,
- reliability-aware scoring,
- dynamic score floors,
- rolling telemetry feedback.
This moves the system from static hard-threshold filtering to a more resilient, self-correcting candidate selection process while remaining computationally lightweight.