Is It Your Traders or Your Liquidity Stack? A Simple “Two-Lens” Test for Execution Complaints
Execution complaints are easy to misdiagnose. A broker sees negative slippage and a spike in rejects and concludes “toxic flow.” The LP sees sharp clients and concludes “your clients are toxic.” Meanwhile, the real root cause might be a bridge setting, a routing rule, or one LP behaving poorly in a specific session.
This post gives you a practical method to separate client behavior problems from LP/bridge/aggregation problems using simple comparisons you can run with data you already have.
1) Define the two failure modes (so you stop mixing them)
Before you compare anything, align internally on what you’re trying to prove.
Toxic flow (client behavior problem) typically means the client is systematically extracting execution edge—e.g., latency arbitrage, aggressive news trading, micro-scalping into stale quotes, or highly asymmetric order timing. You’ll usually see the “badness” concentrated in specific accounts and strategies, not evenly across the book.
Bad liquidity (LP/bridge problem) is when execution quality degrades due to venue behavior (last look, throttling, poor fill logic), aggregation/routing choices, bridge configuration, or infrastructure latency. Here, the “badness” often shows up across many accounts at the same time (same symbol, same session, same LP).
Practical takeaway: if you don’t separate who is causing the pattern (a subset of clients vs a subset of venues/paths), you’ll keep treating symptoms.
2) The “Two-Lens” method: compare by client lens and by venue lens
You don’t need a PhD model. You need two views of the same execution metrics.
Pick 3–5 core metrics (keep it consistent):
- Slippage distribution (mean and tails, e.g., 90th/95th percentile)
- Reject rate / requote rate (where applicable)
- Fill ratio / partial fill frequency
- Execution time (client-to-server, server-to-bridge, bridge-to-LP if available)
- Spread at execution vs spread at order receipt (to catch stale/slow feeds)
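One pitfall before any comparison: teams compute slippage with inconsistent signs and then argue about artifacts. Here is a minimal sketch of one convention, reused by the later snippets in this post; fill_px, requested_px, and side are assumed column names, not anything your platform necessarily exports:

```python
# Signed slippage in basis points: positive = client price improvement.
# For a buy, filling below the requested price is an improvement;
# for a sell, filling above it is.
import numpy as np
import pandas as pd

def signed_slippage_bps(df: pd.DataFrame) -> pd.Series:
    direction = np.where(df["side"].eq("buy"), -1.0, 1.0)
    return direction * (df["fill_px"] - df["requested_px"]) / df["requested_px"] * 1e4
```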
Then run the same metrics through two lenses:
- Client lens: group by account / strategy tags (scalper, news, EA, manual), holding time buckets, and trade size buckets.
- Venue lens: group by LP, route, symbol, session (Asia/London/NY), and order type.
Interpretation rule of thumb:
- If the worst outcomes cluster around a small set of accounts/behaviors, suspect toxic flow.
- If the worst outcomes cluster around a specific LP/route/session, suspect liquidity stack issues.
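In practice, the two lenses are just two group-bys over the same trade log. A minimal pandas sketch, assuming columns named account_id, strategy_tag, lp, symbol, session, rejected (0/1), and fill_ratio, plus the slippage_bps convention from the earlier snippet; the later sketches reuse this trades frame:

```python
# Two lenses over one log: same metrics, different grouping keys.
import pandas as pd

def lens_report(trades: pd.DataFrame, keys: list[str]) -> pd.DataFrame:
    g = trades.groupby(keys)
    return pd.DataFrame({
        "trades": g.size(),
        "slippage_mean_bps": g["slippage_bps"].mean(),
        # positive tail (p95) flags extraction; negative tail (p05) flags venue pain
        "slippage_p95_bps": g["slippage_bps"].quantile(0.95),
        "slippage_p05_bps": g["slippage_bps"].quantile(0.05),
        "reject_rate": g["rejected"].mean(),
        "fill_ratio": g["fill_ratio"].mean(),
    })

trades = pd.read_csv("executions.csv")  # hypothetical export
client_lens = lens_report(trades, ["account_id", "strategy_tag"])
venue_lens = lens_report(trades, ["lp", "session", "symbol"])
```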
3) A simple comparison matrix (what to check in 30–60 minutes)
Use a lightweight matrix to avoid endless debate. Start with a recent window (e.g., last 7–14 days), then validate on a second window.
A. Same clients, different venues
- Compare execution for the same cohort of accounts across LP A vs LP B (or Route 1 vs Route 2).
- If those clients look “fine” on LP B but “toxic” on LP A, it’s usually venue/route behavior (or last look sensitivity) rather than the clients.
B. Same venue, different clients
- Compare LP A’s performance for “normal” clients vs the suspected toxic cohort.
- If LP A is bad for everyone, it’s probably LP quality, bridge settings, or infrastructure.
C. Same symbol, different sessions
- If the problem spikes only around rollover, news windows, or a specific session, it may be:
  - LP widening/last look changes
  - routing rules that flip during volatility
  - capacity/throttling on your bridge or FIX sessions
D. Same order type, different outcomes
- Market orders vs limits/stops can behave very differently.
- If only stop orders show extreme slippage, check stop handling, trigger logic, and whether you’re effectively “chasing” the market via bridge settings.
This matrix is intentionally boring. That’s the point: simple comparisons catch most root causes quickly.
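Check A is the one most teams skip, so here is one way to make it mechanical: pivot tail slippage by account against LP, reusing the hypothetical trades frame from section 2. A row that spikes on a single LP points at that venue (a weakness being exploited, or a venue mishandling that flow) rather than the client; suspected_accounts is an illustrative placeholder:

```python
# Same clients, different venues: p95 slippage per account per LP.
suspected_accounts = ["A1023", "A2291"]  # hypothetical cohort under complaint
cohort = trades[trades["account_id"].isin(suspected_accounts)]

matrix = cohort.pivot_table(
    index="account_id",
    columns="lp",
    values="slippage_bps",
    aggfunc=lambda s: s.quantile(0.95),
)
print(matrix.round(2))  # a row that is extreme on only one LP implicates the venue
```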
4) How to spot toxic flow patterns (without over-labeling)
“Toxic” should mean systematic edge, not “client made money.” Look for patterns that are hard to explain by market conditions alone.
Common toxic-flow signatures:
- Very short holding times with consistently positive slippage or unusually good entry prices
- Profit concentrated in high-volatility seconds (news releases, data prints) while losses occur in normal periods
- Asymmetric slippage: client gets improvements more often than deteriorations (beyond what your execution model would reasonably produce)
- Venue selectivity: the same client performs far better on one route/LP than others (suggesting they’re exploiting a specific weakness)
Operationally, don’t jump straight to punitive actions. First, validate that the pattern persists:
- Check at least two time windows
- Segment by symbol and time-of-day
- Control for trade size and holding time
If the “toxicity” disappears when you normalize for session/volatility, it may not be toxic flow—it may be your routing reacting poorly to volatility.
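One way to make that normalization concrete: subtract the typical slippage for the same symbol and session from every trade, so an account only screens as suspicious if it beats the conditions everyone else traded through. A sketch over the same assumed schema, with a holding_secs column and purely illustrative thresholds:

```python
# Excess slippage vs. the symbol/session median nets out volatility and
# session effects before any account is labeled.
baseline = trades.groupby(["symbol", "session"])["slippage_bps"].transform("median")
trades["excess_slippage_bps"] = trades["slippage_bps"] - baseline

screen = trades.groupby("account_id").agg(
    trades=("slippage_bps", "size"),
    median_holding_secs=("holding_secs", "median"),
    mean_excess_bps=("excess_slippage_bps", "mean"),
)
# Candidate toxic cohort: very short holds AND consistently better-than-
# baseline fills. The 5s / +2bps cutoffs are illustrative, not policy.
suspects = screen[(screen["median_holding_secs"] < 5) & (screen["mean_excess_bps"] > 2)]
```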
5) How to spot bad liquidity/bridge issues (the usual culprits)
When execution degrades broadly, it’s often a stack issue. The key is to isolate whether it’s LP behavior, bridge/aggregation behavior, or infrastructure.
Red flags that point to LP/bridge problems:
- One LP dominates negative slippage tails (e.g., worst 5% outcomes mostly come from LP A)
- Rejects spike with no matching volatility spike (capacity, session limits, or LP filtering)
- Execution time spikes at specific hours (network routing, server load, FIX session instability)
- Spread/feed anomalies: your top-of-book looks tight, but fills come back consistently worse (stale quotes, slow aggregation, or last look)
Practical checks that often pay off:
- Compare LP A vs LP B on the same symbol and same time bucket
- Compare direct LP route vs aggregated route (if you have both)
- Review bridge settings: markup application timing, last look handling, max deviation, order retry logic, and throttling
- Validate hosting and connectivity: are you close to the liquidity hub you’re hitting (e.g., LD4/NY4), and are you saturating bandwidth during peaks?
If you run a hybrid model, also confirm you’re not accidentally mixing A-book execution logic with B-book assumptions (e.g., risk rules that delay or re-route at the worst moment).
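A quick attribution that often settles the LP question: take the worst 5% of fills (the negative tail under the sign convention above) and compare each LP’s share of that tail to its share of overall flow. Same assumed trades frame:

```python
# An LP whose tail share is well above its flow share is over-represented
# in the pain, regardless of how its averages look.
cutoff = trades["slippage_bps"].quantile(0.05)
worst = trades[trades["slippage_bps"] <= cutoff]

tail_share = worst["lp"].value_counts(normalize=True)
flow_share = trades["lp"].value_counts(normalize=True)
attribution = (tail_share / flow_share).sort_values(ascending=False)
print(attribution)  # e.g., a value of 3.0 means 3x its fair share of bad fills
```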
6) A practical workflow: from “complaint” to decision in 5 steps
Here’s a repeatable process your dealing/ops team can run without turning every incident into a war room.
Step 1: Freeze the complaint window
- Pick a time range and symbols (e.g., 2 hours around the incident).
Step 2: Pull the minimal dataset (a schema sketch follows this list)
- Order timestamp, execution timestamp, symbol, side, size, order type
- Requested price vs fill price, slippage
- Route/LP identifier, reject reason (if any)
- Latency breakdown if available (platform → bridge → LP)
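Pinning this schema once makes every incident pull comparable. A hypothetical shape; every field name here is an assumption to be mapped onto what your platform and bridge actually export:

```python
# Hypothetical minimal execution record for incident analysis.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ExecutionRecord:
    order_ts: datetime          # order timestamp (platform receipt)
    exec_ts: datetime           # execution timestamp (LP fill)
    symbol: str
    side: str                   # "buy" / "sell"
    size: float
    order_type: str             # market / limit / stop
    requested_px: float
    fill_px: float
    slippage_bps: float         # signed: positive = client improvement
    lp: str                     # route/LP identifier
    reject_reason: str | None   # None when filled
    latency_ms: dict | None     # e.g., {"platform_bridge": 4, "bridge_lp": 11}
```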
Step 3: Run the Two-Lens grouping
- Client lens: by account/strategy/holding time
- Venue lens: by LP/route/session
Step 4: Decide between behavior control and stack fix
- If it’s client-driven: adjust risk rules (routing, max size, news settings), tighten challenge rules for prop, or apply execution protections that match your T&Cs.
- If it’s stack-driven: reweight LPs, adjust SOR logic, fix bridge parameters, or move/optimize infrastructure.
Step 5: Document and align with compliance
- If you’re changing execution rules, disclosures, or client eligibility, check local regulations and align with your legal/compliance advisors. Execution policy changes should be consistent with your client agreements and marketing claims.
The Bottom Line
“Toxic flow” and “bad liquidity” can look identical in a screenshot—but they don’t look identical in segmented data. Use the Two-Lens method: measure the same execution metrics by client cohort and by LP/route/session.
When the pain clusters around a few accounts, address behavior and risk controls. When it clusters around a venue or route, fix the liquidity stack—LP mix, bridge settings, routing, and infrastructure.
If you want help instrumenting these comparisons inside your execution and risk workflow, talk to Brokeret at /get-started.