Most "AI for investors" products are a chat box with a stock ticker next to it. You type a question, a single model writes back a paragraph, and whatever magic happened in between is opaque to you. That's not what finqtAI is.
finqtAI is three hard problems wearing one interface:
- A multi-agent orchestration system that routes each question to specialists and reconciles their answers.
- A real-time data pipeline that covers every crypto exchange and stock venue we support, with freshness guarantees per data type.
- A user interface designed to get out of your way so the first two pillars actually land as better decisions instead of more noise.
If any one of those pieces is mediocre, the product is mediocre. This post is the long version of how all three work, what they cost, and where they fail. If you're deciding whether to take finqtAI seriously — or you want the honest answer when a competitor's demo looks similar — this is the document to read.
Related reading, for context on the narrower "chart reading" question: How finqtAI chart reading actually works. The post you're reading now is the broader system view.
Pillar 1: multi-agent orchestration
Why one model isn't enough
The honest reason single-model AI products feel thin for investing is that investing questions aren't one question. "Is NVDA a good buy here?" contains at least six sub-questions stacked on top of each other:
- What does the chart structure actually say? (trend, levels, volume profile)
- Where is capital flowing in and out of the name right now?
- Who is positioned long and short, and how crowded is that positioning?
- What is sentiment across news, social, and search doing?
- What macro regime are we in, and does that regime favor this sector?
- Given all of the above, what's the risk of a specific entry at a specific price?
A single frontier model can take a swing at all six in one prompt, and it will produce something that reads confident. But "reads confident" is not the same as "is right about each layer." What you actually want is a specialist for each of those questions, answering only its part, and an orchestrator that combines the specialist answers into one coherent view.
That's what finqtAI is.
The agent lineup
finqtAI runs a roster of specialist agents. Each one owns a narrow slice of the problem and has its own tools, data access, and prompt contract.
- Structure Agent — the chart reader. Trend, highs/lows, support/resistance zones, consolidation boundaries, volume profile. Outputs structural facts, not predictions.
- Flow Agent — reads order flow, large-trade prints, and where applicable options activity. Surfaces accumulation or distribution patterns relative to recent baselines.
- Positioning Agent — looks at who holds the name and how crowded the trade is. Outputs positioning extremes and the path-of-least-resistance read that falls out of them.
- Sentiment Agent — aggregates news tone, social volume, and search trends. Outputs extremes (euphoria, capitulation) rather than a score, because the mean of sentiment is noise and the tails are signal.
- Macro Agent — owns the regime picture: rate expectations, central bank posture, currency pressure, commodity cycles. Answers "does today's regime favor this sector?" rather than predicting macro events.
- Risk Agent — last in line. Takes the composite view and asks the uncomfortable questions: what's the drawdown tolerance this thesis requires, what invalidates it, where would a reasonable stop sit, what's the expected value if you're wrong.
- Narrator — the final agent. It doesn't produce new analysis. It takes every specialist's output, resolves disagreements explicitly, and writes the one-page view you actually read.
We deliberately did not build a "Prediction Agent." None of the six feeding agents predicts price, because none of them can. Pattern recognition plus contextual framing is not prediction, and we're not going to pretend otherwise. See the earlier post on this: How finqtAI chart reading actually works.
How the orchestrator routes work
When you run an analysis, an orchestrator (not a user-facing agent — a piece of routing logic) decides:
- Which agents are needed for this question. "Read this chart" doesn't need the Macro Agent. "Is tech rolling over here?" needs Macro, Flow, and Positioning but not Structure.
- What budget each agent gets — compute, context length, tool calls. Cheaper questions get shorter contexts and fewer tool calls. This is why the credit model isn't a ripoff: a complex cross-market question genuinely costs more to answer than a single-name chart read, and the credit cost reflects it.
- What order agents run in. Some agents depend on others. The Risk Agent can't score a thesis until Structure, Flow, Positioning, Sentiment, and Macro have spoken. Where there are no dependencies, agents run in parallel.
- When to short-circuit. If the Structure Agent reports "not enough price history to analyze this asset," the orchestrator doesn't waste compute running the other five. You get a fast, honest "we can't answer this" instead of a hallucinated paragraph.
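The routing decisions above can be sketched in a few lines. Everything here is illustrative: the route table, the dependency map, and the agent names are assumptions for the sketch, not finqtAI's actual code.

```python
# Hypothetical orchestrator routing: which agents run, in what order
# (waves that can run in parallel), and when to short-circuit.

# Which specialists each question type needs (illustrative routing table).
ROUTES = {
    "chart_read": ["structure", "risk"],
    "sector_rollover": ["macro", "flow", "positioning", "risk"],
    "full_analysis": ["structure", "flow", "positioning",
                      "sentiment", "macro", "risk"],
}

# Agents whose outputs an agent depends on; independent agents run in parallel.
DEPENDS_ON = {
    "risk": {"structure", "flow", "positioning", "sentiment", "macro"},
}

def plan(question_type):
    """Group the needed agents into waves; each wave can run in parallel."""
    needed = ROUTES[question_type]
    waves, done = [], set()
    remaining = list(needed)
    while remaining:
        # An agent is ready when all of its in-scope dependencies are done.
        wave = [a for a in remaining
                if DEPENDS_ON.get(a, set()) & set(needed) <= done]
        if not wave:
            raise ValueError("dependency cycle")
        waves.append(wave)
        done |= set(wave)
        remaining = [a for a in remaining if a not in done]
    return waves

def run(question_type, agent_outputs):
    """Short-circuit: if a specialist reports insufficient data, stop early."""
    for wave in plan(question_type):
        for agent in wave:
            if agent_outputs.get(agent) == "insufficient_data":
                return {"status": "refused",
                        "reason": f"{agent}: not enough data"}
    return {"status": "ok", "waves": plan(question_type)}
```

A "read this chart" question plans as two waves (Structure, then Risk), while a full analysis runs five independent specialists in one wave before Risk.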
That routing layer is where most of the quality lives. A mediocre orchestrator with frontier models produces worse output than a good orchestrator with smaller models, because it burns budget on irrelevant work and lets agents hallucinate outside their competence.
How conflicts get resolved
The Narrator is the most important agent in the system, and it's the one users think least about. Its job is to resolve cases where specialists disagree — because they will.
Three common disagreement shapes:
- Flow says accumulation, Sentiment says capitulation. These are often both true — smart-money accumulation into retail capitulation is a well-documented pattern. The Narrator names the pattern instead of picking a winner.
- Structure says bullish continuation, Positioning says overcrowded long. Both can be right in the short run and wrong in the medium run. The Narrator flags the tension explicitly and lets Risk size accordingly.
- Macro says risk-off, Flow says sector rotation into this name. Stock-specific flow can override macro in narrow windows. The Narrator qualifies the time horizon.
We force the Narrator to show its work — not in some marketing sense, but literally: the final output includes a "disagreements" section when specialists didn't converge, naming what each agent said and why the final view weighted them the way it did. That's not a UI choice we made to feel transparent. It's the only way you can argue with the answer, and the whole tool is designed to be argued with.
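As a rough sketch of what a "disagreements" section looks like structurally, here is a toy combiner. The field names, the weighting scheme, and the pick-the-highest-weight headline rule are all invented for illustration; the real Narrator is an LLM agent, not a weighted vote.

```python
# Toy illustration of a final view that surfaces non-converging specialist
# reads explicitly instead of silently picking a winner.

def build_final_view(specialist_outputs, weights):
    """Combine specialist reads and list every pair that disagreed."""
    reads = {a: o["read"] for a, o in specialist_outputs.items()}
    disagreements = []
    agents = sorted(reads)
    for i, a in enumerate(agents):
        for b in agents[i + 1:]:
            if reads[a] != reads[b]:
                disagreements.append({
                    "agents": (a, b),
                    "said": (reads[a], reads[b]),
                    "weighting": f"{a}={weights[a]}, {b}={weights[b]}",
                })
    # Headline follows the highest-weighted specialist in this toy version.
    headline = max(reads, key=lambda a: weights[a])
    return {"view": reads[headline], "disagreements": disagreements}
```

The point of the structure is that the disagreement survives into the output: a "flow says accumulation, sentiment says capitulation" pair is named, not averaged away.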
The verification and refusal layer
Before the Narrator's output reaches you, a verifier checks three things:
- Is every numerical claim sourced to a specific agent's output? If a number appears in the final view that doesn't trace back to a specialist, it gets stripped. This is our main defense against the failure mode where the Narrator hallucinates a confidently specific statistic.
- Are the named patterns ones the Structure Agent actually identified? If the final view mentions "descending triangle" but Structure didn't call one, that phrase gets stripped.
- Is the hypothesis falsifiable? We require the final view to name something that would invalidate the read. A thesis you can't be wrong about isn't a thesis.
If verification fails and can't be repaired, the agent refuses the analysis with an explanation. You don't get charged a credit for a refused analysis, and you do get told why. This is rarer than you'd think, and when it happens it's usually because the requested asset has thin data — exactly the case where a confident-sounding answer would be most dangerous.
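The three checks can be sketched as follows. The regexes, the pattern list, and the "invalidated if" marker are illustrative assumptions, not the production verifier's logic.

```python
import re

# Hedged sketch of the three verifier checks: sourced numbers, sourced
# pattern names, and a stated falsifier.

def verify(final_text, specialist_outputs):
    """Return (ok, issues); strip-or-refuse repair logic is omitted."""
    issues = []

    # 1. Every number in the final view must appear in some specialist output.
    sourced = " ".join(specialist_outputs.values())
    for num in re.findall(r"\d+(?:\.\d+)?", final_text):
        if num not in sourced:
            issues.append(f"unsourced number: {num}")

    # 2. Named chart patterns must have been called by the Structure agent.
    for pattern in ("descending triangle", "head and shoulders", "bull flag"):
        if pattern in final_text and \
           pattern not in specialist_outputs.get("structure", ""):
            issues.append(f"pattern not called by structure: {pattern}")

    # 3. The view must name something that would invalidate it.
    if "invalidated if" not in final_text:
        issues.append("no falsifier stated")

    return (not issues, issues)
```

A view like "Bullish; invalidated if price closes below 120" passes when Structure reported the 120 level; a view quoting an unsourced "87%" alongside an uncalled "descending triangle" fails on all three checks.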
What gets cached, what gets recomputed
Not every agent query costs the same. Structure analysis on the same chart five minutes apart barely changes — it's cacheable. Flow analysis five minutes apart can change materially if there's been a large print. Sentiment turns over slowly during the session and fast around catalysts.
The orchestrator caches agent outputs with per-agent TTLs calibrated to how fast that layer actually moves. So asking finqtAI about NVDA at 10:14 and again at 10:17 doesn't re-run everything from scratch; it reuses the stable layers and recomputes the volatile ones. This is why the credit cost for adjacent analyses is often lower than a naive "N credits × N calls" model would predict — we're not charging you to re-read the same chart six times.
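A minimal sketch of per-agent TTL caching, with invented TTL values (the real calibration is per layer and per asset):

```python
# Per-agent TTLs: stable layers reuse cached output, volatile layers
# recompute. The numbers below are illustrative, not finqt's actual TTLs.

TTL_SECONDS = {
    "structure": 300,  # chart structure moves slowly
    "flow": 30,        # can change materially on a single large print
    "sentiment": 600,  # slow during the session; catalysts handled separately
}

class AgentCache:
    def __init__(self):
        self._store = {}  # (agent, asset) -> (stamped_at, output)

    def get_or_compute(self, agent, asset, now, compute):
        key = (agent, asset)
        if key in self._store:
            stamped_at, output = self._store[key]
            if now - stamped_at < TTL_SECONDS[agent]:
                return output, True   # cache hit: reuse the stable layer
        output = compute()
        self._store[key] = (now, output)
        return output, False          # recomputed: the layer had gone stale
```

Asking about the same name at t=0 and t=180 seconds reuses the structure read (TTL 300s) but recomputes flow (TTL 30s), which is why adjacent analyses cost less than a naive per-call model predicts.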
Pillar 2: real-time data from every market we support
Multi-agent orchestration is only as good as the data it stands on. If the Flow Agent is looking at stale quotes, its output is worse than useless — it's confidently wrong. So the second pillar is a real-time data layer that covers every market finqt supports and gives each agent a freshness guarantee for its data type.
The coverage claim, unpacked
finqt covers 20+ crypto exchanges and the major stock venues — NASDAQ, NYSE, HKEX, and the rest. Details in integrations and the longer piece on how these connections work: How finqt connects to 20+ exchanges.
For finqtAI, "cover" means three very specific things:
- Every supported venue streams into the same normalized tape — same schema, same timestamp convention, same instrument identifiers. An agent asking "what's the price" doesn't need to know whether the answer comes from a Binance stream or a HKEX feed.
- Each data type has an explicit freshness contract. Quotes are real-time. Flow aggregations have a stated bucket window. Positioning has a stated update cadence. Sentiment carries an explicit as-of window. The agents don't get raw data; they get data with a "stamped-at" timestamp they must acknowledge in their output.
- Stale data is a first-class state. If a venue's feed drops, the assets from that venue get marked stale, and agents that would have used them must either ask for a different source or refuse. They do not silently reason over last-known values.
The third point is the one most competitors fail. It is genuinely hard to build a system that would rather say "I don't know" than produce a plausible-looking answer from stale inputs.
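What "stale is a first-class state" means mechanically can be sketched like this. The freshness thresholds are invented for the example:

```python
# Data carries a stamp; the consumer must handle the stale case explicitly.
# Thresholds are illustrative, not finqt's actual freshness contracts.

FRESHNESS_LIMIT = {"quote": 2, "flow": 60, "positioning": 86_400}  # seconds

def read(data_type, stamped_at, now, value):
    age = now - stamped_at
    if age > FRESHNESS_LIMIT[data_type]:
        # Do NOT hand back the last-known value as if it were current.
        return {"state": "stale", "age_s": age}
    return {"state": "fresh", "value": value, "stamped_at": stamped_at}

def agent_step(quote):
    """An agent refuses (or reroutes) rather than reason over stale data."""
    if quote["state"] == "stale":
        return "refuse: quote feed stale, cannot answer"
    return f"analyzing at {quote['value']}"
```

The design choice is that staleness is encoded in the type of the return value, so an agent cannot accidentally read a last-known price as a current one.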
The data path
End to end:
- Ingest — direct exchange connections for crypto, licensed market data for equities. No scraping, no consumer APIs. If a feed can go through an exchange's official channel, it does; that is the only way the freshness contracts hold.
- Normalize — every tick, print, and book update gets mapped to a single internal schema. Instrument IDs are canonical, not venue-specific. This normalization layer is boring and also the single biggest reason cross-market analysis works at all.
- Enrich — derived layers (flow aggregates, positioning snapshots, sentiment classifications, macro features) get computed on top of the normalized tape with their own cadences and stamps.
- Broadcast — agents subscribe to the slices they need. The Structure Agent pulls historical OHLCV at whatever resolution it needs; the Flow Agent subscribes to real-time prints; the Positioning Agent hits snapshots on their refresh cadence; Sentiment pulls from its own rolling window.
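The normalized-tape record at the heart of the pipeline above might look like the following. The field names and the venue mapping are assumptions for the sketch, not finqt's real schema:

```python
from dataclasses import dataclass

# Illustrative normalized record: one schema, one clock convention,
# canonical instrument IDs, with venue provenance preserved.

@dataclass(frozen=True)
class Tick:
    instrument_id: str   # canonical, venue-independent (e.g. "BTC")
    venue: str           # provenance survives normalization
    ts_utc_ns: int       # single timestamp convention for the whole tape
    price: float
    size: float

def normalize_binance(raw):
    """Map one venue's payload onto the shared schema (hypothetical mapping)."""
    return Tick(
        instrument_id=raw["s"].removesuffix("USDT"),
        venue="binance",
        ts_utc_ns=raw["T"] * 1_000_000,  # venue sends ms; tape stores ns
        price=float(raw["p"]),
        size=float(raw["q"]),
    )
```

Every venue gets its own small `normalize_*` adapter, and nothing downstream ever sees a venue-specific field again.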
None of this layer is visible to you. That's the point. You shouldn't have to think about data plumbing to run an analysis. But when an agent says "flow has been accumulating since the open," the claim traces back to specific prints on specific venues with specific timestamps, not to a vibes-based summary.
Latency, honestly
finqtAI is not a microsecond-latency trading system and isn't marketed as one. What it actually is:
- Quotes are real-time within normal network bounds — sub-second end-to-end for the vast majority of supported venues.
- Flow aggregates reflect activity through the last completed bucket window. The bucket size depends on the asset's liquidity; liquid names have finer buckets, illiquid ones coarser.
- Positioning refreshes on the cadence of the underlying source. Some sources update multiple times per session; others are end-of-day. The agent output always names the as-of stamp.
- Sentiment is a rolling window; the agent names the window it's reading.
- Macro changes infrequently; macro features refresh on scheduled intervals and when there's a triggering event.
If you're doing HFT, finqt is not your tool. If you're making position decisions on a minutes-to-days horizon — which is where the vast majority of active retail and professional-discretionary investing actually lives — the latency above is comfortably inside what matters for your decisions.
How agents see the unified tape
One of the concrete payoffs of the normalization layer: cross-market questions become cheap. "How is semiconductor equity flow correlating with TSMC ADR options activity and with KRW weakness today?" is three markets (US equities, US options, FX) and three data types (flow, options, currency). Because every input shares a schema and a clock, the orchestrator can route it to the right specialists in parallel and the Narrator can combine their outputs without stitching mismatched timestamps.
An agent that has to reconcile "this venue's 10:14:02 UTC with that venue's 10:14:02-local" on every call will either get it wrong or spend all its context window on the reconciliation. Neither is acceptable. We do the reconciliation once, at ingest, and then nobody downstream has to think about it.
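The once-at-ingest reconciliation amounts to converting every venue-local timestamp to UTC before it touches the tape. A minimal sketch, with hardcoded offsets that are purely illustrative (a production system uses a tz database with DST rules, not fixed offsets):

```python
from datetime import datetime, timezone, timedelta

# Illustrative venue offsets; real exchanges need full tz-database handling.
VENUE_TZ = {
    "hkex": timezone(timedelta(hours=8)),   # HKT, no DST
    "nyse": timezone(timedelta(hours=-5)),  # EST; ignores DST for the sketch
}

def to_utc(venue, local_naive):
    """Attach the venue's zone, then convert; downstream sees only UTC."""
    return local_naive.replace(tzinfo=VENUE_TZ[venue]).astimezone(timezone.utc)
```

The same wall-clock "10:14:02" from two venues lands eight and twenty hours apart on the tape, which is exactly the mismatch an agent would otherwise burn its context window reconciling.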
Pillar 3: a clean UI — why it matters
People underestimate how much of an AI product's real value is thrown away at the interface. You can have the best orchestrator in the world feeding the best data pipeline in the world, and if the output lands in a cluttered screen the user can't parse, the net effect is zero.
This is the pillar we spent the longest on, and the one with the least visible work.
The cognitive cost of clutter
Every element on the screen has a cost. Not just pixel cost — attention cost. Your working memory can hold a small number of things at once; every chart, ticker, metric, notification, and button competes for that space. Good interfaces minimize the attention tax of whatever you're not currently doing. Bad interfaces pretend every element is equally important, which means in practice nothing is.
The canonical "professional investor UI" is the Bloomberg Terminal. It puts hundreds of pieces of information on the screen at once; its users love it, and they also train for months to use it. Bloomberg is the right interface for people whose day job is Bloomberg. It is the wrong interface for everyone else, and copying it (as many retail tools do) just lands half the density and none of the training.
finqtAI had to do the opposite: surface one decision at a time, with the reasoning legible, and hide everything else until it's asked for.
The three rules we actually follow
- One answer at a time, with receipts. When you ask finqtAI a question, the output is one coherent view, not a dashboard of loose metrics. Below the view, there's always a "show reasoning" affordance that expands into the specialist outputs. You can stay at the summary level, or you can drill down to the Flow Agent's exact prints and the Positioning Agent's exact crowdedness metric. Both states are supported; neither state is the default that forces the other on you.
- Absolute honesty about uncertainty. Every output states what it knows, what it doesn't, and when its data was last fresh. A sentence like "sentiment is extreme, as of 18:42 UTC, based on a 6-hour rolling window" is more verbose than "sentiment is extreme" but it's the only version you can act on responsibly. We chose the verbose version everywhere it matters.
- No theatrical charts. Every chart on the screen is there because an agent used it or a decision needs it. No decorative sparklines. No "look at this beautiful gauge." If a visualization doesn't carry information a user needs for this specific decision, it's not on the screen.
What we explicitly cut
A partial list of things that did not make it into finqtAI, in the name of not making the interface louder:
- Confidence percentages as big numbers. "87% bullish" is worse than "bullish, with these caveats." The false precision makes people weight the claim more than it deserves.
- Multiple competing narratives side-by-side. Disagreements between agents are surfaced inside one narrative, not as two narratives you have to arbitrate.
- Streaming text as the only output mode. Streaming looks impressive and often interferes with comprehension. Final outputs in finqtAI arrive as a single legible block with optional progressive-disclosure sections.
- Push notifications for individual agents firing. You don't need to know that the Flow Agent finished before the Sentiment Agent. You need to know when the final view is ready.
- "AI typing" indicators. Theater. You either have an answer or you don't.
Every one of those was tempting. Every one of those would have made the product feel more "AI-y" in a demo. Every one of those makes the real workflow worse. So we cut them.
The receipts view, in practice
The best evidence that the UI philosophy is working is the reasoning view. When you expand it, you see:
- Which agents ran for this analysis.
- What each agent reported, in its own brief voice.
- Where agents disagreed, and how the Narrator weighted them.
- The data freshness stamps per agent.
- The verifier's check outcomes.
Most users never open the reasoning view. That's fine. The ones who do are the ones building real intuition — they learn where the Flow Agent is strong, where the Sentiment Agent is soft, and how their own reads compare. That's exactly the user we want to grow. The interface is structured so a skeptical user becomes a trusting-but-skeptical user over weeks, not a blind follower in minutes.
How the three pillars compound
None of these three pillars is remarkable on its own. Multi-agent orchestration is a known pattern. Real-time market data has existed at the terminal for decades. Clean interface design is older than software. What's unusual is the combination.
A multi-agent system without a real-time backbone is a book report on yesterday's market. A real-time data layer without orchestration is a firehose you can't drink from. Either of those without a clean interface is a tool that technically works and that nobody actually benefits from.
The three together are a tool that does one thing very well: it gives an active investor a second opinion, grounded in real-time evidence from every market they trade, delivered in a form that is actually legible. That's not magic. It's three hard problems, each solved deliberately, wired together.
The limits, named explicitly
Because we keep insisting on this in every post: finqtAI has hard limits, and they're part of the product.
- It does not predict prices. The agent roster is deliberately missing a Prediction Agent, because prediction is not a thing that generalizes in markets. Anyone selling you a price-prediction agent is selling you a slot machine with a chart.
- It cannot see information you have and the model doesn't. If you know a specific counterparty is a forced seller next Thursday, that's alpha finqtAI cannot access. Keep your edge.
- It will refuse on thin data. Assets with too little price history, too little flow, or missing venues will get a refusal, not a made-up answer.
- It is decision support, not decision making. Your thesis, your sizing, your stop, your journal. The AI is an input. You are the decider.
These are not bugs to be fixed in a future release. They are the shape of honest AI analysis. If we ever ship a product that promises the opposite, we've lost the plot.
Pricing and the credit model
finqtAI runs on credits rather than "unlimited AI," because genuine orchestration with real-time data is expensive to run per call. Pro includes 100 credits per month, Pro+ includes 300, and top-up packs never expire. The credit cost per analysis scales with how many agents the orchestrator actually ran, so simple questions are cheap and cross-market questions cost more. Full breakdown: pricing.
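A toy illustration of "cost scales with the agents actually run". The per-agent costs and the cached-layer discount are invented for the example; finqt's actual numbers live on the pricing page:

```python
# Hypothetical per-agent credit costs; cached layers aren't re-billed.
AGENT_CREDIT_COST = {"structure": 1, "flow": 2, "positioning": 1,
                     "sentiment": 1, "macro": 1, "risk": 1}

def analysis_cost(agents_run, cached):
    """Sum costs for the agents that actually recomputed this call."""
    return sum(0 if a in cached else AGENT_CREDIT_COST[a] for a in agents_run)
```

A single-name chart read bills one specialist; a cross-market question re-running flow and risk on top of a cached structure read bills only the recomputed layers.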
Frequently asked questions
How many agents does finqtAI run at once?
Up to seven on the deepest analyses: Structure, Flow, Positioning, Sentiment, Macro, Risk, and the Narrator. Most analyses run a subset — the orchestrator only activates the specialists the question actually requires, which is why credit costs vary.
Can I see which agents ran and what each one said?
Yes. The reasoning view on any analysis shows the roster for that call, each agent's brief output, data freshness stamps, and any resolved disagreements between specialists. Most users never open it; the ones who do learn the tool faster.
What happens if a data source goes stale during an analysis?
Assets from the affected venue are marked stale. Agents that would have relied on that source either switch to an alternative or refuse to answer, and the Narrator explicitly reports the degraded state instead of silently reasoning over last-known values. Related: How finqt connects to 20+ exchanges.
Is finqtAI fast enough for day trading?
finqtAI is built for minutes-to-days horizon decisions, not for microsecond-latency trading. Quotes are real-time, but the orchestration itself takes seconds per analysis by design — quality over speed. If you need microsecond execution, you need a different category of product.
Does the multi-agent system make the output slower than a single model?
Slightly, yes — orchestration has overhead. We think the overhead is worth it because the alternative is one model hallucinating across six domains it isn't specialized in. Fast-and-wrong is strictly worse than slightly-slower-and-right for decisions you put real money behind.
Why so many design constraints on the UI?
Because an AI tool you can't read is an AI tool that doesn't work. Clutter is the enemy. We'd rather ship a quieter interface that leaves room for your thinking than a louder one that crowds it out.
Where's the best place to start?
Download finqt, connect one asset or exchange, and run finqtAI on something you already have a view on. Compare its read to yours. If it caught something you missed, you've learned something. If you caught something it missed, you've learned something. Either way, the workflow works — which is the whole point.