

Guide

How We Built an AI That Thinks While You Play

A transparent look at ArcaThread AI Coach: our data pipeline, model training loop, and live in-game recommendation engine for League players.

Feb 27, 2026 · ArcaThread Team

Quick takeaway: this guide walks through how ArcaThread AI Coach turns ranked match data into live, in-game decision support, and the engineering tradeoffs behind it.

Want early access? Join the waitlist to try ArcaThread AI Coach before public launch.

ArcaThread AI Coach did not start as a static build guide. We built it as a decision engine designed for the moments where decisions actually matter: champion select, lane tempo, objective setup, item pivots, and post-game review.

This is the technical version of the story. We will walk through the major backend capabilities, how the app consumes model outputs, how live recommendations are generated, and why compliance constraints shaped core design choices from day one.

Why We Moved From "Guide" to "Coach"

Most guides assume a fixed context. League is the opposite.

  • Patches change priorities.
  • Team compositions change risk profiles.
  • Gold swings and objective states change what "correct" means minute to minute.

So we built two layers instead of one:

  1. A continuous data + model layer that updates and retrains.
  2. A fast live decision layer inside the client that reacts to game state.

If you want the product-level overview, see Features, How It Works, and About.

The Architecture: A Continuous Intelligence Loop

Our ML platform is split into modular backend capabilities with explicit responsibilities:

  • A data layer ingests ranked Solo Queue data and prepares stable matchup snapshots.
  • A modeling layer continuously updates decision intelligence.
  • A serving layer keeps fast, practical outputs available for live use.
  • A feedback layer turns completed matches into post-game learning signals.

This separation was intentional. We wanted each part to evolve independently without breaking the full chain.

Data Ingestion: What Enters the System

In our data collection layer, we continuously pull ranked match data via Riot Match V5, normalize patches to major.minor, and aggregate role-aligned 1v1 lane matchups by rank and patch.

The resulting training rows include core fields like:

  • champion vs enemy champion
  • role
  • rank
  • patch
  • win rate
  • sample size
  • advantage score

That output is stored in our internal data layer and becomes the training substrate for downstream models.
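As a concrete sketch, here is what one aggregated training row might look like, including the patch normalization to major.minor. The class and field names are illustrative placeholders, not our actual internal schema:

```python
from dataclasses import dataclass

def normalize_patch(version: str) -> str:
    """Collapse a full game version like '14.3.556.1234' down to '14.3'."""
    major, minor = version.split(".")[:2]
    return f"{major}.{minor}"

@dataclass(frozen=True)
class MatchupRow:
    """One role-aligned 1v1 lane matchup aggregate (illustrative schema)."""
    champion: str
    enemy_champion: str
    role: str
    rank: str
    patch: str
    wins: int
    games: int  # sample size

    @property
    def win_rate(self) -> float:
        return self.wins / self.games if self.games else 0.0

    @property
    def advantage_score(self) -> float:
        # Deviation from an even matchup; positive favors `champion`.
        return self.win_rate - 0.5

row = MatchupRow("Darius", "Garen", "TOP", "GOLD",
                 normalize_patch("14.3.556.1234"), wins=540, games=1000)
```

Aggregating per (champion, enemy, role, rank, patch) is what keeps matchup snapshots stable across a patch rather than drifting with every new match.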

Authoritative source for the game API foundation: Riot Developer Portal.

Model Training: Distinct Models for Distinct Decisions

In our training layer, we train separate model families for different decision contexts.

Training runs in a loop and publishes artifacts with metrics for reliability tracking.

A key engineering decision was graceful degradation:

  • Use advanced draft features when sufficient valid rows exist.
  • Fall back to simpler feature sets when data is sparse.

That keeps the system resilient during patch transitions and uneven data availability.
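One way to sketch that fallback logic is an ordered list of feature tiers, richest first, gated by how many valid rows survived validation. The tier names and thresholds here are invented for illustration, not our production values:

```python
# Illustrative tiers: (minimum valid rows, feature set name), richest first.
FEATURE_TIERS = [
    (2000, "advanced_draft"),   # embeddings + matchup differentials
    (300, "matchup_basic"),     # win rate with sample-weighted priors
    (0, "champion_prior"),      # per-champion baseline only
]

def select_feature_set(valid_rows: int) -> str:
    """Return the richest feature set the available data supports."""
    for threshold, name in FEATURE_TIERS:
        if valid_rows >= threshold:
            return name
    return "champion_prior"
```

Because every tier degrades to the one below it, a sparse post-patch dataset still produces a usable (if coarser) model instead of a failed training run.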

Feature Strategy: Representation + Context Signals

One shared primitive across the stack is champion representation vectors, used in both training and live inference logic.

In advanced feature pipelines, we combine:

  • embedding vectors for both sides of a matchup
  • matchup differential features
  • role signals
  • historical context features
  • scaling and damage profile differences

The objective is practical, not academic: use features expressive enough to improve recommendations while keeping inference stable and explainable.
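A minimal, dependency-free sketch of how such a feature row could be assembled. The vector values and signal names below are illustrative placeholders, not our production feature schema:

```python
def matchup_features(ally_vec, enemy_vec, role_onehot, context):
    """Combine representation and context signals into one feature row (sketch).

    ally_vec / enemy_vec: learned champion representation vectors.
    role_onehot: lane indicator.
    context: historical, scaling, and damage-profile signals.
    """
    diff = [a - e for a, e in zip(ally_vec, enemy_vec)]  # matchup differential
    return list(ally_vec) + list(enemy_vec) + diff + list(role_onehot) + list(context)

features = matchup_features(
    [0.2, -0.1, 0.7],   # ally champion embedding (illustrative)
    [0.5, 0.3, -0.2],   # enemy champion embedding
    [1, 0, 0, 0, 0],    # TOP lane one-hot
    [0.54, 0.12],       # e.g. historical win rate, scaling difference
)
```

Keeping the differential as explicit features (rather than relying on the model to learn subtraction) is one of the choices that makes individual recommendations easier to explain.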

App Runtime: From Game State to Decision Output

In the active client path, recommendation flow is:

  1. The app composes the current game state.
  2. A recommendation layer computes a stable state signature, then applies caching and debounced fetch behavior.
  3. A runtime service resolves inference via multiple sources and applies quality-gate logic.

Parallel to that, live analysis flow is:

  1. A live analysis layer receives game state.
  2. It computes anti-heal need, objective priorities, teamfight outlook, and more.
  3. A counter system contributes item adaptation suggestions across key threat categories.

This design keeps recommendations timely without requiring every decision to depend on a remote round-trip.
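To make the live analysis step concrete, here is a deliberately simplified sketch of one signal: anti-heal need derived from enemy composition. The threat table, scores, and thresholds are invented for illustration; the real signal would also consult live game state:

```python
# Illustrative healing-threat scores per enemy champion (not our real table).
HEAL_THREAT = {"Soraka": 3, "DrMundo": 3, "Aatrox": 2, "Vladimir": 2, "Yuumi": 2}

def anti_heal_need(enemy_team: list[str]) -> str:
    """Rough anti-heal priority from enemy composition (sketch, not our model)."""
    score = sum(HEAL_THREAT.get(champ, 0) for champ in enemy_team)
    if score >= 4:
        return "high"    # prioritize a Grievous Wounds item early
    if score >= 2:
        return "medium"  # plan an anti-heal component into the build path
    return "low"
```

Signals like this are cheap enough to recompute locally every time game state changes, which is exactly why they live in the client-side analysis layer rather than behind a remote call.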

Want these recommendations live in your matches? Get on the waitlist and we will notify you when the desktop launcher opens.

What "Thinks While You Play" Means in Practice

For us, the phrase means three concrete things:

  1. Time-sensitive updates: recommendations refresh as state changes, not only before game start.
  2. Uncertainty handling: low-confidence contexts should be labeled as such, not presented as fake certainty.
  3. Fallback continuity: if a model artifact or endpoint is unavailable, the user still gets safe baseline options.

You can see this philosophy directly in implementation patterns:

  • short-lived caching with signature-based deduplication
  • pending request registries to avoid duplicate concurrent fetches
  • deterministic fallback pick/ban/shop options
  • telemetry-informed quality gates over time
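The first two patterns, signature-based deduplication and a pending request registry, can be sketched in a few lines. This is a simplified synchronous version with an invented TTL; the real client logic is more involved:

```python
import hashlib
import json
import time

CACHE_TTL_SECONDS = 5.0  # illustrative: live game state goes stale fast
_cache: dict[str, tuple[float, dict]] = {}
_pending: set[str] = set()  # registry of in-flight signatures

def state_signature(game_state: dict) -> str:
    """Stable signature: identical states hash identically regardless of key order."""
    canonical = json.dumps(game_state, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def get_recommendation(game_state: dict, fetch):
    """Serve from cache, skip duplicate in-flight fetches, otherwise fetch."""
    sig = state_signature(game_state)
    entry = _cache.get(sig)
    if entry and time.monotonic() - entry[0] < CACHE_TTL_SECONDS:
        return entry[1]  # fresh cache hit
    if sig in _pending:
        return None      # duplicate concurrent request: caller waits for the first
    _pending.add(sig)
    try:
        result = fetch(game_state)  # remote or local inference
        _cache[sig] = (time.monotonic(), result)
        return result
    finally:
        _pending.discard(sig)
```

The signature is what makes the cache safe: two renders of the same draft state hit the same entry even if the state object was rebuilt with keys in a different order.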

Post-Game Feedback Loop

We also treat post-game analysis as part of live coaching quality.

Our sync and insights layers ingest match plus timeline data and return:

  • categorized mistakes (for example positioning, objective, recall)
  • performance summaries
  • concise executive takeaways

That creates a loop: game context informs recommendations, and post-game outcomes inform calibration and player learning.
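As an illustration of the categorization step, here is a rule-based sketch that maps a timeline death event to one of the coaching categories mentioned above. The event fields and rules are invented for this example and are much simpler than the real analysis:

```python
def categorize_death(event: dict):
    """Map a timeline death event to a coaching category (simplified sketch)."""
    if event.get("near_objective") and not event.get("had_vision"):
        return "objective"    # died contesting an objective without vision
    if event.get("overextended"):
        return "positioning"  # caught out past safe territory
    if event.get("low_hp_no_recall"):
        return "recall"       # stayed in a losing state instead of resetting
    return None               # no categorized mistake for this event
```

Grouping individual events into a handful of named categories is what lets the post-game summary stay concise instead of dumping the raw timeline back at the player.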

Compliance Was a Core Design Constraint

We only provide recommendations, never automation.

That principle shaped both product and technical design. For example, jungle path and gank timing guidance is based on statistical priors rather than hidden tracking of live enemy positions. The goal is useful guidance within policy boundaries, not edge-case exploitation.


Hard Problems We Had to Solve

Three tradeoffs came up repeatedly:

  • Freshness vs stability: faster updates improve relevance but can increase noise.
  • Model power vs reliability: stronger models only help if the delivery chain is robust.
  • Depth vs readability: recommendations need to be explainable in seconds, not essays.

That is why we chose modular services, explicit fallbacks, and constrained output contracts.

What Players Actually Feel in Real Games

A fair question is: what does all this architecture actually change for a player in ranked?

In practice, the intended impact is not "more notifications." It is better decisions with less hesitation:

  • In draft: fewer coin-flip choices and clearer risk/reward tradeoffs.
  • In lane and mid game: better priority between fighting, resetting, objective setup, and farming.
  • In itemization: fewer autopilot builds when enemy composition requires adaptation.
  • After game: short, actionable feedback instead of manually digging through replay timelines.

If we execute well, the AI should feel like a calm second brain, not background noise.


Conclusion

If you remember only three points, make it these:

  1. ArcaThread AI Coach is built as a continuous system, not a static guide.
  2. We combine pipeline freshness and local live analysis to balance speed and context.
  3. We prioritize explainable suggestions and policy-safe design over black-box automation.

As launch gets closer, we will publish more deep dives on calibration, confidence gating, and what works best across different ranked contexts.

Want to be first in line? Join the waitlist for early access, release updates, and new technical breakdowns from the ArcaThread team.


