Infrastructure / Desktop

A.O.N.

Run agent squads with operational control.

A.O.N. turns a single objective into scoped parallel work with mission planning, isolated execution, honest grading, and replayable evidence. Builder, Tester, Critic, Architect, Copilot, and Observer agents all work inside an inspectable command layer.

Status: Development alpha · Target: late this year · Pricing: TBD

Without orchestration, multi-agent work fails in familiar ways.

A.O.N. exists for teams that want more than prompt fan-out. It keeps work coordinated, inspectable, and bounded enough to trust.

Coordination

Multiple agents touching the same codebase without a shared plan become a human routing problem. A.O.N. decomposes the mission, scopes every assignment, and prevents squads from stepping on the same surface.

Accountability

Confident language is not verification. A.O.N. grades findings, gates synthesis on verified output, and makes severity visible so incomplete work does not sneak through as a finished answer.

Visibility

If you cannot see what changed, who changed it, and why, you are operating on trust alone. A.O.N. keeps the full trail: manifests, events, file diffs, timelines, and replayable context.

Scale

One agent plus one supervising human does not meaningfully scale. A.O.N. gives you a structure for parallel agents, human checkpoints when confidence drops, and a path toward repeatable autonomous operations.

A.O.N. response

One mission in. Structured work, verified output, and clear evidence out.

Every agent receives a role, a working directory, a manifest, and a tool boundary. Every result comes back with grading, diffs, and trace lineage you can inspect before anything ships.

From intent to After Action Report.

The system stays disciplined by translating plain language into a plan, fanning work out in parallel, and reconciling everything through verification rather than vibes.

01 / Mission

Mission intake

Describe the objective in plain language: audit a repository, rebrand a UI, or generate a test surface. A.O.N. treats the human as strategist and translates intent into an executable mission.

02 / Plan

Work plan generation

The orchestrator scans the target, identifies the relevant files, and produces per-file or per-scope assignments. Each unit of work is paired with the right agent class, such as Builder, Tester, Critic, or Architect.
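As a rough illustration of that pairing step, here is a minimal sketch in Python. The role mapping, `Assignment` fields, and `plan` function are all hypothetical, since A.O.N.'s actual planner heuristics are not public.

```python
from dataclasses import dataclass
from pathlib import Path

# Hypothetical mapping from file type to agent class; the real
# planner's heuristics are richer and not public.
ROLE_BY_SUFFIX = {".py": "Builder", ".md": "Critic"}

@dataclass
class Assignment:
    path: str   # file this unit of work covers
    role: str   # agent class paired with it
    scope: str  # e.g. "review" or "refactor"

def plan(files: list[str], scope: str) -> list[Assignment]:
    """Pair each file in the scanned target with an agent class."""
    return [
        Assignment(f, ROLE_BY_SUFFIX.get(Path(f).suffix, "Copilot"), scope)
        for f in sorted(files)
    ]
```

The point is the shape of the output, not the heuristic: every unit of work leaves planning already bound to a role and a scope.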

03 / Execute

Parallel squad execution

Agents run in parallel with isolated job directories, structured manifests, and bounded turn limits. Tool access is scoped by role, provider choice can vary per mission, and ambiguity is escalated rather than guessed at.
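The fan-out pattern can be sketched in a few lines. This is an assumption-laden toy, not A.O.N.'s executor: `run_assignment`, `run_squad`, and the `MAX_TURNS` constant are illustrative names, and the agent loop itself is stubbed out.

```python
import concurrent.futures
import json
import tempfile
from pathlib import Path

MAX_TURNS = 8  # hypothetical bounded turn limit per assignment

def run_assignment(job_root: Path, name: str, manifest: dict) -> dict:
    """Run one assignment inside its own isolated job directory."""
    job_dir = job_root / name
    job_dir.mkdir(parents=True)
    (job_dir / "manifest.json").write_text(json.dumps(manifest))
    # ... the real agent conversation loop would run here, capped at MAX_TURNS ...
    return {"job": name, "status": "ok", "turns_used": 1}

def run_squad(manifests: dict[str, dict]) -> list[dict]:
    """Fan assignments out in parallel, one isolated directory each."""
    root = Path(tempfile.mkdtemp(prefix="aon-"))
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(run_assignment, root, n, m) for n, m in manifests.items()]
        return [f.result() for f in futures]
```

The isolation matters more than the concurrency primitive: each agent sees only its own directory and manifest, so parallel work cannot silently collide.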

04 / Verify

Synthesis and AAR

Outputs are collected, graded, and synthesized into an After Action Report. Findings are tagged by severity, file changes show before and after state, and trace lineage remains available for replay.
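Gating synthesis on verified output can be pictured as a small filter-and-rank step. The threshold, field names, and `gate` function below are assumptions for illustration, not A.O.N.'s actual grading pipeline.

```python
# Hypothetical severity ranking used to order a synthesis report.
SEVERITY_RANK = {"Critical": 2, "Warning": 1, "Info": 0}

def gate(findings: list[dict], min_confidence: float = 0.7) -> dict:
    """Only sufficiently confident findings enter synthesis; the rest escalate."""
    verified = [f for f in findings if f["confidence"] >= min_confidence]
    escalated = [f for f in findings if f["confidence"] < min_confidence]
    verified.sort(key=lambda f: SEVERITY_RANK[f["severity"]], reverse=True)
    return {"verified": verified, "escalated": escalated}
```

Low-confidence findings are not discarded; they are routed to a human instead of being laundered into the report.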

Every important move stays inspectable.

A.O.N. is not a wrapper around chat. It is an orchestration layer built for parallel agent work, evidence capture, and deliberate human oversight.

Planning

Mission-driven work plans

Natural-language objectives become structured assignments. A.O.N. scans the target, maps the relevant files, and assigns the right agent class before execution begins.

Parallelism

Parallel agent squads

Fan work out across multiple agents running simultaneously. Each assignment receives an isolated job directory, a structured input manifest, and explicit execution limits.

Specialization

Six agent classes, six operating modes

Builder writes, Tester verifies, Critic reviews, Architect designs, Copilot handles general work, and Observer monitors without modifying. The role is chosen to fit the mission, not forced after the fact.

Verification

Honest self-grading

Every output carries a confidence score. Synthesis reports grade findings by severity (Critical, Warning, or Info), making self-correction possible without pretending uncertainty does not exist.

Evidence

Before and after file previews

Every write is captured with before and after state. The dashboard can render side-by-side HTML and SVG previews, line-by-line diffs, and raw source comparison for exact inspection.
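The line-by-line diff view is the kind of thing Python's standard `difflib` produces; this sketch shows the general idea, not A.O.N.'s dashboard code.

```python
import difflib

def file_diff(before: str, after: str, path: str) -> str:
    """Unified line-by-line diff of a captured write, before vs. after."""
    return "".join(difflib.unified_diff(
        before.splitlines(keepends=True),
        after.splitlines(keepends=True),
        fromfile=f"a/{path}",
        tofile=f"b/{path}",
    ))
```

Capturing both states at write time is what makes the comparison exact later: nothing has to be reconstructed from memory or logs.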

Reporting

After Action Reports

Full mission records include event timelines, input and output manifests, findings, file artifacts, evaluation scores, and replayable trace lineage. Export the whole operation as structured JSON.
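A structured-JSON export might look roughly like the following. The `AfterActionReport` fields here are a guess at the record's shape based on the description above, not A.O.N.'s real schema.

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class AfterActionReport:
    mission: str
    events: list = field(default_factory=list)    # event timeline
    findings: list = field(default_factory=list)  # graded findings
    scores: dict = field(default_factory=dict)    # evaluation scores

def export_aar(aar: AfterActionReport) -> str:
    """Serialize the whole mission record as structured JSON."""
    return json.dumps(asdict(aar), indent=2)
```

Because the export is plain structured data, it can be archived, diffed between missions, or fed into external tooling.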

Escalation

Questions route back to the human

When ambiguity matters, agents ask instead of hallucinating. Question escalation preserves the agent context, waits for a human answer, and resumes the assignment with that guidance incorporated.

Composition

Provider-agnostic models and composable skills

The orchestration layer works across OpenAI, Anthropic, Google, Azure, and local Ollama setups. Skills like git inspection, HTTP access, self-critique, and MCP bridges can be combined per assignment.

  • Model choice per mission or per role
  • 12-skill catalog with reusable profiles
  • Structured tool boundaries instead of free-form tool drift
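Provider-agnostic model selection is typically a registry of factories keyed by provider name. The registry, decorator, and config shapes below are illustrative assumptions, not A.O.N.'s configuration format.

```python
from typing import Callable

# Hypothetical provider registry; real A.O.N. configuration may differ.
PROVIDERS: dict[str, Callable[[str], dict]] = {}

def register(name: str):
    def deco(factory):
        PROVIDERS[name] = factory
        return factory
    return deco

@register("anthropic")
def _anthropic(model: str) -> dict:
    return {"provider": "anthropic", "model": model}

@register("ollama")
def _ollama(model: str) -> dict:
    return {"provider": "ollama", "model": model,
            "endpoint": "http://localhost:11434"}

def client_for(role_config: dict) -> dict:
    """Resolve a model client per mission or per role."""
    return PROVIDERS[role_config["provider"]](role_config["model"])
```

Because resolution happens per role config, one mission can mix a hosted frontier model for the Architect with a local model for bulk Tester work.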

Small components, explicit traces, no hidden work.

A.O.N. follows the same disciplined posture as the wider Helix stack: narrow responsibilities, inspectable boundaries, and software that shows its work instead of burying it.

  • 6 agent classes
  • Parallel agents
  • <300 lines per file
  • 100% auditable traces

Intake

Orchestrator handles mission intake, lifecycle control, and mission-level decisions. Manifest Builder structures objectives, constraints, and context for each agent assignment.

Execution plane

Work Plan Executor fans assignments out to the squad. Agent Runner manages the conversation loop, tool execution, and output generation. Tool Registry and Skills Registry keep role-specific capabilities bounded and composable.

Evidence layer

Result Collector gathers outputs and findings, while the Event Store keeps an immutable record for replay. The Dashboard renders mission state, AARs, diffs, and operator approvals.
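An append-only event store is a standard way to get that replay property. This is a minimal in-memory sketch under assumed names (`EventStore`, `append`, `replay`); a real store would persist to disk.

```python
import json
import time

class EventStore:
    """Append-only event log sketch; replay re-reads events in order."""
    def __init__(self):
        self._events: list[str] = []  # append-only, never mutated in place

    def append(self, kind: str, payload: dict) -> None:
        record = {"t": time.time(), "kind": kind, "payload": payload}
        self._events.append(json.dumps(record))

    def replay(self) -> list[dict]:
        """Reconstruct mission history from the immutable record."""
        return [json.loads(e) for e in self._events]
```

Since events are only ever appended, the replayed timeline is the same record the mission actually produced, not a reconstruction.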

Human loop

Question Watcher surfaces agent uncertainty to the human with context intact, then routes the answer back to the waiting assignment so the mission can continue without inventing missing information.

Where agent squads pay for themselves.

A.O.N. is suited to work that spans many files, many checks, or many specialist viewpoints and still needs a human to trust the output.

Codebase audits

Point A.O.N. at a repository and ask for security, quality, or architecture review. Critic agents fan out per file and synthesis consolidates the findings into one report.

  • Per-file parallel analysis
  • Severity-graded findings
  • Synthesized cross-file report
  • Before and after fix previews

Large-scale refactoring

Rename a product across dozens of files or migrate from one framework to another. Builder agents make coordinated changes while the orchestrator keeps consistency visible.

  • Coordinated multi-file changes
  • Constraint-aware consistency
  • Visual diff of every modification
  • Quality score gating

Test generation

Tester agents inspect modules in parallel and generate regression surfaces that map directly to the source behavior.

  • Per-module test generation
  • Coverage-aware grading
  • Structured test reports
  • CI handoff readiness

Architecture review

Architect and Critic roles can combine code-level analysis with system design feedback to surface coupling, dependency, and SRP issues before they grow.

  • Design pattern analysis
  • Dependency mapping
  • SRP violation detection
  • Actionable remediation plans

Documentation generation

Use Builder agents to draft docs and Critic agents to verify them against implementation so the generated output is not detached from the real code.

  • Code-to-docs pipeline
  • Accuracy verification via Critic
  • Consistent format enforcement
  • API reference generation

Continuous agent operations

Honest grading today becomes the prerequisite for self-correction tomorrow. A.O.N. is built to support looped operations where trust accumulates through evidence rather than mystery.

  • Self-grading before self-correction
  • Trust scores across missions
  • Human-in-the-loop when confidence drops
  • Full audit trail for every decision

Pilot inspectable multi-agent execution.

A.O.N. is in development alpha for teams that need agent work to be observable enough for real operational use. Start the conversation, then decide where orchestration should sit in your workflow.