/ Orchestration

Connect agents into production pipelines.

AIRMY Orchestration lets you build, deploy, and monitor directed agent pipelines — from simple chains to complex DAGs with conditional routing, parallelism, and error handling.

Input ──> Classifier Agent ──┬──> Data Analyst Agent ──┬──> Output Merger ──> Output
                             └──> Report Writer Agent ─┘

01 / Visual Pipeline Builder

Draw your pipeline. Export as code.

Drag-and-drop canvas for connecting agents into sequential, parallel, and conditional flows. When you're done designing, export the entire pipeline as YAML or Python SDK code — ready to commit and deploy.

  • Drag-and-drop agent nodes
  • Export to YAML or Python
  • Version-controlled pipeline configs

Pipeline Config

pipeline:
  name: revenue-analysis
  steps:
    - id: classify
      agent: classifier-v2
      output: intent

    - id: analyse
      agent: data-analyst-v3
      depends_on: [classify]
      condition: "intent == 'financial'"

    - id: report
      agent: report-writer-v1
      depends_on: [analyse]
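The config above describes a three-step DAG. As a rough illustration of how `depends_on` ordering resolves into a run order (plain Python with the standard-library `graphlib`, not the AIRMY SDK — only the step and agent names come from the config):

```python
# Sketch: resolve the pipeline config above into an execution order.
# Step ids and agents mirror the YAML; the resolver itself is illustrative.
from graphlib import TopologicalSorter

steps = {
    "classify": {"agent": "classifier-v2", "depends_on": []},
    "analyse":  {"agent": "data-analyst-v3", "depends_on": ["classify"],
                 "condition": "intent == 'financial'"},
    "report":   {"agent": "report-writer-v1", "depends_on": ["analyse"]},
}

# Build the dependency graph (node -> set of predecessors) and
# compute an order in which every step runs after its dependencies.
graph = {step_id: set(cfg["depends_on"]) for step_id, cfg in steps.items()}
order = list(TopologicalSorter(graph).static_order())
print(order)  # ['classify', 'analyse', 'report']
```

Because the order is derived purely from `depends_on`, the same resolution works for chains, fan-outs, and full DAGs.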

Parallel Execution

data-analyst-v3    completed 48ms
report-writer-v1   completed 91ms
output-merger      waiting for join
Wall-clock time: 91ms (vs 139ms serial)

02 / Parallel Execution

Run agents simultaneously.

Fan independent agent tasks out across branches; AIRMY auto-joins when all of them complete. For complex workflows with multiple independent sub-tasks, parallel execution can cut wall-clock time by 60% or more.
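The fan-out-and-join behaviour can be sketched with plain asyncio (agent names and timings mirror the trace above; the scheduling here is illustrative, not the AIRMY runtime):

```python
# Sketch: run two independent agent tasks concurrently and join on both.
import asyncio
import time

async def run_agent(name: str, duration: float) -> str:
    await asyncio.sleep(duration)  # stand-in for a real agent call
    return f"{name}: done"

async def main():
    start = time.monotonic()
    # gather() is the "auto-join": it waits for every branch to finish.
    results = await asyncio.gather(
        run_agent("data-analyst-v3", 0.048),
        run_agent("report-writer-v1", 0.091),
    )
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
# Wall-clock time tracks the slowest branch (~0.091s),
# not the 0.139s serial sum of both branches.
print(f"{elapsed:.3f}s")
```

The same principle explains the 91ms-vs-139ms figure above: with parallel branches, total latency is the maximum branch time, not the sum.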

03 / Conditional Routing

Route by output, confidence, or custom logic.

Route between agents based on output content, confidence scores, or any custom predicate. Build classification funnels, escalation paths, and fallback chains — all without writing routing logic yourself.

  • Output-based routing expressions
  • Confidence threshold branching
  • Default / fallback path

Routing Rule

route:
  condition: output.confidence < 0.8
  if_true:
    next: human-escalation
  if_false:
    next: auto-responder
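The same rule can be read as a predicate over the agent's output. A sketch in plain Python (the step names come from the routing rule above; the dict shape and function are assumptions for illustration):

```python
# Sketch: confidence-threshold routing as a plain predicate.
def route(output: dict, threshold: float = 0.8) -> str:
    """Return the id of the next step based on the agent's confidence."""
    if output.get("confidence", 0.0) < threshold:
        return "human-escalation"  # low confidence: escalate to a human
    return "auto-responder"        # high confidence: continue automatically

print(route({"confidence": 0.55}))  # human-escalation
print(route({"confidence": 0.93}))  # auto-responder
```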

Retry Policy

retry:
  max_attempts: 3
  backoff: exponential
  initial_delay: 1s
  on_failure:
    action: dead_letter_queue
    notify: slack

04 / Error Handling & Retry

Resilient by default.

Per-step retry policies with exponential backoff and configurable max attempts keep transient errors from killing your pipeline. Failed tasks go to a dead-letter queue for inspection, and you can manually resume from any checkpoint.
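A minimal sketch of the retry loop this policy describes (pure Python; `max_attempts` and the exponential 1s/2s/4s delays mirror the config above, while the function itself is illustrative):

```python
# Sketch: per-step retry with exponential backoff.
import time

def run_with_retry(task, max_attempts: int = 3, initial_delay: float = 1.0):
    """Call task(); on a transient failure, retry with growing delays."""
    delay = initial_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                # Out of attempts: re-raise so the failure can be
                # handed to a dead-letter queue for inspection.
                raise
            time.sleep(delay)
            delay *= 2  # exponential backoff: 1s, 2s, 4s, ...
```

With the policy above, a step that fails twice on transient errors still succeeds on its third attempt without killing the pipeline.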

05 / State & Context Passing

Type-safe data flow between agents.

Pass structured outputs between agents. Transform, filter, and enrich data at each step. Full JSON Schema validation ensures the right data shape at every stage — with clear error messages when something doesn't match.

Context Schema

output_schema:
  type: object
  properties:
    summary:
      type: string
    confidence:
      type: number
      minimum: 0
      maximum: 1
    tags:
      type: array
      items: { type: string }
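As an illustration of what validation against this schema catches, a hand-rolled check in plain Python (the field names and constraints come from the schema above; a real pipeline would use a full JSON Schema validator rather than manual checks):

```python
# Sketch: validate an agent's output against the context schema above.
def validate_output(data: dict) -> list[str]:
    """Return a list of clear error messages; empty means valid."""
    errors = []
    if not isinstance(data.get("summary"), str):
        errors.append("summary: expected string")
    conf = data.get("confidence")
    if not isinstance(conf, (int, float)) or not 0 <= conf <= 1:
        errors.append("confidence: expected number in [0, 1]")
    tags = data.get("tags")
    if not (isinstance(tags, list) and all(isinstance(t, str) for t in tags)):
        errors.append("tags: expected array of strings")
    return errors

print(validate_output({"summary": "Revenue up 12%", "confidence": 0.92,
                       "tags": ["revenue"]}))  # [] -> valid
```

Running a step's output through a check like this before the next agent consumes it is what guarantees the right data shape at every stage.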

Pipeline Trace

  0ms   pipeline started
  2ms   classify → input validated
 48ms   classify → completed (intent: financial)
 49ms   analyse, report → started in parallel
140ms   output-merger → completed

06 / Pipeline Monitoring

Full observability into every run.

Each pipeline run is a traceable span. See exactly where time was spent, which agents were called, what data flowed through each step, and where failures occurred — all in a waterfall view in the monitoring dashboard.

Build your first pipeline today.

From a simple 2-agent chain to a full DAG — AIRMY handles the wiring.