
Hello Fleet Tutorial

This guided path takes you from a clean clone to a full Trust Mesh evidence run:

  1. Running the local FleetForge demo.
  2. Submitting the "Hello Fleet" DAG.
  3. Capturing capability tokens.
  4. Exporting attestations, AIBOM metadata, and C2PA manifests (with a stub SCITT entry when running the Enterprise tier).
  5. Verifying the entire evidence chain with a single fleetforge-ctl verify command.
  6. Replaying deterministically and exercising a guardrail.

Follow the steps in order; each step builds on the previous one.

What you’ll do
  1. Run the Hello Fleet demo, watch the job finish, and point reviewers to the receipt in /demo.
  2. Verify that receipt from the CLI (fleetforge-ctl receipt or fleetforge-ctl verify) so reviewers see signed evidence.
  3. Replay the same run with fleetforge-ctl replay <RUN_ID> to prove deterministic behaviour and highlight drift.

This tutorial underpins the Status & Acceptance checklist. When you need to prove capability tokens, attestations, C2PA manifests, or replay evidence, extend this single workflow instead of minting new demos.

DEMO (show-me checklist) — Keep these three commands ready after every run:

fleetforge-ctl receipt --run-id <RUN_ID> \
--manifest build/receipts/<RUN_ID>.c2pa.json
fleetforge-ctl verify tmp/hello-fleet/artifacts/demo-output.bin \
--manifest tmp/hello-fleet/artifacts/demo-output.bin.c2pa.json
fleetforge-ctl replay <RUN_ID>

Canonical adapter

Hello Fleet now standardises on the LangGraph agent_team scenario under examples/_packs/demo-pack/agent_team. The UI, CLI, and docs all reference that single run so reviewers get one deterministic storyline.

Need AutoGen or CrewAI instead?

Stick with LangGraph for acceptance. AutoGen and CrewAI remain supported but ship disabled by default; pass --adapter autogen or --adapter crewai only when a partner insists, then link the evidence back to the canonical LangGraph run before returning to this tutorial.

Need to show a guardrail denial?

Use the optional agent_team_openai preset:

cargo run -p fleetforge-ctl -- submit \
--file examples/_packs/demo-pack/agent_team_openai/run_spec.json

Its engineer budget is intentionally tight, so the guardrail emits a budget_exceeded receipt. Mention this variant explicitly when you run it; the canonical “Hello Fleet” acceptance slice still references agent_team.

Prerequisites

  • Rust (stable toolchain)
  • Node.js 20+ with pnpm
  • Docker (Desktop or Engine) with Compose
  • Set TRUST_MESH_ALPHA=1 so spans and receipts include trust attributes.
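
The flag can be exported in your shell session before starting the stack:

```shell
# Enable Trust Mesh attributes so spans and receipts carry trust metadata.
export TRUST_MESH_ALPHA=1
```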

Clone the repository and enter it:

git clone https://github.com/fleetforgedev/fleetforge.git
cd fleetforge

1. Start the demo stack

Launch the runtime, UI, and dependencies with the bundled recipe:

just demo

The command builds the toolbox image, runs Postgres, applies migrations, starts fleetforge-runtime, and boots the Next.js console at http://localhost:3000/demo.

2. Submit the Hello Fleet run

In the demo UI, click Run Demo. This submits the LangGraph agent_team DAG (examples/_packs/demo-pack/agent_team/dag.json) and returns a run_id. Mirror the same path from the CLI:

cargo run -p fleetforge-ctl -- submit \
--file examples/_packs/demo-pack/agent_team/run_spec.json

Copy the printed RUN_ID immediately; it is the value you will paste into fleetforge-ctl receipt --run-id <RUN_ID> during interviews so reviewers see the exact same receipts as /demo. Watch the run finish in the UI; the run timeline and tap viewer stream live events from the runtime.

3. Capture capability tokens and trust spans

Retrieve the run snapshot and confirm every step carries a capability_token_id.

cargo run -p fleetforge-ctl -- get <RUN_ID> \
| jq '.steps[] | {id, capability_token_id: .output_snapshot.capability_token_id}'

Tokens must be present for all tool, LLM, and agent steps; a missing ID indicates the guard blocked execution.
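
If you want to rehearse the jq filter without a live runtime, here is a minimal sketch against a hand-written snapshot. The steps[].output_snapshot.capability_token_id shape matches the filter above; the surrounding fields and values are invented for illustration.

```shell
# Hypothetical run snapshot, shaped to match the jq filter in this step.
cat > /tmp/snapshot.json <<'EOF'
{
  "steps": [
    {"id": "planner",  "output_snapshot": {"capability_token_id": "cap_demo-42af"}},
    {"id": "engineer", "output_snapshot": {"capability_token_id": null}}
  ]
}
EOF

# Same filter as the tutorial: one {id, capability_token_id} object per step.
jq '.steps[] | {id, capability_token_id: .output_snapshot.capability_token_id}' /tmp/snapshot.json

# Count steps whose token is missing; a non-zero count means the guard blocked something.
jq '[.steps[] | select(.output_snapshot.capability_token_id == null)] | length' /tmp/snapshot.json
```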

Surface attestation + capability IDs in the UI

The demo view now mirrors the spans and receipts in real time:

  1. In /demo, click the Copy cURL button; the generated request includes the run_id you just submitted. Paste it into a terminal to show reviewers the API contract if needed.
  2. Scroll to the Receipts panel. Every row shows:
    trust.attestation_id: 5ec8dfd0-bf8f-4611-9b7e-61a39f64f0b1
    trust.capability_token_id: cap_demo-42af
    These are the same IDs emitted in OpenTelemetry and exported via fleetforge-ctl receipt.
  3. Open your tracing backend (or the built-in /demo span viewer) and filter for trust.capability_token_id. You should see the IDs match the CLI output.
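
If your tracing backend is not handy, the filtering idea in step 3 can be rehearsed on an exported span file. The JSON below is a hand-written stand-in, not real OTLP output; only the trust.capability_token_id and trust.attestation_id attribute keys come from the tutorial.

```shell
# Hypothetical span export; only the trust.* attribute keys are from the tutorial.
cat > /tmp/spans.json <<'EOF'
{
  "spans": [
    {"name": "tool.call",
     "attributes": {"trust.capability_token_id": "cap_demo-42af",
                    "trust.attestation_id": "5ec8dfd0-bf8f-4611-9b7e-61a39f64f0b1"}},
    {"name": "db.query", "attributes": {}}
  ]
}
EOF

# Keep only spans that carry a capability token, mirroring the backend filter.
jq '[.spans[] | select(.attributes["trust.capability_token_id"] != null)
    | {name, token: .attributes["trust.capability_token_id"]}]' /tmp/spans.json
```

The surviving token should match the one printed by the CLI in the previous step.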

4. Export attestations, BOM metadata, and manifests

Use the audit exporter to pull the run’s attestation envelopes, CycloneDX AIBOM, and generated C2PA manifest. Enable the transparency writer if you want a stub SCITT entry in the bundle:

export FLEETFORGE_LICENSE_TIER=enterprise
export FLEETFORGE_TRANSPARENCY_BACKEND=local
export FLEETFORGE_TRANSPARENCY_WRITER=1
cargo run -p fleetforge-ctl -- audit export \
--bundle-dir tmp/hello-fleet \
--limit 200

The tmp/hello-fleet/ directory now contains a bundle.json file listing the attestation IDs plus per-artifact blobs (output binaries, C2PA manifests, and transparency metadata when the writer is enabled). Use those files—especially the <artifact>.bin + <artifact>.bin.c2pa.json pair—for verification. If you are running the OSS or Pro tier, skip the license and transparency vars; the exporter still emits the attestations/C2PA manifests but omits SCITT receipts.
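
Before running the full verifier you can sanity-check that an artifact's bytes match the digest its manifest records. This sketch uses a stand-in artifact and a flat .artifact_sha256 field invented for illustration; a real C2PA manifest nests the hash inside its assertions, so treat this as the shape of the check, not the real file format.

```shell
# Create a stand-in artifact and a minimal manifest recording its SHA-256.
printf 'hello fleet' > /tmp/demo-output.bin
digest=$(sha256sum /tmp/demo-output.bin | awk '{print $1}')
printf '{"artifact_sha256": "%s"}\n' "$digest" > /tmp/demo-output.bin.c2pa.json

# Offline check: recompute the digest and compare it to the recorded one.
recorded=$(jq -r '.artifact_sha256' /tmp/demo-output.bin.c2pa.json)
actual=$(sha256sum /tmp/demo-output.bin | awk '{print $1}')
[ "$recorded" = "$actual" ] && echo "digest matches"
```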

5. Verify the entire evidence chain

Run the single verification command against the artifact; it validates the C2PA manifest, resolves the referenced attestation IDs from the runtime, verifies the capability token chain, and (optionally) checks the SCITT receipt. Replace the placeholder filenames as needed. Include --transparency whenever the bundle contains *.scitt.json metadata; otherwise omit the flag.

cargo run -p fleetforge-ctl -- verify \
tmp/hello-fleet/artifacts/demo-output.bin \
--manifest tmp/hello-fleet/artifacts/demo-output.bin.c2pa.json \
--transparency tmp/hello-fleet/artifacts/demo-output.bin.scitt.json

Successful output shows:

  • Manifest validation succeeded plus the recorded SHA-256 digest.
  • Each attestation ID resolved with its predicate type and verified capability token (tool/scope/budget details are printed inline).
  • SCITT receipt verified … when a local or remote transparency writer generated metadata; otherwise the CLI reports that transparency was skipped.

Need an offline check? Use the new inspector:

cargo run -p fleetforge-ctl -- c2pa inspect \
--artifact tmp/hello-fleet/artifacts/demo-output.bin \
--manifest tmp/hello-fleet/artifacts/demo-output.bin.c2pa.json \
--transparency tmp/hello-fleet/artifacts/demo-output.bin.scitt.json

This command pretty-prints the manifest profile, policy assertions, identity subject, and capability chain without calling the runtime.

Example (abbreviated) output:

Trust chain:
artifact_sha256: 4e8b…
- 5ec8dfd0-bf8f-4611-9b7e-61a39f64f0b1
subject: step:2a76…#planner
predicate_type: https://slsa.dev/provenance/v1
capability_token_id: 9b3c…
capability_subject: run 6af5… step 2a76…
scope.tool.name: demo.echo
scope.schema.hash: 2157…
scope.budget: tokens=Some(2000) cost_usd=Some(0.05) duration_ms=None
SCITT receipt verified
entry_id: run:6af5…/attestations
backend: local (stored)
Verification complete: 2 attestation(s) and the manifest are trusted.

6. Replay the run deterministically

Replays guarantee the same step ordering and outputs when you reuse the seed.

cargo run -p fleetforge-ctl -- replay <RUN_ID>

Compare the replay in the Runs Explorer. The diff view should show no changes between the live run and the replayed run.
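
The same no-drift property can be spot-checked from the shell by hashing the live and replayed artifacts. The file names below are placeholders; identical bytes model a clean replay.

```shell
# Stand-ins for the live and replayed outputs of a deterministic run.
printf 'step-1\nstep-2\n' > /tmp/live-output.txt
printf 'step-1\nstep-2\n' > /tmp/replay-output.txt

# A deterministic replay yields byte-identical artifacts, so the hashes agree.
live=$(sha256sum /tmp/live-output.txt | awk '{print $1}')
replay=$(sha256sum /tmp/replay-output.txt | awk '{print $1}')
[ "$live" = "$replay" ] && echo "replay is deterministic"
```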

7. Add a guardrail and trigger a policy catch

  1. Open examples/_packs/demo-pack/agent_team/dag.json.
  2. Locate the step with a budget block and lower the allowance:
    "budget": { "tokens": 10, "cost": 0.0001 }
  3. Resubmit the run (UI or CLI). The runtime fails the step with kind = budget_exceeded.
  4. Inspect the run detail view and the stored policy_decisions artifact to see the guardrail denial.
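
Steps 1 and 2 can also be scripted with jq instead of hand-editing the file. The DAG fragment below is a guess at the shape for illustration; point the filter at the real examples/_packs/demo-pack/agent_team/dag.json and adjust the step index to match.

```shell
# Hypothetical DAG fragment with one budgeted step.
cat > /tmp/dag.json <<'EOF'
{"steps": [{"id": "engineer", "budget": {"tokens": 2000, "cost": 0.05}}]}
EOF

# Lower the allowance so the runtime emits a budget_exceeded receipt on resubmit.
jq '.steps[0].budget = {"tokens": 10, "cost": 0.0001}' /tmp/dag.json > /tmp/dag.tight.json
jq '.steps[0].budget' /tmp/dag.tight.json
```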

8. Restore and continue

Reset the budget values, resubmit the run, and verify it succeeds. You now have a repeatable workflow that produces the capability tokens, attestation IDs, C2PA manifest, SCITT receipt (when enabled), replay evidence, and guardrail coverage expected by the Status & Acceptance checklist.

Next steps

  • Deploy the stack to Kubernetes with the Helm how-to.
  • Harden access with OIDC and guardrail packs.
  • Explore observability with ClickHouse & Grafana.
  • Produce bespoke manifests with fleetforge-ctl c2pa sign --profile <basic|policy-evidence|full> --artifact ... when you need to re-sign artifacts outside the runtime.