Documentation · v0.1
Everything you need to install STORM, define your first agent, and ship it to production. Boring, composable, honest — start anywhere below.
STORM is a single primitive — install, configure, deploy. No orchestrator, no SDK soup, no vendor lock-in.
Add storm to your project with one command.
STORM ships as a single package with zero peer dependencies. It runs anywhere JavaScript runs — Node, Bun, Deno, Cloudflare Workers, or your own host.
bun add @storm/core
Spin up your first agent in under five minutes.
Define a signal source, a scoring policy, and an action handler. STORM wires them into a cycle and starts logging. That's the whole tutorial.
import { storm } from "@storm/core";

storm({
  signals: ["http"],
  policy: (s) => s.confidence,
  action: async (s) => console.log("fire", s),
}).start();
Signals, policies, sizing, memory — wired in one block.
Every knob lives in a single config object. Sensible defaults out of the box; override only what you need. Configuration is plain data — diff it, version it, replay it.
Four primitives, one cycle. Everything else is a composition.
HTTP, webhooks, streams, and model outputs as a typed event bus.
Signals are the only way data enters a STORM cycle. They are typed, timestamped, and addressable. Build your own source by implementing a single async iterator.
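The source contract above can be sketched in a few lines. This is a sketch, assuming a hypothetical `Signal` shape with `type`, `ts`, `id`, and `payload` fields; check your @storm/core types for the real interface.

```typescript
// Hypothetical Signal shape, not the library's real type.
interface Signal<T> {
  type: string;
  ts: number;      // epoch millis, stamped at ingestion
  id: string;      // addressable: source name plus sequence number
  payload: T;
}

// A source is just an async iterator of signals.
async function* intervalSource(
  ms: number,
  count: number,
): AsyncGenerator<Signal<number>> {
  for (let seq = 0; seq < count; seq++) {
    await new Promise((r) => setTimeout(r, ms));
    yield { type: "tick", ts: Date.now(), id: `interval:${seq}`, payload: seq };
  }
}

// Drain it the same way a cycle would.
async function collect(): Promise<number[]> {
  const out: number[] = [];
  for await (const s of intervalSource(1, 3)) out.push(s.payload);
  return out;
}
```

Any async generator with this shape plugs into the `signals` array.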
Score events deterministically and gate actions by confidence.
A policy is a pure function from signal to score. Determinism is enforced — no Date.now(), no Math.random() unless seeded. This is what makes replay possible.
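A policy that needs randomness can stay deterministic by seeding from the signal itself. A minimal sketch using mulberry32, an illustrative PRNG choice that is not part of STORM:

```typescript
// mulberry32: a tiny deterministic PRNG, seeded per signal.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

type Sig = { seq: number; confidence: number };

// Deterministic: the same signal always yields the same score,
// because the jitter is seeded from the signal, never the clock.
function policy(s: Sig): number {
  const rand = mulberry32(s.seq);
  const jitter = rand() * 0.01; // tiny exploration term
  return Math.min(1, s.confidence + jitter);
}
```

Replaying the same log through this policy reproduces every score bit-for-bit.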
Execute on-chain, off-chain, or shell — with adaptive sizing.
Handlers are the only place side effects happen. STORM gates every handler call with the score from the policy and the size from the sizer. Failed handlers are retried with exponential backoff and logged either way.
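The retry behavior can be pictured as a small wrapper. A sketch, assuming simple doubling delays and an attempt cap; STORM's built-in retry knobs may differ:

```typescript
// Retry a failed handler with exponential backoff: 100ms, 200ms, 400ms, ...
// Delays, attempt cap, and error surfacing are illustrative choices.
async function withBackoff<T>(
  fn: () => Promise<T>,
  { attempts = 5, baseMs = 100 } = {},
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (i < attempts - 1) {
        await new Promise((r) => setTimeout(r, baseMs * 2 ** i));
      }
    }
  }
  throw lastErr; // logged either way, then surfaced to the cycle
}
```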
Immutable, replayable cycle records.
Every cycle writes one log entry containing the signal, score, action, and result. Logs are append-only and content-addressed. Replay any window of history bit-for-bit.
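A content-addressed, append-only log can be modeled as a hash chain. A self-contained sketch with illustrative field names, not STORM's on-disk format:

```typescript
import { createHash } from "node:crypto";

// Each entry's hash covers its payload plus the previous entry's hash,
// so every record is addressed by its content and linked to its history.
type Entry = { prev: string; body: string; hash: string };

function append(log: Entry[], body: unknown): Entry[] {
  const prev = log.length ? log[log.length - 1].hash : "genesis";
  const serialized = JSON.stringify(body);
  const hash = createHash("sha256").update(prev + serialized).digest("hex");
  return [...log, { prev, body: serialized, hash }];
}
```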
The runtime is small on purpose. Bring your own host.
Persistent, replayable cycles. Every decision is auditable.
Memory is a key-value store scoped per cycle. Use any backend that exposes get/set — Redis, SQLite, Cloudflare KV, or the in-memory default for tests.
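The backend contract is small enough to state directly. A sketch, assuming `get`/`set` method names; match them to your @storm/core version:

```typescript
// Any backend exposing this pair can serve as memory.
interface Memory {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string): Promise<void>;
}

// The in-memory default, suitable for tests.
class InMemory implements Memory {
  private store = new Map<string, string>();
  async get(key: string) { return this.store.get(key); }
  async set(key: string, value: string) { this.store.set(key, value); }
}
```

A Redis or SQLite adapter is the same interface with the calls swapped out.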
Structured logs, traces, and metrics out of the box.
STORM emits OpenTelemetry-compatible spans for every cycle. Pipe them into your existing stack — Datadog, Honeycomb, Grafana, or just stdout.
Runtime-agnostic — Node, Bun, Workers, or your own host.
There is no STORM runtime to install. Your handler runs in your process. We ship adapters for the common hosts and a 40-line example for everything else.
Composable shapes that show up over and over in real systems.
One signal, many handlers, one log entry per action.
Multiple handlers on a single cycle are first-class. Each one is gated, sized, and logged independently. Use this for mirrored execution or shadow rollouts.
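Independent gating per handler can be sketched as a fan-out helper. Shapes and names here are illustrative, not the library's API:

```typescript
// Each handler carries its own gate threshold and is invoked (or skipped)
// independently: one result record per handler, one log entry per action.
type Handler = {
  name: string;
  threshold: number;
  fire: (score: number) => Promise<string>;
};

async function fanOut(score: number, handlers: Handler[]) {
  return Promise.all(
    handlers.map(async (h) => {
      if (score < h.threshold) return { name: h.name, fired: false as const };
      return { name: h.name, fired: true as const, result: await h.fire(score) };
    }),
  );
}
```

A shadow rollout is simply a second handler with a higher threshold or a no-op side effect.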
Re-run any window of history against a new policy.
Point STORM at a log range and a new policy. It replays every signal, scores it under the new rules, and produces a delta. No production traffic required.
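The delta computation can be pictured offline. A self-contained sketch that flags signals whose gate decision flips under the new policy; STORM's replay returns a richer stream:

```typescript
// Score each logged signal under the old and new policy and keep
// only the rows whose gate decision changes.
type Logged = { id: string; confidence: number };
type Policy = (s: Logged) => number;

function replayDelta(
  log: Logged[],
  oldP: Policy,
  newP: Policy,
  gate: number,
) {
  return log
    .map((s) => ({ id: s.id, before: oldP(s) >= gate, after: newP(s) >= gate }))
    .filter((r) => r.before !== r.after);
}
```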
Signals queue, scores throttle, handlers stay honest.
When a downstream handler is slow, STORM applies backpressure at the signal layer rather than dropping work or blocking the loop. Capacity is configurable per source.
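Pull-based backpressure amounts to a bounded queue whose `push` only resolves when there is room. A sketch; the capacity knob stands in for the per-source configuration:

```typescript
// A slow consumer naturally slows the producer instead of dropping work:
// push() awaits until shift() frees a slot.
class BoundedQueue<T> {
  private items: T[] = [];
  private waiters: (() => void)[] = [];
  constructor(private capacity: number) {}

  async push(item: T): Promise<void> {
    while (this.items.length >= this.capacity) {
      await new Promise<void>((r) => this.waiters.push(r));
    }
    this.items.push(item);
  }

  shift(): T | undefined {
    const item = this.items.shift();
    this.waiters.shift()?.(); // wake one blocked producer
    return item;
  }
}
```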
The full surface area of @storm/core. Small on purpose.
Construct an agent. Returns a controllable cycle runner.
Accepts a config object describing signals, policy, action, sizing, and memory. Returns an instance with .start(), .stop(), .replay(range), and .snapshot().
const agent = storm({
  signals: [http({ port: 8080 }), webhook("/x")],
  policy: (s) => model.score(s),
  action: async (s) => handler.fire(s),
  sizing: confidence({ floor: 0.7 }),
  memory: postgres(process.env.DB_URL),
});

await agent.start();
Re-run any window of cycle history against the current policy.
Returns a stream of synthetic cycles with the would-be score and action for each historical signal. Use it to validate policy changes against real traffic before shipping.
const delta = await agent.replay({
  from: "2025-04-01",
  to: "2025-04-08",
});
Capture full agent state for cold-start or migration.
Returns a serializable snapshot — config hash, memory contents, in-flight signals. Restore on a new host with storm.restore(snapshot).
Three agents, three domains, the same primitive.
Fan out four specialist agents and synthesize a structured verdict.
The reference swarm shipped with this site: it streams Researcher, Skeptic, Quant, and Strategist deliberations in parallel, then a Coordinator agent returns a verdict, confidence, and key risks via a tool call. See it running at /app.
Score incoming webhooks and route to the right on-call handler.
A 60-line example showing how STORM replaces a typical PagerDuty + workflow tool stack. Confidence gating, adaptive cooldown, full replay.
Score model outputs as signals; only fire emails above 0.85.
Wire any chat completion as a signal source. The policy thresholds the score; the handler is your transactional email provider. Hallucinated drafts never leave the cycle.
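The gate itself is a few lines. A sketch with a stand-in score field; in practice the score comes from your policy over the model output:

```typescript
// The handler (your email provider) only ever sees drafts above the gate.
type Draft = { text: string; score: number };

function gate(
  draft: Draft,
  send: (text: string) => void,
  threshold = 0.85,
): boolean {
  if (draft.score < threshold) return false; // low-confidence drafts never leave the cycle
  send(draft.text);
  return true;
}
```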
Trust boundaries are explicit. The runtime never owns secrets.
Stored in your environment. STORM never serializes them into logs.
Logs redact known secret-shaped values by default. Bring-your-own scrubber for custom formats.
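A bring-your-own scrubber is just a string-to-string function. A sketch with illustrative patterns; the built-in scrubber's exact pattern set is not specified here:

```typescript
// Patterns for common secret shapes. Extend this list for custom formats.
const SECRET_PATTERNS: RegExp[] = [
  /sk-[A-Za-z0-9]{20,}/g, // OpenAI-style API keys
  /AKIA[0-9A-Z]{16}/g,    // AWS access key IDs
  /ghp_[A-Za-z0-9]{36}/g, // GitHub personal access tokens
];

function scrub(line: string): string {
  return SECRET_PATTERNS.reduce((s, re) => s.replace(re, "[REDACTED]"), line);
}
```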
Replay runs handlers in a sandbox by default — no real side effects.
Side-effecting handlers see a stub network and stub clock during replay. Opt in per-handler if you want a real-money replay (you usually don't).
Every cycle is content-addressed. Tampering is detectable.
Logs form a hash chain. A modified entry invalidates every later hash. Verify offline with storm verify <log-file>.
No. LLMs are just one kind of signal source.
STORM is a runtime for autonomous cycles. Plug an LLM in as a signal or a policy if you want — but the standard does not assume one.
No. The cycle is the orchestrator.
If you want to chain cycles, have one cycle's action emit a new signal that another cycle subscribes to.
v0.1 is stable for the core cycle. Adapters are evolving.
We run STORM ourselves at six figures of cycles per day. Public APIs follow semver from v1.0 onward.
Different abstraction. STORM is a runtime, not a chain DSL.
Chain frameworks compose calls. STORM composes cycles. You can build a chain on top of STORM in a few lines, but the inverse is much harder.
STORM is open source and stewarded by a public working group. Spec changes follow an RFC process. The reference implementation is MIT.
Reference implementation, issues, and discussions. Open a PR against main.
Propose, discuss, and ratify spec changes in the open. All RFCs are versioned.
Use it, fork it, ship it. No CLA, no strings, no vendor.
Read the whitepaper for the protocol, runtime contract, and conformance rules.