Product

StørmAI: event-time inference plane

GPU inference plane that turns deterministic features into versioned probabilistic signals.

Not a chatbot or LLM wrapper and not an enforcement engine; it emits inference objects only.

Pipeline role

Inference outputs you can enforce

StørmAI consumes deterministic feature vectors from StørmPrep, executes probabilistic inference on GPU, and emits versioned inference objects to StørmDecision and StørmVault. It does not apply policy or enforcement actions.

  • Determinism boundary: deterministic features in, probabilistic scores out.
  • Micro-batching and priority lanes for bounded latency targets.
  • Signed model packages and routing rules verified before execution.
  • Inference outputs carry provenance and are sealed in StørmVault.

Contract: inputs → outputs

Deterministic inputs from StørmPrep, GPU inference processing in the middle, and versioned outputs downstream.

Inputs

Deterministic feature vectors and context snapshots from StørmPrep.

Processing

GPU micro-batching with priority lanes and bounded scheduling.

Outputs

Versioned inference objects with provenance to StørmDecision and StørmVault.
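The contract above can be sketched as a data shape. This is a minimal illustration with hypothetical field names, not the actual StørmAI schema:

```python
from dataclasses import dataclass, field

# Illustrative shape of a versioned inference object; all field names
# are assumptions for this sketch, not the real StørmAI contract.
@dataclass(frozen=True)
class InferenceObject:
    model_id: str          # governed model identity
    model_version: str     # pinned model version
    feature_schema: str    # StørmPrep feature schema version
    score: float           # probabilistic output
    confidence: tuple      # (lower, upper) confidence bounds
    provenance: dict = field(default_factory=dict)  # signing/config metadata

obj = InferenceObject("fraud-risk", "2.4.1", "fs-v7", 0.87, (0.82, 0.91))
```

Freezing the dataclass mirrors the intent that an emitted inference object is immutable once sealed.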

How it works

Three steps from governed models to sealed inference outputs.

Load governed model

Signed model packages and routing policies are verified before execution.

Micro-batch inference

GPU micro-batching with priority lanes for bounded latency tiers.

Emit + seal outputs

Inference objects go to StørmDecision and are sealed in StørmVault.
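The three steps can be sketched end to end. The function bodies below are stand-ins (a signature presence check, a toy scorer, a SHA-256 digest), not the real verification, GPU execution, or StørmVault sealing logic:

```python
import hashlib

def verify_model(package: dict) -> bool:
    # Step 1 (stand-in): accept only packages that carry a signature.
    return bool(package.get("signature"))

def run_micro_batch(features: list) -> list:
    # Step 2 (stand-in): toy scoring in place of GPU micro-batch inference.
    return [{"score": min(1.0, sum(f) / (len(f) or 1))} for f in features]

def emit_and_seal(outputs: list) -> list:
    # Step 3 (stand-in): attach a tamper-evident digest before hand-off.
    for o in outputs:
        o["seal"] = hashlib.sha256(repr(o["score"]).encode()).hexdigest()
    return outputs

package = {"model_id": "m1", "signature": "sig"}
assert verify_model(package)
sealed = emit_and_seal(run_micro_batch([[0.2, 0.3], [0.9, 0.8]]))
```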

Interfaces

  • Inputs: deterministic feature vectors from StørmPrep.
  • Outputs: versioned inference objects with confidence and provenance metadata.
  • Contracts: model/version governance and reproducibility guarantees.
  • Failure semantics: backpressure and bounded degraded mode.

Capabilities

Operational behavior and evidence outputs for the inference plane.

Deterministic inputs

Deterministic features in

StørmAI consumes canonical events and deterministic feature vectors from StørmPrep with schema versions and context snapshots. So what: inference is reproducible for the same inputs and schema version.
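The reproducibility claim can be illustrated by deriving a key from the schema version plus the feature vector: identical inputs under the same schema yield an identical key. The function and payload layout here are hypothetical:

```python
import hashlib
import json

# Sketch: a reproducibility key over (schema version, features).
# Canonical JSON ordering makes the digest stable across runs.
def reproducibility_key(schema_version: str, features: list) -> str:
    payload = json.dumps({"schema": schema_version, "features": features},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

k1 = reproducibility_key("fs-v7", [0.1, 0.2])
k2 = reproducibility_key("fs-v7", [0.1, 0.2])
k3 = reproducibility_key("fs-v8", [0.1, 0.2])  # schema bump changes the key
```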

Latency posture

Micro-batching and prioritisation

Small, time-bounded batches preserve GPU efficiency while priority lanes and reserved capacity protect enforcement-critical streams. So what: latency remains bounded for critical tiers under load.
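A priority-lane micro-batcher can be sketched with a simple priority queue: lower lane numbers drain first, and a batch-size cap keeps each batch time-bounded. The lane numbering, cap, and class name are illustrative assumptions:

```python
import heapq

class MicroBatcher:
    """Sketch of a priority-lane micro-batcher (not the real scheduler)."""

    def __init__(self, max_batch: int):
        self.max_batch = max_batch
        self._heap = []   # entries: (lane, seq, request)
        self._seq = 0     # ties within a lane break by arrival order

    def submit(self, lane: int, request):
        heapq.heappush(self._heap, (lane, self._seq, request))
        self._seq += 1

    def next_batch(self) -> list:
        # Drain highest-priority requests first, capped at max_batch.
        batch = []
        while self._heap and len(batch) < self.max_batch:
            _, _, req = heapq.heappop(self._heap)
            batch.append(req)
        return batch

b = MicroBatcher(max_batch=2)
b.submit(2, "bulk-1")
b.submit(0, "critical-1")   # enforcement-critical lane jumps the queue
b.submit(2, "bulk-2")
first = b.next_batch()      # critical request leads the batch
```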

Evidence artefacts

Versioned inference outputs

Inference objects include model id, feature schema version, configuration, and confidence bounds, and are sealed in StørmVault. So what: outputs are auditable and attributable.
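Sealing can be illustrated with a keyed digest over the canonical payload. The key, field names, and use of HMAC here are stand-ins for StørmVault's actual mechanism:

```python
import hashlib
import hmac
import json

SEALING_KEY = b"demo-key"  # illustrative only; real keys live in StørmVault

def seal(inference: dict) -> dict:
    # Digest the canonical payload, then attach the seal to the object.
    payload = json.dumps(inference, sort_keys=True).encode()
    inference["seal"] = hmac.new(SEALING_KEY, payload,
                                 hashlib.sha256).hexdigest()
    return inference

record = seal({
    "model_id": "fraud-risk",
    "model_version": "2.4.1",
    "feature_schema": "fs-v7",
    "config": {"batch": 8},
    "confidence": [0.82, 0.91],
})
```

Because the digest covers model id, schema version, and configuration, any later tampering with those fields invalidates the seal.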

Integrations

Model and GPU integrations

StørmAI integrates with signed model registries and GPU inference pools.

  • Signed model packages and routing policies verified by StørmTrust.
  • GPU runtime and scheduler telemetry for capacity planning.
  • Compatibility checks for model, schema, and hardware versions.

So what: model execution stays governed while hardware constraints are visible.
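A compatibility gate might look like the following sketch, where execution is refused unless the model, feature schema, and GPU architecture appear together in a compatibility table. The table entries and identifiers are invented for illustration:

```python
# Hypothetical compatibility table: (model, schema) -> allowed GPU archs.
COMPAT = {
    ("fraud-risk:2.4", "fs-v7"): {"sm80", "sm90"},
}

def compatible(model: str, schema: str, gpu_arch: str) -> bool:
    # Unknown (model, schema) pairs are rejected by default.
    return gpu_arch in COMPAT.get((model, schema), set())

ok = compatible("fraud-risk:2.4", "fs-v7", "sm90")
bad_schema = compatible("fraud-risk:2.4", "fs-v6", "sm90")  # mismatch
```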

Operational guarantees

  • Bounded latency targets with micro-batching and priority lanes.
  • Reserved capacity for enforcement-critical streams.
  • Deterministic feature dependency on StørmPrep outputs.
  • Signed model provenance verified before execution.

What StørmAI will not allow

Hard boundaries that protect downstream policy decisions.

Enforcement decisions

StørmAI never issues actions; decisioning remains in StørmDecision.

Unsigned or incompatible models

Unverified model packages or schema mismatches are rejected.

Unbounded backlog

Priority lanes and caps prevent low-value workloads from starving critical inference.
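Per-lane caps can be sketched as bounded admission queues: once a lane hits its cap, further requests are shed rather than queued, so low-value backlog can never grow without bound. The lane names and limits are illustrative:

```python
from collections import deque

LANE_CAPS = {"critical": 1000, "bulk": 100}  # illustrative limits

class CappedQueues:
    """Sketch of bounded per-lane admission (not the real backpressure)."""

    def __init__(self):
        self.queues = {lane: deque() for lane in LANE_CAPS}

    def admit(self, lane: str, request) -> bool:
        q = self.queues[lane]
        if len(q) >= LANE_CAPS[lane]:
            return False  # shed load instead of queuing unboundedly
        q.append(request)
        return True

qs = CappedQueues()
accepted = sum(qs.admit("bulk", i) for i in range(150))  # only 100 fit
```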

Works with

Upstream preprocessing, downstream decisioning, and evidence sealing.

Not a solution page

StørmAI is a component, not a chatbot or LLM wrapper, and it does not perform enforcement. For solution outcomes, see AI Defense.

FAQ

Clarifying what StørmAI does and does not do.

Is StørmAI the decision engine?

No. It emits inference objects; StørmDecision applies policy and forms bounded decisions.

Does StørmAI perform enforcement?

No. Enforcement is executed by StørmControl using decision objects.

How is inference provenance captured?

Each inference object includes model id/version, feature schema version, and signed provenance sealed to StørmVault.

Request a StørmAI demo.

Review inference contracts, latency posture, and evidence artefacts.