AI inference
from orbit to ground.
Run models on the bus, at the dish, and inside facilities that will never route to the cloud. One platform across every link in your constellation.
[ The Constraint ]
The downlink is the bottleneck.
Every byte has a cost. Satcom is slow, contested, or metered. The data your operators need lives where the link is weakest. On the bus between passes. At the dish while the ground segment is under load. Inside a facility that will never route to the cloud.
The standard inference stack assumes a warm GPU in a hyperscaler region. None of that holds on orbit, at the dish, or inside the perimeter. We move inference to where the data already is.
[ Deployment Environments ]
Four environments. One platform.
Same Runtime. Same Hub. Same API. Across every link your fleet has.
01
Inference on the bus
Jetson-class compute on the spacecraft. Tiered memory keeps models resident and stable across cold boots. Uplink new weights over the link you have. Onboard decisions happen between passes.
02
Process at the dish
Run analytics where the data lands. Triaged insights move to the analyst workflow. Raw telemetry stays on site. Bandwidth budgets recover and the ground segment stops being the bottleneck. The triage loop is sketched after this list.
03
Inside the perimeter
Install from media. Zero egress. Identity via your IdP, audit trails into your SIEM. Classified payloads stay inside the perimeter. No new vendor ingress to justify.
04
One Hub, many sites
Canary-deploy to one ground station, roll back on failure, promote to the rest when clean. Hot-swap models, rotate credentials, export compliance reports. No SSH into individual machines. The rollout flow is sketched below.
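A minimal sketch of the triage loop from 02, assuming a hypothetical run_model call and detection schema; it shows the shape of the petabytes-in, kilobytes-out path, not the platform's actual API.

```python
# Illustrative only: dish-side triage that forwards compact detections instead of
# raw captures. run_model and the detection schema are placeholders, not the
# platform's API.
import json
from dataclasses import dataclass


@dataclass
class Detection:
    label: str
    confidence: float


def run_model(frame: bytes) -> list[Detection]:
    """Stand-in for at-dish inference over one raw frame."""
    return []  # a real deployment calls the installed runtime here


def triage(frames: dict[str, bytes], min_confidence: float = 0.8) -> bytes:
    """Raw frames stay on site; only high-confidence detections leave."""
    kept = [
        {"frame": frame_id, "label": d.label, "confidence": d.confidence}
        for frame_id, frame in frames.items()
        for d in run_model(frame)
        if d.confidence >= min_confidence
    ]
    return json.dumps(kept).encode()  # kilobytes out instead of the raw capture


if __name__ == "__main__":
    raw = {"pass-042-frame-001": bytes(1_000_000)}  # stand-in for one raw capture
    payload = triage(raw)
    print(f"on site: {sum(len(v) for v in raw.values())} bytes, downlinked: {len(payload)} bytes")
```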
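And a sketch of the rollout flow from 04, written against a hypothetical Site interface; deploy, healthy, and rollback are placeholder names, not Hub calls.

```python
# Illustrative only: the canary / roll back / promote flow from 04, written
# against a hypothetical Site interface. deploy, healthy, and rollback are
# placeholder names, not Hub calls.
from typing import Protocol, Sequence


class Site(Protocol):
    name: str

    def deploy(self, model_version: str) -> None: ...
    def healthy(self) -> bool: ...
    def rollback(self) -> None: ...


def rollout(sites: Sequence[Site], model_version: str, canary_index: int = 0) -> bool:
    """Deploy to one ground station first; promote fleet-wide only if it stays clean."""
    canary = sites[canary_index]
    canary.deploy(model_version)
    if not canary.healthy():
        canary.rollback()  # the failure never leaves the canary site
        return False
    for site in sites:
        if site is not canary:
            site.deploy(model_version)  # promote to the rest of the fleet
    return True
```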
[ Day One ]
What changes when inference moves to the edge.
Fig 2.1
Downlink insight, not telemetry
Inference at the dish turns petabytes into kilobytes. The ground segment stops being rate-limited. You send decisions, not data.
Fig 2.2
Bigger models on the hardware you have
Tiered memory orchestration runs model classes that should not fit. One platform from a single Jetson to a rack of H100s. The residency idea is sketched below.
Fig 2.3
Classified payloads stay classified
Air-gapped, ITAR-aware, zero egress. Deploy inside the perimeter you already have. No new attack surface.
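A toy sketch of the residency idea behind Fig 2.2: keep a budget of hot layers in fast memory and page colder layers in from a slower tier on demand. The LRU below is illustrative, not the orchestrator's actual eviction policy.

```python
# Illustrative only: tiered weight residency as a toy LRU. A fixed budget of
# layers stays resident in fast memory; colder layers are paged in from a
# slower tier (CPU RAM, NVMe) on demand.
from collections import OrderedDict
from typing import Callable


class TieredWeights:
    def __init__(self, load_layer: Callable[[int], bytes], resident_budget: int):
        self._load = load_layer          # pulls a layer from the slower tier
        self._budget = resident_budget   # how many layers fit in fast memory
        self._resident: OrderedDict[int, bytes] = OrderedDict()

    def get(self, layer: int) -> bytes:
        if layer in self._resident:
            self._resident.move_to_end(layer)   # mark as recently used
            return self._resident[layer]
        weights = self._load(layer)             # page in from the slower tier
        self._resident[layer] = weights
        if len(self._resident) > self._budget:
            self._resident.popitem(last=False)  # evict the coldest layer
        return weights


if __name__ == "__main__":
    fetches = []
    cache = TieredWeights(lambda i: fetches.append(i) or bytes(8), resident_budget=2)
    for layer in [0, 1, 0, 2, 3, 0]:            # a toy access pattern
        cache.get(layer)
    print("layers paged in from the slow tier:", fetches)
```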
[ Forward-Deployed ]
Audit. Install. Harden.
Our engineers sit alongside your team. They audit your hardware, install the platform on your ground segment, benchmark against your actual workloads, and harden it for production. Fixed scope. They leave when it runs.
01
Site audit
Remote or on-site review of the hardware on the bus, at the dish, and in the facility. Deployment plan written and signed off before installation begins.
02
Install and benchmark
Runtime and Hub on your infrastructure. Throughput, latency, and cost measured on your actual hardware. You get a scorecard, not a slide deck. The measurement loop is sketched after these steps.
03
Harden for production
Supervised restarts, secrets management, IdP integration, audit trails into your SIEM. Security posture locked to your compliance regime. Production-ready before we close the ticket. The audit-trail wiring is sketched below.
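A sketch of the measurement loop from step 02, assuming a generic infer callable; the percentiles and report fields are illustrative, not a prescribed scorecard format.

```python
# Illustrative only: the shape of the benchmark step. `infer` stands in for a
# call into the installed runtime; the percentiles and report fields are ours,
# not a prescribed scorecard schema.
import statistics
import time
from typing import Callable


def benchmark(infer: Callable[[bytes], object], sample: bytes, iterations: int = 100) -> dict:
    """Measure per-request latency and derived throughput on the actual hardware."""
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        infer(sample)
        latencies.append(time.perf_counter() - start)
    return {
        "p50_ms": statistics.median(latencies) * 1000,
        "p99_ms": statistics.quantiles(latencies, n=100)[98] * 1000,
        "throughput_rps": iterations / sum(latencies),
    }


if __name__ == "__main__":
    scorecard = benchmark(lambda payload: time.sleep(0.002), bytes(4096))
    print(scorecard)
```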
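And a sketch of the audit-trail wiring from step 03; the event fields and handler are placeholders, and on site the handler would point at whatever collector your SIEM already ingests.

```python
# Illustrative only: structured audit events that land in your SIEM. The event
# fields are placeholders; in production the handler would point at the
# collector your SIEM already watches (for example a syslog endpoint) instead
# of stdout.
import json
import logging
import sys

audit = logging.getLogger("platform.audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.StreamHandler(sys.stdout))  # swap for a syslog/SIEM handler on site


def record(actor: str, action: str, resource: str) -> None:
    """Emit one structured audit event; retention and alerting stay in the SIEM."""
    audit.info(json.dumps({"actor": actor, "action": action, "resource": resource}))


record("ops@example", "model.deploy", "ground-station-3")
```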
Deploy in your environment.
Tell us your hardware, link profile, and perimeter constraints. We will show you what is possible.