HAUMEA
MODULE 1
Sensor & Capture
§ 01 · PROBLEM

Three structural limits — what changes in orbit.

High-stakes regions still lack frequent, automated insight into what's growing — and the pipeline that should deliver it is constrained at every step from acquisition to analyst.

Lack of real-time information

High-stakes regions lack real-time insight on farmlands, so operational decisions are always reactive rather than proactive. The data exists; the timing doesn't.

Limited bandwidth

A single multispectral tile is ~1 GB over X-band. Bandwidth forces a permanent trade-off between coverage, resolution, and timeliness.

High latency to insight

Ground-side cloud processing adds anywhere from hours to a full day between capture and analyst: too slow for disease, storm damage, or sudden-stress response.

§ 02 · SOLUTION

A modular, lightweight end-to-end AI in orbit.

Raw multispectral imagery → actionable agricultural insight before the next ground contact. Three engineering commitments hold the proposal together.

DEPLOY ANYWHERE
1.2 MB
Total INT8 model footprint. Fits every CubeSat — not just flagship platforms. F1 0.99 crop · 0.96 rice phenology.
PER-POINT LATENCY
~3 ms
Jetson Orin Nano TensorRT INT8. 5–8 W active draw within the 7 W envelope.1 No specialized space silicon required.
DATA REDUCTION
10⁵× less
~1 GB raw capture → ~5 KB downlink. Decides what matters in orbit. Not compression — intelligence.
02.A · MODULAR

Scalable · hot-swappable · maintainable

  • Six independent modules with stable I/O contracts — uplink-replaceable
  • Checkpoints, AOI definitions, and feature recipes updated via command uplink
  • Effortless uplink: INT8 deployment fits a single S-band pass
02.B · HETEROGENEOUS

Right work on the right silicon

  • CPU: I/O, GeoTIFF parsing, feature engineering, JSON packaging
  • Ampere GPU (1024 CUDA + 32 Tensor Cores): both forward passes via TensorRT INT8
  • Concurrent CUDA streams — LTAE and CNN-LSTM run in parallel
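The dual-stream pattern can be sketched in plain Python, using threads as a stand-in for CUDA streams. Both inference functions are hypothetical stubs, not the flight engines:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stubs standing in for the two TensorRT INT8 engines.
def ltae_infer(point):
    # LTAE asks "what crop?" (order-agnostic attention over the season)
    return {"crop": "rice", "crop_conf": 0.99}

def cnn_lstm_infer(point):
    # CNN-LSTM asks "where in season?" (direction matters)
    return {"phenology": "heading", "phen_conf": 0.96}

def dual_inference(point):
    # Analogue of two concurrent CUDA streams: the forward passes are
    # independent, so they overlap instead of running serially.
    with ThreadPoolExecutor(max_workers=2) as pool:
        crop = pool.submit(ltae_infer, point)
        phase = pool.submit(cnn_lstm_infer, point)
        return {**crop.result(), **phase.result()}

result = dual_inference({"point_id": 42})
```

The stable I/O contract is the merged dict: either model can be uplink-replaced without touching the other stream.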
02.C · FAULT-TOLERANT

Three integrity layers, every cycle

  • Layer-aware redundancy on SEU-sensitive weights (RedNet)2
  • INT8 weights — 4× fewer vulnerable bits than FP32
  • Radiation-hardened ROM reload on checksum mismatch
§ 03 · PIPELINE

From acquisition to insight, in six steps.

Each module cuts the data before handing off to the next. ~1 GB enters; ~5 KB leaves. The satellite becomes a filter.

THE ONBOARD DATA FUNNEL — gigabytes to kilobytes between capture and downlink
~1 GB raw acquisition → ~150 MB AOI cropped → ~100 KB point extraction → ~5 KB insight
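The funnel's end-to-end reduction can be checked in a few lines, using the stage sizes above:

```python
# Stage sizes from the onboard data funnel (bytes).
GB, MB, KB = 1024**3, 1024**2, 1024
funnel = [
    ("raw acquisition",    1 * GB),
    ("AOI cropped",      150 * MB),
    ("point extraction", 100 * KB),
    ("insight",            5 * KB),
]
reduction = funnel[0][1] / funnel[-1][1]
print(f"end-to-end reduction: {reduction:,.0f}x")  # ≈ 2.1 × 10⁵
```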
01–02 · CAPTURE & FILTER

Acquire, calibrate, gate

  • 12-band multispectral imager on 3U/6U CubeSat — 100×100 km swath
  • L0 → L1C calibration + cloud screening; >30% cloud → discard
  • ~40% of seasonal compute saved by rejecting cloudy passes early
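A minimal sketch of the cloud gate, assuming a per-pass cloud fraction has already been computed by the screening step; the pass values are invented:

```python
def should_process(cloud_fraction, threshold=0.30):
    # Gate: passes with >30% cloud cover are discarded before any
    # downstream compute is spent on them.
    return cloud_fraction <= threshold

# Hypothetical season of 10 passes with varying cloud cover.
passes = [0.05, 0.8, 0.2, 0.95, 0.1, 0.5, 0.0, 0.33, 0.6, 0.15]
kept = [c for c in passes if should_process(c)]
# In this toy season, half the passes are rejected at the gate.
```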
03–04 · EXTRACT & REMEMBER

Points, not pixels

  • AOI crop + DEM orthorectification: 1 GB → 150 MB
  • 5×5 × 12-band neighborhood per point: 150 MB → ~100 KB
  • NVMe ring buffer (~50 MB) accumulates 15–25 obs/point/season — fires at ≥3 obs
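The per-point accumulation logic can be sketched with a bounded deque standing in for the NVMe ring buffer; observation contents and the point ID are illustrative:

```python
from collections import defaultdict, deque

MIN_OBS = 3    # inference fires once a point has accumulated ≥3 observations
MAX_OBS = 25   # seasonal cap, matching the 15–25 obs/point/season envelope

buffers = defaultdict(lambda: deque(maxlen=MAX_OBS))

def add_observation(point_id, obs):
    """Append one 5×5 × 12-band observation; return True once the
    point has enough temporal history to run inference."""
    buffers[point_id].append(obs)
    return len(buffers[point_id]) >= MIN_OBS

ready = [add_observation("p1", {"ndvi": 0.1 * i}) for i in range(4)]
# ready → [False, False, True, True]
```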
05–06 · REASON & DELIVER

Infer in parallel, downlink compact

  • INT8 LTAE + CNN-LSTM, parallel CUDA streams — ~3 ms combined per point
  • ~5 KB JSON payload over S-band, <5 s transmit; X-band hardware not required
  • Priority queue elevates anomalies for next ground contact
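A minimal sketch of the downlink priority queue using Python's heapq; the priority classes and payload strings are illustrative, not the flight schema:

```python
import heapq

# Lower number = higher priority; anomalies downlink first.
PRIORITY = {"anomaly": 0, "phenology": 1, "classification": 2}
queue = []

def enqueue(kind, payload):
    heapq.heappush(queue, (PRIORITY[kind], payload))

enqueue("classification", "field-17 crop map")
enqueue("anomaly", "field-03 sudden NDVI drop")
enqueue("phenology", "field-17 heading onset")

first = heapq.heappop(queue)[1]  # the anomaly leaves on the next contact
```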
§ 04 · INTELLIGENCE

Two models, decoupled for redundancy and scalability.

LTAE asks "what crop?" — order-agnostic. CNN-LSTM asks "where in season?" — direction matters. Concurrent CUDA streams on Orin Nano · combined ~3 ms / point TensorRT INT8.

§ 04.B

4.28 MB → 1.22 MB. Zero accuracy loss.

FP32 → FP16 → INT8 staged compression. Lean uplink AND lean inference — 3.5× reduction, validated at every step on full and hard-confidence subsets.
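A toy per-tensor symmetric quantization illustrates the final step of the staged path; real deployment uses TensorRT calibration, and the weights below are made up:

```python
# Toy symmetric INT8 quantization of one (made-up) weight tensor.
weights = [0.82, -1.73, 0.05, 1.41, -0.66]

scale = max(abs(w) for w in weights) / 127   # map the max weight to ±127
q = [round(w / scale) for w in weights]      # INT8 integers
dq = [v * scale for v in q]                  # dequantized approximation

max_err = max(abs(a - b) for a, b in zip(weights, dq))
# Storage drops from 4 bytes/weight (FP32) to 1 byte/weight (INT8),
# while per-weight rounding error stays below scale/2.
```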

§ 05 · ORBIT

Built to survive years in orbit.

Three constraints ground-based ML never faces — power, radiation, and bandwidth — wrapped by a coordinated defense posture and a measured resource fingerprint.

POWER
5–8 W
Active inference
Jetson Orin Nano 7 W TDP1
LATENCY
~3 ms
Per-point, LTAE + CNN-LSTM
TensorRT INT8 on Orin Nano
PIPELINE
~36 s
End-to-end per acquisition
Inference is ~1% of runtime
MODEL SIZE
1.2 MB
INT8 deployment candidate
vs. 600 MB+ foundation models
Resource Fingerprint · Jetson Orin Nano 8 GB
Resource | Peak | Note
System RAM | ~500 MB | 6.3% of 8 GB
CPU (6-core A78AE) | 60–80% | Feature-engineering bound
GPU (Ampere) | 15–25% | Sub-second burst · headroom
NVMe storage | ~12 GB | of 64–256 GB available
ROM | ~15 MB | Radiation-hardened
Active power | 5–8 W | within 7 W TDP1
End-to-End Execution Timing · ~200 points simulated
Stage | Time
Radiometric calibration | 2.0 s
Cloud screening | 0.5 s
AOI crop + DEM ortho | 3.0 s
Point extraction | 200 ms
Feature engineering | 30.0 s
Dual inference | 0.5 s
JSON packaging | 99 ms
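Summing the stage timings confirms the ~36 s end-to-end figure and inference's small share of it:

```python
# Stage timings from the ~200-point simulation (seconds).
stages = {
    "radiometric calibration": 2.0,
    "cloud screening": 0.5,
    "AOI crop + DEM ortho": 3.0,
    "point extraction": 0.2,
    "feature engineering": 30.0,
    "dual inference": 0.5,
    "JSON packaging": 0.099,
}
total = sum(stages.values())                      # ≈ 36.3 s end-to-end
inference_share = stages["dual inference"] / total  # ≈ 1.4% of runtime
```

Feature engineering dominates; the GPU inference itself is nearly free by comparison.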

Three coordinated radiation defenses

Hundreds of bit-flips/day in LEO2 · three independent layers wrap every inference cycle.

05.A · LAYER-AWARE

RedNet selective redundancy

  • SEU-sensitive layers protected; others run lean2
  • Validated on Chaohu-1 SAR · Jetson Xavier NX
  • 8.4–33% inference speed-up at negligible memory overhead
≈ 0 residual error rate
05.B · PASSIVE DEFENSE

Quantization-induced robustness

  • INT8 weights are ¼ the bit-width of FP32 — a proportionally smaller cross-section for radiation-induced upsets
  • Compression pipeline doubles as radiation defense at zero added cost
  • Free win from deployment design
4× fewer vulnerable bits
05.C · WATCHDOG

Periodic rad-hard ROM reload

  • Reference checkpoint stored in radiation-hardened ROM
  • Per-cycle checksum verification; reload on mismatch
  • Bounds drift between operator revisits
100% ROM-verified weights
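The watchdog loop reduces to a checksum compare against the ROM golden copy; the checkpoint bytes below are placeholders:

```python
import hashlib

def sha256(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# Golden checkpoint held in radiation-hardened ROM (placeholder contents).
rom_weights = b"\x00INT8-checkpoint-v3"
rom_digest = sha256(rom_weights)

def watchdog_cycle(active_weights: bytes) -> bytes:
    # Per-cycle integrity check: reload the golden copy on any mismatch.
    if sha256(active_weights) != rom_digest:
        return rom_weights        # checksum failed: reload from ROM
    return active_weights         # weights verified, keep running

# Simulated SEU: flip one bit and confirm the watchdog restores ROM.
flipped = bytearray(rom_weights)
flipped[3] ^= 0x01
restored = watchdog_cycle(bytes(flipped))
```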

Built for space, not retrofitted

Constraint | Ground-based ML | Haumea (Orbit)
Power | 30–60 W cloud GPUs | 5–8 W · Orin Nano 7 W1
Radiation | ~0 bit-flips/day | 100s/day → 3 active defenses2
Autonomy | Always-on cloud | Eclipse cycles, no operator
Bandwidth | Effectively unlimited | ~10⁵× reduction
§ 06 · IMPACT

One architecture. Many missions.

Same pipeline, swap weights — Haumea generalizes across mission domains. Three decision categories where same-orbit insight changes the outcome, plus three integration patterns into real-world decision pipelines.

Modular scalability · adaptation needs new weights, not a new system

Mission | Adaptation | Pipeline modules
Forestry | Re-train weights | Unchanged
Water resources | Re-train + spectral indices | Modules 1–4, 6 unchanged
Urban observation | Re-train + AOI DB | Modules 1–4, 6 unchanged
Disaster response | Re-train + trigger logic | Modules 1–4, 6 unchanged
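The swap-weights adaptation model reduces to configuration: fixed modules, a per-mission checkpoint, and small config deltas. All file names and keys here are illustrative, not flight configuration:

```python
# Six fixed pipeline modules; a mission is a checkpoint plus deltas.
BASE_MODULES = ("capture", "filter", "extract", "remember", "reason", "deliver")

missions = {
    "agriculture":     {"weights": "crop_int8.plan"},
    "forestry":        {"weights": "forest_int8.plan"},
    "water_resources": {"weights": "water_int8.plan",
                        "spectral_indices": ["NDWI", "MNDWI"]},
    "disaster":        {"weights": "flood_int8.plan",
                        "trigger_logic": "anomaly_first"},
}

def build_pipeline(mission):
    # Same modules every time; only the mission config varies.
    cfg = missions[mission]
    return {"modules": BASE_MODULES, **cfg}
```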

Footprint differentiator — Three-Body Computing Constellation3 runs an 8B-param foundation model (~600 MB) on flagship platforms. Haumea is 1.2 MB · 2,000× smaller · fits every CubeSat in a constellation.

Where real-time crop intelligence pays off

Three decision categories where same-orbit insight changes the outcome.

USE CASE · 01

Anomaly response

Disease, drought stress, storm damage, pest invasion. Flagged on the pass that observed it — queued HIGH-PRIORITY for the next ground contact.

Days → hours latency
USE CASE · 02

Operational scheduling

Irrigation, fertilizer windows, harvest readiness. Per-pass phenology tracking triggers on the actual transition — not on the calendar.

+3–7 day lead time
USE CASE · 03

Strategic monitoring

Crop type maps, yield forecasts, regional productivity. Continuous in-orbit classification at constellation scale.

Annual → continuous

Anchored in real demand — NE China, the US Midwest, the Ukraine wheat belt, and Brazil's soybean regions all face the same monitoring gap. The 1.2 MB footprint makes per-region deployment economical.

How Haumea plugs into real-world decision pipelines

The orbital filter lets downstream systems act directly on insights, with no raw-imagery triage in between.

INTEGRATION · 01

Autonomous field equipment

Georeferenced insights via GNSS (BeiDou, GPS, Galileo) at cm accuracy. Sprayers, tractors, irrigation — zero manual routing.

Precision ag deployments globally
INTEGRATION · 02

Analytics & market platforms

Insights feed markets, governments, and insurers. Fits any structured-data pipeline.

USDA · EU CAP · yield forecasting
INTEGRATION · 03

Emergency response

Anomalies routed to field teams in near-real-time. Disease, storm, stress — rapid intervention.

FAO locust watch · crop insurance · ERNs

Precedent — on-orbit AI is already operationally viable

Three-Body demonstrated that real-time on-orbit EO analysis is operationally viable, identifying stadiums and bridges through heavy snow cover across NW China.4 Haumea extends the same workflow class at a 2,000× smaller footprint: multi-region deployable, tuned for specific high-value tasks, and small enough to fit every CubeSat in a constellation.

§ 07 · ROADMAP

Three horizons to operational service.

From validated INT8 deployment to persistent agricultural intelligence in orbit.

2026 – 27

Validate on flight hardware

INT8 on Orin Nano. Hardware-in-the-loop testing — power, thermal, radiation, pipeline timing. Bench-validated radiation strategy.

Payload reference design for CubeSat
2027 – 29

First orbital demonstration

Hosted payload on AI-capable CubeSat. Demo over target agricultural region. Telemetry-validated accuracy and latency.

Published in-orbit performance metrics
2030 +

Operational service

Multi-constellation, multi-region, multi-mission. Agencies, platforms, insurers — daily-revisit cadence.

Continuous crop intelligence as a service

Technical improvement paths

Four parallel directions for technique evolution.

01

Foundation-model distillation

Distill from models like Prithvi-EO5 / RemoteCLIP into the compact dual architecture. Improved zero-shot generalization to new geographies.

02

Multi-modal fusion (S1 + S2)

Add Sentinel-1 SAR for cloud-penetrating coverage — critical for regions like NE China during summer cloud season.

03

INT4 weight quantization

If needed, push quantization to INT4 to further shrink the model and cut memory traffic on Orin Nano Tensor Cores.

04

Federated constellation learning

Each satellite contributes regional gradient deltas via inter-satellite laser links, so models update their weights collaboratively in orbit.

From HAUMEA.

§ · REFERENCES


Five external anchors underpin every measured claim and external comparison in this proposal.

  1. Rey, F. et al. (2025). A Performance Analysis of YOLO Models for Deployment on Constrained Computational Edge Devices in Drone Applications. Electronics (MDPI).
     Jetson Orin Nano power anchor — 7.4–8.7 W active across precision levels.
     arXiv:2502.15737 ↗
  2. Wang, M. et al. (2024). A Case for Application-Aware Space Radiation Tolerance in Orbital Computing (RedNet). Tsinghua University et al.
     Validated on the Chaohu-1 SAR satellite payload via NVIDIA Jetson Xavier NX. Documents hundreds of bit-flips/day in COTS hardware in LEO.
     arXiv:2407.11853 ↗
  3. ADA Space / Zhejiang Lab (May 2025). First launch of the Three-Body Computing Constellation — 12 of 2,800 planned satellites, launched 14 May 2025 from Jiuquan via Long March 2D.
     First operational AI computing constellation in orbit.
     SpaceNews ↗
  4. Xinhua (Feb 2026). China demonstrates AI computing power in outer space with satellite network breakthrough.
     Confirms an 8B-parameter remote-sensing model performed an infrastructure census across 189 km² of NW China (Nov 2025). Constellation target ~100,000 PFLOPs.
     Xinhua ↗
  5. IBM / NASA. Prithvi-EO geospatial foundation model.
     Reference for the foundation-model distillation pathway (§07 · Tech Path 01).
     huggingface.co/ibm-nasa-geospatial ↗