HAUMEA
SPACE INTELLIGENCE · ZERO HUNGER

Crop intelligence,
on-orbit.

Two neural networks · Sentinel-2 · 1.19 MB · real-time

0.994 Crop F1
0.961 Phenology F1
1.19 MB · INT8
~3 ms · Orin Nano
5–8 W · active
10⁵× BW reduction
§ 01 · PROBLEM

Three converging pressures on food security.

90%

of space-generated EO data never reaches an analyst.

A single Sentinel-2 tile is ~1 GB. Downlink bandwidth is the hard ceiling — most of what a satellite sees never makes it back in time to act on. Cloud-side processing adds hours to days of further latency.

715Mt

China's 2025 grain output — 2nd consecutive record.2

Northeast China contributes 70% of national grain growth. Real-time crop monitoring is an explicit priority in the 14th and 15th Five-Year Plans — and current ground-side pipelines are too slow to support same-season intervention.

  • Japonica rice · NE China · 30–50%
  • Soybean self-sufficiency · 16% national
  • Corn · national production · 34%
§ 02 · SYSTEM

A complete end-to-end AI inference system, in orbit.

Six onboard modules capture multispectral imagery, reject cloud-contaminated passes, extract the pixels that matter, accumulate observations across the season, run dual AI inference, and downlink compact JSON — all before the ground station pass.

DATA REDUCTION
~10⁵×
From ~1 GB raw capture per acquisition to ~5 KB JSON downlink. The satellite stops sending raw imagery.
TOTAL MODEL FOOTPRINT
1.19 MB
Two specialized neural networks at INT8 precision. Crop F1 0.994 · Phenology F1 0.961 on 11 unseen test regions.
ACTIVE INFERENCE POWER
5–8 W
Full pipeline within Jetson Orin Nano 7 W envelope. Sub-3 ms per-point inference on TensorRT INT8.
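The headline reduction is simple arithmetic; a quick sanity check of the per-acquisition chain, using the figures quoted on this page (names are illustrative):

```python
# Figures from this page; the chain is raw tile -> AOI crop -> point
# neighborhoods -> JSON insight.
RAW_TILE_BYTES = 1 * 1024**3      # ~1 GB raw capture per acquisition
AOI_CROP_BYTES = 150 * 1024**2    # ~150 MB after AOI crop + orthorectification
POINT_BYTES = 100 * 1024          # ~100 KB of 5x5 point neighborhoods
JSON_BYTES = 5 * 1024             # ~5 KB downlinked JSON

reduction = RAW_TILE_BYTES / JSON_BYTES
print(f"overall reduction: ~{reduction:.0f}x")  # order 10^5
```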
02.A · MODULAR

Independent, hot-swappable modules

  • Models, boundaries, and feature recipes updated via command uplink
  • Three profiles: Minimum (crop only), Standard (both), Enhanced (ensemble)
  • Corrupted checkpoint triggers graceful single-task fallback
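The graceful single-task fallback can be sketched as a checksum gate at load time. A minimal sketch, assuming SHA-256 digests are uplinked alongside the weights (the real manifest format isn't specified here):

```python
import hashlib

def load_profile(checkpoints: dict, manifest: dict) -> list:
    """Keep only models whose weight blob matches its uplinked SHA-256.

    A corrupted checkpoint is dropped rather than crashing the pipeline,
    leaving the surviving model running in single-task fallback mode.
    (Illustrative sketch; names and manifest layout are assumptions.)
    """
    healthy = []
    for name, blob in checkpoints.items():
        if hashlib.sha256(blob).hexdigest() == manifest.get(name):
            healthy.append(name)
    return healthy
```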
02.B · HETEROGENEOUS

Right work on the right silicon

  • CPU: I/O, GeoTIFF parsing, feature engineering, JSON packaging
  • Ampere GPU + 32 Tensor Cores: both model forward passes at 40 TOPS INT8
  • Power-aware: 7 W eclipse mode → 15 W sunlight → 25 W MAXN SUPER
02.C · FAULT-TOLERANT

Errors caught, not propagated

  • Watchdog auto-restart for crashed pipeline components
  • Per-inference NaN detection and probability-sum validation
  • Cross-model consistency flag surfaces divergent predictions for review
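The per-inference checks above reduce to two small predicates. A minimal sketch (the divergence threshold is an assumption, not a documented value):

```python
import math

def validate_probs(probs, tol=1e-3):
    """Per-inference sanity gate: no NaN/Inf, probabilities sum to ~1."""
    if any(math.isnan(p) or math.isinf(p) for p in probs):
        return False
    return abs(sum(probs) - 1.0) <= tol

def divergence_flag(crop_conf, pheno_conf, threshold=0.5):
    """Cross-model consistency: flag an observation for ground review
    when the two models' confidences diverge sharply."""
    return abs(crop_conf - pheno_conf) > threshold
```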
§ 03 · PIPELINE

From acquisition to insight, in six steps.

Each module cuts the data before handing off to the next. ~1 GB enters; ~5 KB leaves. The satellite becomes a filter.

01–02 · CAPTURE & FILTER

Acquire, calibrate, gate

  • COTS multispectral imager on 6U CubeSat — 12 bands, 100×100 km swath
  • A lightweight scene-classification (SCL) cloud detector rejects cloudy passes before any further compute
  • ~40% of acquisitions discarded early, saving ~40% of seasonal compute
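The early gate above can be sketched as a cloud-fraction threshold over a scene-classification map. A minimal sketch, assuming Sentinel-2 SCL class codes and an illustrative 40% threshold:

```python
import numpy as np

# Sentinel-2 SCL classes 8 (cloud medium prob.), 9 (cloud high prob.)
# and 10 (thin cirrus) are treated as contaminated here.
CLOUD_CLASSES = (8, 9, 10)

def cloud_fraction(scl: np.ndarray) -> float:
    """Fraction of pixels labeled cloud in an SCL map."""
    return float(np.isin(scl, CLOUD_CLASSES).mean())

def accept_pass(scl: np.ndarray, max_cloud: float = 0.4) -> bool:
    # Gate the acquisition before any heavier compute is spent on it.
    return cloud_fraction(scl) <= max_cloud
```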
03–04 · EXTRACT & REMEMBER

Points, not pixels

  • AOI crop + DEM orthorectification: 1 GB → 150 MB (85% reduction)
  • 5×5 px neighborhood per monitoring point: 150 MB → ~100 KB per pass
  • NVMe ring buffer accumulates 15–25 clear-sky observations per season
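The seasonal accumulation step amounts to a fixed-capacity ring buffer per monitoring point. An in-memory sketch (the real buffer is NVMe-backed; band count and capacity follow the figures above):

```python
from collections import deque
import numpy as np

class SeasonBuffer:
    """Ring buffer of per-point 5x5 neighborhoods across a season.

    Oldest passes are evicted at capacity, bounding storage while
    keeping the most recent clear-sky time series. Capacity matches
    the 15-25 observations quoted above."""
    def __init__(self, capacity: int = 25):
        self.obs = deque(maxlen=capacity)

    def add(self, neighborhood: np.ndarray) -> None:
        assert neighborhood.shape[-2:] == (5, 5)  # 5x5 px per point
        self.obs.append(neighborhood)

    def series(self) -> np.ndarray:
        # (T, bands, 5, 5) stack for the temporal models.
        return np.stack(self.obs)
```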
05–06 · REASON & DELIVER

Infer in parallel, downlink compact

  • LTAE + CNN-LSTM run on separate CUDA streams — ~3 ms combined per point
  • Results assembled into a ~5 KB JSON payload with quality metadata
  • Priority flagging elevates operationally significant observations to a HIGH-PRIORITY queue
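Packing one monitoring point into the compact downlink record might look like this; field names are illustrative, since the real schema isn't given on this page:

```python
import json

def build_record(point_id, crop, pheno, clear_obs, priority=False):
    """Assemble one point's compact downlink record.
    (Hypothetical schema for illustration only.)"""
    rec = {
        "id": point_id,
        "crop": {"label": crop[0], "conf": round(crop[1], 3)},
        "phen": {"stage": pheno[0], "conf": round(pheno[1], 3)},
        "q": {"clear_obs": clear_obs},  # quality metadata
        "queue": "HIGH-PRIORITY" if priority else "NORMAL",
    }
    return json.dumps(rec, separators=(",", ":"))  # compact encoding
```

At roughly a hundred bytes per point, a ~5 KB payload covers dozens of monitoring points per pass.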
§ 04 · INTELLIGENCE

Two specialists. One decision.

Crop classification is order-agnostic. Phenology staging is directional. Different questions need different architectures — so we built two. They run concurrently on separate CUDA streams; combined latency ~3 ms on Jetson Orin Nano TensorRT INT8.
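The order-agnostic vs. directional distinction can be seen in miniature: attention-style pooling over the time axis is permutation-invariant, while a recurrent scan is not. A toy numpy sketch of the principle (not the actual LTAE or CNN-LSTM weights):

```python
import numpy as np

def attn_pool(x):
    # Content-based weights: the same set of time steps in any order
    # yields the same weights, hence the same pooled vector.
    w = np.exp(x.sum(axis=1))
    return (w / w.sum()) @ x

def recurrent_scan(x, decay=0.5):
    # Exponential recurrence: each step's contribution depends on its
    # position in the sequence, so reordering changes the result.
    h = np.zeros(x.shape[1])
    for step in x:
        h = decay * h + (1 - decay) * step
    return h
```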


§ 04.B

4.28 MB → 1.19 MB. Zero accuracy loss.

FP32 → FP16 → INT8 staged compression, validated at every step. KL divergence ≈ 0.002. Prediction flip rate 0.02%. Model updates uplink in 4.7 s at 256 KB/s S-band.

Same model class as ESA's Φ-sat-2 — proven in orbit.6 Heavy training, lean inference. Compression pays off twice: a smaller uplink and a leaner runtime footprint.
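The final INT8 stage and the flip-rate check can be sketched with symmetric per-tensor quantization on a toy classifier head. A sketch of the idea only; the deployed pipeline uses calibrated TensorRT quantization, not this scheme:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor INT8: one FP32 scale, values in [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, q.astype(np.float32) * scale  # ints + dequantized view

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(64, 32)).astype(np.float32)  # toy head
X = rng.normal(size=(200, 32)).astype(np.float32)            # toy inputs

_, W_dq = quantize_int8(W)
# Flip rate: how often quantization changes the argmax prediction.
flip_rate = float((np.argmax(X @ W.T, 1) != np.argmax(X @ W_dq.T, 1)).mean())
```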

§ 05 · ORBIT

Built to survive years in orbit.

Commercial silicon in LEO sees hundreds of cosmic-ray bit-flips per day. Three coordinated defenses wrap every inference cycle.

POWER
5–8 W
Active inference
Jetson Orin Nano 7 W mode
LATENCY
~3 ms
Per-point, LTAE + CNN-LSTM
TensorRT INT8 on Orin Nano
MODEL UPLINK
4.7 s
1.2 MB @ 256 KB/s
S-band telemetry
MODEL SIZE
1.19 MB
INT8 deployed
vs. 600 MB+ foundation models
Runtime Resource Fingerprint
Resource · Peak · Note
RAM · ~500 MB · 6.3% of 8 GB Orin Nano
CPU · 60–80% · Feature-engineering bound
GPU · 15–25% · Sub-second burst, headroom remains
NVMe · ~12 GB · Full seasonal dataset
End-to-end · ~36 s · Inference <1 s of that
Uplink Dependencies
Item · Size
Mission-start payload · ~66 MB
Quarterly update · <2 MB
External feeds during inference · None required
05.A · LAYER-AWARE

RedNet selective redundancy

  • SEU-sensitive layers protected; others run lean5
  • Validated on Chaohu-1 SAR · Jetson Xavier NX
  • 8.4–33% inference speed-up at negligible memory overhead
≈ 0 residual error rate
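Selective redundancy can be illustrated with temporal triple-modular redundancy on a single sensitive layer: evaluate three times and majority-vote, so one transient upset is outvoted. An illustrative stand-in only; RedNet's actual scheme also protects stored weights, not just evaluations:

```python
import numpy as np

def tmr(layer_fn, x):
    """Triple-modular redundancy for an SEU-sensitive layer: run the
    layer three times and take the elementwise median, so a single
    transient upset in one evaluation is outvoted by the other two."""
    runs = np.stack([layer_fn(x) for _ in range(3)])
    return np.median(runs, axis=0)
```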
05.B · PASSIVE DEFENSE

Quantization-induced robustness

  • INT8 weights are ¼ the bit-width of FP32 — proportionally less radiation surface area
  • Compression pipeline doubles as radiation defense at zero added cost
4× fewer vulnerable bits
05.C · WATCHDOG

ROM reload on corruption

  • Reference checkpoint stored in radiation-hardened ROM
  • Checksum verified every inference cycle; reload on mismatch
  • Bounds error accumulation between operator revisits
100% ROM-verified weights
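The watchdog's per-cycle check reduces to a digest comparison against the ROM reference. A minimal sketch, assuming SHA-256 (the actual checksum algorithm isn't specified here):

```python
import hashlib

def weights_for_cycle(working: bytes, rom_copy: bytes, rom_digest: str) -> bytes:
    """Per-cycle integrity gate: serve the working copy only while its
    digest still matches the radiation-hardened ROM reference;
    otherwise reload the known-good ROM checkpoint."""
    if hashlib.sha256(working).hexdigest() == rom_digest:
        return working
    return rom_copy  # reload on mismatch
```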
Ground vs. Orbit — Key Constraints
Constraint · Ground-based ML · Haumea in Orbit
Power · 30–60 W cloud GPUs · 5–8 W (Orin Nano 7 W mode)
Radiation · ~0 bit-flips/day · 100s/day in LEO → 3 active defenses
Autonomy · Always-on, low-latency cloud · Eclipse cycles, no operator in loop
Bandwidth · Effectively unlimited · ~10⁵× reduction, mandatory
OPERATIONAL PROFILES
  • Minimum · LTAE only · ~190 KB
  • Standard · Both models · 1.2 MB
  • Enhanced · Ensemble · ~4 MB
§ 06 · IMPACT

Satellite as filter, not sensor.

The fundamental shift our system enables — and three tiers of what it makes possible.

Today · satellite as sensor

Stream everything, interpret on the ground

1 GB
raw tile downlinked per acquisition
  • Bandwidth-limited — minutes per tile
  • All processing happens on the ground
  • Hours to days latency from observation to insight
Ours · satellite as filter

Only insights leave orbit

< 5 KB
JSON insight downlinked per acquisition
  • ~10⁵× downlink reduction — bandwidth ceases to be the bottleneck
  • Onboard inference runs during the pass — insights ready before downlink
  • Same orbit produces actionable output

Precedent: the Three-Body Constellation ran an 8B-parameter model in orbit for a census across 189 km² of NW China (Nov 2025).8 Our system is ~2,000× smaller and purpose-built for the task.

TIER 01 · DIRECT

Real-time grain monitoring

  • 82 Mt from NE China alone in 20252
  • Same-orbit detection of stress, drought, and phenology transitions
  • Days-earlier intervention vs. ground-side processing pipelines
TIER 02 · COORDINATED

Satellite-terrestrial intelligence loop

  • Onboard AI filters which observations matter
  • BeiDou geo-references insights to field equipment
  • 33M BeiDou terminals9 carry advisories to autonomous machinery
TIER 03 · TRANSFER

Cross-mission generalization

Mission · What changes
Forestry · Labels, training data
Water resources · Spectral feature set
Urban · AOI boundaries, class labels
Disaster response · Labels, trigger logic

Pipeline · compression · modules stay the same.

§ 07 · ROADMAP

Three horizons. BeiDou 2027. 15th FYP.

2026 – 27

Distillation & HIL

INT8 deployed; hardware-in-the-loop validation on Jetson Orin Nano reference platform.

2027 – 29

In-orbit demonstration

Partner with Three-Body Constellation, Aurora 1000, or similar for first orbital demo.

2030 +

Operational service

Daily-revisit grain monitoring; integrated with 15th-FYP national agricultural infrastructure.

Technical improvement paths

07.A · INT4

INT4 group-wise quantization

  • ~0.99 MB total — 23% of FP32 size
  • Requires Jetson Orin Tensor Cores for hardware acceleration
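Group-wise INT4 can be sketched in numpy: one FP32 scale per small group of weights, values clipped to [-7, 7]. A sketch of the idea only; the group size is an assumption, and a deployed kernel would need the Tensor-Core support noted above:

```python
import numpy as np

def quantize_int4_groupwise(w: np.ndarray, group: int = 32):
    """Group-wise symmetric INT4: one FP32 scale per `group` weights.
    Smaller groups track local weight magnitudes more tightly than a
    single per-tensor scale, recovering accuracy at 4-bit precision."""
    flat = w.reshape(-1, group)
    scales = np.maximum(np.abs(flat).max(axis=1, keepdims=True), 1e-8) / 7.0
    q = np.clip(np.round(flat / scales), -7, 7).astype(np.int8)
    dq = (q * scales).reshape(w.shape).astype(np.float32)
    return q, dq  # 4-bit codes + dequantized view
```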
07.B · DISTILLATION

Foundation-model distillation

  • Distill from Prithvi-EO, RemoteCLIP10
  • Pre-train upstream · deploy compact downstream
07.C · FUSION

Multi-modal fusion · S1 + S2

  • Add Sentinel-1 SAR for cloud-penetrating coverage
  • Critical where summer cloud cover degrades optical coverage over NE China
07.D · FEDERATED

Federated constellation learning

  • Gradient deltas shared via inter-satellite laser links
  • Models continuously improve across the constellation

From HAUMEA.

§ · REFERENCES

References.

  1. ESA · "Φ-sat-1 on-board AI for Earth observation" — esa.int
  2. NBS China · "2025 Grain Output Bulletin" — stats.gov.cn
  3. State Council PRC · "14th / 15th Five-Year Agricultural Modernization Plans" — english.www.gov.cn
  4. Rey et al. (2025) · "Jetson Orin Nano power benchmarks" — arXiv:2502.15737
  5. Wang et al. (2024) · "RedNet: radiation-tolerant on-orbit inference" — arXiv:2407.11853
  6. ESA · "Φ-sat-2 onboard AI demonstrator" — esa.int
  7. Planet Labs · "Pelican-4 onboard inference demonstration, March 2026" — planet.com
  8. ADA Space · "Three-Body Constellation, NW China census, Nov 2025" — adaspace.com
  9. CSNO · "BeiDou Navigation Satellite System Annual Report" — en.beidou.gov.cn
  10. IBM & NASA · "Prithvi-EO geospatial foundation model" — huggingface.co
  11. ESA · "Sentinel-2 mission, Copernicus Open Hub" — sentinels.copernicus.eu