Project Hippasus

Making AI
ubiquitous.

Noise-resilient AI that's useful today on digital hardware — and designed to run on analog silicon tomorrow.

20× smaller models
0 multiplications
0 server-side memory
0 GPUs required
Learn more

It all starts with error correction.

Thin models are the future — networks where every weight is reduced to just a handful of values like {-1, 0, +1}. A ternary model is 20× smaller than its floating-point equivalent. Fast enough to run anywhere. But this extreme quantisation introduces massive noise. Stack enough small errors and the model falls apart.
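The arithmetic behind the 20× figure is simple: a ternary weight carries log₂(3) ≈ 1.585 bits of information versus 32 bits for a float. A minimal sketch of ternarisation (the `threshold` cutoff is an illustrative choice, not a Hippasus parameter):

```python
import numpy as np

def ternarize(w, threshold=0.05):
    """Snap float weights to {-1, 0, +1}. `threshold` is an illustrative
    cutoff below which a weight is treated as zero."""
    q = np.zeros_like(w, dtype=np.int8)
    q[w > threshold] = 1
    q[w < -threshold] = -1
    return q

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(1024, 1024)).astype(np.float32)
q = ternarize(w)

# 32-bit floats vs log2(3) ≈ 1.585 bits per ternary weight: ~20× smaller
compression = 32 / np.log2(3)
```

The rounding in `ternarize` is exactly the "massive noise" the copy refers to: every weight moves to its nearest representable value, and those per-weight errors accumulate layer by layer.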

We built a training pipeline that doesn't just tolerate quantisation noise — it corrects for it continuously. Adaptive convergence detects when the model is stuck. Post-training refinement fixes what gradient descent missed.
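For intuition, the standard building block for training through quantisation noise is the straight-through estimator (STE) — shown here as a generic illustration, not the Hippasus pipeline itself:

```python
import numpy as np

def ste_step(w_fp, grad_fn, lr=0.01, threshold=0.05):
    """One quantisation-aware training step with the straight-through
    estimator (STE) — a standard technique shown for illustration, not
    the Hippasus pipeline. The forward pass sees ternary weights; the
    gradient updates the latent float weights as if quantisation were
    the identity."""
    w_q = np.sign(w_fp) * (np.abs(w_fp) > threshold)  # ternarise
    g = grad_fn(w_q)         # gradient evaluated at the quantised point
    return w_fp - lr * g     # ...applied to the float shadow weights

# Toy demo: pull the quantised weights toward a ternary target
target = np.array([1.0, -1.0, 0.0])
w = np.zeros(3)
for _ in range(20):
    w = ste_step(w, lambda wq: wq - target)  # grad of 0.5*||wq - target||^2
```

Because the float "shadow" weights keep accumulating small gradients even while the quantised weights are stuck on the same values, training can escape plateaus that pure ternary updates cannot — the kind of stall the adaptive-convergence detection described above is watching for.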

The result: models that are radically smaller and more accurate than naive quantisation allows. And because the error correction works at any scale, the same techniques unlock value at every stage of our roadmap.

Standard quantisation
high noise
Our pipeline
corrected

One innovation. Three horizons.

Error correction is the foundation. Each horizon builds on the last — and each delivers value independently.

Now

Thin models that work.

Ternary and quinary models — weights compressed to just 3 or 5 values — with noise correction that makes them actually work. Ideal for embedding, classification, and edge inference. 20× smaller. No GPU required. Run on any CPU today.

Embeddings Edge AI On-device
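The "0 multiplications" claim follows directly from the weight set: with weights in {-1, 0, +1}, a matrix-vector product reduces to selective adds and subtracts. A minimal sketch:

```python
import numpy as np

def ternary_matvec(W, x):
    """Matrix-vector product with ternary weights and no multiplications:
    each output element is a sum of added and subtracted inputs, selected
    by the +1 / -1 positions in the weight row."""
    y = np.empty(W.shape[0], dtype=x.dtype)
    for i, row in enumerate(W):
        y[i] = x[row == 1].sum() - x[row == -1].sum()
    return y

W = np.array([[1, 0, -1], [0, 1, 1]], dtype=np.int8)
x = np.array([2.0, 3.0, 4.0])
y = ternary_matvec(W, x)  # [-2., 7.] — add/subtract only
```

Addition is far cheaper than multiplication on a CPU and trivially cheap in fixed-function hardware, which is why these models run well without a GPU.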
Near term

Servers without limits.

Our state-space architecture replaces attention with constant-time inference. Conversation state lives on the client — 300KB, not gigabytes. A single model serves unlimited users, bounded by CPU, not RAM or GPU.

Stateless serving ∞ concurrency CPU-only
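A sketch of why the server can stay stateless, assuming a diagonal state-space recurrence; `A_diag`, `B`, `C`, and the dimensions are stand-ins, none taken from the Hippasus codebase (for scale, ~75,000 float32 values ≈ 300 KB of client-held state):

```python
import numpy as np

STATE_DIM, EMBED_DIM = 8, 4  # tiny here; illustrative only

def serve_turn(state, token_embedding, A_diag, B, C):
    """One constant-time state-space update. The server keeps no session
    memory: the client sends its current state with each request and
    stores the new one that comes back. A_diag, B and C stand in for
    SSM parameters."""
    new_state = A_diag * state + B @ token_embedding  # O(d) recurrence, no attention
    output = C @ new_state
    return output, new_state  # client persists new_state between turns
```

Because `serve_turn` is a pure function of its inputs, any server instance can handle any request — capacity is bounded by CPU throughput, not by per-session memory.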
Horizon

Pure analog silicon.

The same error-corrected, multiply-free architecture maps directly onto analog circuits. Every component has an analog primitive counterpart. No ADC/DAC between layers. Physics does the compute.

SPICE validated Analog inference Zero FPU
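"Physics does the compute" means Ohm's and Kirchhoff's laws perform the matrix-vector product: inputs are voltages, weights are conductances, and wire junctions sum the currents. An idealised numerical sketch (signed weights via a differential conductance pair; `g_on` and `variance` are illustrative parameters, not measured device values):

```python
import numpy as np

def crossbar_matvec(W, v, g_on=1e-4, variance=0.0, rng=None):
    """Idealised resistor-crossbar matvec: inputs are voltages, ternary
    weights are conductances, and Kirchhoff's current law sums each row.
    Signed weights use a differential pair of conductances (G+ - G-).
    `variance` optionally models manufacturing spread."""
    rng = rng or np.random.default_rng()
    g_pos = g_on * (W > 0)
    g_neg = g_on * (W < 0)
    if variance > 0:
        g_pos = g_pos * (1 + variance * rng.standard_normal(W.shape))
        g_neg = g_neg * (1 + variance * rng.standard_normal(W.shape))
    i_out = (g_pos - g_neg) @ v  # currents: I = G · V (Ohm + Kirchhoff)
    return i_out / g_on          # normalise back to weight units

W = np.array([[1, -1, 0], [0, 1, 1]])
v = np.array([0.5, 0.2, 0.1])
result = crossbar_matvec(W, v)  # matches W @ v at zero variance
```

The same ternary conductance pattern that needed zero multiplications digitally needs zero multipliers here — which is why the architecture maps over without ADC/DAC conversion between layers.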

We don't build on promises.

20×
compression

Ternary models at 1.6 bits per weight. Same architecture as floating-point, a fraction of the memory and compute.


Anywhere
inference

Models small enough to run in a browser, on a microcontroller, or at the edge. No server, no GPU, no install.

∞
concurrent users

Stateless server architecture. Conversation state lives on the client. Capacity scales with CPU, not RAM or GPU.

SPICE
validated

Full analog forward pass simulated to 512×512 crossbar scale with sub-0.5% computational error.

95%
MNIST on circuits

Handwriting recognition running on simulated resistor circuits. Reproducible. No digital fallback.

<1.2%
error at 5% variance

Manufacturing noise resilience verified. Models trained to expect and absorb analog imprecision.
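The usual way to make a model absorb device variance is to inject multiplicative weight noise during training, so it never overfits to exact weight values — shown here as a generic illustration, not the exact Hippasus procedure:

```python
import numpy as np

def noisy_forward(w, x, variance=0.05, rng=None):
    """Forward pass under multiplicative weight noise — a common way to
    train for analog device variance (illustrative only). Each call
    draws a fresh perturbation, so the model only ever sees noisy
    samples of its own weights."""
    rng = rng or np.random.default_rng()
    w_noisy = w * (1.0 + variance * rng.standard_normal(w.shape))
    return w_noisy @ x
```

A model trained this way treats a 5% manufacturing spread as just another draw from the noise distribution it already learned to handle.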

"The question isn't whether AI can run on analog hardware.
It's whether we can build AI that deserves to."

Let's talk.

Project Hippasus is in active development. Whether you need thin models today, efficient serving tomorrow, or analog silicon in the future — we'd like to hear from you.

hello@exoteric.ai