Noise-resilient AI that's useful today on digital hardware — and designed to run on analog silicon tomorrow.
Thin models are the future — networks where every weight is reduced to just a handful of values like {-1, 0, +1}. A ternary model is 20× smaller than its 32-bit floating-point equivalent. Fast enough to run anywhere. But this extreme quantisation introduces massive noise. Stack enough small errors and the model falls apart.
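The core idea fits in a few lines. This is an illustrative threshold-based ternarisation in the spirit of ternary weight networks, not the actual pipeline; the `delta_ratio` threshold and the per-tensor scale are assumptions.

```python
import numpy as np

def ternarise(w, delta_ratio=0.7):
    """Quantise a float tensor to {-1, 0, +1} plus one per-tensor scale.

    delta_ratio is a hypothetical zero-band factor; real pipelines tune it.
    """
    delta = delta_ratio * np.mean(np.abs(w))            # zero-band threshold
    q = np.where(w > delta, 1, np.where(w < -delta, -1, 0)).astype(np.int8)
    mask = q != 0
    scale = np.abs(w[mask]).mean() if mask.any() else 1.0  # rescale survivors
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, s = ternarise(w)
# q holds only {-1, 0, +1}; s * q approximates w
```

Every rounded weight is a small error — which is exactly the noise the rest of the pipeline has to correct for.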
We built a training pipeline that doesn't just tolerate quantisation noise — it corrects for it continuously. Adaptive convergence detects when the model is stuck. Post-training refinement fixes what gradient descent missed.
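A minimal sketch of the "adaptive convergence" idea, assuming a simple loss-plateau heuristic; the class name, window size, and improvement threshold are all hypothetical, not the production detector.

```python
from collections import deque

class PlateauDetector:
    """Hypothetical convergence monitor: flags when loss stops improving."""

    def __init__(self, window=50, min_improvement=1e-4):
        self.history = deque(maxlen=window)
        self.min_improvement = min_improvement

    def update(self, loss):
        self.history.append(loss)
        if len(self.history) < self.history.maxlen:
            return False                      # not enough evidence yet
        oldest, newest = self.history[0], self.history[-1]
        return (oldest - newest) < self.min_improvement  # stuck?

det = PlateauDetector(window=4, min_improvement=1e-3)
for loss in [1.0, 0.6, 0.3, 0.1]:
    stuck = det.update(loss)                  # loss dropping fast: not stuck
for loss in [0.1, 0.1, 0.1, 0.1]:
    stuck = det.update(loss)                  # loss flat: stuck is now True
```

When the detector fires, the training loop can react — adjust the quantisation schedule, or hand off to post-training refinement.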
The result: models that are radically smaller and more accurate than naive quantisation allows. And because the error correction works at any scale, the same techniques unlock value at every stage of our roadmap.
Error correction is the foundation. Each horizon builds on the last — and each delivers value independently.
Ternary and quinary models — weights compressed to just 3 or 5 values — with noise correction that makes them actually work. Ideal for embedding, classification, and edge inference. 20× smaller. No GPU required. Run on any CPU today.
Our state-space architecture replaces attention with constant-time inference. Conversation state lives on the client — 300KB, not gigabytes. A single model serves unlimited users, bounded by CPU, not RAM or GPU.
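A toy state-space step shows why inference is constant-time per token: the whole conversation history is folded into a fixed-size state vector. The dense `A`, `B`, `C` matrices here are placeholders for whatever structured form the real architecture uses.

```python
import numpy as np

def ssm_step(h, x, A, B, C):
    """One constant-time state-space step: fold a token into the state."""
    h = A @ h + B @ x          # update the fixed-size hidden state
    y = C @ h                  # read the output from the state
    return h, y

d_state, d_in = 8, 4
rng = np.random.default_rng(1)
A = rng.normal(scale=0.1, size=(d_state, d_state))   # stable toy dynamics
B = rng.normal(size=(d_state, d_in))
C = rng.normal(size=(d_in, d_state))

h = np.zeros(d_state)          # the conversation state
for _ in range(1000):          # 1,000 tokens later...
    h, y = ssm_step(h, rng.normal(size=d_in), A, B, C)
# ...h is still d_state floats — this is what the client stores and
# sends back with each request, so the server holds nothing between calls
```

Because the state never grows, per-user memory is a small constant — the same property that makes the 300KB client-side state possible.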
The same error-corrected, multiply-free architecture maps directly onto analog circuits. Every component has an analog primitive counterpart. No ADC/DAC between layers. Physics does the compute.
Ternary models at 1.6 bits per weight. Same architecture as floating-point, at a fraction of the memory and compute.
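The 1.6 figure comes from packing five base-3 digits into one byte (3⁵ = 243 ≤ 256), i.e. 8 ÷ 5 = 1.6 bits per weight. A minimal sketch of such a packing — the encoding itself is an illustration, not necessarily the storage format used:

```python
def pack_ternary(weights):
    """Pack ternary weights {-1, 0, +1} five to a byte (3**5 = 243 <= 256)."""
    assert len(weights) % 5 == 0
    out = bytearray()
    for i in range(0, len(weights), 5):
        code = 0
        for w in weights[i:i + 5]:
            code = code * 3 + (w + 1)   # map {-1, 0, 1} -> base-3 digit
        out.append(code)
    return bytes(out)

def unpack_ternary(data):
    weights = []
    for code in data:
        group = []
        for _ in range(5):
            group.append(code % 3 - 1)  # base-3 digit -> {-1, 0, 1}
            code //= 3
        weights.extend(reversed(group))
    return weights

ws = [-1, 0, 1, 1, -1, 0, 0, 1, -1, 0]
assert unpack_ternary(pack_ternary(ws)) == ws   # 10 weights in 2 bytes
```

Compare 1.6 bits to 32 bits for float32 — the source of the 20× size reduction.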
Models small enough to run in a browser, on a microcontroller, or at the edge. No server, no GPU, no install.
Stateless server architecture. Conversation state lives on the client. Capacity scales with CPU, not RAM or GPU.
Full analog forward pass simulated to 512×512 crossbar scale with sub-0.5% computational error.
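A simplified version of such a simulation, assuming multiplicative per-device conductance noise at an illustrative level (`noise_sigma`); the real noise model and error metric may differ.

```python
import numpy as np

def crossbar_matvec(G, v, noise_sigma=0.003):
    """Simulate an analog crossbar matrix-vector product.

    Conductances multiply via Ohm's law; column currents sum via
    Kirchhoff's law. noise_sigma models per-device imprecision.
    """
    rng = np.random.default_rng(42)
    G_noisy = G * (1.0 + rng.normal(scale=noise_sigma, size=G.shape))
    return G_noisy @ v               # column currents = analog dot products

n = 512                              # 512x512 crossbar scale
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(n, n))   # normalised conductances
v = rng.uniform(0.0, 1.0, size=n)        # input voltages

exact = G @ v
analog = crossbar_matvec(G, v)
rel_err = np.linalg.norm(analog - exact) / np.linalg.norm(exact)
# rel_err stays well under the 0.5% target at this noise level
```

Because each output current averages over hundreds of noisy devices, per-device errors largely cancel — the same statistical effect the training pipeline exploits.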
Handwriting recognition running on simulated resistor circuits. Reproducible. No digital fallback.
Manufacturing noise resilience verified. Models trained to expect and absorb analog imprecision.
"The question isn't whether AI can run on analog hardware.
It's whether we can build AI that deserves to."
Project Hippasus is in active development. Whether you need thin models today, efficient serving tomorrow, or analog silicon in the future — we'd like to hear from you.
hello@exoteric.ai