Caelum brings multi-model AI to your terminal. Brain for reasoning, Worker for code generation, Guard for validation, all orchestrated locally.
The official release reel for Caelum, presented ahead of the core neural architecture showcase.
Caelum's neural architecture, from concept to code.
Watch how Brain, Librarian, Worker, and Guard engines orchestrate reasoning in real time.
From intention to validated output — signals flow through Caelum's synaptic architecture.
Strategic planning and task decomposition. DeepSeek V3 reasons through complex module architecture before a single line is written.
Knowledge retrieval with 1M context. Gemini 2.5 Flash reads entire specs, extracts patterns, and researches Odoo Apps/OCA modules.
Local GPU code generation via Ollama. Qwen 2.5 Coder runs on your machine: zero network latency, zero cloud cost, full privacy.
Security validation at every step. Detects SQL injection, sudo() abuse, and missing ACLs. QwQ-32B (NVIDIA-hosted) with automatic fallback.
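The checks named here (SQL injection, sudo() abuse) can be illustrated with a minimal static scan. This is only a sketch of the idea, not Caelum's actual Guard engine, which is model-based; the patterns and names below are hypothetical:

```python
import re

# Hypothetical patterns for the issue classes named above; a real Guard
# engine would use an LLM rather than regexes.
CHECKS = [
    ("possible SQL injection", re.compile(r"cr\.execute\(.*%\s*\(")),   # %-formatted query
    ("possible SQL injection", re.compile(r'cr\.execute\(f["\']')),     # f-string query
    ("sudo() usage", re.compile(r"\.sudo\(\)")),                        # privilege escalation
]

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, issue) pairs for each flagged line."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for issue, pattern in CHECKS:
            if pattern.search(line):
                findings.append((lineno, issue))
    return findings

snippet = 'self.env.cr.execute(f"SELECT * FROM res_users WHERE login = {login}")\npartner.sudo().write({"active": False})'
for lineno, issue in scan(snippet):
    print(lineno, issue)
```

A pattern scan like this catches the obvious cases cheaply; a model-based validator can then reason about the flagged lines in context.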
Screenshot-to-code analysis. Reads UI mockups and Figma designs, converts visual intent into OWL components and QWeb views.
Long-term knowledge persistence. Manages development memories, session context, and module patterns across conversations.
Architectural consensus for critical decisions: three Workers generate solutions, and the Judge (Qwen 3.5 397B) selects the best approach.
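The pattern described here, several candidate generations judged by a stronger model, can be sketched with stubs; `call_worker` and `call_judge` below are hypothetical stand-ins for the real model calls:

```python
def call_worker(task: str, variant: int) -> str:
    # Stub: a real Worker would call a code model (e.g., via Ollama),
    # typically with different sampling settings per variant.
    return f"solution v{variant} for {task}" + " [refined]" * variant

def call_judge(candidate: str) -> float:
    # Stub: the real Judge is an LLM scoring correctness and style;
    # candidate length serves as a trivial stand-in score here.
    return float(len(candidate))

def consensus(task: str, n_workers: int = 3) -> str:
    """Generate n candidate solutions and return the Judge's top pick."""
    candidates = [call_worker(task, v) for v in range(1, n_workers + 1)]
    return max(candidates, key=call_judge)

print(consensus("add record rules"))
```

The design point is that generation and selection are separate calls, so the expensive Judge model runs once per decision rather than once per candidate token.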
Background health monitoring. Watches Ollama status, model availability, GPU memory, and auto-recovers from failures silently.
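Taken together, the engines above form a pipeline: Brain plans, Librarian enriches each step with context, Worker generates code, and Guard validates before anything is returned. A minimal sketch, with every engine replaced by a hypothetical stub:

```python
def brain(intent: str) -> list[str]:
    # Stub planner: the real Brain engine decomposes the task with an LLM.
    return [f"design: {intent}", f"implement: {intent}"]

def librarian(step: str) -> str:
    # Stub retrieval: the real Librarian pulls specs and OCA patterns into context.
    return f"{step} [+context]"

def worker(step: str) -> str:
    # Stub codegen: the real Worker calls a local code model via Ollama.
    return f"code({step})"

def guard(code: str) -> str:
    # Stub validation: the real Guard rejects insecure output.
    if "sudo()" in code:
        raise ValueError("blocked by Guard")
    return code

def run(intent: str) -> list[str]:
    """Route one intent through the full Brain -> Librarian -> Worker -> Guard flow."""
    return [guard(worker(librarian(step))) for step in brain(intent)]

print(run("invoice report module"))
```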
All features included. Pick your scale.
* Estimated lines of code based on typical Odoo module development tasks.
Be among the first to access Caelum when we launch.