Want to try Caellum without setting up API keys? Use the free cloud install. It uses the same professional models as the standard setup, proxied through our cloud — zero configuration needed.
Then just /signup and /login inside Nubby to start building.
Need your own keys later? Run /keys anytime.
Download the CLI, then complete the install in this order: prerequisites, Docker, local Ollama, and Nubby. The download is public, but normal use requires a formal Caellum account. Nubby is federated: Brain, Librarian, Guard, and Council stay cloud-side while the Worker runs locally through Ollama. Works on Linux and WSL2.
Same professional models as standard, proxied through Caellum cloud. Zero configuration — install, signup, login, build.
$ curl -fsSL https://caellum.tech/nubby-install-free.sh | bash
No API keys needed. Rate-limited to 10 req/min (onboarding).
After install: /signup → /login → start building.
Run /keys anytime to add your own keys and remove the rate limit.
Full sovereign setup. Local Ollama Worker (LoRA-ready), cloud Brain/Librarian/Guard/Judge. Choose your own provider per agent with /keys.
$ curl -fsSL https://caellum.tech/nubby-install.sh | bash
After install, run /keys to configure each agent individually:
| Brain | DeepSeek, OpenRouter, or DashScope |
| Librarian | Gemini or OpenRouter |
| Worker | Local Ollama (LoRA-ready), OpenRouter, DashScope, or NVIDIA |
| Guard | DashScope, OpenRouter, or NVIDIA |
| Judge | DashScope, OpenRouter, or NVIDIA |
Requires Docker + Ollama for local Worker. GPU recommended for LoRA training.
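As a quick preflight before attempting LoRA training, the sketch below (assuming the standard NVIDIA driver tooling is what provides `nvidia-smi`) reports whether a GPU is visible:

```shell
# Hedged check: nvidia-smi ships with the NVIDIA driver; its absence means
# any LoRA training on this machine will be CPU-bound.
gpu_check() {
  if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=name,memory.total --format=csv
  else
    echo "no NVIDIA GPU driver detected; LoRA training will be CPU-bound"
  fi
}
gpu_check
```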
Download nubby-install.sh. Use this path if you want to prepare the machine step by step before running the installer.
Platform Notes: Linux / WSL2
# Step 1: Install prerequisites (Debian/Ubuntu)
$ sudo apt update && sudo apt install -y python3 python3-pip curl git zstd
# Step 2: Install Docker / Docker Compose
$ curl -fsSL https://get.docker.com | sh
$ sudo usermod -aG docker $USER
$ newgrp docker
$ docker compose version
# Step 3: Install local Ollama and start it
$ curl -fsSL https://ollama.com/install.sh | sh
$ ollama serve &
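To confirm the daemon is actually listening before moving on, you can probe the Ollama HTTP API (`/api/tags` is the standard endpoint that lists installed models; a host install listens on 11434 by default, while the container in the next step maps 11435):

```shell
# Probe the Ollama API; /api/tags responds whenever the daemon is up.
ollama_up() {
  port="${1:-11434}"
  if curl -fsS "http://localhost:${port}/api/tags" >/dev/null 2>&1; then
    echo "Ollama reachable on port ${port}"
  else
    echo "Ollama not reachable on port ${port}"
  fi
}
ollama_up 11434   # host install from this step
```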
# Step 4: Start the Caellum Ollama container (port 11435)
$ docker compose -f docker-compose.ollama.local.yml up -d
# The installer will pull the required Worker model automatically.
# If you want to pre-pull: docker exec caelum-axon-ollama ollama pull qwen2.5-coder:7b
# Step 5: Install terminal backend — pick ONE of the two options below:
# Option A — Ghostty + Zellij (default)
$ sudo snap install ghostty --edge
# Or build from source: https://ghostty.org/docs/install
$ sudo snap install zellij --classic
# Or: https://zellij.dev/documentation/installation
# Option B — WezTerm + tmux (alternative)
# Install WezTerm: https://wezfurlong.org/wezterm/installation
$ sudo apt install tmux
# Then tell Nubby to use WezTerm:
$ export NUBBY_TERMINAL_BACKEND=wezterm
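Note that export only lasts for the current shell. A small sketch to persist the choice across sessions (assuming bash; use ~/.zshrc under zsh):

```shell
# Append the backend override to the shell profile so new terminals pick it up.
echo 'export NUBBY_TERMINAL_BACKEND=wezterm' >> ~/.bashrc
grep -n NUBBY_TERMINAL_BACKEND ~/.bashrc   # confirm the line landed
```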
# Step 6: Install Nubby CLI
$ curl -fsSL https://caellum.tech/nubby-install.sh | bash
These prerequisites back both the installer (curl -fsSL https://caellum.tech/nubby-install.sh | bash) and the runtime commands /setup, /hybrid status, /deploy docker, and /deploy local. The installer checks each one and stops with a clear message if anything is missing.
| Tool | Why | Install |
|---|---|---|
| Python 3.8+ with pip | Runtime | sudo apt install python3 python3-pip |
| curl, Git, zstd | Installer, source control, Ollama payload | sudo apt install curl git zstd |
| Docker + Compose | Mandatory local Worker substrate and local Odoo bootstrap | curl -fsSL https://get.docker.com \| sh |
| Ollama | Mandatory local Worker (runs qwen2.5-coder:7b) | curl -fsSL https://ollama.com/install.sh \| sh |
Terminal backend — pick one: Ghostty + Zellij (default) or WezTerm + tmux (alternative).
Nubby auto-detects whichever is installed. Override with NUBBY_TERMINAL_BACKEND=wezterm.
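The detection order is roughly equivalent to this sketch (an illustration of the precedence described above, not Nubby's actual implementation):

```shell
# Prefer the default Ghostty+Zellij pair; fall back to WezTerm+tmux.
detect_backend() {
  if command -v ghostty >/dev/null 2>&1 && command -v zellij >/dev/null 2>&1; then
    echo "ghostty+zellij"
  elif command -v wezterm >/dev/null 2>&1 && command -v tmux >/dev/null 2>&1; then
    echo "wezterm+tmux"
  else
    echo "none"
  fi
}
detect_backend
```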
| Tool | Why | Install |
|---|---|---|
| Ghostty | Terminal emulator — /workspace and /utility windows | sudo snap install ghostty --edge or build from source |
| Zellij | Multiplexer — /workspace standard detached session | sudo snap install zellij --classic or other methods |
| WezTerm | Terminal emulator — replaces Ghostty | wezfurlong.org/wezterm/installation |
| tmux | Multiplexer — replaces Zellij | sudo apt install tmux |
| GPU | Stronger local Worker performance | — |
The installer enforces these requirements. zstd is needed because the Ollama installer uses it to extract its payload. Local Ollama is mandatory because the Worker runs locally and must load the controlled qwen lineage from Caellum cloud. If Docker/Compose or local Ollama are missing, the installer stops and tells you what to fix first.
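You can approximate the installer's checks yourself before running it; the sketch below simply looks for each tool from the table above on PATH:

```shell
# Report which required tools are present on PATH.
check() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}
for tool in python3 pip3 curl git zstd docker ollama; do
  check "$tool"
done
```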
This file is for advanced users who want to inspect the packaged release contents before installation, archive a known build, or perform a manual extraction. It does not install Nubby by itself.
Download nubby-latest.tar.gz and extract it with tar -xzf nubby-latest.tar.gz; this unpacks a nubby-cli/ folder if you want to review the packaged files. To install afterwards, run curl -fsSL https://caellum.tech/nubby-install.sh | bash. Most users should skip the tarball and use the guided installer directly.
$ nubby
Account binding comes first. A formal Caellum account is required before normal usage is unlocked:
nubby> /signup
nubby> /login
Use /signup to create a free account, then /login to bind this local Nubby instance.
After login, /setup starts with the default install profile. This profile matches the Caellum server baseline and is the recommended Nubby setup for full compatibility and minimal model drift. Alternative profiles are available later if you intentionally want to diverge from that baseline. In the shipped default flow, /setup keeps Nubby aligned with the Caellum server baseline:
| Role | Default | Key Required? | Runs on |
|---|---|---|---|
| Brain | deepseek-chat | DEEPSEEK_API_KEY | Cloud |
| Librarian | gemini-2.5-flash | GEMINI_API_KEY | Cloud |
| Worker | qwen2.5-coder:7b | No external key | Local Ollama (mandatory) |
| Guard | nvidia:qwq-32b | NVIDIA_API_KEY | Cloud |
| Council / Judge | nvidia:qwen3.5-397b-a17b | NVIDIA_API_KEY | Cloud |
nubby> /setup
Interpretation: the Worker is local and mandatory, while Brain, Librarian, Guard, and Council follow the default cloud-coordinated contract unless you intentionally pick a different profile.
Confirm the mandatory federated Worker path is healthy:
nubby> /hybrid status
Nubby manages both local deployment surfaces rooted at /opt/nubby/odoo19:
nubby> /deploy docker
nubby> /deploy local
The local deployment writes nubby_odoo19.conf and nubby_odoo19.service. Nubby uses a federated execution model: Brain, Librarian, Guard, and Council stay cloud-governed, while the Worker runs locally through Ollama.
nubby: command not found means the installer's PATH entry (~/.nubby) is not active yet. Restart your terminal or run:
$ source ~/.bashrc # or ~/.zshrc
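To check whether ~/.nubby is already on your PATH without restarting, a small sketch:

```shell
# Look for the installer's directory in the current PATH.
nubby_on_path() {
  case ":$PATH:" in
    *":$HOME/.nubby:"*) echo "PATH already includes ~/.nubby" ;;
    *) echo "~/.nubby not on PATH yet; restart the shell or source your rc file" ;;
  esac
}
nubby_on_path
```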
Nubby requires Python 3.8+. Check with python3 --version.
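A direct way to test the 3.8 floor rather than eyeballing the version string:

```shell
# Exit status of the embedded check tells us if the interpreter meets the minimum.
py_ok() {
  if python3 -c 'import sys; sys.exit(0 if sys.version_info >= (3, 8) else 1)' 2>/dev/null; then
    echo "Python OK"
  else
    echo "Python too old or missing: need 3.8+"
  fi
}
py_ok
```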
The installer writes to ~/.nubby (your home directory). No sudo is needed. If the symlink to /usr/local/bin fails, it falls back to ~/bin/.
If the Worker gate says Ollama is missing, the runtime is telling you the local model substrate is not ready yet. Install or start Ollama, then rerun the installer or use /hybrid status to confirm the local Worker path.
If Nubby stops with a Docker error, complete this step first and rerun the installer:
# Debian / Ubuntu
$ curl -fsSL https://get.docker.com | sh
$ sudo usermod -aG docker $USER
$ newgrp docker
$ docker compose version
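If docker still reports permission errors after these commands, the group change may not be active in your current shell; membership only takes effect in new sessions unless newgrp was run. A quick check:

```shell
# Verify the current shell session is in the docker group.
in_docker_group() {
  if id -nG | grep -qw docker; then
    echo "docker group active"
  else
    echo "docker group not active yet; log out and back in"
  fi
}
in_docker_group
```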
| File | Purpose |
|---|---|
| nubby-install.sh | Guided installer (recommended) |
| nubby-latest.tar.gz | Release tarball — inspect or archive |
| Onboarding Guide | Full guide as HTML — open in browser and print/save as PDF |
Terminal workspace, utility windows, and operator lanes are covered on the Features page.