Install Nubby CLI

The Metastic Odoo Code Forge (v19)
💡 Quick Tip — Get Running in Under 5 Minutes

Want to try Caellum without setting up API keys? Use the free cloud install. It uses the same professional models as the standard setup, proxied through our cloud — zero configuration needed.

$ curl -fsSL https://caellum.tech/nubby-install-free.sh | bash

Then just /signup and /login inside Nubby to start building. Need your own keys later? Run /keys anytime.

Download the CLI, then complete the install in this order: prerequisites, Docker, local Ollama, and Nubby. The download is public, but normal use requires a formal Caellum account. Nubby is federated: Brain, Librarian, Guard, and Council stay cloud-side while the Worker runs locally through Ollama. Works on Linux and WSL2.

Why this matters: The installer checks the local Worker path, keeps the default roles aligned with the Caellum baseline, and prevents drift during upgrades.

Installation Options

1. Free Cloud Install (fastest — no keys, no GPU)

Same professional models as standard, proxied through Caellum cloud. Zero configuration — install, signup, login, build.

$ curl -fsSL https://caellum.tech/nubby-install-free.sh | bash

No API keys needed. Rate-limited to 10 req/min (onboarding).
After install: /signup, then /login → start building.
Run /keys anytime to add your own keys and remove the rate limit.

Download nubby-install-free.sh

2. Standard Install (best quality — configure each agent)

Full sovereign setup. Local Ollama Worker (LoRA-ready), cloud Brain/Librarian/Guard/Judge. Choose your own provider per agent with /keys.

$ curl -fsSL https://caellum.tech/nubby-install.sh | bash

After install, run /keys to configure each agent individually:

| Agent | Providers |
|---|---|
| Brain | DeepSeek, OpenRouter, or DashScope |
| Librarian | Gemini or OpenRouter |
| Worker | Local Ollama (LoRA-ready), OpenRouter, DashScope, or NVIDIA |
| Guard | DashScope, OpenRouter, or NVIDIA |
| Judge | DashScope, OpenRouter, or NVIDIA |

Requires Docker + Ollama for local Worker. GPU recommended for LoRA training.

Download nubby-install.sh

3. Detailed installation steps

Use this path if you want to prepare the machine step by step before running the installer.

Platform Notes: Linux / WSL2

# Step 1: Install prerequisites (Debian/Ubuntu)
$ sudo apt update && sudo apt install -y python3 python3-pip curl git zstd

# Step 2: Install Docker / Docker Compose
$ curl -fsSL https://get.docker.com | sh
$ sudo usermod -aG docker $USER
$ newgrp docker
$ docker compose version

# Step 3: Install local Ollama and start it
$ curl -fsSL https://ollama.com/install.sh | sh
$ ollama serve &

# Step 4: Start the Caellum Ollama container (port 11435)
$ docker compose -f docker-compose.ollama.local.yml up -d
# The installer will pull the required Worker model automatically.
# If you want to pre-pull: docker exec caelum-axon-ollama ollama pull qwen2.5-coder:7b

# Step 5: Install terminal backend — pick ONE of the two options below:

# Option A — Ghostty + Zellij (default)
$ sudo snap install ghostty --edge
# Or build from source: https://ghostty.org/docs/install
$ sudo snap install zellij --classic
# Or: https://zellij.dev/documentation/installation

# Option B — WezTerm + tmux (alternative)
# Install WezTerm: https://wezfurlong.org/wezterm/installation
$ sudo apt install tmux
# Then tell Nubby to use WezTerm:
$ export NUBBY_TERMINAL_BACKEND=wezterm

# Step 6: Install Nubby CLI
$ curl -fsSL https://caellum.tech/nubby-install.sh | bash

Installation Process (click to expand)
  1. Install prerequisites: Python 3, pip, curl, git, and zstd.
  2. Install Docker / Docker Compose and verify it with docker compose version.
  3. Install Ollama and start it so the local Worker substrate is reachable.
  4. Start the Caellum Ollama container on port 11435 — this is the Worker runtime endpoint the installer checks.
  5. Install a terminal backend: Ghostty + Zellij (default) or WezTerm + tmux (alternative). Nubby auto-detects whichever is installed.
  6. Run the guided installer with curl -fsSL https://caellum.tech/nubby-install.sh | bash.
  7. The installer checks Python, pip, curl, git, zstd, Docker, Docker Compose, and the local Worker substrate before it finishes.
  8. After first run, bind a free Caellum account, then continue with /setup, /hybrid status, /deploy docker, and /deploy local.
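The prerequisite gate described in step 7 can be sketched as a small shell loop. This is a minimal illustration of the idea, not the installer's actual code, and the tool list is taken from the prerequisites named in this guide:

```shell
#!/bin/sh
# Sketch of a prerequisite gate: report each required tool, then warn
# if anything is missing (the real installer prints its own messages).
missing=0
for tool in python3 pip3 curl git zstd docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "OK      $tool"
  else
    echo "MISSING $tool"
    missing=1
  fi
done
[ "$missing" -eq 0 ] || echo "Install the missing tools above, then rerun the installer."
```

Running it before the guided installer tells you in one pass what still needs to be installed.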

Prerequisites

Install these before running the installer (click to expand)

The installer checks each one and stops with a clear message if anything is missing.

| Tool | Why | Install |
|---|---|---|
| Python 3.8+ with pip | Runtime | sudo apt install python3 python3-pip |
| curl, Git, zstd | Installer, source control, Ollama payload | sudo apt install curl git zstd |
| Docker + Compose | Mandatory local Worker substrate and local Odoo bootstrap | curl -fsSL https://get.docker.com \| sh |
| Ollama | Mandatory local Worker (runs qwen2.5-coder:7b) | curl -fsSL https://ollama.com/install.sh \| sh |
| GPU (optional) | Stronger local Worker performance | |

Terminal backend — pick one: Ghostty + Zellij (default) or WezTerm + tmux (alternative). Nubby auto-detects whichever is installed. Override with NUBBY_TERMINAL_BACKEND=wezterm.

| Tool | Why | Install |
|---|---|---|
| Ghostty | Terminal emulator — /workspace and /utility windows | sudo snap install ghostty --edge or build from source |
| Zellij (optional) | Multiplexer — /workspace standard detached session | sudo snap install zellij --classic or other methods |
| WezTerm (alternative) | Terminal emulator — replaces Ghostty | wezfurlong.org/wezterm/installation |
| tmux (alternative) | Multiplexer — replaces Zellij | sudo apt install tmux |

The installer enforces these requirements. zstd is needed because the Ollama installer uses it to extract its payload. Local Ollama is mandatory because the Worker runs locally and must load the controlled qwen lineage from Caellum cloud. If Docker/Compose or local Ollama are missing, the installer stops and tells you what to fix first.

Release Tarball

Download Release Tarball

This file is for advanced users who want to inspect the packaged release contents before installation, archive a known build, or perform a manual extraction. It does not install Nubby by itself.

  1. Download nubby-latest.tar.gz.
  2. Extract it with tar -xzf nubby-latest.tar.gz.
  3. Inspect the extracted nubby-cli/ folder if you want to review the packaged files.
  4. If you want to actually install Nubby, go back and run the guided installer: curl -fsSL https://caellum.tech/nubby-install.sh | bash.

Most users should skip the tarball and use the guided installer directly.
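If you only want to review the packaged files, you can list the tarball's contents without extracting anything. The sketch below demonstrates the command on a throwaway archive with the same layout; substitute the real nubby-latest.tar.gz you downloaded:

```shell
# Build a throwaway archive that mimics the release layout,
# then list its contents without writing the files to disk.
mkdir -p demo/nubby-cli
echo "stub" > demo/nubby-cli/README
tar -czf nubby-demo.tar.gz -C demo nubby-cli
tar -tzf nubby-demo.tar.gz   # -t lists entries instead of extracting
```

The same `tar -tzf` invocation against nubby-latest.tar.gz shows the nubby-cli/ folder structure before you commit to extraction.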

After Installation

1. Run Nubby

$ nubby

2. Create or bind your formal Caellum account

Account binding comes first. A formal Caellum account is required before normal usage is unlocked:

nubby> /signup
nubby> /login

Use /signup to create a free account, then /login to bind this local Nubby instance.

3. Run setup

Configure roles and install profiles (click to expand)

After login, /setup starts with the default install profile. This profile matches the Caellum server baseline and is the recommended Nubby setup for full compatibility and minimal model drift; alternative profiles are available later if you intentionally want to diverge from that baseline. In the shipped default flow, /setup keeps Nubby aligned with these role defaults:

| Role | Default | Key Required? | Runs on |
|---|---|---|---|
| Brain | deepseek-chat | DEEPSEEK_API_KEY | Cloud |
| Librarian | gemini-2.5-flash | GEMINI_API_KEY | Cloud |
| Worker | qwen2.5-coder:7b | No external key | Local Ollama (mandatory) |
| Guard | nvidia:qwq-32b | NVIDIA_API_KEY | Cloud |
| Council / Judge | nvidia:qwen3.5-397b-a17b | NVIDIA_API_KEY | Cloud |

nubby> /setup

Interpretation: the Worker is local and mandatory, while Brain, Librarian, Guard, and Council follow the default cloud-coordinated contract unless you intentionally pick a different profile.
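If you prefer to stage provider keys before the /keys step, the variable names from the table above can be exported in your shell. Whether Nubby reads these directly from the environment is not documented here, so treat this purely as an illustration of which key belongs to which role; /keys remains the supported configuration path:

```shell
# Illustration only: key variable names come from the role table above.
# /keys inside Nubby is the supported way to configure them.
export DEEPSEEK_API_KEY="sk-..."   # Brain
export GEMINI_API_KEY="..."        # Librarian
export NVIDIA_API_KEY="nvapi-..."  # Guard and Council / Judge
# Worker needs no external key: it runs on local Ollama.
```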

4. Verify the local Worker

Confirm the mandatory federated Worker path is healthy:

nubby> /hybrid status

5. Scaffold the two local Odoo 19 deployment environments

Nubby manages both local deployment surfaces rooted at /opt/nubby/odoo19:

nubby> /deploy docker
nubby> /deploy local

How It Works

Nubby uses a federated execution model: Brain, Librarian, Guard, and Council stay cloud-governed, while the Worker runs locally through Ollama.

Local Worker requirement Local Ollama is not optional for normal Nubby usage. The installer verifies it, and Nubby blocks normal usage until the Worker path is ready.

Troubleshooting

nubby: command not found

The installer adds ~/.nubby to your PATH. Restart your terminal or run:

$ source ~/.bashrc   # or ~/.zshrc
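If restarting the shell does not help, you can check and patch PATH for the current session by hand. This is a sketch assuming the default ~/.nubby install location described above:

```shell
# Prepend ~/.nubby to PATH for this session only if it is not already there.
case ":$PATH:" in
  *":$HOME/.nubby:"*) echo "~/.nubby already on PATH" ;;
  *) export PATH="$HOME/.nubby:$PATH"; echo "~/.nubby added to PATH" ;;
esac
```

For a permanent fix, add the export line to ~/.bashrc or ~/.zshrc.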

Python version too old

Nubby requires Python 3.8+. Check with python3 --version.
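Rather than reading the version string yourself, you can test the 3.8+ floor in one shot. A minimal sketch:

```shell
# Exit 0 when the interpreter meets the 3.8+ requirement, 1 otherwise.
if python3 -c 'import sys; raise SystemExit(0 if sys.version_info >= (3, 8) else 1)'; then
  echo "Python OK: $(python3 --version)"
else
  echo "Python too old: $(python3 --version) (need 3.8+)"
fi
```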

Permission denied

The installer writes to ~/.nubby (your home directory). No sudo is needed. If the symlink to /usr/local/bin fails, it falls back to ~/bin/.

Local Ollama missing

If the Worker gate says Ollama is missing, the runtime is telling you the local model substrate is not ready yet. Install or start Ollama, then rerun the installer or use /hybrid status to confirm the local Worker path.
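A quick way to see which half is missing, the binary or the running daemon, before rerunning anything. The ports are the ones used in the steps above (11434 for host Ollama, 11435 for the container), and /api/tags is Ollama's standard listing endpoint:

```shell
# Report binary and daemon status separately; always exits 0.
if command -v ollama >/dev/null 2>&1; then
  echo "ollama binary: found"
else
  echo "ollama binary: missing (run the Ollama install script first)"
fi
if curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1; then
  echo "ollama daemon: reachable"
else
  echo "ollama daemon: not reachable (start it with 'ollama serve')"
fi
```

If both lines report healthy, rerun /hybrid status inside Nubby to confirm the Worker path.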

Install Docker / Docker Compose first

If Nubby stops with a Docker error, complete this step first and rerun the installer:

# Debian / Ubuntu
$ curl -fsSL https://get.docker.com | sh
$ sudo usermod -aG docker $USER
$ newgrp docker
$ docker compose version

Downloads

| File | Purpose |
|---|---|
| nubby-install.sh | Guided installer (recommended) |
| nubby-latest.tar.gz | Release tarball — inspect or archive |
| Onboarding Guide | Full guide as HTML — open in browser and print/save as PDF |