# CodeSnuffler Prototype
This is the first containerized slice from the planning docs: a single Docker container running FastAPI, RQ, Valkey, and SQLite. The review worker currently writes a fixed demo response instead of calling Codex or checking out a repository.
Valkey binds to 127.0.0.1 inside the container. Only FastAPI is exposed to your Mac, on port 8000 by default.
## Run with Docker Desktop on macOS
From this directory:

```bash
make up
```
To run multiple worktrees or versions at the same time, pass a host port:

```bash
make up 8001
make down 8001
```

The equivalent named-variable form also works:

```bash
make up PORT=8001
make down PORT=8001
```
The Makefile uses the current directory name plus the selected port as the Docker Compose project name and image tag, so each worktree/port pair gets its own image, container, network, and Docker volumes. `make up` runs `./scripts/prepare-ai-tool-storage.sh` before Docker starts, then drives `docker/docker-compose.yml` with that project name.
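The per-worktree naming scheme can be sketched in Python. The exact format string is an assumption here; check the Makefile for the real rule:

```python
from pathlib import Path

def compose_project_name(worktree_dir: str, port: int) -> str:
    """Hypothetical sketch of the Makefile's naming: current directory
    name plus the selected port, so each worktree/port pair gets
    isolated Docker resources."""
    # Docker Compose project names must be lowercase.
    name = Path(worktree_dir).name.lower()
    return f"{name}-{port}"

# Two worktrees (or two ports of the same worktree) never collide:
print(compose_project_name("/Users/me/src/CodeSnuffler", 8000))  # codesnuffler-8000
print(compose_project_name("/Users/me/src/CodeSnuffler", 8001))  # codesnuffler-8001
```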
Host-side CodeSnuffler config defaults to:

```
~/.config/codesnuffler
```
Set `CODESNUFFLER_CONFIG_DIR` to use a different host directory for the app config file and AI tool auth directories. After login, host-side AI tool auth files live at:

```
~/.config/codesnuffler/auth/codex/auth.json
~/.config/codesnuffler/auth/opencode/auth.json
```
The config directory is bind-mounted into the container at `/config/codesnuffler`. The entrypoint symlinks `/codex_home/auth.json` to `/config/codesnuffler/auth/codex/auth.json` and `/root/.local/share/opencode/auth.json` to `/config/codesnuffler/auth/opencode/auth.json`, so the CLIs can find or create credentials without storing auth material in Docker named volumes.
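The relinking step can be sketched as follows. The helper name is illustrative, not the entrypoint's actual code; the demo uses a throwaway directory standing in for the container filesystem:

```python
import tempfile
from pathlib import Path

def link_auth_file(link_path: Path, target: Path) -> None:
    """Sketch of what the entrypoint does per CLI: point the tool's
    native auth path at the bind-mounted host file, replacing any
    stale link left over from a previous start. A dangling link is
    fine: the CLI creates the file on first login."""
    target.parent.mkdir(parents=True, exist_ok=True)
    link_path.parent.mkdir(parents=True, exist_ok=True)
    if link_path.is_symlink() or link_path.exists():
        link_path.unlink()
    link_path.symlink_to(target)

root = Path(tempfile.mkdtemp())
link_auth_file(root / "codex_home" / "auth.json",
               root / "config" / "codesnuffler" / "auth" / "codex" / "auth.json")
print((root / "codex_home" / "auth.json").is_symlink())  # True
```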
Then check the service. Replace 8000 with the port you passed to `make up`:

```bash
curl http://localhost:8000/healthz
curl http://localhost:8000/readyz
curl http://localhost:8000/api/instance | jq
curl http://localhost:8000/api/repositories | jq
curl http://localhost:8000/api/codex/status | jq
curl http://localhost:8000/api/opencode/status | jq
```
Queue a demo review:

```bash
curl -X POST http://localhost:8000/api/reviews/manual-test
```
The response includes a `details_url`. Open it in your browser. The worker should write:

```
These are my findings:
Finding 1: don't use the full stl name std::set<Expr> but instead introduce using ExprSet = std::set<Expr> and use that.
Finding 2: A release of BoxSet was dropped in Box.cpp in the function addBoxes.
```
You can also give the demo review a stable id:

```bash
curl -X POST http://localhost:8000/api/reviews/manual-test \
  -H 'Content-Type: application/json' \
  -d '{"review_id":"PR1253","repo_full_name":"demo/repo","pr_number":1253}'
```

Then open:

```
http://localhost:8000/ReviewDetails/PR1253
```
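If you drive the endpoint from a script, the body can be built like this. The field names come from the curl example above; the validation rule is an assumption, not something the API documents:

```python
import json

def manual_test_request(review_id: str, repo_full_name: str, pr_number: int) -> str:
    """Build the JSON body for POST /api/reviews/manual-test.
    Field names match the curl example in the README."""
    if "/" not in repo_full_name:
        # Assumed convention: GitHub-style "owner/repo" names.
        raise ValueError("repo_full_name should look like 'owner/repo'")
    return json.dumps({
        "review_id": review_id,
        "repo_full_name": repo_full_name,
        "pr_number": pr_number,
    })

body = manual_test_request("PR1253", "demo/repo", 1253)
print(body)
```

The details page for that id then lives at `/ReviewDetails/PR1253`.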
## Run with plain Docker
```bash
docker build -f docker/Dockerfile -t codesnuffler .
docker run --rm \
  -p 8000:8000 \
  -v codesnuffler-data:/data \
  -v codesnuffler-aitool-data:/aitool_data \
  -v "$HOME/.config/codesnuffler:/config/codesnuffler" \
  codesnuffler
```
## Instance homepage and config
Open the container homepage at:

```
http://localhost:8000/
```
The homepage shows instance health, configured repository counts, and a first-run getting started state when no repositories are configured.
Basic instance settings are stored outside the container by default:

```
~/.config/codesnuffler/config.toml
```
To move the whole host-side config tree used by `make up`, set:

```bash
export CODESNUFFLER_CONFIG_DIR=/path/to/codesnuffler-config
```
Inside the container the config file lives at:

```
/config/codesnuffler/config.toml
```

The whole CodeSnuffler config directory is mounted there. AI-tool auth files use fixed paths under `/config/codesnuffler/auth`.
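The resolution order for the host-side directory can be sketched like this. The fallback path matches the README; the function itself is illustrative, not code from the repo:

```python
from pathlib import Path

def host_config_dir(env: dict[str, str]) -> Path:
    """Sketch of how `make up` could resolve the host config directory:
    CODESNUFFLER_CONFIG_DIR wins when set, otherwise the default
    ~/.config/codesnuffler under the caller's HOME."""
    override = env.get("CODESNUFFLER_CONFIG_DIR")
    if override:
        return Path(override)
    return Path(env.get("HOME", "~")).expanduser() / ".config" / "codesnuffler"

print(host_config_dir({"HOME": "/Users/me"}))
print(host_config_dir({"HOME": "/Users/me",
                       "CODESNUFFLER_CONFIG_DIR": "/srv/cs-config"}))
```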
The current setup pages are:

```
GET /Setup/Instance
GET /Setup/Repositories/Import
```

`/Setup/Repositories/Import` opens the repository import flow when provider connections exist, or the provider setup page when none are configured.
## Codex CLI setup
The image installs a standalone Codex CLI release binary and sets:

```
CODEX_HOME=/codex_home
```

`docker/Dockerfile` pins the Codex version, release tag, and SHA-256 checksums for the supported Linux Docker targets, `linux/amd64` and `linux/arm64`:

```
CODEX_CLI_VERSION=0.130.0
CODEX_CLI_RELEASE_TAG=rust-v0.130.0
```
The runtime image does not copy in Node, npm, npx, or the npm-installed Codex package. Node is still used in a build stage to compile the frontend assets, but it is not present in the runtime image.
The shared `/aitool_data` directory is backed by the Compose-managed `codesnuffler-aitool-data` volume for non-secret AI tool config, sessions, logs, and caches. The entrypoint links `/codex_home` to `/aitool_data/codex` and creates `/codex_home/config.toml` on first start with:

```toml
cli_auth_credentials_store = "file"
```
The secret credential cache is different: `/codex_home/auth.json` is a symlink to the host-backed file:

```
~/.config/codesnuffler/auth/codex/auth.json
```
That path should be excluded from ordinary Docker volume backups or protected separately. Full host filesystem backups may still include it unless your backup policy excludes it.
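The create-only-on-first-start behaviour can be sketched as follows. The helper name and return value are illustrative; only the file contents come from the README:

```python
import tempfile
from pathlib import Path

def ensure_codex_config(codex_home: Path) -> bool:
    """Sketch of the entrypoint's first-start step: write config.toml
    with the file-backed credential store only if it does not exist
    yet, so later edits inside the volume survive restarts.
    Returns True when the file was created on this call."""
    config = codex_home / "config.toml"
    if config.exists():
        return False
    codex_home.mkdir(parents=True, exist_ok=True)
    config.write_text('cli_auth_credentials_store = "file"\n')
    return True

home = Path(tempfile.mkdtemp()) / "codex_home"
print(ensure_codex_config(home))  # True on first start
print(ensure_codex_config(home))  # False on every later start
```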
Check the installed CLI:

```bash
make codex-status
```
To update Codex, edit the `CODEX_CLI_VERSION`, `CODEX_CLI_RELEASE_TAG`, and matching `CODEX_CLI_SHA256_AMD64` / `CODEX_CLI_SHA256_ARM64` values in `docker/Dockerfile`, then rebuild the image:

```bash
make build
make up
```
Do not update Codex from inside a running container. Treat CLI version changes as Dockerfile changes so the deployed binary is reproducible and reviewable.
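The check those pinned digests enable looks like this in outline. The function is a generic sketch, not the Dockerfile's actual shell, and the demo hashes a stand-in file rather than a real release artifact:

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_matches(path: Path, expected_hex: str) -> bool:
    """Hash a downloaded release binary in chunks and compare against
    the recorded digest before installing it. Streaming keeps memory
    use flat even for large binaries."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex.lower()

sample = Path(tempfile.mkdtemp()) / "codex"
sample.write_bytes(b"fake release binary")
expected = hashlib.sha256(b"fake release binary").hexdigest()
print(sha256_matches(sample, expected))  # True
```

Rebuilding with a wrong digest fails fast, which is exactly why version bumps belong in the Dockerfile rather than inside a running container.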
Authenticate with an API key:

```bash
export OPENAI_API_KEY=sk-...
make codex-login-api-key
```
Or use device auth for a ChatGPT login from the container:

```bash
make codex-login-device
```
After login, verify:

```bash
curl http://localhost:8000/api/codex/status | jq
curl http://localhost:8000/api/settings/codex | jq
make codex-probe
```
## OpenCode CLI setup
The image also installs a pinned standalone OpenCode CLI release binary and disables OpenCode auto-updates in the runtime environment:

```
OPENCODE_CLI_VERSION=1.14.45
OPENCODE_CLI_RELEASE_TAG=v1.14.45
OPENCODE_DISABLE_AUTOUPDATE=1
```
OpenCode config and runtime state stay in the shared AI tool Docker volume. The entrypoint links OpenCode's native paths into `/aitool_data/opencode`:

```
/root/.config/opencode -> /aitool_data/opencode/config
/root/.local/share/opencode -> /aitool_data/opencode/data
```
The OpenCode auth file is external-only. The entrypoint symlinks `/root/.local/share/opencode/auth.json` to:

```
~/.config/codesnuffler/auth/opencode/auth.json
```
Authenticate from inside the container:

```bash
make opencode-login
```
After login, verify:

```bash
make opencode-status
curl http://localhost:8000/api/settings/opencode | jq
```
## Useful endpoints
```
GET /healthz
GET /readyz
GET /
GET /Setup/Instance
GET /Setup/Repositories/Import
GET /api/instance
GET /api/repositories
GET /api/providers
GET /api/settings/instance
PUT /api/settings/instance
GET /api/settings/codex
POST /api/settings/codex/recheck
GET /api/settings/opencode
POST /api/settings/opencode/recheck
GET /api/codex/status
GET /api/opencode/status
POST /api/reviews/manual-test
POST /api/reviews/{review_id}/rerun
GET /api/reviews/{review_id}
GET /api/jobs/{job_id}
GET /ReviewDetails/{review_id}
```
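A minimal client-side sketch of the review-related paths, useful when scripting against the API. The base URL and dictionary shape are illustrative; only the paths come from the endpoint list:

```python
def review_endpoints(base: str, review_id: str) -> dict[str, str]:
    """Assemble the review-related URLs for a given review id."""
    base = base.rstrip("/")
    return {
        "review": f"{base}/api/reviews/{review_id}",
        "rerun": f"{base}/api/reviews/{review_id}/rerun",
        "details": f"{base}/ReviewDetails/{review_id}",
    }

urls = review_endpoints("http://localhost:8000", "PR1253")
print(urls["details"])  # http://localhost:8000/ReviewDetails/PR1253
```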
## Tests
Install dev dependencies in your preferred Python environment:

```bash
python3 -m pip install -r requirements-dev.txt
python3 -m pytest
```
## Notes
- SQLite is stored at `/data/reviews.db`.
- Instance config is stored at `/config/codesnuffler/config.toml` in the container.
- Valkey append-only data is stored under `/data/valkey`.
- Codex auth is stored at `/config/codesnuffler/auth/codex/auth.json` from the host config mount and symlinked to `/codex_home/auth.json`.
- Other Codex state is stored under `/aitool_data/codex`.
- OpenCode auth is stored at `/config/codesnuffler/auth/opencode/auth.json` from the host config mount and symlinked to `/root/.local/share/opencode/auth.json`.
- OpenCode config and state are stored under `/aitool_data/opencode`.
- The current worker is intentionally fake. Git checkout, provider webhooks, AI tool execution, and PR comments are the next planned slices.