why use many token when few do trick
Before/After • Install • What You Get • Benchmarks • Full install guide
A Claude Code skill/plugin (also Codex, Gemini, Cursor, Windsurf, Cline, Copilot, 30+ more) that makes agent talk like caveman → cuts ~75% of output tokens, keeps full technical accuracy. Brain still big. Mouth small.
Same fix. 75% less word. Brain still big.
```
┌──────────────────────────────────────┐
│ TOKENS SAVED        ████████   75%   │
│ TECHNICAL ACCURACY  ████████  100%   │
│ SPEED INCREASE      ████████   ~3x   │
│ VIBES               ████████   OOG   │
└──────────────────────────────────────┘
```
Pick your level of grunt β lite (drop filler), full (default caveman), ultra (telegraphic), or wenyan (classical Chinese, even shorter). One command switch. Cost go down forever.
One line. Find every agent. Install for each.
```
# macOS / Linux / WSL / Git Bash
curl -fsSL https://raw.githubusercontent.com/JuliusBrussee/caveman/main/install.sh | bash

# Windows (PowerShell 5.1+)
irm https://raw.githubusercontent.com/JuliusBrussee/caveman/main/install.ps1 | iex
```

~30 seconds. Needs Node ≥18. Skip agent you no have. Safe to re-run.
Trigger: type /caveman or say "talk like caveman". Stop with "normal mode".
One agent only, manual command, or any of 30+ other agents → INSTALL.md. Install break? Open agent, say "Read CLAUDE.md and INSTALL.md, install caveman for me." Agent fix own brain.
| Skill | What |
|---|---|
| `/caveman [lite\|full\|ultra\|wenyan]` | Compress every reply. Levels stick until session end. |
| `/caveman-commit` | Conventional Commit messages, ≤50 char subject. Why over what. |
| `/caveman-review` | One-line PR comments: `L42: 🔴 bug: user null. Add guard.` |
| `/caveman-stats` | Real session token usage + lifetime savings + USD. Tweetable line via `--share`. |
| `/caveman-compress <file>` | Rewrite memory file (e.g. CLAUDE.md) into caveman-speak. Cuts ~46% input tokens every session. Code/URLs/paths byte-preserved. |
| `caveman-shrink` | MCP middleware. Wraps any MCP server, compresses tool descriptions. On npm. |
| `cavecrew-*` | Caveman subagents (investigator/builder/reviewer). ~60% fewer tokens than vanilla, main context lasts longer. |
Statusline badge: Claude Code shows `[CAVEMAN] ↓ 12.4k` (lifetime tokens saved). Updates every `/caveman-stats` run. Set `CAVEMAN_STATUSLINE_SAVINGS=0` to silence.
Auto-activate every session: Claude Code, Codex, Gemini (built-in). Cursor / Windsurf / Cline / Copilot get always-on rule files via --with-init. Other agents trigger with /caveman per session. Full feature matrix in INSTALL.md.
Real token counts from the Claude API. Average 65% output reduction across 10 prompts (range 22-87%).
| Task | Normal | Caveman | Saved |
|---|---|---|---|
| Explain React re-render bug | 1180 | 159 | 87% |
| Fix auth middleware token expiry | 704 | 121 | 83% |
| Set up PostgreSQL connection pool | 2347 | 380 | 84% |
| Explain git rebase vs merge | 702 | 292 | 58% |
| Refactor callback to async/await | 387 | 301 | 22% |
| Architecture: microservices vs monolith | 446 | 310 | 30% |
| Review PR for security issues | 678 | 398 | 41% |
| Docker multi-stage build | 1042 | 290 | 72% |
| Debug PostgreSQL race condition | 1200 | 232 | 81% |
| Implement React error boundary | 3454 | 456 | 87% |
| Average | 1214 | 294 | 65% |
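Note the 65% average is the mean of the per-task percentages, not the ratio of the token totals; pooling the totals gives the ~75% headline number. A quick shell check of both, using the numbers from the table above:

```shell
# Mean of the per-task "Saved" percentages from the benchmark table
total=0; n=0
for s in 87 83 84 58 22 30 41 72 81 87; do
  total=$((total + s)); n=$((n + 1))
done
echo "mean per-task saving: $((total / n))%"             # 64% (integer math, ~65%)

# Pooled ratio of the Average row token totals (1214 normal, 294 caveman)
echo "pooled saving: $(( (1214 - 294) * 100 / 1214 ))%"  # 75%
```

Both are honest numbers; they just weight the tasks differently.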
Raw data and reproduction script: benchmarks/. Three-arm eval harness (baseline / terse / skill) lives in evals/: caveman is compared against "Answer concisely.", not against the verbose default, so the delta is honest.
`caveman-compress` receipts (real memory files):

| File | Original (tokens) | Compressed (tokens) | Saved |
|---|---|---|---|
| `claude-md-preferences.md` | 706 | 285 | 59.6% |
| `project-notes.md` | 1145 | 535 | 53.3% |
| `claude-md-project.md` | 1122 | 636 | 43.3% |
| `todo-list.md` | 627 | 388 | 38.1% |
| `mixed-with-code.md` | 888 | 560 | 36.9% |
| Average | 898 | 481 | 46% |
> [!IMPORTANT]
> Caveman only affects output tokens; thinking/reasoning tokens untouched. Caveman no make brain smaller. Caveman make mouth smaller. Biggest win is readability and speed, cost savings a bonus.
A March 2026 paper "Brevity Constraints Reverse Performance Hierarchies in Language Models" found that constraining large models to brief responses improved accuracy by 26 points on certain benchmarks. Verbose not always better. Sometimes less word = more correct.
- Install drop skill file in agent.
- Skill tell agent: drop filler, keep substance, use fragments.
- For Claude Code, hook also write tiny flag file each session → agent see flag, talk caveman from message one. No need say `/caveman`.
- Stats command read Claude Code session log, count tokens saved, write number to statusline.
- Caveman-compress sub-skill rewrite memory files (CLAUDE.md, project notes) so each session start with smaller context. Save tokens forever, not just one reply.
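The flag-file step in the list above can be sketched like this. The path, file name, and level value are illustrative, not the real hook implementation:

```shell
# Hypothetical sketch of the session-start flag idea.
# Demo writes to a temp dir; the real hook uses its own path.
FLAG_DIR="$(mktemp -d)/caveman-demo"
mkdir -p "$FLAG_DIR"

echo "full" > "$FLAG_DIR/active"          # hook: session start, record level

if [ -f "$FLAG_DIR/active" ]; then        # skill: flag present?
  echo "caveman mode: $(cat "$FLAG_DIR/active")"   # prints: caveman mode: full
fi
```

Agent see flag before first message, so no per-session trigger needed.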
Maintainer details (hook architecture, file ownership, CI sync) live in CLAUDE.md.
OpenClaw the self-host gateway. One box, many agent inside (Claude Code, Codex, Pi, OpenCode), wired to your Slack / Discord / iMessage / Telegram / whatever. Tagline: "The lobster way." Lobster strong. Lobster smart. Lobster also talk a lot.
Caveman teach lobster brevity → same canonical installer, scoped to one agent:
```
# macOS / Linux / WSL
curl -fsSL https://raw.githubusercontent.com/JuliusBrussee/caveman/main/install.sh | bash -s -- --only openclaw

# Windows (PowerShell): no Node? install Node ≥18 first, then
npx -y github:JuliusBrussee/caveman -- --only openclaw
```

Two thing happen, no more:
- Skill drop at `~/.openclaw/workspace/skills/caveman/SKILL.md` → spec-correct frontmatter (`version`, `always: true`), discoverable by `openclaw skills list`. Skill not auto-inject (OpenClaw load skill on demand) → that why we also do step 2.
- SOUL.md nudge. Tiny marker-fenced block appended to `~/.openclaw/workspace/SOUL.md`. OpenClaw inject SOUL.md into every turn under "Project Context" (12K-per-file, 60K total → block well under). Lobster terse from message one. No `/caveman` per session. No nag.
```
~/.openclaw/workspace/
├── skills/caveman/SKILL.md   ← full ruleset, on-demand load
└── SOUL.md                   ← <!-- caveman-begin --> ... <!-- caveman-end -->
                                auto-inject every turn
```
Custom workspace path? Set `OPENCLAW_WORKSPACE=/your/path` before the command. Uninstall: same one-liner with `--uninstall` → skill folder gone, SOUL.md block ripped out cleanly, your other workspace content stay untouched. Idempotent re-runs (frontmatter not double-prepended, marker block not duplicated).
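A minimal sketch of the idempotent marker-block idea, assuming a simple grep-before-append check. This is not the actual installer code, and the rule text between the markers is a placeholder:

```shell
# Demo workspace in a temp dir; the real path is ~/.openclaw/workspace
WORKSPACE="$(mktemp -d)/workspace"
mkdir -p "$WORKSPACE"
SOUL="$WORKSPACE/SOUL.md"
touch "$SOUL"

add_caveman_block() {
  # Append only when the begin marker is absent, so re-runs are no-ops
  grep -q '<!-- caveman-begin -->' "$SOUL" || printf '%s\n' \
    '<!-- caveman-begin -->' \
    'Talk caveman. Few word. Keep accuracy.' \
    '<!-- caveman-end -->' >> "$SOUL"
}

add_caveman_block
add_caveman_block                    # second run: marker found, nothing appended
grep -c 'caveman-begin' "$SOUL"      # prints 1
```

Uninstall is the inverse: delete everything between the two markers, leave the rest of SOUL.md alone.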
Lobster claw still sharp. Lobster mouth now small. Brain still big.
Three tools. One philosophy: agent do more with less.
| Repo | What |
|---|---|
| caveman (you here) | Output compression β why use many token when few do trick |
| cavemem | Cross-agent memory β why agent forget when agent can remember |
| cavekit | Spec-driven build loop β why agent guess when agent can know |
Compose: cavekit drive build, caveman compress what agent say, cavemem compress what agent remember. One rock. Two rock. Three rock. That it.
- INSTALL.md → full install matrix, all flags, per-agent detail
- CONTRIBUTING.md → how to send patch
- CLAUDE.md → maintainer guide (file ownership, hook architecture, CI)
- docs/ → extra guides (Windows install, etc.)
- Issues → bug, feature, weird behavior
Caveman save you token, save you money. Star cost zero. Fair trade. ⭐
- Revu β local-first macOS study app with FSRS spaced repetition. revu.cards
MIT → free like mammoth on open plain.