Generate jj (Jujutsu) commit descriptions automatically using LLMs.
English | 日本語
- 🤖 Automatically generates meaningful commit descriptions from diffs using LLMs
- 📦 Batch processing: Process multiple commits at once with revset targeting
- 🔄 Works seamlessly with jj's undo workflow (no confirmation prompts needed)
- 🎯 Supports multiple LLM providers: OpenRouter, OpenAI, Anthropic, Gemini
- 🔌 Custom endpoint support (Azure OpenAI, Ollama, LM Studio, etc.)
- 🔍 Preview mode with `--dry-run`
- 💬 Interactive mode for reviewing each description before applying
- 🎚️ Flexible targeting with jj revset syntax
- 📝 Follows Conventional Commits format
- 🔀 Handles merge commits automatically (no LLM call needed for empty merge commits)
- ⚡ Optimized for large diffs: automatically excludes lock files and simplifies binary files
- 🎛️ Customizable file exclusions with the `--exclude` option
The recommended way to install on macOS or Linux:
```shell
brew install tumf/tap/jj-desc
```

Download pre-built binaries for your platform from the latest release.
Using installer script (recommended):
```shell
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/tumf/jj-desc/releases/latest/download/jj-desc-installer.sh | sh
```

Manual download:
- Apple Silicon (M1/M2/M3): `jj-desc-aarch64-apple-darwin.tar.xz`
- Intel: `jj-desc-x86_64-apple-darwin.tar.xz`
```shell
# Example for Apple Silicon
curl -LO https://github.com/tumf/jj-desc/releases/latest/download/jj-desc-aarch64-apple-darwin.tar.xz
tar xf jj-desc-aarch64-apple-darwin.tar.xz
sudo mv jj-desc /usr/local/bin/
```

Using installer script (recommended):
```shell
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/tumf/jj-desc/releases/latest/download/jj-desc-installer.sh | sh
```

Manual download:
```shell
# Example for x86_64
curl -LO https://github.com/tumf/jj-desc/releases/latest/download/jj-desc-x86_64-unknown-linux-gnu.tar.xz
tar xf jj-desc-x86_64-unknown-linux-gnu.tar.xz
sudo mv jj-desc /usr/local/bin/
```

Using PowerShell installer (recommended):
```shell
powershell -ExecutionPolicy Bypass -c "irm https://github.com/tumf/jj-desc/releases/latest/download/jj-desc-installer.ps1 | iex"
```

Manual download:
Download `jj-desc-x86_64-pc-windows-msvc.zip` and extract `jj-desc.exe` to a directory in your `PATH`.
If you have Rust installed:
```shell
cargo install --git https://github.com/tumf/jj-desc
```

Or build from a local clone:
```shell
git clone https://github.com/tumf/jj-desc
cd jj-desc
cargo install --path .
```

- `jj` installed and available in `PATH`
- API key for your chosen LLM provider
- For building from source: Rust 1.85+ (Edition 2024)
Choose your LLM provider using the `LLM_PROVIDER` environment variable or the `--provider` CLI option:
```shell
export LLM_PROVIDER=openai  # Options: openrouter, openai, anthropic, gemini
```

| Variable | Required | Default | Description |
|---|---|---|---|
| `LLM_PROVIDER` | ❌ | `openrouter` | LLM provider to use |
| `LLM_MODEL` | ❌ | (provider default) | Override the model |
| Provider | Environment Variable | Get API Key |
|---|---|---|
| OpenRouter | `OPENROUTER_API_KEY` | OpenRouter |
| OpenAI | `OPENAI_API_KEY` | OpenAI Platform |
| Anthropic | `ANTHROPIC_API_KEY` | Anthropic Console |
| Gemini | `GEMINI_API_KEY` | Google AI Studio |
Override the default API endpoint for custom setups:
| Provider | Environment Variable | Default Value |
|---|---|---|
| OpenRouter | `OPENROUTER_BASE_URL` | `https://openrouter.ai/api/v1` |
| OpenAI | `OPENAI_BASE_URL` | `https://api.openai.com/v1` |
| Anthropic | `ANTHROPIC_BASE_URL` | `https://api.anthropic.com` |
| Gemini | `GEMINI_BASE_URL` | `https://generativelanguage.googleapis.com/v1beta/openai` |
| Provider | Default Model |
|---|---|
| OpenRouter | `anthropic/claude-sonnet-4` |
| OpenAI | `gpt-4o` |
| Anthropic | `claude-sonnet-4-20250514` |
| Gemini | `gemini-2.0-flash` |
```shell
# OpenRouter (default provider)
export OPENROUTER_API_KEY="your-api-key-here"
```

```shell
# OpenAI
export LLM_PROVIDER=openai
export OPENAI_API_KEY="sk-..."
```

```shell
# Anthropic
export LLM_PROVIDER=anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
```

```shell
# Gemini
export LLM_PROVIDER=gemini
export GEMINI_API_KEY="..."
```

```shell
# Azure OpenAI (custom endpoint)
export LLM_PROVIDER=openai
export OPENAI_API_KEY="your-azure-key"
export OPENAI_BASE_URL="https://your-resource.openai.azure.com/openai/deployments/your-deployment"
export LLM_MODEL="gpt-4"
```

```shell
# Ollama
export LLM_PROVIDER=openai
export OPENAI_API_KEY="dummy"  # Ollama doesn't require a key
export OPENAI_BASE_URL="http://localhost:11434/v1"
export LLM_MODEL="llama2"
```

```shell
# LM Studio
export LLM_PROVIDER=openai
export OPENAI_API_KEY="dummy"  # LM Studio doesn't require a key
export OPENAI_BASE_URL="http://localhost:1234/v1"
export LLM_MODEL="your-model-name"
```

For permanent setup, add these to your shell configuration (`~/.bashrc`, `~/.zshrc`, etc.).
By default, jj-desc generates descriptions for all mutable commits without descriptions in `::@ & mutable()`:

```shell
jj-desc
```

Generate and apply a description for the current working copy:

```shell
jj-desc -r @
```

Target specific revisions using jj's revset syntax:
```shell
# Process your own commits
jj-desc -r "mine()"

# Process commits in a specific range
jj-desc -r "@..main"

# Process a single specific revision
jj-desc -r @-

# Limit the number of commits to process
jj-desc -n 5

# Preview before applying
jj-desc --dry-run

# Interactive mode - confirm each description
jj-desc -i
```

See what description would be generated without applying it:
```shell
jj-desc --dry-run
```

Override the provider:

```shell
jj-desc --provider openai
jj-desc --provider anthropic
```

Override the default model:

```shell
jj-desc --model gpt-4o
# or
jj-desc --model anthropic/claude-3.5-sonnet
```

Enable detailed logging for debugging:

```shell
jj-desc --verbose
```

Or use the `RUST_LOG` environment variable:

```shell
RUST_LOG=debug jj-desc
```

Exclude specific files or patterns from the diff sent to the LLM:
```shell
# Exclude specific files
jj-desc --exclude "*.json" --exclude "*.yaml"

# Short form
jj-desc -x "docs/*" -x "*.lock"
```

Automatically excluded files:

- Lock files: `Cargo.lock`, `package-lock.json`, `pnpm-lock.yaml`, `yarn.lock`, `*.lock`, `*.lockb`
- Binary files are automatically simplified to `"Binary file {path} changed"`
Why exclude files?
- Reduces token usage and costs
- Prevents API errors from exceeding context limits
- Improves description quality by focusing on meaningful changes
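Conceptually, the lock-file exclusion amounts to dropping whole per-file sections from a unified diff before it is sent to the LLM. The following is a stand-alone sketch of that idea, not jj-desc's actual implementation (the sample diff and the simplified lock-file pattern are illustrative assumptions):

```shell
# Build a tiny sample diff with one source file and one lock file.
cat > /tmp/sample.diff <<'EOF'
diff --git a/src/main.rs b/src/main.rs
+fn hello() {}
diff --git a/Cargo.lock b/Cargo.lock
+version = "1.2.3"
EOF

# On each "diff --git" header, decide whether to skip lines until the
# next header; sections whose path looks like a lock file are dropped.
awk '/^diff --git/ { skip = ($0 ~ /\.lock|-lock\./) } !skip' /tmp/sample.diff
# Prints only the src/main.rs section; the Cargo.lock section is removed.
```

jj-desc applies this kind of filtering internally, so you only need `--exclude` for patterns beyond the built-in defaults.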
Large diff warning: If your diff exceeds 50KB after filtering, you'll see a warning:
```
⚠ Warning: Diff is large (75000 bytes, 3500 lines)
Consider splitting into smaller commits.
```
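The check behind the warning is a simple size threshold. A stand-alone sketch of the same test (the 50 KB figure is the one stated above; the stand-in diff file is an illustrative assumption, and jj-desc performs this check internally):

```shell
# Hypothetical stand-in for the filtered diff text.
printf '+fn hello() {}\n' > /tmp/demo.diff

# Warn when the diff exceeds the 50 KB (51200-byte) threshold.
bytes=$(wc -c < /tmp/demo.diff)
lines=$(wc -l < /tmp/demo.diff)
if [ "$bytes" -gt 51200 ]; then
  echo "warning: diff is large ($bytes bytes, $lines lines)"
else
  echo "diff size OK"
fi
```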
```
Usage: jj-desc [OPTIONS]

Options:
      --dry-run                    Preview the generated descriptions without applying them
      --provider <PROVIDER>        LLM provider to use (openrouter, openai, anthropic, gemini)
      --model <MODEL>              Override the LLM model to use
      --max-tokens <MAX_TOKENS>    Maximum tokens for LLM response
      --temperature <TEMPERATURE>  Temperature for LLM response (0.0-2.0)
  -r, --revisions <REVISIONS>      Revset to select target commits [default: "::@ & mutable()"]
  -n, --limit <LIMIT>              Maximum number of commits to process
  -i, --interactive                Ask for confirmation before applying each description
  -x, --exclude <EXCLUDE>          Files to exclude from diff (can be specified multiple times)
  -v, --verbose                    Enable verbose logging
  -h, --help                       Print help
  -V, --version                    Print version
```
```shell
# Make some changes
echo "fn hello() {}" >> lib.rs

# Generate description for current working copy
jj-desc

# Output:
# Applied description:
# ─────────────────────
# feat: add hello function
```

```shell
jj-desc --dry-run

# Output:
# Generated description (not applied):
# ─────────────────────
# feat(auth): add JWT authentication
```

```shell
# Fill descriptions for all mutable commits without descriptions
jj-desc

# Output:
# Found 3 commit(s) without descriptions
# Processing 3 commit(s)
#
# Processing: 1/3 (33%)
# Commit: abc123def456
# Generated description:
# feat(auth): add authentication endpoint
# ✓ Description applied
#
# Processing: 2/3 (66%)
# Commit: def456ghi789
# Generated description:
# fix(auth): fix validation bug in login form
# ✓ Description applied
#
# Processing: 3/3 (100%)
# Commit: ghi789jkl012
# Generated description:
# chore(deps): update dependencies
# ✓ Description applied
#
# ═══════════════════════
# Summary:
# Success: 3
# Skipped: 0
# Failed: 0
# ═══════════════════════
```

```shell
jj-desc -i -r "mine()"

# For each commit, you'll see:
# Processing: 1/5 (20%)
# Commit: abc123
# Generated description:
# feat(auth): add user authentication
#
# Full description:
# ─────────────────────
# feat(auth): add JWT authentication
#
# Implements login and logout endpoints with secure
# token generation and validation.
# ─────────────────────
# Accept (a) / Skip (s) / Quit (q): a
# ✓ Description applied
```

- Runs `jj diff` to get the current changes
- Filters the diff to optimize for LLM processing:
  - Automatically excludes lock files (`Cargo.lock`, `package-lock.json`, etc.)
  - Simplifies binary files to `"Binary file {path} changed"`
  - Applies user-specified exclusions via `--exclude`
  - Warns if diff exceeds 50KB
- If the filtered diff is empty:
  - Checks if it's a merge commit (using `jj log -T 'parents.len()'`)
  - If yes, sets description to "Merge commit" without calling LLM
  - If no, returns an error
- If the diff is not empty, sends it to your chosen LLM provider API with a specialized prompt
- Applies the generated description using `jj desc -m`
jj often marks merge commits as "empty" because they don't introduce new changes themselves (see jj FAQ). jj-desc detects merge commits automatically and sets an appropriate description without requiring LLM API calls.
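The empty-diff decision described above can be sketched as a small shell function. This is an illustration of the documented behavior, not jj-desc's actual code; the function name `decide` and its inputs (the filtered diff text and the parent count reported by `jj log -T 'parents.len()'`) are assumptions for the sketch:

```shell
# Decide what to do for one commit, given its filtered diff text
# and its parent count.
decide() {
  filtered_diff=$1
  parent_count=$2
  if [ -z "$filtered_diff" ]; then
    if [ "$parent_count" -ge 2 ]; then
      # Merge commit: described without any LLM API call.
      echo "set description: Merge commit (no LLM call)"
    else
      # Empty non-merge commit: nothing to describe.
      echo "error: empty diff, nothing to describe"
    fi
  else
    # Normal commit: diff goes to the LLM, result is applied.
    echo "send diff to LLM, then apply via jj desc -m"
  fi
}

decide "" 2                 # merge commit path
decide "" 1                 # empty-diff error path
decide "+fn hello() {}" 1   # normal path
```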
You can integrate jj-desc into your push workflow by adding a jj alias. This runs jj-desc automatically before every push:
```shell
# Edit your jj config
jj config edit --user
```

Add the following alias:
```toml
[aliases]
push = ["util", "exec", "--", "bash", "-c", """
set -e
# Generate descriptions for commits without them (if jj-desc is available)
command -v jj-desc &> /dev/null && jj-desc
# Run pre-commit checks if config exists
[ ! -f .pre-commit-config.yaml ] || pre-commit run --all-files
# Push
jj git push \"$@\"
""", ""]
```

Now `jj push` will:
- Auto-generate descriptions for commits without them
- Run pre-commit checks (if configured)
- Push to remote
To bypass it, use `jj git push` directly.
Unlike git, jj makes it extremely easy to undo any operation:
- `jj undo` - Undo the last operation
- `jj op log` - View operation history
- All changes are recoverable
This design philosophy means we can safely apply descriptions immediately, making the workflow faster and more streamlined.
MIT
Contributions are welcome! Please see CONTRIBUTING.md for development setup, coding guidelines, and how to submit changes.
The previous version only supported OpenRouter. The new version maintains full backward compatibility:
- Existing `OPENROUTER_API_KEY` environment variable continues to work
- Existing `OPENROUTER_MODEL` environment variable continues to work
- No changes needed to your configuration
To take advantage of the new providers, simply set `LLM_PROVIDER` and the appropriate API key.
