# devlog

AI-powered development log generator for git repositories.

## Troubleshooting

Common issues and their solutions.
### `devlog: command not found`

Cause: binary not in system `PATH`.

Solutions:

```bash
# Run the binary directly from the build directory
./target/release/devlog

# Or install it into ~/.cargo/bin
cargo install --path .

# ...and make sure ~/.cargo/bin is on your PATH
export PATH="$HOME/.cargo/bin:$PATH"
```

### Build fails

Cause: missing system dependencies.

Solutions:
```bash
# Ubuntu/Debian
sudo apt-get install build-essential libssl-dev pkg-config

# macOS
xcode-select --install

# Fedora/RHEL
sudo dnf install gcc openssl-devel
```
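If the dependencies are installed but `devlog` is still not found, the usual culprit is `PATH`. A minimal POSIX-sh check (a sketch, not part of devlog) for whether `~/.cargo/bin` is actually on your `PATH`:

```shell
#!/bin/sh
# path_contains PATHSTRING DIR -> prints "yes" if DIR is an entry of PATHSTRING
path_contains() {
    case ":$1:" in
        *":$2:"*) echo yes ;;
        *)        echo no  ;;
    esac
}

# Example: is ~/.cargo/bin on the current PATH?
path_contains "$PATH" "$HOME/.cargo/bin"
```

If it prints `no`, apply the `export PATH=...` fix above and re-check.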
### Cannot connect to Ollama

Check if Ollama is running:

```bash
curl http://localhost:11434/api/tags
```

Start Ollama:

```bash
ollama serve
```

Check firewall: ensure port 11434 is not blocked.
### Model not found

Pull the model:

```bash
ollama pull llama3.2
```

List available models:

```bash
ollama list
```
### llama.cpp server not responding

Check the server is running:

```bash
curl http://localhost:8080/health
```

Start the llama.cpp server:

```bash
./server -m models/your-model.gguf -c 2048
```
### Tag not found

Check that the tags exist:

```bash
git tag -l
```

Use commit hashes instead:

```bash
devlog --from abc1234 --to def5678
```

Check you're in a git repo:

```bash
git status
```
### Not a git repository

Ensure you're in the git directory:

```bash
cd /path/to/git/repo
devlog --repo .
```

Check that git is initialized and has history:

```bash
git log --oneline
```
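The repository check can also be scripted. A small sketch (not part of devlog; assumes git 1.8.5+ for the `-C` option):

```shell
#!/bin/sh
# is_git_repo DIR -> prints "yes" if DIR is inside a git work tree, else "no"
is_git_repo() {
    if git -C "$1" rev-parse --is-inside-work-tree >/dev/null 2>&1; then
        echo yes
    else
        echo no
    fi
}

# Example: check the current directory before invoking devlog
is_git_repo .
```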
### OpenAI API key not set

Set the environment variable:

```bash
export OPENAI_API_KEY="sk-..."
```

Or run without the variable and enter the key when prompted (not recommended):

```bash
devlog --llm openai --llm-model gpt-4
# Will prompt for key
```
### Anthropic API key not set

Set the API key:

```bash
export ANTHROPIC_API_KEY="sk-ant-..."
```

Verify the key is set:

```bash
echo $ANTHROPIC_API_KEY
```
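Note that `echo` prints the secret into your terminal scrollback. A sketch of a safer check that reports whether the variable is set and has the expected prefix without revealing it (the `sk-ant-` prefix is an assumption about the current Anthropic key format):

```shell
#!/bin/sh
# check_key KEY -> classifies a key without printing the secret itself
check_key() {
    if [ -z "$1" ]; then
        echo "missing"
    elif [ "${1#sk-ant-}" != "$1" ]; then
        echo "set (sk-ant- prefix)"
    else
        echo "set, but unexpected prefix"
    fi
}

# Example: check the variable devlog reads
check_key "${ANTHROPIC_API_KEY:-}"
```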
### Generation is slow

Use a smaller model:

```bash
# Instead of a large model such as llama3.2:70b, use the default smaller tag:
ollama pull llama3.2
devlog --llm ollama --llm-model llama3.2
```

Limit the commit range:

```bash
devlog --limit 50  # Instead of full history
```

Use plain mode for quick results:

```bash
devlog --from v1.0.0 --to v2.0.0  # No LLM
```
### Out of memory

Increase system memory if possible.

Use llama.cpp with a smaller quantized model (e.g. Q4 instead of F16).

Process in smaller batches:

```bash
# Split into smaller ranges
devlog --from v1.0.0 --to v1.5.0
devlog --from v1.5.0 --to v2.0.0
```
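The batching above can be scripted. A small sketch (`batch_ranges` is a hypothetical helper, not a devlog feature; the tag list is illustrative) that prints one `devlog` command per consecutive tag pair:

```shell
#!/bin/sh
# batch_ranges TAG1 TAG2 [TAG3 ...] -> prints one devlog command per
# consecutive tag pair, oldest tag first
batch_ranges() {
    prev=$1; shift
    for tag in "$@"; do
        echo "devlog --from $prev --to $tag"
        prev=$tag
    done
}

# Example: split v1.0.0..v2.0.0 at v1.5.0 (pipe the output to sh to run it)
batch_ranges v1.0.0 v1.5.0 v2.0.0
```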
### Debugging

Check for errors:

```bash
devlog --from v1.0.0 --to v2.0.0 2>&1 | tee debug.log
```

Enable debug logging:

```bash
export RUST_LOG="devlog=debug"
devlog --from v1.0.0 --to v2.0.0
```
### Output issues

Specify an output file:

```bash
devlog --output CHANGELOG.md
```

Check terminal encoding: ensure your terminal and locale support UTF-8.
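To check the locale from a script, a minimal sketch (`is_utf8` is a hypothetical helper; it only inspects the locale string, not actual terminal capabilities):

```shell
#!/bin/sh
# is_utf8 LOCALE -> prints "yes" if the locale string advertises UTF-8
is_utf8() {
    case "$1" in
        *UTF-8*|*utf8*|*UTF8*) echo yes ;;
        *)                     echo no  ;;
    esac
}

# Example: check the effective locale (LC_ALL overrides LANG)
is_utf8 "${LC_ALL:-${LANG:-}}"
```

If it prints `no`, try `export LANG=en_US.UTF-8` (or any UTF-8 locale installed on your system).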
### Code privacy concerns

Use stricter privacy mode:

```bash
devlog --llm openai --privacy-level strict --dry-run
```

Or only use local LLMs for sensitive code:

```bash
devlog --llm ollama --diff-analysis
```

### Privacy mode too strict

Use moderate mode:

```bash
devlog --llm openai --privacy-level moderate
```

Or use relaxed mode with a local LLM:

```bash
devlog --llm ollama --privacy-level relaxed
```
## Still having issues?

Documentation: GitHub Pages | GitLab Pages
When reporting issues, include:

- Rust version (`rustc --version`)
- devlog version (`devlog --version`)
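A hypothetical helper script to gather this information in one place (the field list and the `bug-report.txt` filename are illustrative):

```shell
#!/bin/sh
# Collect version info for a bug report; tools that are absent are noted
# rather than aborting the script.
ver() {
    "$@" 2>/dev/null || echo "not available"
}

{
    printf 'devlog: '; ver devlog --version
    printf 'rustc:  '; ver rustc --version
    printf 'git:    '; ver git --version
    printf 'os:     '; ver uname -srm
} > bug-report.txt

cat bug-report.txt
```

Attach `bug-report.txt` (after checking it for anything sensitive) to the issue.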