Code analysis CLI - reviews, bugs, docs, and refactoring from your terminal.
CodeSight sends your code to LLMs (OpenAI, Anthropic, Google Vertex AI, Ollama, or any OpenAI-compatible endpoint) with structured prompts for code review, bug detection, security analysis, documentation, and refactoring. Multi-provider, configurable, works with any language.
- `codesight review` - code review with severity-tagged issues (crit/warn/info)
- `codesight bugs` - find logic errors, race conditions, resource leaks
- `codesight security` - security audit with CWE IDs and OWASP mapping
- `codesight scan .` - scan an entire directory with progress bar
- `codesight docs` - auto-generate docstrings and module docs
- `codesight explain` - plain-language breakdown of complex code
- `codesight refactor` - refactoring suggestions with before/after diffs
```bash
# Install
pip install codesight

# Configure your provider
codesight config

# Run a review
codesight review src/main.py

# Detect bugs
codesight bugs lib/parser.py

# Scan a whole project
codesight scan . --task review
codesight scan src/ --ext .py .js

# Generate docs
codesight docs utils/helpers.py
```

| Provider | Models | Setup |
|---|---|---|
| OpenAI | GPT-5.4, GPT-5.3-Codex | `OPENAI_API_KEY` |
| Anthropic | Claude Opus 4.6, Claude Sonnet 4.6 | `ANTHROPIC_API_KEY` |
| Google Vertex AI | Gemini 3.1 Pro, Gemini 3.1 Flash | `GOOGLE_CLOUD_PROJECT` + ADC |
| Ollama (local) | Llama 3, CodeLlama, Mistral, etc. | just run `ollama serve` |
| Custom (OpenAI-compatible) | OpenRouter, Groq, Together AI, Mistral, xAI (Grok), Fireworks, DeepSeek, Perplexity, Cerebras, Cohere, Azure AI Foundry, or any OpenAI-compatible URL | `codesight config` -> Custom, or `base_url` + API key in `~/.codesight/config.json` |
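
The Custom row above stores a `base_url` and API key in `~/.codesight/config.json`. For orientation, a saved entry might look roughly like this (the field names here are an illustrative assumption, not a documented schema — check the file the `codesight config` wizard actually writes):

```json
{
  "default_provider": "openrouter",
  "providers": {
    "openrouter": {
      "base_url": "https://openrouter.ai/api/v1",
      "api_key": "sk-or-your-key-here",
      "model": "meta-llama/llama-3-70b-instruct"
    }
  }
}
```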
CodeSight stores its config in `~/.codesight/config.json`. You can configure it interactively:

```bash
codesight config
```

Or set environment variables:

```bash
export OPENAI_API_KEY="sk-..."
export CODESIGHT_MODEL="gpt-5.4"
codesight review my_file.py
```

Switch providers on the fly:
```bash
codesight review my_file.py --provider anthropic
codesight bugs my_file.py --provider google
codesight explain my_file.py --provider openai
codesight review my_file.py --provider ollama      # fully offline, no data leaves your machine
codesight review my_file.py --provider openrouter  # any OpenAI-compatible endpoint you saved in config
```

Custom OpenAI-compatible providers (OpenRouter, Groq, Together, Mistral, xAI, Fireworks, DeepSeek, Perplexity, Cerebras, Cohere, Azure AI Foundry) are set up through the wizard:

```bash
codesight config
# Select: Custom (OpenRouter / Groq / Together / any OpenAI-compat)
# Pick a preset or enter a custom base URL, save under a label (e.g. "openrouter")
codesight review my_file.py --provider openrouter
```

```
codesight/
├── cli.py                  # CLI entry point (argparse)
├── analyzer.py             # Core analysis engine
├── config.py               # Config management (~/.codesight/)
├── compression.py          # Context compression / code maps
├── streaming.py            # Streaming output (OpenAI, Anthropic, Ollama)
├── templates.py            # Custom prompt templates
├── pipeline.py             # Multi-model triage → verify pipeline
├── sarif.py                # SARIF output for CI/CD
├── benchmark.py            # LLM benchmark runner
├── cost.py                 # Token cost tracking
└── providers/
    ├── base.py
    ├── factory.py
    ├── openai_provider.py
    ├── anthropic_provider.py
    ├── google_provider.py
    ├── ollama_provider.py
    └── custom_provider.py  # OpenAI-compatible adapter (OpenRouter, Groq, Azure, etc.)
```
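
The `providers/` package layout suggests a standard adapter pattern: a base interface in `base.py`, one concrete class per vendor, and a factory keyed by the `--provider` name. A minimal sketch of that shape — class names, methods, and the registry here are hypothetical, not CodeSight's actual API:

```python
from abc import ABC, abstractmethod


class BaseProvider(ABC):
    """Common interface every backend adapter implements."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Send a prompt to the backend and return the model's text."""


class OllamaProvider(BaseProvider):
    def complete(self, prompt: str) -> str:
        # Real code would POST to the local Ollama HTTP API.
        return f"[ollama] {prompt[:20]}"


class OpenAICompatProvider(BaseProvider):
    """One adapter covers every OpenAI-compatible endpoint (OpenRouter, Groq, ...)."""

    def __init__(self, base_url: str, api_key: str):
        self.base_url, self.api_key = base_url, api_key

    def complete(self, prompt: str) -> str:
        # Real code would POST to {base_url}/chat/completions.
        return f"[{self.base_url}] {prompt[:20]}"


# Factory registry: --provider name -> constructor taking the saved config.
_REGISTRY = {
    "ollama": lambda cfg: OllamaProvider(),
    "custom": lambda cfg: OpenAICompatProvider(cfg["base_url"], cfg["api_key"]),
}


def make_provider(name: str, cfg: dict) -> BaseProvider:
    """Map a provider name to a concrete adapter, or fail loudly."""
    try:
        return _REGISTRY[name](cfg)
    except KeyError:
        raise ValueError(f"unknown provider: {name}") from None
```

Keeping one `OpenAICompatProvider` for all compatible endpoints is what lets a single config entry (label + `base_url` + key) cover a dozen vendors without new code.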
Drop a `.codesight.toml` (or `.codesight.json`) in your repo root to share settings across the team:

```toml
max_file_size_kb = 1000
ignore_patterns = ["build/", "migrations/", "*.generated.ts"]

[providers.anthropic]
model = "claude-opus-4-7"
```

Project config can only override benign fields: `output_format`, `language`, `max_file_size_kb`, `ignore_patterns`, and per-provider `model` / `project_id` / `region`. These are deliberately blocked from project files: `api_key` (stays in your keyring), `base_url` (a hostile repo could redirect requests to `https://attacker.tld`), and `default_provider` (a hostile repo could switch you from local Ollama to a paid cloud provider and burn your quota). Project-config discovery only runs inside `$HOME` and refuses to run at all when `$HOME` is unset (containers, reset-env CI).
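
The allowlist behaviour described above amounts to a filtered merge: copy the user config, overlay only permitted keys, and silently drop anything a repo tries to inject. A sketch under those assumptions — the function and key names are illustrative, not CodeSight's implementation:

```python
# Top-level keys a repo-local .codesight.toml may override.
ALLOWED = {"output_format", "language", "max_file_size_kb", "ignore_patterns", "providers"}
# Per-provider keys a project file may override; api_key/base_url are never merged.
PROVIDER_ALLOWED = ("model", "project_id", "region")


def merge_project_config(user_cfg: dict, project_cfg: dict) -> dict:
    """Overlay project settings on user settings, honoring the allowlist."""
    merged = dict(user_cfg)
    for key, value in project_cfg.items():
        if key not in ALLOWED:
            continue  # ignore api_key, base_url, default_provider, etc.
        if key == "providers":
            # Deep-copy existing provider entries, then apply safe overrides only.
            providers = {name: dict(opts) for name, opts in merged.get("providers", {}).items()}
            for name, overrides in value.items():
                target = providers.setdefault(name, {})
                for pkey in PROVIDER_ALLOWED:
                    if pkey in overrides:
                        target[pkey] = overrides[pkey]
            merged["providers"] = providers
        else:
            merged[key] = value
    return merged
```

With this shape, a hostile repo setting `default_provider` or a per-provider `base_url` simply has no effect: those keys never survive the merge, while harmless tuning like `max_file_size_kb` or a model pin does.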
Add CodeSight to your `.pre-commit-config.yaml`:

```yaml
repos:
  - repo: https://github.com/AvixoSec/codesight
    rev: v0.3.0
    hooks:
      - id: codesight-security   # or codesight-review / codesight-bugs
```

Then:

```bash
pre-commit install
```

CodeSight will run on staged files before every commit. It needs an API key already configured (`codesight config`) or an env var like `OPENAI_API_KEY` available at commit time.
```bash
git clone https://github.com/AvixoSec/codesight.git
cd codesight
pip install -e ".[dev]"
pytest tests/ -v
ruff check codesight/
```
- `codesight scan .` - analyze a whole directory
- Ollama support - fully offline analysis with local models
- `codesight security` - dedicated security audit with CWE IDs and OWASP mapping
- `codesight diff` - review only git-changed files
- SARIF output - standard format for GitHub Security tab
- Exit codes for CI/CD (0 = clean, 1 = warnings, 2 = critical)
- GitHub Action - auto-scan PRs with SARIF upload
- Multi-model pipeline - fast triage + deep verification
- Cost tracking per query
- `codesight benchmark` - test LLMs on vulnerable codebases
- Context compression - code maps to reduce token usage
- Streaming output for large files
- Custom prompt templates
- OpenAI-compatible providers (OpenRouter, Groq, Azure, 10+ presets)
- Publish to PyPI
- VS Code extension (scaffold)
- Pre-commit hook integration
- Per-project config (`.codesight.toml`)
- Gemini streaming via Vertex AI
- Cost pre-estimate (`codesight scan . --estimate`)
- i18n (English, Russian via `--lang ru` or `CODESIGHT_LANG=ru`)
- VS Code Marketplace publish
- Web dashboard
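
The CI exit-code convention in the list above (0 = clean, 1 = warnings, 2 = critical) boils down to reporting the highest severity found. A sketch of that mapping, reusing the crit/warn/info tags from `codesight review` — the helper itself is hypothetical, not CodeSight's code:

```python
# Map severity tags to the exit codes a CI pipeline keys off.
SEVERITY_EXIT = {"info": 0, "warn": 1, "crit": 2}


def exit_code(findings: list[str]) -> int:
    """Return the highest exit code implied by a list of severity tags."""
    return max((SEVERITY_EXIT.get(sev, 0) for sev in findings), default=0)


print(exit_code([]))                # 0: clean run, pipeline passes
print(exit_code(["info", "warn"]))  # 1: warnings only
print(exit_code(["warn", "crit"]))  # 2: at least one critical, fail hard
```

Taking the max (rather than, say, summing) keeps the code stable for CI gating: any single critical finding fails the build regardless of how many warnings accompany it.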
MIT - see LICENSE.