An AI-powered agent that analyzes multi-language codebases, identifies quality issues, and produces actionable reports with explanations, fix suggestions, prioritization, hotspots, and interactive Q&A over the codebase.
Features • Quick Start • Setup • Usage • AI Integration • Deployment • Contributing
- Explanation Video
- Architecture
- Features
- Quick Start
- Setup
- Usage
- AI Integration
- Web Interface
- CLI Commands
- Visualizations
- Deployment
- Contributing
- Troubleshooting
- License
- ✅ HD Quality: 1080p recording for clear visibility
- ✅ Screen Recording: Full desktop capture with annotations
- ✅ Audio Commentary: Clear explanations throughout
- ✅ Code Highlighting: Syntax highlighting for better understanding
- ✅ Interactive Elements: Live demonstrations of all features
- ✅ Multiple Scenarios: Different repository types analyzed
- YouTube: Watch on YouTube
- Live Project Link: coming soon ...
- Mobile Friendly: Optimized for mobile viewing
- Playback Controls: Pause, rewind, and speed control
"See how the Code Quality Intelligence Agent transforms complex codebases into actionable insights with AI-powered analysis, interactive visualizations, and intelligent recommendations."
- CLI: Watch live CLI demo
- Real-time Analysis: Watch live code analysis in action
- Interactive Dashboards: Explore dynamic visualizations
- AI Conversations: Experience natural language Q&A
- One-Click Deployment: See effortless setup process
- Comprehensive Reports: Generate detailed quality reports
```mermaid
graph TB
    A[User Interface] --> B[CLI/Web]
    B --> C[Analysis Engine]
    C --> D[Language Analyzers]
    C --> E[Quality Metrics]
    C --> F[Dependency Graph]
    C --> G[AI Integration]
    D --> H[Python Analyzers]
    D --> I[JS/TS Analyzers]
    E --> J[Hotspot Detection]
    E --> K[Issue Prioritization]
    F --> L[Network Analysis]
    F --> M[Hierarchy Building]
    G --> N[DeepSeek API]
    G --> O[Local LLM]
    G --> P[Hugging Face]
    C --> Q[Visualization Engine]
    Q --> R[Plotly Charts]
    Q --> S[Interactive Graphs]
    C --> T[Report Generation]
    T --> U[Markdown]
    T --> V[SARIF]
    T --> W[CSV]
```

- Python: Ruff, Bandit, Radon analysis
- JavaScript/TypeScript: ESLint with security plugins
- Smart Detection: Automatic language detection and appropriate tooling
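In practice, automatic language detection often reduces to mapping file extensions onto analyzer toolchains. A minimal sketch of that idea (the `detect_language` helper and its extension map are illustrative, not the agent's actual implementation):

```python
from pathlib import Path

# Hypothetical extension-to-toolchain map; the agent's real mapping may differ.
EXTENSION_MAP = {
    ".py":  ("python", ["ruff", "bandit", "radon"]),
    ".js":  ("javascript", ["eslint"]),
    ".jsx": ("javascript", ["eslint"]),
    ".ts":  ("typescript", ["eslint"]),
    ".tsx": ("typescript", ["eslint"]),
}

def detect_language(path: str):
    """Return (language, tools) for a file, or None if unsupported."""
    return EXTENSION_MAP.get(Path(path).suffix.lower())

print(detect_language("src/app.py"))     # ('python', ['ruff', 'bandit', 'radon'])
print(detect_language("web/index.tsx"))  # ('typescript', ['eslint'])
```

Unsupported files simply return `None` and are skipped, which keeps the analysis loop trivial.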
- DeepSeek Integration: Remote AI for advanced analysis
- Local LLM Fallback: Offline AI using Hugging Face models
- Smart Severity Scoring: AI-enhanced issue prioritization
- Conversational Q&A: Natural language codebase exploration
- Interactive Charts: Plotly-powered dashboards
- Dependency Graphs: Network analysis and hierarchy visualization
- Hotspot Analysis: Code complexity and churn heatmaps
- Trend Analysis: Quality metrics over time
- Incremental Caching: Smart file change detection
- Parallel Processing: Multi-threaded analysis
- Sampling Tiers: Efficient large repository handling
- Fast Mode: Optimized for 1000+ file repositories
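Incremental caching typically works by hashing file contents and re-analyzing only files whose hash changed since the last run. A minimal sketch under that assumption (function names are illustrative):

```python
import hashlib

def file_digest(content: bytes) -> str:
    """Stable content fingerprint for change detection."""
    return hashlib.sha256(content).hexdigest()

def changed_files(files: dict, cache: dict) -> list:
    """Return paths whose content hash differs from the cached hash."""
    return [path for path, content in files.items()
            if cache.get(path) != file_digest(content)]

# Simulated previous run: only a.py was seen before, and it is unchanged.
cache = {"a.py": file_digest(b"print('hi')")}
files = {"a.py": b"print('hi')", "b.py": b"x = 1"}
print(changed_files(files, cache))  # ['b.py'] -- only the new file needs analysis
```

On large repositories this is what lets repeat runs finish in a fraction of the initial analysis time.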
- Modern Web UI: Streamlit with glassmorphism design
- CLI Interface: Command-line tool for CI/CD integration
- Autofix Capabilities: Safe automated code improvements
- Export Options: Markdown, SARIF, CSV reports
- Python 3.11 or higher
- Git (for repository analysis)
- 4GB+ RAM (for AI features)
```bash
# Navigate to Code-Quality-Agent directory
cd Code-Quality-Agent

# Run the installation script
python install_global.py

# OR manually install
pip install -e .
```

After installation, you can use cq-agent from ANY directory!
```bash
# From any project directory
cd /path/to/your/project
cq-agent analyze .
```

For detailed installation instructions, troubleshooting, and different installation methods, see INSTALL.md.
```bash
# For AI features (optional)
pip install transformers torch

# Launch the modern web interface
streamlit run src/cq_agent/web/app.py

# Or use the demo script
python demo_ui.py

# Windows: If you get connection errors, use localhost explicitly
streamlit run src/cq_agent/web/app.py --server.address localhost --server.port 8501
```

Access the web interface:
- Windows/Local: http://localhost:8501 or http://127.0.0.1:8501
- Linux/Mac: http://localhost:8501 or http://0.0.0.0:8501
Open your browser to: http://localhost:8501
```
code-quality-agent/
├── src/cq_agent/
│   ├── analyzers/         # Code analysis engines
│   ├── ai/                # AI integration modules
│   ├── cli/               # Command-line interface
│   ├── graph/             # Dependency analysis
│   ├── metrics/           # Quality metrics
│   ├── qa/                # Q&A and search
│   ├── reporting/         # Report generation
│   ├── visualizations/    # Chart and graph creation
│   └── web/               # Streamlit web interface
├── assignment-docs/       # Project documentation
├── pyproject.toml         # Dependencies and metadata
└── README.md              # This file
```

Create a `.env` file in the project root:
```bash
# AI Configuration (Optional)
DEEPSEEK_API_KEY=your_deepseek_api_key_here
HF_TOKEN=your_huggingface_token_here
HUGGINGFACEHUB_API_TOKEN=your_hf_inference_token_here

# Performance Settings
MAX_FILES=1000
WORKER_THREADS=4
```

```bash
# Essential packages (auto-installed)
pip install streamlit pandas plotly numpy
pip install gitpython pathlib typing-extensions

# For local LLM support
pip install transformers torch

# For enhanced semantic search
pip install faiss-cpu sentence-transformers

# For advanced AI agents
pip install langchain langchain-community

# Python analysis (auto-installed)
pip install ruff bandit radon

# JavaScript/TypeScript analysis (optional)
npm install -g eslint
```

The Streamlit web interface provides a modern, interactive experience:
- Repository Path: Select or enter your codebase location
- File Limits: Configure analysis scope (default: 1000 files)
- Fast Mode: Enable for large repositories (1000+ files)
- AI Backend: Choose between DeepSeek, Local LLM, or Disabled
- Overview: Quality metrics and summary cards
- Issues: Filterable issue list with AI enhancements
- File Details: Per-file analysis and code context
- Autofix: Safe automated code improvements
- Export: Download reports in multiple formats
- Dependencies: Interactive dependency graphs
- Hotspots: Code complexity and churn analysis
- Trends: Quality metrics over time
- AI Q&A: Conversational codebase exploration
Tip: Windows PATH issue? If the `cq-agent` command is not recognized, use:

```bash
python -m cq_agent.cli.main analyze .  # Full path
python -m cq_agent analyze .           # Shorter version
```

See INSTALL.md for PATH troubleshooting.
```bash
# Basic analysis
cq-agent analyze .

# Generate reports
cq-agent analyze . --md report.md --sarif security.sarif

# Preview and apply autofixes
cq-agent analyze . --autofix-dry-run
cq-agent analyze . --autofix

# AI-enhanced analysis
cq-agent analyze . --deepseek
```

```bash
# Interactive Q&A (extractive mode)
cq-agent qa .

# DeepSeek AI Q&A
cq-agent qa . --deepseek

# Local LLM Q&A
cq-agent qa . --local-llm

# Agentic Q&A with Hugging Face
cq-agent qa . --agent --agent-backend hf --agent-model "HuggingFaceH4/zephyr-7b-beta"
```

| Command | Description | Example |
|---|---|---|
| `analyze <path>` | Analyze code repository | `cq-agent analyze .` |
| `--md <file>` | Generate Markdown report | `--md report.md` |
| `--sarif <file>` | Generate SARIF report | `--sarif security.sarif` |
| `--autofix-dry-run` | Preview safe fixes | `--autofix-dry-run` |
| `--autofix` | Apply safe fixes | `--autofix` |
| `--incremental` | Use incremental cache | `--incremental` |
| `--no-incremental` | Disable cache | `--no-incremental` |
| `--deepseek` | Enable DeepSeek AI | `--deepseek` |
| `qa <path>` | Interactive Q&A | `cq-agent qa .` |
| `--local-llm` | Use local LLM | `--local-llm` |
| `--agent` | Use agentic Q&A | `--agent` |
| `--agent-backend <type>` | AI backend type | `--agent-backend hf` |
| `--agent-model <name>` | AI model name | `--agent-model llama3.1` |
```bash
# Fast mode for large repositories
cq-agent analyze . --max-files 1000

# Incremental analysis (default)
cq-agent analyze . --incremental

# Fresh analysis
cq-agent analyze . --no-incremental

# Parallel processing
cq-agent analyze . --workers 8
```

Best for: Production use, most capable analysis
```bash
# Set API key
export DEEPSEEK_API_KEY="your_key_here"

# Use in web interface or CLI
cq-agent analyze . --deepseek
cq-agent qa . --deepseek
```

Features:
- ✅ Advanced code understanding
- ✅ Smart severity re-ranking
- ✅ Comprehensive Q&A responses
- ❌ Requires API key and internet
Best for: Development, privacy, offline work
```bash
# Install dependencies
pip install transformers torch

# Use in web interface (select "Local LLM (Fast)")
# Or CLI
cq-agent qa . --local-llm --local-model "microsoft/DialoGPT-small"
```

Features:
- ✅ No API keys required
- ✅ Works offline
- ✅ Fast for development
- ✅ Privacy-focused
- ❌ Limited model capabilities
Best for: Custom models, inference endpoints
```bash
# Router (OpenAI-compatible)
export HF_TOKEN="hf_..."
cq-agent qa . --agent --agent-backend hf --agent-model "HuggingFaceH4/zephyr-7b-beta:featherless-ai"

# Inference API
export HUGGINGFACEHUB_API_TOKEN="hf_..."
cq-agent qa . --agent --agent-backend hf --agent-model "HuggingFaceH4/zephyr-7b-beta"
```

| Backend | Speed | Capability | Privacy | Cost | Best For |
|---|---|---|---|---|---|
| DeepSeek | ★★★ | ★★★★★ | ★★ | $$ | Production |
| Local LLM | ★★★★ | ★★★ | ★★★★★ | $ | Development |
| HF Router | ★★★ | ★★★★ | ★★★ | $$ | Custom models |
| Extractive | ★★★★★ | ★★ | ★★★★★ | $ | No AI needed |
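The fallback behavior implied by the table can be sketched as a simple selection function. This is an illustrative sketch, not the agent's actual dispatch logic, and the exact precedence is an assumption:

```python
import os

def pick_backend(prefer_local: bool = False) -> str:
    """Pick a Q&A backend based on available credentials and user preference."""
    if prefer_local:
        return "local-llm"
    if os.environ.get("DEEPSEEK_API_KEY"):
        return "deepseek"
    if os.environ.get("HF_TOKEN") or os.environ.get("HUGGINGFACEHUB_API_TOKEN"):
        return "hf-router"
    return "extractive"  # no AI configured: fall back to TF-IDF extractive search

# Demonstrate the no-credentials path.
for var in ("DEEPSEEK_API_KEY", "HF_TOKEN", "HUGGINGFACEHUB_API_TOKEN"):
    os.environ.pop(var, None)
print(pick_backend())  # extractive
```

The key point is that the tool always has a working mode: with no keys set, it degrades to extractive search rather than failing.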
- Glassmorphism Theme: Modern, translucent UI elements
- Responsive Layout: Works on desktop, tablet, and mobile
- Interactive Charts: Plotly-powered visualizations
- Real-time Updates: Live progress bars and status updates
- Smart Filtering: Advanced search and filter capabilities
- Quality score gauge
- Severity distribution charts
- Language breakdown
- File count metrics
- Interactive network graphs
- Hierarchical sunburst charts
- Dependency heatmaps
- Centrality analysis
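Centrality analysis boils down to asking which modules the rest of the codebase leans on. A dependency-free sketch using in-degree as a simple stand-in for the NetworkX centrality metrics (the module names and edges below are toy data):

```python
from collections import Counter

# Toy import graph: module -> modules it imports (illustrative data).
imports = {
    "cli":     ["engine", "reporting"],
    "web":     ["engine", "visualizations"],
    "engine":  ["analyzers", "metrics"],
    "metrics": ["analyzers"],
}

# In-degree: how many modules import each target. A high in-degree marks a
# hub whose changes ripple widely -- a natural candidate for close review.
in_degree = Counter(dep for deps in imports.values() for dep in deps)
print(in_degree.most_common(2))  # [('engine', 2), ('analyzers', 2)]
```

Real centrality measures (betweenness, PageRank) refine this idea, but in-degree already surfaces the obvious hubs.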
- Code complexity treemaps
- Churn vs. complexity scatter plots
- Language comparison radar charts
- Directory-level heatmaps
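A common hotspot heuristic, and the intuition behind the churn-vs-complexity charts, is to combine a file's complexity with how often it changes. A sketch assuming a simple product score (the agent's actual weighting may differ):

```python
def hotspot_score(complexity: int, churn: int) -> int:
    """Files that are both complex and frequently edited score highest."""
    return complexity * churn

# Illustrative per-file data: (cyclomatic complexity, commits touching the file).
files = {
    "engine.py": (25, 40),
    "utils.py":  (5, 60),
    "legacy.py": (80, 2),
}
ranked = sorted(files, key=lambda f: hotspot_score(*files[f]), reverse=True)
print(ranked)  # ['engine.py', 'utils.py', 'legacy.py']
```

Note how `legacy.py` ranks last despite being the most complex file: it rarely changes, so its complexity costs little in practice.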
- Quality metrics over time
- Commit activity heatmaps
- Lines changed charts
- Issue resolution trends
- Severity distribution
- Language breakdown
- Issue categories
- File complexity
- Dependency relationships
- File connections
- Module interactions
- Import/export flows
- Code complexity by directory
- Issue density by file
- Churn patterns
- Language comparison
- Quality trends
- Commit activity
- Issue resolution
- Code growth
- Sunburst: Hierarchical dependency structure
- Treemap: File size and complexity
- Radar: Multi-dimensional language comparison
- Scatter: Complexity vs. churn analysis
```dockerfile
# Dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY . .
RUN pip install -e .
EXPOSE 8501
CMD ["streamlit", "run", "src/cq_agent/web/app.py", "--server.port=8501", "--server.address=0.0.0.0"]
```

```bash
# Build and run
docker build -t code-quality-agent .
docker run -p 8501:8501 code-quality-agent
```

```toml
# .streamlit/secrets.toml
DEEPSEEK_API_KEY = "your_key_here"
HF_TOKEN = "your_hf_token_here"
```

```bash
# fly.toml (already included)
fly launch
fly deploy
```

railway.json:

```json
{
  "build": { "builder": "NIXPACKS" },
  "deploy": { "startCommand": "streamlit run src/cq_agent/web/app.py --server.port=$PORT --server.address=0.0.0.0" }
}
```

```bash
# Production environment
export DEEPSEEK_API_KEY="your_production_key"
export HF_TOKEN="your_hf_token"
export MAX_FILES="2000"
export WORKER_THREADS="8"
export STREAMLIT_SERVER_PORT="8501"
export STREAMLIT_SERVER_ADDRESS="0.0.0.0"
```

```bash
# Make deployment script executable (Linux/Mac)
chmod +x deploy.sh

# Deploy to different platforms
./deploy.sh streamlit  # Streamlit Cloud (Free)
./deploy.sh railway    # Railway (Free tier)
./deploy.sh render     # Render (Free tier)
./deploy.sh fly        # Fly.io (Free tier)
./deploy.sh local      # Local Docker
```

- Fork the repository
- Clone your fork
  ```bash
  git clone https://github.com/your-username/code-quality-agent.git
  cd code-quality-agent
  ```
- Create a feature branch
  ```bash
  git checkout -b feature/amazing-feature
  ```
- Set up development environment
  ```bash
  python -m venv .venv
  source .venv/bin/activate
  pip install -e ".[dev]"
  ```
```bash
# Install development dependencies
pip install -e ".[dev,local_llm,ai]"

# Run tests
python -m pytest tests/

# Run linting
ruff check src/
black src/

# Run type checking
mypy src/
```

- New Analyzers: Add support for more languages
- AI Models: Integrate additional LLM backends
- Visualizations: Create new chart types
- CLI Features: Add new command-line options
- UI Improvements: Enhance the web interface
- Documentation: Improve guides and examples
- Ensure tests pass
- Update documentation
- Follow code style guidelines
- Provide a clear description
- Link related issues
- `bug`: Something isn't working
- `enhancement`: New feature or request
- `documentation`: Documentation improvements
- `ui/ux`: User interface improvements
- `ai`: AI-related features
- `performance`: Performance improvements
- `maintenance`: Code maintenance tasks
| Module | Purpose | Key Components |
|---|---|---|
| Analyzers | Code analysis engines | Python, JS/TS, AST parsing |
| AI | AI integration | DeepSeek, Local LLM, HF |
| Graph | Dependency analysis | NetworkX, centrality metrics |
| Metrics | Quality measurement | Hotspots, complexity, churn |
| Visualizations | Chart generation | Plotly, interactive graphs |
| Reporting | Output generation | Markdown, SARIF, CSV |
| Q&A | Code search and query | TF-IDF, semantic search |
| Autofix | Automated fixes | Safe transformations |
- Repository Ingestion: Load and parse codebase
- Analysis: Run language-specific analyzers
- Metrics Calculation: Compute quality metrics
- Graph Building: Create dependency relationships
- AI Enhancement: Apply AI-powered insights
- Visualization: Generate interactive charts
- Report Generation: Create output documents
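The pipeline above can be sketched as a chain of stages, each enriching the previous stage's output. The stage functions below are illustrative placeholders that mirror the list, not the actual module API:

```python
def ingest(path):
    # Stage 1: load the repository (stubbed with toy data).
    return {"path": path, "files": ["app.py"]}

def analyze(repo):
    # Stage 2: run analyzers and attach issues.
    return {**repo, "issues": [{"file": "app.py", "severity": "high"}]}

def compute_metrics(repo):
    # Stage 3: compute an overall quality score.
    return {**repo, "score": 72}

def report(repo):
    # Final stage: render a one-line summary.
    return f"{repo['path']}: score {repo['score']}, {len(repo['issues'])} issue(s)"

def run_pipeline(path):
    # Graph, AI, and visualization stages omitted for brevity.
    return report(compute_metrics(analyze(ingest(path))))

print(run_pipeline("."))  # .: score 72, 1 issue(s)
```

Keeping each stage a pure function over a shared result dict is what makes it easy to slot in optional stages (AI enhancement, visualization) without changing the others.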
```bash
# Ensure Python 3.11+
python --version

# If using older version, upgrade
pyenv install 3.11.0
pyenv local 3.11.0
```

```bash
# Reinstall package
pip uninstall cq-agent
pip install -e .

# Install optional dependencies
pip install transformers torch
pip install faiss-cpu sentence-transformers
```

```bash
# Check environment variables
echo $DEEPSEEK_API_KEY
echo $HF_TOKEN

# Set in .env file
echo "DEEPSEEK_API_KEY=your_key" >> .env
```

Windows: Connection timeout or "can't be reached"
```bash
# Use localhost explicitly (Windows doesn't work well with 0.0.0.0)
streamlit run src/cq_agent/web/app.py --server.address localhost --server.port 8501

# Or use the Windows helper script
.\run_web_windows.ps1

# Access at: http://localhost:8501 (NOT http://0.0.0.0:8501)
```

General Streamlit Issues:
```bash
# Clear Streamlit cache
streamlit cache clear

# Check port availability (Linux/Mac)
lsof -i :8501

# Windows: Check port
netstat -ano | findstr :8501

# Use different port
streamlit run src/cq_agent/web/app.py --server.port 8502
```

```bash
# Check model availability
python -c "from transformers import pipeline; print('Models available')"

# Clear model cache
rm -rf ~/.cache/huggingface/

# Use smaller model
cq-agent qa . --agent --agent-model "microsoft/DialoGPT-small"
```

- Documentation: Check this README and inline docs
- Issues: Search existing GitHub issues
- Discussions: Use GitHub Discussions for questions
- Contact: Create a new issue for bugs or feature requests
This project is licensed under the MIT License - see the LICENSE file for details.
Built with ❤️ for the developer community
