DeepSource

Software Development

San Francisco, California 5,339 followers

The AI code review platform for fast-moving teams and their agents.

About us

DeepSource is the AI Code Review Platform — built for engineering teams shipping code at the speed of AI. We combine battle-tested static analysis infrastructure with a deep AI review agent in a single hybrid engine. 5,000+ static analyzers build program intelligence — data-flow graphs, taint maps, reachability analysis — that grounds an AI agent capable of finding vulnerabilities other tools miss. 82% accuracy on real-world CVEs, the highest on the OpenSSF benchmark.

More than code review: DeepSource is a complete platform for code quality and security. Static + AI analysis, secrets detection, code coverage tracking, software composition analysis, license compliance, baseline issue tracking, OWASP/SANS reporting, and flexible PR gates. One platform replacing a stack of point solutions.

Native MCP support means DeepSource works with any AI coding agent your team uses — Cursor, Claude Code, Windsurf, Copilot — giving agents the feedback loop they need to write better code autonomously.

Trusted by 6,000+ companies including Visa, Ancestry, Twilio, and WEX. SOC 2 Type II certified. Deploy on our cloud or self-host on your infrastructure.

The way code gets written has changed. The way it gets reviewed should too.

Website
https://deepsource.com
Industry
Software Development
Company size
11-50 employees
Headquarters
San Francisco, California
Type
Privately Held
Founded
2018
Specialties
Developer Tools, Continuous Quality, Static analysis, Code quality, Code reviews, SCA, AI, DevSecOps, AI Code Review, and AI Code Security

Locations

Employees at DeepSource

Updates

  • New: Block PRs only when it matters. Quality Gates now let you filter by severity and category, so you can fail builds on critical security issues without blocking on minor style problems. Works at the repo level and group level. Set different configurations for different teams.

  • DeepSource reposted this

    We're live with our Deep Dive on DeepSource with Co-Founder and CEO Sanket S.! DeepSource - The Vulnerabilities Every AI Tool Missed ⚔️ DeepSource is the AI code review platform that uses a hybrid analysis engine — combining seven years of static analysis infrastructure with an AI review agent — to catch security vulnerabilities and code quality issues that no other tool can find. On the public OpenSSF CVE Benchmark, DeepSource leads on overall accuracy at 82.42%, ahead of every major player in AI code review. The company has over 1 million repositories connected and has done this on $7.7M in total funding. In this conversation, Sanket shares why nobody has a mental model of their code anymore, the day he realized that static analysis alone was dead, and the 7-year infrastructure moat that just outperformed every major player in AI code review. Link below 👇

  • Meet the new DeepSource CLI, built to make it easier for your AI coding agent to work with our code review results. Once the CLI is installed, get the DeepSource skill and just ask your agent to monitor DeepSource's review on a PR and fix the reported issues. The CLI provides several flags to get details of the review — by category, severity, or per file. Install the skill: https://lnkd.in/gjf5NbEq Read the full changelog: https://lnkd.in/gNABUVa6

  • We're excited to welcome Phillip Mitto to our GTM team in our SF office. Originally hailing from Connecticut, Phil is a soccer fanatic, Fulham F.C. fan, and an avid traveller. Bragging rights? He's hiked the Inca Trail to Machu Picchu.

    • Phillip Mitto, in a gray suit, smiles confidently against a dark background, expressing excitement about joining the DeepSource team.
  • Every AI code review vendor publishes a benchmark. Every single one wins their own. We surveyed every public benchmark in the space: Greptile, Qodo, Augment, Propel, Macroscope, Entelligence. What each one measures, how, and where the gaps are. The most concrete example of the problem: Augment ran their evaluation on the exact same 5 repos Greptile used. Greptile's self-reported recall was 82%. In Augment's eval, it was 45%. Same repos, completely different rankings. This isn't necessarily bad faith. When you design a benchmark, you make dozens of small decisions: repo selection, what counts as a real bug, how to score partial matches. Vendors naturally optimize for their own criteria, consciously or not. It's the same reason pharmaceutical companies aren't allowed to be the sole judges of their own clinical trials. A trustworthy benchmark needs real bugs (not LLM-injected ones), published datasets anyone can reproduce, blind evaluation where the judge doesn't know which tool produced which output, and enough data points that a few edge cases don't swing the results. Most published benchmarks in this space fail at least one of these. We published our own benchmarks too. We deliberately limit them to security because CVEs have clear ground truth. A vulnerability either exists or it doesn't. We ran it ourselves, and the full dataset and results are open-source for anyone to verify. Full breakdown: https://lnkd.in/g5aB4Vfi
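    The gap between self-reported and third-party numbers comes down to those scoring decisions. As a minimal illustration (not any vendor's actual methodology), precision, recall, and F1 over a set of findings reduce to a few lines, and changing what counts as ground truth changes every number:

    ```python
    def score(findings, ground_truth):
        """Precision/recall/F1 for a bug-finding eval.

        findings: set of bug IDs a tool reported
        ground_truth: set of real bug IDs in the benchmark
        """
        tp = len(findings & ground_truth)   # true positives
        fp = len(findings - ground_truth)   # reported, but not real
        fn = len(ground_truth - findings)   # real, but missed
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f1

    # The same tool output scored against two different ground-truth
    # sets (e.g. a vendor's own bug list vs. an independent one) yields
    # very different recall -- the discrepancy described above.
    reported = {"CVE-1", "CVE-2", "CVE-3"}
    print(score(reported, {"CVE-1", "CVE-2"}))           # generous ground truth
    print(score(reported, {"CVE-1", "CVE-4", "CVE-5"}))  # stricter ground truth
    ```

    Repo selection and partial-match rules feed directly into which IDs land in each set, which is why the same tool can score 82% on one benchmark and 45% on another.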

  • Introducing DeepSource AI Code Review. We use a hybrid static analysis + AI review agent to catch quality and security issues with high accuracy, detecting more vulnerabilities than LLM-only reviewers and static-only tools. --- > Static analysis-only review is limited by the checkers the tool ships with and is prone to a high false-positive rate. > LLM-only analysis gives much better depth but is unpredictable and suffers from inattentional blindness — if the model isn't primed to look for a vulnerability class, it won't find it. > A hybrid approach solves this. We give the agent a baseline of statically found issues so it knows where to look. But more importantly, we give it structured ways to explore the code, not just grep: data-flow graphs, control-flow graphs, taint maps, reachability analysis. The agent doesn't read your code line by line hoping to notice something. It navigates it semantically. The benchmarks back this up. On the OpenSSF CVE Benchmark (165 real vulnerabilities in production JS/TS), our hybrid engine hits 82.42% accuracy, higher than OpenAI Codex, Devin, BugBot, Claude Code, CodeRabbit, Greptile, and Semgrep. Try it out with a 14-day free trial (link in comments).
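    To make "navigates it semantically" concrete: a taint map is, at its core, reachability over a data-flow graph from untrusted sources to sensitive sinks. This toy sketch (the names and graph are invented for illustration, not DeepSource's internals) shows the kind of structured query such an engine can answer instead of grepping:

    ```python
    def tainted_sinks(flows, sources, sinks):
        """Which sensitive sinks are reachable from untrusted sources?

        flows: dict mapping each value to the values it flows into
               (a toy data-flow graph)
        sources: values holding untrusted input
        sinks: values consumed by sensitive operations
        """
        tainted = set(sources)
        worklist = list(sources)
        while worklist:
            v = worklist.pop()
            for succ in flows.get(v, ()):
                if succ not in tainted:
                    tainted.add(succ)
                    worklist.append(succ)
        return tainted & set(sinks)

    # request parameter -> local variable -> SQL string -> query call
    flows = {
        "request.args['id']": ["user_id"],
        "user_id": ["sql"],
        "sql": ["db.execute"],
    }
    print(tainted_sinks(flows, {"request.args['id']"}, {"db.execute"}))
    ```

    An agent that can ask "does untrusted input reach this call?" is primed to check the right vulnerability class at the right location, rather than hoping to notice the flow while reading line by line.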

  • 🌟 We're launching Autofix Bot today: the AI agent purpose-built for deep code review. With the highest accuracy score on the OpenSSF CVE Benchmark, it's the best way for you to write clean and secure code with AI today. We spent the last six years building a deterministic, static-analysis-only code review product. Earlier this year, we started thinking about this problem from the ground up and realized that static analysis solves key blind spots of LLM-only reviews. Over the past six months, we built a new ‘hybrid’ agent loop that uses static analysis and frontier AI agents together to outperform both static-only and LLM-only tools in finding and fixing code quality and security issues. Today, we’re opening it up publicly. Here’s how the hybrid architecture works:
    - Static pass: 5,000+ deterministic checkers (code quality, security, performance) establish a high-precision baseline. A sub-agent suppresses context-specific false positives.
    - AI review: The agent reviews code with static findings as anchors. It has access to AST, data-flow graphs, control-flow graphs, and import graphs as tools, not just grep and the usual shell commands.
    - Remediation: Sub-agents generate fixes. A static harness validates all edits before emitting a clean git patch.
    Static analysis solves key LLM problems: non-determinism across runs, low recall on security issues (LLMs get distracted by style), and cost (static narrowing reduces prompt size and tool calls). On the OpenSSF CVE Benchmark (200+ real JS/TS vulnerabilities), we hit 81.2% accuracy and 80.0% F1, vs. Cursor Bugbot (74.5% accuracy, 77.42% F1), Claude Code (71.5% accuracy, 62.99% F1), CodeRabbit (59.4% accuracy, 36.19% F1), and Semgrep CE (56.9% accuracy, 38.26% F1). On secrets detection, 92.8% F1, vs. Gitleaks (75.6%), detect-secrets (64.1%), and TruffleHog (41.2%). We use our open-source classification model for this.
You can use Autofix Bot interactively on any repository using our TUI, as a plugin in Claude Code, or with our MCP on any compatible AI client (like OpenAI Codex). We’re specifically building for AI coding agent-first workflows, so you can ask your agent to run Autofix Bot on every checkpoint autonomously.

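    The three stages above compose into a simple loop. A schematic sketch with stand-in stubs for the checkers, agent, and validator — none of these signatures are DeepSource's actual API, only the data flow between stages:

    ```python
    def hybrid_review(diff, static_checkers, ai_agent, validate_fix):
        # 1. Static pass: deterministic checkers establish a baseline.
        baseline = [issue for check in static_checkers for issue in check(diff)]
        # 2. AI review: the agent explores the diff with the baseline as anchors.
        findings = ai_agent(diff, anchors=baseline)
        # 3. Remediation: only fixes that survive validation are emitted.
        patches = [f["fix"] for f in findings if "fix" in f and validate_fix(f["fix"])]
        return baseline + findings, patches

    # Toy stand-ins to show data flowing through the loop.
    def lint(diff):
        return [{"rule": "unused-var"}] if "unused" in diff else []

    def agent(diff, anchors):
        # A real agent would use AST/data-flow tools; this stub just
        # "confirms" each anchor and proposes a trivial fix.
        return [{"rule": a["rule"], "fix": f"remove {a['rule']}"} for a in anchors]

    issues, patches = hybrid_review("unused x = 1", [lint], agent, lambda fix: True)
    ```

    The design point is that the non-deterministic stage is sandwiched between two deterministic ones: static findings narrow what the agent looks at, and a static validation harness gates what it is allowed to emit.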
  • New: Hybrid AI Agent for Secrets Detection 🔒 ✨ We've released a new detection engine for our Secrets Analyzer that finds more valid secrets in your source code while greatly reducing false positives. This makes DeepSource the best way to run secrets analysis on your code. Powered by our open-source Narada classification model, the Secrets Analyzer is now much smarter — 97% precision, 93% reduction in false positives, and 96.3% recall on our benchmarks. The new detection engine is available to all customers on DeepSource Cloud. Team administrators can enable it by navigating to Settings → General → Preferences in their team settings and selecting the Hybrid AI Agent engine.

    • Dark settings panel titled "Secrets Analyzer" with options showing Legacy and selected Hybrid AI Agent for secret detection in code.
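    A hybrid secrets engine of this shape typically pairs a high-recall pattern pass with a classifier that prunes false positives. A toy sketch: the regex and the `looks_like_secret` heuristic below are invented stand-ins, not Narada's actual logic:

    ```python
    import re

    # Stage 1: a deliberately loose pattern that over-matches (high recall).
    CANDIDATE = re.compile(
        r"(?:api[_-]?key|secret|token)\s*[:=]\s*['\"]([^'\"]+)['\"]", re.I
    )

    def looks_like_secret(value):
        """Stage 2: stand-in classifier pruning obvious non-secrets.

        A real engine would use a trained model here (this is where a
        classifier like Narada would slot in), not this placeholder.
        """
        placeholders = {"changeme", "your-key-here", "xxxx", "dummy"}
        return value.lower() not in placeholders and len(value) >= 16

    def find_secrets(source):
        return [v for v in CANDIDATE.findall(source) if looks_like_secret(v)]

    code = '''
    API_KEY = "changeme"
    TOKEN = "sk_live_abcdef1234567890"
    '''
    print(find_secrets(code))  # placeholder pruned, plausible secret kept
    ```

    Keeping stage 1 loose preserves recall, while stage 2 is where the precision gain (and the reduction in false positives) comes from.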


Funding

DeepSource: 2 total rounds

Last round: Undisclosed (US$ 5.0M)
