Current security scanners (including AI-enhanced ones) flag vulnerabilities based on string matching—finding 'log4j' in comments, old docs, or unreachable transitive dependencies—generating massive false-positive noise that developers must manually verify.
Static analysis tool that builds a dependency graph and performs call-path analysis to determine whether a flagged vulnerability is actually reachable at runtime. Integrates into CI/CD and wraps existing scanners (Snyk, Dependabot, etc.) to filter their output down to genuinely exploitable findings.
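The core filtering idea reduces to a graph reachability check: starting from the application's entry points, walk the call graph and keep only findings whose vulnerable function is actually reached. A minimal toy sketch, with an invented call graph and invented CVE-to-symbol mappings (no real scanner or vendor data):

```python
from collections import deque

# Toy call graph: caller -> callees, spanning app code and dependency internals.
# A real tool would derive this from bytecode/AST analysis of the built artifact.
CALL_GRAPH = {
    "app.main": ["app.handler", "jackson.ObjectMapper.readValue"],
    "app.handler": ["commons.StringUtils.isBlank"],
    # log4j.JndiLookup.lookup is present in the dependency tree but never called.
}

# Hypothetical scanner findings: CVE -> vulnerable function (the "sink").
FINDINGS = {
    "CVE-2021-44228": "log4j.JndiLookup.lookup",
    "CVE-2020-36518": "jackson.ObjectMapper.readValue",
}

def reachable_functions(entry_points):
    """BFS over the call graph from the application's entry points."""
    seen, queue = set(entry_points), deque(entry_points)
    while queue:
        for callee in CALL_GRAPH.get(queue.popleft(), []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

def triage(findings, entry_points):
    """Tag each finding as reachable or unreachable from the entry points."""
    reach = reachable_functions(entry_points)
    return {cve: ("reachable" if fn in reach else "unreachable")
            for cve, fn in findings.items()}

print(triage(FINDINGS, ["app.main"]))
# The log4j finding is tagged unreachable; the jackson one is kept.
```

The hard part is not this traversal but constructing an accurate `CALL_GRAPH` in the presence of reflection, dynamic dispatch, and framework magic—which is where the real engineering cost lives.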
This is a genuine, severe pain point. Security teams at companies with 100+ repos routinely face 10,000+ vulnerability findings, 70-90% of which are noise. Manual triage burns weeks of engineering time per quarter. The Reddit signal is real—this frustration is universal and growing as dependency trees deepen.
TAM for application security tooling is $15-20B+. The 'vulnerability prioritization/noise reduction' segment is $2-5B and growing 25%+ YoY. Every company with a CI/CD pipeline and security requirements is a potential customer—that's hundreds of thousands of organizations.
Security budgets are large and growing. Companies already pay $50-500K/year for Snyk, Veracode, etc. A tool that demonstrably reduces triage time by 70%+ has clear, quantifiable ROI. Security is one of the few categories where enterprises will pay premium prices without extensive negotiation.
This is where the idea breaks down for a solo dev. Building accurate cross-language call graph analysis is a PhD-level compiler engineering problem. You need to handle: dynamic dispatch, reflection, dependency injection frameworks, annotation processors, build system plugins, multiple package managers, monorepos, polyglot codebases, and partial program analysis. Endor Labs has 100+ engineers working on this. An MVP that wraps existing scanners and adds basic reachability for ONE language (e.g., Java with Maven) is possible in 8-12 weeks, but it would be shallow compared to funded competitors.
This is the critical problem: the exact value proposition—reachability-based vulnerability filtering—is already the core thesis of multiple well-funded startups (Endor Labs $70M, Rezilion $30M, Oligo $28M) AND a feature being added by incumbents (Snyk, Semgrep, GitHub). The 'wrapper around existing scanners' angle has some differentiation, but Endor Labs already positions itself similarly. A bootstrapped solo dev would be competing against $100M+ in aggregate VC funding solving the same problem.
Perfect subscription fit. Vulnerability scanning is continuous—new CVEs daily, code changes constantly, dependencies update frequently. Once integrated into CI/CD, switching costs are high. Security tools have some of the lowest churn rates in SaaS (5-8% annually).
- +The pain is real, acute, and well-validated—vulnerability noise is a top 3 complaint from every DevSecOps team
- +The 'wrapper' positioning (augmenting Snyk/Dependabot rather than replacing) reduces adoption friction and could be a wedge
- +Strong willingness-to-pay in security with clear ROI story (hours saved on triage × engineer cost)
- +Free tier for OSS is a proven developer adoption flywheel
- +Regulatory pressure (SOC2, FedRAMP, etc.) forces companies to address findings, making noise reduction increasingly valuable
- !CRITICAL: At least 3 startups with $30-70M+ in funding are building exactly this. You'd be bringing a knife to a gunfight.
- !CRITICAL: Snyk and GitHub (Dependabot) are adding reachability features to their existing platforms, which have massive distribution advantages
- !Technical moat is deep—accurate reachability analysis across languages/frameworks requires years of engineering investment
- !Enterprise sales cycle for security tools is 3-6 months, which is brutal for a bootstrapped founder
- !The 'wrapper' approach creates dependency risk—if Snyk improves their own reachability, your value proposition evaporates overnight
SCA platform that uses function-level reachability analysis to determine if vulnerable dependencies are actually called in your code. Builds call graphs across transitive dependencies.
Vulnerability validation platform that uses runtime analysis and binary-level reachability to determine if vulnerabilities are actually exploitable in your deployed environment.
Runtime application security that uses eBPF-based monitoring to determine which open-source libraries are actually loaded and executing at runtime, filtering out unused vulnerable dependencies.
Market-leading SCA tool that added a 'reachable vulnerabilities' feature in 2023-2024, tagging findings as 'reachable' or 'not reachable' based on static call graph analysis.
Semgrep's supply chain security product that combines SAST with dependency analysis, using Semgrep's cross-file dataflow engine to determine if vulnerable code paths are reachable from application code.
If proceeding despite competition: Build a GitHub Action / CLI tool that wraps Dependabot/Snyk output for Java (Maven/Gradle) projects ONLY. Use existing open-source call graph tools (OPAL/WALA for Java, or tree-sitter for lightweight parsing) to build a basic reachability check. Output a filtered report showing 'reachable' vs 'unreachable' findings. Target: reduce findings by 50%+ with <5% false negative rate. Ship as a free OSS tool first to validate accuracy before charging.
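The wrapper step of that MVP is the easy half and can be sketched in a few lines: ingest scanner output, intersect each finding's vulnerable symbol with the set of symbols the call-graph pass reports as reached, and emit a filtered report. The JSON shape below is invented for illustration (it is not Snyk's or Dependabot's actual schema), and the reachable-symbol set is stubbed where the OPAL/WALA pass would plug in:

```python
import json

# Hypothetical scanner output (invented shape, not a real scanner schema):
# each finding names the vulnerable package and the affected function.
RAW_REPORT = json.loads("""
{"findings": [
  {"id": "CVE-2021-44228", "package": "log4j-core",
   "vulnerable_symbol": "org.apache.logging.log4j.core.lookup.JndiLookup.lookup"},
  {"id": "CVE-2022-42889", "package": "commons-text",
   "vulnerable_symbol": "org.apache.commons.text.StringSubstitutor.replace"}
]}
""")

def filter_report(report, reachable_symbols):
    """Split findings into keep (reachable) and suppress (unreachable)."""
    keep, suppress = [], []
    for f in report["findings"]:
        (keep if f["vulnerable_symbol"] in reachable_symbols else suppress).append(f)
    return keep, suppress

# Stub: in the real tool this set would come from the call-graph pass
# (e.g. OPAL/WALA run over the compiled Maven artifacts).
reached = {"org.apache.commons.text.StringSubstitutor.replace"}
keep, suppress = filter_report(RAW_REPORT, reached)
print(f"{len(keep)} reachable, {len(suppress)} suppressed")
```

Shipping this as a GitHub Action is mostly plumbing; the accuracy target (<5% false negatives) is decided entirely by the quality of the `reached` set, which is why validating against real repos before charging is the right sequencing.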
Free OSS CLI tool → Free GitHub Action with usage limits → Paid SaaS dashboard with team features, historical tracking, and policy enforcement ($29-99/repo/month) → Enterprise tier with SSO, audit logs, custom integrations ($500-2000/month/org) → Potential acquisition target by Snyk, GitHub, or Palo Alto Networks if you gain traction
6-9 months minimum. 8-12 weeks for a basic Java-only MVP, then 2-3 months to validate accuracy with real users, then 2-3 months to build SaaS features worth paying for. Enterprise sales adds another 3-6 months. Realistically, 12+ months to meaningful revenue given the enterprise-heavy buyer profile.
- “flag log4j as critical in a codebase that never even imports the library”
- “two were just comments in old migration docs and one was a transitive dependency that never actually loads”
- “had to manually verify all three instead of trusting the automated findings”
- “Actual reachability analysis is hard; vibes-based flagging is not”