Engineering teams struggle to make the business case for rewrites because managers default to the conventional wisdom of 'never rewrite', and engineers have no data with which to challenge that assumption.
Static analysis tool that scans a codebase for hardcoded secrets, dependency rot, test coverage gaps, infrastructure fragility signals, and bus-factor risk, then generates a 'rewrite ROI report' with estimated maintenance cost trajectory vs. rewrite cost.
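The report's central comparison (maintenance cost trajectory vs. rewrite cost) reduces to a break-even calculation. A minimal sketch, assuming a compounding maintenance growth rate; the function name and all figures are illustrative, not a calibrated cost model:

```python
def rewrite_breakeven_years(annual_maint, maint_growth, rewrite_cost,
                            new_annual_maint, horizon=10):
    """Year in which the cumulative cost of rewriting (one-time rewrite
    cost plus lower ongoing maintenance) drops below the cumulative cost
    of keeping the legacy system, whose maintenance compounds yearly.
    Returns None if there is no break-even within the horizon."""
    keep, rewrite = 0.0, float(rewrite_cost)
    for year in range(1, horizon + 1):
        keep += annual_maint * (1 + maint_growth) ** (year - 1)
        rewrite += new_annual_maint
        if rewrite < keep:
            return year
    return None
```

For example, a system costing $500K/year to maintain with 15% annual cost growth, versus an $800K rewrite that drops maintenance to $200K/year, breaks even in year 3. When the break-even never arrives within the horizon, that is itself a data point against rewriting.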
SaaS subscription — free tier for single repo scan, paid tiers for org-wide continuous monitoring and historical trend dashboards
The pain is real and frequently expressed — the Reddit thread and countless others like it confirm that engineers feel this acutely. However, it's an intermittent strategic pain, not a daily operational one. Teams hit this wall every 1-2 years per legacy system, not every sprint. The person with the pain (the engineer) is often not the buyer (the manager), which dilutes urgency at the point of purchase.
TAM is meaningful but not massive. The target is staff+ engineers and engineering managers at companies with 50-500+ engineers that have legacy systems (most companies over 5 years old). That's roughly 50K-100K potential org accounts globally; at a $500-2K/month average, a $300M-2B theoretical TAM. However, the realistic serviceable market is much smaller — this is a niche within developer tools, not a horizontal platform.
This is the weakest link. The free scan will get adoption, but converting to paid is hard because: (1) the output is a one-time report per codebase, not a daily-use tool, (2) the buyer (manager) may not want data that argues against their position, (3) companies that need this most (smaller, resource-constrained) are least able to pay, (4) enterprises that can pay already have CAST Highlight or consultants. The continuous monitoring angle helps but needs strong proof of value over time.
A solo dev can absolutely build a compelling MVP in 4-8 weeks. The core is composing existing analysis capabilities: secret scanning (gitleaks/trufflehog libs), dependency age checking (package manager APIs), test coverage parsing, git log analysis for bus factor, and complexity metrics. The novel value is the aggregation, scoring model, and report generation — not the individual analyses. LLMs can help generate the narrative report. Main risk is calibrating the scoring model to be credible.
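Most of these signals really are simple compositions over existing data. For instance, the bus-factor check can be approximated from `git log` author concentration alone; a minimal Python sketch under that assumption (function names are illustrative, and dedicated tools like CodeScene use richer behavioral models):

```python
import subprocess
from collections import Counter

def commit_authors(repo_path):
    """One author email per commit, via `git log --format=%ae`
    (assumes git is installed and repo_path is a git checkout)."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%ae"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def bus_factor(authors, threshold=0.5):
    """Smallest number of authors who together account for `threshold`
    of all commits. A bus factor of 1 means one person dominates the
    history — the 'engineer who built it is long gone' risk."""
    counts = Counter(authors)
    total = sum(counts.values())
    covered = 0
    for rank, (_, n) in enumerate(counts.most_common(), start=1):
        covered += n
        if covered / total >= threshold:
            return rank
    return len(counts)
```

A repo where one author wrote 8 of 10 commits has a bus factor of 1 at the default 50% threshold. The per-file variant (concentration of `git blame` lines rather than commits) catches the case where overall history looks diverse but critical modules have a single owner.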
No one owns the 'rewrite business case' output specifically. SonarQube and CodeScene provide inputs but not the synthesis. CAST Highlight is closest but is enterprise-only and top-down. The gap is a bottom-up, self-serve tool that an individual engineer can run on a repo and get a manager-ready report in 10 minutes. That specific workflow doesn't exist today.
Challenging. The core use case is episodic — you scan a codebase, get a report, make a decision. Continuous monitoring adds recurring value but is a harder sell because the report doesn't change dramatically week-to-week. Org-wide portfolio views and trend dashboards create stickier value but require significant additional development. Risk of being a 'run once, cancel subscription' tool unless the monitoring story is compelling.
- +Clear competition gap — no one produces a 'rewrite ROI report' as a self-serve product
- +Bottom-up adoption model (engineer runs scan, shares report) is a proven GTM motion in dev tools
- +Technically feasible MVP with mostly existing open-source components to compose
- +Strong emotional resonance — engineers viscerally relate to this problem, which drives organic sharing
- +AI tailwind — as AI-assisted rewrites become viable, demand for rewrite justification data increases
- !Willingness-to-pay gap: the person with the pain (engineer) rarely holds the budget, and the budget holder (manager) may resist the tool's conclusions
- !One-shot usage pattern: users may scan once, get their report, and never return — making SaaS retention very difficult
- !Credibility problem: if the scoring model isn't well-calibrated, a single bad recommendation destroys trust and the tool becomes shelfware
- !Enterprise sales gravity: the highest-value customers (large orgs with many legacy systems) will want SSO, on-prem, SOC2, and custom integrations — pulling you toward enterprise sales before you're ready
- !Rewrite decisions are political, not just analytical — a report alone rarely changes minds when organizational dynamics are the real blocker
Static code analysis platform that detects bugs, vulnerabilities, and code smells across 30+ languages. Tracks technical debt in time-to-fix metrics.
Automated software portfolio analysis that scores applications on health, cloud readiness, and risk. Used for M&A due diligence and modernization planning.
Developer-first security platform focused on finding and fixing vulnerabilities in code, dependencies, containers, and IaC.
Behavioral code analysis that identifies hotspots, complexity trends, developer coupling, and organizational risk in codebases using git history.
Engineering intelligence platforms that measure developer productivity, cycle time, and investment allocation across teams and projects.
CLI tool + web report. User runs a single command pointing at a git repo. The tool analyzes:
- dependency ages and known vulnerabilities
- hardcoded secrets
- test coverage, if parseable
- bus factor via git blame concentration
- complexity hotspots
- build/deploy configuration fragility signals

It outputs a branded PDF/HTML 'Legacy Risk Report' with an overall risk score (A-F), category breakdowns with specific findings, an estimated annual maintenance cost trajectory based on complexity trends, and a one-page executive summary designed to be forwarded to a manager. Free for one repo; login-gated for the PDF export.
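The A-F overall score could be a weighted roll-up of per-category risk scores. A hypothetical sketch — the category names, weights, and grade cutoffs below are uncalibrated placeholders, and calibrating them credibly is the hard part:

```python
def risk_grade(scores, weights=None):
    """Combine per-category risk scores (0-100, higher = riskier) into
    a weighted overall score and an A-F letter grade.
    Default weights are illustrative, not a validated model."""
    default = {
        "dependencies": 0.25, "secrets": 0.20, "coverage": 0.15,
        "bus_factor": 0.20, "complexity": 0.10, "infra": 0.10,
    }
    weights = weights or default
    overall = sum(scores[cat] * w for cat, w in weights.items())
    for grade, cutoff in [("A", 20), ("B", 40), ("C", 60), ("D", 80)]:
        if overall < cutoff:
            return grade, round(overall, 1)
    return "F", round(overall, 1)
```

Exposing `weights` as a parameter is what later becomes the Enterprise-tier 'custom scoring weights' feature: the roll-up logic stays fixed while customers tune category emphasis.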
Free CLI with terminal output → Free account for web-hosted reports (lead capture) → $49/month Pro for unlimited repos, PDF exports, and historical comparisons → $299/month Team for org-wide portfolio dashboard and Slack/Jira integration → $999+/month Enterprise for SSO, on-prem, API access, and custom scoring weights. Consider a one-time report purchase option ($19-49) for engineers who only need one report, to capture the long tail that won't subscribe.
8-12 weeks to first dollar. Weeks 1-4: build CLI + report generation MVP. Weeks 5-6: add web report hosting and Stripe integration. Weeks 7-8: launch on Hacker News, Reddit (r/ExperiencedDevs, r/programming), and dev Twitter. The topic is inherently viral in engineering communities. First paying customers likely within 2-4 weeks of launch if the free report is compelling enough to share. Reaching $1K MRR will take 3-6 months and require solving the retention problem.
- “hard coded secrets in the frontend”
- “infrastructure goes down all the time”
- “bandaid project for contractors”
- “The engineer who originally built it is long gone”
- “manager was adamant that we don't rewrite software”
- “loss of institutional knowledge”