Engineering teams can't make data-driven arguments for rewrites — managers default to 'never rewrite' because there's no objective measure of how costly maintaining a legacy system actually is.
Scans codebases for hardcoded secrets, dead code, dependency rot, infra fragility, and operational incident frequency, then generates a cost-of-ownership report with projected rewrite ROI to present to stakeholders.
Subscription — tiered by number of repos/services scanned, starting at ~$200/mo for teams
The Reddit thread and pain signals are visceral — 'bleeding money before management approves a rewrite' and 'manager was adamant that we don't rewrite' are real, recurring conversations at every mid-to-large engineering org. The pain is political as much as technical: engineers KNOW the system is rotting but can't prove it in business terms. This is a chronic, unresolved pain that causes attrition, incidents, and slow delivery. Deducting points only because not every team faces this at any given moment — it's episodic per team.
The specific 'rewrite decision support' niche is narrow — maybe 10,000-50,000 potential teams globally at mid-to-large companies with legacy services and budget authority. At $200-500/mo that's a $24M-$300M addressable market. Decent for a bootstrapped/small business but not venture-scale without expanding scope. The adjacent market (general code health, engineering intelligence) is much larger but more competitive. Score reflects the niche being real but bounded.
This is the biggest risk. Platform engineering leads have budget, but $200/mo for a 'report' feels like it needs to deliver undeniable value on first scan. The tool competes with 'I'll just spend a weekend pulling data from SonarQube, Snyk, and PagerDuty into a Google Slides deck' — which is what most teams do today for free. Enterprise buyers ($1K+/mo) would pay but have longer sales cycles. The willingness-to-pay gap: the person who feels the pain (engineer) often isn't the budget holder, and the budget holder (VP Eng) may not want to fund a tool that tells them they need to spend MORE money on rewrites.
An MVP scanning for hardcoded secrets (regex + entropy), dead code (AST analysis per language), dependency staleness (package manifest parsing), and generating a PDF report is buildable in 6-8 weeks by a strong solo dev. However, the hard parts are: (1) multi-language support — each language needs its own dead code and complexity analysis, (2) incident frequency integration requires API connections to PagerDuty/OpsGenie/etc., (3) the ROI projection model needs to be credible or the whole product falls apart. Deducting for the ROI model — getting that right is more data science than engineering, and if the numbers feel made up, trust evaporates instantly.
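The regex-plus-entropy approach for the secrets signal is the most self-contained piece of that MVP and can be sketched in a few lines. A minimal illustration, assuming a couple of well-known token formats and an untuned entropy threshold; all names and thresholds here are hypothetical, not a production scanner:

```python
import math
import re

# Illustrative known-format patterns (AWS access key ID, GitHub PAT).
# A real scanner would carry a much larger, maintained pattern set.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"ghp_[A-Za-z0-9]{36}"),
]

# Captures quoted string literals of 8+ characters as candidate tokens.
STRING_LITERAL = re.compile(r"""["']([^"']{8,})["']""")


def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; random-looking tokens score high."""
    if not s:
        return 0.0
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())


def looks_like_secret(token: str, entropy_threshold: float = 4.0) -> bool:
    """Flag a token if it matches a known format or is long and high-entropy."""
    if any(p.search(token) for p in SECRET_PATTERNS):
        return True
    return len(token) >= 20 and shannon_entropy(token) > entropy_threshold


def scan_line(line: str) -> list[str]:
    """Return the candidate secrets found on one line of source."""
    return [tok for tok in STRING_LITERAL.findall(line) if looks_like_secret(tok)]
```

The entropy heuristic inevitably produces false positives (hashes, UUIDs, minified strings), which is fine for this product: the report only needs a credible count with examples, not a blocking gate.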
This is the strongest dimension. No existing tool translates code health signals into a stakeholder-ready business case with ROI projections. SonarQube speaks to developers. CAST is enterprise-priced and slow. CodeScene is behavioral but not financial. The specific gap — 'generate a rewrite business case automatically' — is genuinely unoccupied. The risk is that SonarQube or CodeScene could add this feature as a module faster than a startup can gain traction.
Code health changes over time, so periodic re-scanning has value. Teams could run monthly/quarterly reports to track whether technical debt is growing or shrinking, and to re-justify continued investment. However, the core use case is episodic — you need the rewrite report when you're making the case, not every month. Recurring value depends on positioning as ongoing 'technical debt monitoring with business translation' rather than a one-time report generator. If it's just a report, teams churn after getting what they need.
- +Genuinely unoccupied niche — no tool translates code health into business-case ROI for non-technical stakeholders
- +Intense, emotionally charged pain point with strong organic signal (Reddit threads, blog posts, conference talks about rewrite justification)
- +The buyer persona (platform engineering leads) is well-defined and increasingly has dedicated budget
- +Aggregating signals from multiple domains (secrets, dead code, dependencies, incidents) into one score is a defensible product surface
- +Low-cost MVP possible — start with 1-2 languages and the most impactful signals
- !The ROI projection model must be credible or the product is worse than useless — a bad estimate destroys trust faster than no estimate. This is the make-or-break technical challenge.
- !Willingness to pay is unproven. The person feeling the pain may not have budget authority, and the budget holder may not want a tool that argues for expensive rewrites.
- !Incumbent risk: SonarQube, CodeScene, or GitHub could add a 'business case' report layer as a feature, not a product. They have the code analysis data already.
- !Multi-language support is a long tail problem — each language needs bespoke dead code and complexity analysis, which slows expansion.
- !Episodic use case threatens retention: teams buy it, get their report, convince management, then churn. The 'monitoring' angle needs to be strong enough to retain.
Static code analysis platform that measures technical debt in time-to-fix units, tracking bugs, vulnerabilities, code smells, duplication, and test coverage across 30+ languages.
Behavioral code analysis using version control history to identify hotspots, complexity trends, developer knowledge distribution, and team coupling patterns. Assigns a Code Health score.
Enterprise portfolio analysis platform that scans applications for cloud readiness, open-source risk, technical debt, and software health. Used in M&A due diligence and modernization planning.
Automated code review platform that assigns maintainability grades to repositories.
Application modernization platform that uses runtime analysis to identify architectural domains and boundaries in monolithic applications, helping plan microservices decomposition.
Single-language scanner (Python or TypeScript — highest legacy pain) that connects to a GitHub repo and generates a PDF 'Codebase Health & Rewrite Business Case' report. MVP signals: hardcoded secrets count, dead code percentage, dependency staleness score (avg days behind latest), cyclomatic complexity hotspots, and a simple cost model ('at current incident rate of X/month and estimated engineer-hours spent maintaining, projected annual cost of maintenance is $Y vs. estimated rewrite cost of $Z'). Skip incident integration for MVP — let users manually input incident frequency. The PDF must look executive-ready: charts, red/yellow/green scoring, dollar figures prominently displayed. The output IS the product.
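The cost model in that report is the trust-critical piece, so it should be deliberately simple and auditable rather than clever. A minimal sketch, assuming all inputs (incident rate, hours per incident, maintenance hours, loaded hourly rate, rewrite estimate) are user-supplied; the function name and parameters are hypothetical:

```python
def maintenance_vs_rewrite(
    incidents_per_month: float,
    hours_per_incident: float,
    maintenance_hours_per_month: float,
    loaded_hourly_rate: float,        # fully loaded engineer cost, $/hour
    rewrite_engineer_months: float,
    hours_per_month: float = 160.0,   # assumed working hours per engineer-month
) -> dict:
    """Transparent annual-maintenance vs. one-time-rewrite comparison."""
    incident_cost = incidents_per_month * hours_per_incident * loaded_hourly_rate * 12
    upkeep_cost = maintenance_hours_per_month * loaded_hourly_rate * 12
    annual_maintenance = incident_cost + upkeep_cost
    rewrite_cost = rewrite_engineer_months * hours_per_month * loaded_hourly_rate
    return {
        "annual_maintenance_usd": annual_maintenance,
        "rewrite_cost_usd": rewrite_cost,
        # Years until cumulative maintenance spend exceeds the rewrite cost.
        "breakeven_years": (
            rewrite_cost / annual_maintenance if annual_maintenance else float("inf")
        ),
    }
```

With 4 incidents/month at 6 hours each, 40 maintenance hours/month, a $100/hour loaded rate, and a 12 engineer-month rewrite, annual maintenance comes to $76,800 against a $192,000 rewrite cost: a 2.5-year breakeven. Keeping every term visible like this is what lets a VP audit the numbers instead of dismissing them as made up.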
Free single-repo scan (lead gen, show value immediately) -> $199/mo Team plan (5 repos, monthly re-scans, trend tracking) -> $499/mo Pro plan (unlimited repos, incident API integrations, custom ROI models, Jira/Linear integration for tracking remediation) -> $2K+/mo Enterprise (SSO, on-prem scanning, portfolio-level dashboards, API access, custom reporting). Long-term: consulting-assisted 'Rewrite Readiness Assessment' as a high-touch upsell at $5K-$15K per engagement.
8-12 weeks to MVP, 12-16 weeks to first paying customer. The free scan is the acquisition engine — if the PDF report is compelling, conversion to paid monitoring happens when teams want to track progress post-rewrite-approval. First $1K MRR likely at month 4-5. Path to $10K MRR in 9-12 months if the ROI model resonates and word-of-mouth kicks in among platform engineering communities.
- “hard coded secrets in the frontend”
- “infrastructure goes down all the time”
- “manager was adamant that we don't rewrite software”
- “bleeding money before management approves a rewrite”
- “no department actually wanted to maintain it”