Overall Score: 7.1/10 | Confidence: Medium | Verdict: CONDITIONAL GO

Student Gap Tracker

A longitudinal student progress dashboard that visualizes individual skill gaps across years, not just snapshot test scores.

Category: Education
Audience: School principals, instructional coaches, and district data teams
The Gap

Teachers and principals talk about 'closing gaps' but have no clear visibility into whether individual students are actually progressing on specific skills year-over-year. Current state testing gives aggregate scores, not actionable per-student gap trajectories.

Solution

Platform that ingests state test results, formative assessments, and classroom data to show per-student skill gap trajectories over multiple years. Flags students who are stagnating and recommends targeted interventions based on what has worked for similar profiles.

Revenue Model

SaaS subscription per school ($5K-15K/year per building)

Feasibility Scores
Pain Intensity: 8/10

The pain signals are visceral and systemic. 'A lot of the same kids do about the same every year' is a damning indictment of current tools. Principals and coaches KNOW gaps aren't closing but literally cannot see the trajectory data to prove it or act on it. Every MTSS meeting involves manually cobbling together data from multiple systems. The pain is real, recurring (every assessment window), and has career consequences for principals under accountability pressure. Docking one point because some leaders have normalized the pain and may resist changing workflows.

Market Size: 7/10

~130K K-12 schools in the US. At $5K-15K/building, the addressable market is $650M-$1.95B if every school bought it. Realistically, Title I schools and districts with accountability pressure are the primary buyers — roughly 50K schools, giving a serviceable addressable market of $250M-$750M. That's more than enough for a venture-scale business. District-level sales could push average deal size higher. Docking points because the post-ESSER budget environment makes new tool adoption harder, and you're competing for wallet share against entrenched vendors.

Willingness to Pay: 6/10

$5K-15K/building is in the right range — districts spend $15-40/student across assessment tools, and this would represent $1-3/student as an analytics layer on top. The challenge: districts are consolidating tools post-ESSER, not adding new ones. You need to either replace something or prove ROI so clearly that it justifies incremental spend. Title I and school improvement grant funding can cover this. Willingness to pay increases dramatically if you can show 'schools using this closed X% more gaps' — but you won't have that data at launch. Pilot-friendly pricing ($2-3K for a semester pilot) would be essential.

Technical Feasibility: 5/10

The PRODUCT is technically buildable by a solo dev in 4-8 weeks as an MVP dashboard. The HARD PART is data ingestion. State test data comes in wildly different formats across 50 states and dozens of assessment vendors. There is no universal API for student assessment data — you'll be dealing with CSV exports, SIS integrations (PowerSchool, Infinite Campus, Skyward all have different APIs), and assessment platform APIs (NWEA, Renaissance each have their own). Ed-Fi is an emerging standard but adoption is uneven. A realistic MVP would need to target ONE state's test format + ONE major assessment platform (e.g., MAP Growth in Illinois) and manually handle data import. The ML-based intervention matching is a v2+ feature. FERPA compliance, data security requirements, and district IT approval processes add significant overhead.
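The ingestion problem above boils down to mapping each vendor's export columns into one common record shape before any trajectory logic runs. This is a minimal sketch of that normalization step; the column names in `VENDOR_SCHEMAS` are assumptions for illustration and would need to be confirmed against each platform's actual CSV layout.

```python
import csv
from io import StringIO

# Hypothetical column mappings -- real NWEA / state export headers will
# differ and must be verified against actual files before use.
VENDOR_SCHEMAS = {
    "nwea_map": {"StudentID": "student_id", "Subject": "subject",
                 "Goal": "skill", "RITScore": "score", "TermName": "window"},
    "state_test": {"ssid": "student_id", "content_area": "subject",
                   "reporting_category": "skill", "scale_score": "score",
                   "admin_window": "window"},
}

def normalize(raw_csv: str, vendor: str) -> list[dict]:
    """Map one vendor's CSV export into the common record shape."""
    mapping = VENDOR_SCHEMAS[vendor]
    rows = []
    for row in csv.DictReader(StringIO(raw_csv)):
        rows.append({target: row[source] for source, target in mapping.items()})
    return rows

sample = "StudentID,Subject,Goal,RITScore,TermName\nS1001,Math,Fractions,201,Fall 2023\n"
records = normalize(sample, "nwea_map")
```

Even this simple shape forces the key design decision early: what the canonical skill taxonomy is, since "Goal" strands in MAP and state reporting categories rarely align one-to-one.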

Competition Gap: 8/10

This is the strongest signal. After analyzing all major competitors: NONE provide multi-year individual skill-gap trajectory visualization. Every tool shows point-in-time snapshots or aggregate growth scores, but no one answers 'how has THIS student's gap in fractions evolved from 3rd grade through 6th grade?' NONE do ML-based similar-student intervention matching (Branching Minds comes closest but is rule-based). NONE specifically flag stagnation as distinct from decline. This is a genuine gap in a mature market — rare and valuable. The risk is that Renaissance or NWEA could build this as a feature update, but large incumbents are slow and acquisition-focused.

Recurring Potential: 9/10

Textbook SaaS subscription model. Assessment data is generated 3-6 times per year (fall/winter/spring benchmarks + state tests), creating natural engagement cycles. The longitudinal value proposition INCREASES over time — the more years of data, the more valuable the trajectories. This creates strong retention and switching costs. Annual school/district contracts are the norm in EdTech. District-level deals provide predictable, chunky ARR. Budget cycles are annual and predictable (July 1 fiscal year for most districts).

Strengths
  • +Genuine gap in a mature market — no competitor provides multi-year individual skill-gap trajectories, ML-based intervention matching, or stagnation flagging
  • +Pain is validated by practitioners (the Reddit thread signals are textbook problem-solution fit quotes)
  • +Strong recurring revenue model with increasing value over time (more data = better trajectories = stickier product)
  • +Large addressable market ($250M+ SAM) with established purchasing patterns and budget line items
  • +Consolidation of incumbents into bloated suites creates opening for focused, elegant tools
  • +Accountability pressure on principals/districts creates urgency — this is career-relevant data
Risks
  • !Data ingestion is the make-or-break challenge — 50 states, dozens of assessment vendors, no universal API. This is an integration nightmare that will consume most of your engineering time
  • !Post-ESSER budget cliff means districts are cutting tools, not adding them. Timing is tough for a new EdTech purchase
  • !K-12 sales cycles are brutally long (6-18 months from first contact to signed contract), often requiring board approval, IT security review, and FERPA compliance documentation
  • !FERPA/student data privacy requirements create legal overhead and limit your ability to aggregate cross-district data for ML features
  • !Renaissance or NWEA could ship this as a feature update — you'd be fighting a roadmap item, not a market gap
  • !The founder needs deep K-12 relationships to get past gatekeepers. Cold outreach to districts has abysmal conversion rates
Competition
Renaissance Star Assessments

Computer-adaptive K-12 assessment suite

Pricing: $4-8/student/year for Star alone; $10-25+/student for full suite; district contracts $50K-$500K+
Gap: No multi-year individual skill-gap trajectory visualization — only point-in-time skill snapshots. No ML-based 'students like yours benefited from X' intervention matching. No proactive stagnation flagging — teachers must manually interpret flat growth lines. Platform bloat from acquisitions makes it overwhelming.
NWEA MAP Growth

Gold-standard computer-adaptive interim assessment using the RIT scale

Pricing: $10-15/student/year; a mid-size district (10K students) would pay roughly $100K-150K/year at that rate
Gap: Skill-level granularity over multiple years is limited — Learning Continuum shows current readiness, not how specific skill gaps evolved. No ML-based similar-student intervention recommendations. Growth projections flag 'below expected' but don't specifically detect stagnation on discrete skills. Expensive for smaller districts.
Panorama Education

Student success platform that aggregates academic, behavioral, attendance, and SEL data into a unified view. Best known for social-emotional learning surveys. Includes early warning system for at-risk identification and MTSS workflow management with intervention tracking.

Pricing: $3-6/student/year for core platform; district contracts $30K-$150K
Gap: Not an assessment platform — depends entirely on imported data quality for skill analysis. No native skill-level diagnostic or trajectory view. Intervention recommendations are rule-based (if risk factor X, suggest Y), not ML-driven profile matching. Early warning flags decline but not stagnation specifically.
Branching Minds

Purpose-built MTSS/RTI platform focused on intervention management. Aggregates data from MAP, Star, DIBELS, etc. to identify at-risk students. Features a curated evidence-based intervention library matched to student needs, plus collaborative tools for support teams.

Pricing: $4-7/student/year; district contracts $25K-$100K
Gap: Intervention matching is rule-based (skill area → intervention), NOT ML-driven lookalike modeling ('students with similar profiles saw best results from X'). No multi-year skill gap trajectory visualization — shows current tier and recent progress only. Limited stagnation detection. Smaller company, less brand trust with district buyers.
Curriculum Associates i-Ready

Adaptive diagnostic assessment + personalized instruction platform for Reading and Math. Provides detailed skill-level diagnostics placing students on a developmental continuum, with built-in instructional content.

Pricing: $6-12/student/year for diagnostic + instruction; district contracts vary widely
Gap: Diagnostic snapshots are per-window, not longitudinal skill-gap trajectories across years. No ML-based intervention matching using similar student profiles. Focused on its own ecosystem — poor at integrating external assessment data. No stagnation-specific alerting. Walled garden approach limits flexibility.
MVP Suggestion

Target ONE state (e.g., a state with publicly accessible test score formats like Texas STAAR or Ohio) + ONE major assessment platform (NWEA MAP via their API). Build a dashboard that ingests 3+ years of MAP Growth data and state test results via CSV upload, visualizes per-student skill-gap trajectories on a timeline, and flags students whose specific skill gaps have not improved across 2+ testing windows. Skip the ML intervention matching for v1 — just show the trajectory visualization and stagnation flags. Deploy as a pilot with 3-5 friendly schools (find them through principal networks or EdTech communities). The 'aha moment' is when a principal sees a student who has been in reading intervention for 2 years with zero movement on inferencing skills — that visual alone sells the product.
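The v1 stagnation flag described above can be very simple: a skill is flagged when its score shows no meaningful gain across the last N testing windows. A minimal sketch follows; the tolerance and window thresholds are illustrative placeholders, not values derived from any vendor's growth norms.

```python
# Minimal stagnation check: a skill is "stagnant" if its score has not
# improved by more than a small tolerance in any of the last `windows`
# window-to-window gaps. Thresholds (tolerance=2 points, windows=2) are
# illustrative only.

def flag_stagnant_skills(history, tolerance=2.0, windows=2):
    """history: {skill: [scores ordered oldest -> newest]} for one student.
    Returns skills with no meaningful gain over the last `windows` gaps."""
    stagnant = []
    for skill, scores in history.items():
        if len(scores) < windows + 1:
            continue  # not enough testing windows to judge
        recent = scores[-(windows + 1):]
        gains = [b - a for a, b in zip(recent, recent[1:])]
        if all(g <= tolerance for g in gains):
            stagnant.append(skill)
    return stagnant

student = {"Fractions": [198, 199, 200], "Inferencing": [205, 212, 220]}
flags = flag_stagnant_skills(student)  # -> ["Fractions"]
```

Distinguishing stagnation (flat) from decline (negative gains) is a one-line variation on the same check, which is why shipping it in v1 is realistic.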

Monetization Path

  • Free CSV upload tool for individual teachers (lead gen, builds word of mouth)
  • Paid school license at $3K-5K/year for full dashboard + stagnation alerts (land)
  • District license at $5-15K/building with SIS integration, cross-school views, and admin reporting (expand)
  • Premium tier with ML-powered intervention recommendations once you have enough data to train models ($15-25K/building)
  • Platform play where you become the 'analytics layer' that sits on top of whatever assessment tools a district uses

Time to Revenue

4-6 months to first paid pilot, 9-12 months to first annual contract. Breakdown: 6-8 weeks for MVP build, 4-6 weeks to land 3-5 free pilot schools, 8-12 weeks of pilot usage to generate compelling before/after data, then 4-8 weeks to convert pilots to paid. K-12 budget cycles mean if you miss the spring purchasing window (Feb-May), you wait until the following year. Target launching pilots in fall so you have data for spring budget conversations.

What people are saying
  • a lot of kids that are in basically the same place as they were in August
  • A lot of the same kids do about the same every year
  • I wonder if we have hit soft ceiling of our school
  • plenty of programs, placements, and meetings about struggling kids. It all seems to go nowhere in terms of outcomes