Designing an 11+ Learning Platform: What a Serious Pre-Build Architecture Looks Like
We were asked to build an adaptive learning platform for 11+ exam preparation. Before writing a single line of code, we ran a structured discovery process that produced 22 specification documents across 10 phases. This article describes that process and what it produced.
To be clear about what this is: a design blueprint. The platform has not been built yet. No code exists. No database is running. What does exist is a complete architectural specification, a competitive analysis of 11 platforms, a detailed cost model, and a question-level taxonomy of the client's existing teaching materials. This is what serious pre-build preparation looks like.
Why We Did Discovery Before Writing Code
The client is a UK teacher and tutor with a distinctive pedagogical approach to 11+ maths preparation. They wanted to scale from one-to-one tutoring to a digital platform. The ambition was clear: compete with established platforms like Atom Learning (SoftBank-backed, 75,000+ users) at a lower price point with better personalisation.
This is a serious build. The estimated MVP cost sits between £35,000 and £55,000. At that investment level, you don't start by writing React components. You start by understanding the domain, analysing the competition, studying the client's existing materials, and designing the system architecture before a developer opens their IDE.
That's what we did. The discovery process took the client's brief, a 52-question mock exam, and an institutional document standard, and turned them into a build-ready specification.
The 22-Document Output
The discovery process produced 22 documents across 10 phases. Each phase builds on the previous one. Nothing is speculative. Every design decision traces back to a source: the client's brief, the mock exam analysis, the competitive research, or the domain mapping.
Phase 1: Source Intake
Factual extraction from three primary sources: the client's brief, the mock exam, and the institutional document standard. We logged 13 assumptions requiring client validation and 15 unknowns requiring resolution, kept separate from established facts. This distinction matters because it tells the client exactly where we're guessing and where we need their input.
Phase 2: Domain Research
A complete map of the UK 11+ landscape. Exam boards (GL Assessment, CEM, CSSE, ISEB), regional variations, pass rates, qualifying thresholds, typical preparation timelines. Roughly 100,000 children sit the 11+ each year. The UK tutoring market is valued at approximately £2 billion and growing at 11.8% annually for online provision. This context shaped every subsequent design decision.
Phase 3: Competitive Analysis
We analysed 11 competitors across three tiers. The top tier includes Atom Learning, Bond Online, and Explore Learning. The mid tier includes CGP, Kumon, and several others. We mapped features, pricing, content volume, and target audience for each.
The analysis identified 10 specific market gaps. No competitor combines teacher-branded personalisation with meaningful gamification and affordable family pricing. Most platforms use multiple-choice only (no free-response assessment). Gamification is uniformly weak. Per-child pricing makes existing platforms expensive for families with multiple children.
Phase 4: Competitor Teardown
We conducted a deep teardown of the market leader. Company profile, publicly observable technology indicators, UX analysis, pedagogy loop, and feature inventory. Critically, we maintained a strict observed-versus-inferred separation. Every claim is tagged as either directly observable (public website, published materials) or inferred (based on behaviour, job postings, or technical indicators). We don't present guesses as facts.
Phase 5: Mock Exam Taxonomy
The client provided a 52-question mock exam. We classified every question by topic, question type, difficulty band, and image dependency. From this classification, we identified 15+ question families: groups of questions that share a structural pattern but differ in their specific parameters.
We also reverse-engineered the teacher's pedagogical style: naming conventions, contextual framing (real-world scenarios), and difficulty progression patterns. This analysis is the basis for the question generation system: the platform would produce unlimited variants of each question family while preserving the teacher's voice.
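To make "question family" concrete, here is a minimal sketch in TypeScript. The family, parameter ranges, and wording below are an invented example of ours, not material from the client's mock exam; the point is that the structure and phrasing stay fixed while the numbers vary.

```typescript
// Illustrative only: one hypothetical question family expressed as a
// parameterised template. Structure and phrasing stay fixed; numbers vary.

interface QuestionVariant {
  family: string;
  difficulty: 1 | 2 | 3;
  prompt: string;
  answer: number; // change due, in pounds
}

function generateMoneyChangeVariant(difficulty: 1 | 2 | 3): QuestionVariant {
  // Easier variants use round amounts; harder ones use awkward pence values.
  const step = difficulty === 1 ? 0.5 : difficulty === 2 ? 0.25 : 0.05;
  const price = Math.round((1 + Math.random() * 8) / step) * step;
  const paid = Math.floor(price) + 1; // pay with the next whole pound
  return {
    family: "money-change",
    difficulty,
    prompt: `Amira buys a book for £${price.toFixed(2)} and pays with £${paid}.00. How much change should she receive?`,
    answer: Math.round((paid - price) * 100) / 100,
  };
}
```

A real implementation would draw the contexts, names, and number ranges from the teacher's materials so generated variants keep their voice; this sketch only shows the template-plus-parameters shape.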
Phase 6: Learning Engine Design
The learning engine specification covers four components:
- A 30-question diagnostic assessment that establishes a baseline mastery score per topic.
- A mastery tracking model using a rolling 30-question window.
- A next-best-question targeting algorithm that selects the highest-impact question based on current weaknesses.
- A weakness detection system that distinguishes careless errors from conceptual gaps.
All of this is rules-based, not AI-driven. The adaptive algorithm uses deterministic logic. AI is reserved for content generation only (question variants and lesson plan drafts), and every AI-generated item requires teacher approval before publication.
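As a sketch of what that deterministic logic could look like: the rolling 30-question window comes from the specification, but the thresholds, topic weighting, and error-classification rule below are placeholder assumptions of ours, not the final design.

```typescript
// Illustrative sketch of the rules-based engine: no ML, just deterministic rules.

interface Attempt {
  topic: string;
  correct: boolean;
  timeTakenSeconds: number;
}

const WINDOW = 30; // rolling window size, per the specification

// Mastery per topic = proportion correct over the last 30 attempts in that topic.
function masteryScore(attempts: Attempt[], topic: string): number {
  const recent = attempts.filter(a => a.topic === topic).slice(-WINDOW);
  if (recent.length === 0) return 0; // untested topics count as weakest
  return recent.filter(a => a.correct).length / recent.length;
}

// Next-best-question targeting: pick the topic where extra practice has the
// highest impact, i.e. low mastery weighted by how heavily the exam tests it.
function nextBestTopic(
  attempts: Attempt[],
  examWeights: Record<string, number>, // placeholder per-topic weighting
): string {
  return Object.entries(examWeights)
    .map(([topic, weight]) => ({ topic, impact: (1 - masteryScore(attempts, topic)) * weight }))
    .sort((a, b) => b.impact - a.impact)[0].topic;
}

// Weakness detection: a quick slip on a topic the student usually gets right
// looks careless; anything else is treated as a conceptual gap. The thresholds
// (15 seconds, 0.7 mastery) are illustrative.
function classifyError(attempt: Attempt, topicMastery: number): "careless" | "conceptual" {
  if (attempt.correct) throw new Error("only call on incorrect attempts");
  return attempt.timeTakenSeconds < 15 && topicMastery > 0.7 ? "careless" : "conceptual";
}
```

Because every rule is explicit, the teacher can inspect, tune, and override the behaviour, which is much harder to guarantee with a learned model.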
Phases 7 to 10: Product, Architecture, Commercial, Executive
The remaining phases produced:
- A product requirements baseline with user stories.
- A gamification framework: XP, levels, streaks, and badges, with anti-pattern analysis and teacher override controls.
- A full technical architecture: 15+ PostgreSQL tables, 13 n8n workflows, and Supabase auth with RBAC (sketched below).
- A cost model, pricing strategy, and revenue projections.
- A risk register.
- An executive summary with a recommended roadmap.
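As one small illustration of how Supabase auth and RBAC would sit in front of those tables, here is a hedged sketch using the supabase-js client. The table and column names (`topic_mastery`, `mastery_score`, `child_id`) and the role values are placeholders we invented for this example, not the specified schema.

```typescript
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);

// Parents and teachers may read a child's mastery data; students may not read
// other children's rows. Row Level Security policies on the table would
// enforce the same rule server-side; this client-side check is only a sketch.
async function childProgress(childId: string) {
  const { data: { user }, error: authError } = await supabase.auth.getUser();
  if (authError || !user) throw new Error('not signed in');

  const role = (user.app_metadata as { role?: string }).role ?? 'student';
  if (role !== 'parent' && role !== 'teacher') throw new Error('forbidden');

  const { data, error } = await supabase
    .from('topic_mastery')                      // hypothetical table name
    .select('topic, mastery_score, updated_at') // hypothetical columns
    .eq('child_id', childId);

  if (error) throw error;
  return data;
}
```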
The Cost Model
We modelled three build scenarios. The lean MVP comes in at £35,000 to £55,000 for a 14- to 18-week build. Monthly operating costs at launch (0 to 200 students) sit between £75 and £200, covering Supabase, S3, n8n hosting, email, Stripe fees, and AI API usage.
The economics are strong. At 50 paying students (£35/month average), monthly revenue is roughly £1,750 against £100 in costs: a 94% margin. At 200 students, it's £7,000 against £175. Break-even on the lean MVP is achievable within 8 to 10 months at modest student volume.
These are estimates, not actuals. The build hasn't started. But the model gives the client a realistic picture of the investment required and the economics of the business at different scales.
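For readers who want to check the margin figures, the arithmetic takes a few lines (the inputs are the cost-model estimates above):

```typescript
// Gross margin = (revenue - operating costs) / revenue, using the estimates above.
const margin = (revenue: number, costs: number) => (revenue - costs) / revenue;

console.log(margin(50 * 35, 100));   // ≈ 0.94 at 50 students on £35/month
console.log(margin(200 * 35, 175));  // ≈ 0.98 at 200 students
```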
What This Process Produces
The output of a structured discovery process isn't a slide deck. It's a specification that a developer can build from. The client gets:
- A clear understanding of the competitive landscape and their positioning within it.
- A technical architecture they can evaluate, challenge, and approve before any code is written.
- A cost model that separates build cost from operating cost and shows margins at different scales.
- A prioritised roadmap that distinguishes MVP features from Phase 2 and Phase 3 additions.
- An assumptions register and unknowns register that make the remaining decisions explicit.
This is de-risking. The client can decide whether to proceed with the build based on evidence, not on a sales pitch. If they decide to build, the developer has a specification that answers the structural questions before they arise.
This discovery methodology is part of what makes a custom command suite different from a typical agency build. We don't start coding on day one. We start by understanding the problem thoroughly enough to build the right system, not just a system.
Planning a Build?
Start with a free workflow audit. We'll assess whether your project needs a full discovery process and what that would look like.
Book Your Free Workflow Audit →