The 25% rule. 100% their voice.
The first apprenticeship platform built around a multi-agent AI team that hints before it answers — preserving the learner's authentic voice from baseline to End-Point Assessment.
Twenty-five per cent. Not a guideline — a guardrail.
Our Human Learning Engine caps AI synthesis at a quarter of any submission. The remaining seventy-five per cent is the learner — captured at baseline, signed at every keystroke, and sealed at submission.
Baseline
Voiceprint captured in week one.
Live cap
AI assistance metered as the learner writes.
Provenance
C2PA-style signature on every submission.
Capped at the engine level — not a setting
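How might an engine-level cap like this be metered as the learner types? A minimal sketch, in TypeScript, under loose assumptions — the interface, function names, and character-counting approach here are illustrative only, not the production Human Learning Engine:

```typescript
// Hypothetical live AI-assist meter. Tracks characters that originated
// from AI suggestions versus the learner's own keystrokes, and refuses
// any suggestion that would push the draft past the 25% cap.
const AI_CAP = 0.25; // the engine-level quarter-of-submission cap

interface DraftState {
  learnerChars: number; // characters typed by the learner
  aiChars: number;      // characters accepted from AI suggestions
}

function aiAssistDensity(draft: DraftState): number {
  const total = draft.learnerChars + draft.aiChars;
  return total === 0 ? 0 : draft.aiChars / total;
}

// Returns true only if accepting `suggestion` keeps the draft at or under the cap.
function canAcceptSuggestion(draft: DraftState, suggestion: string): boolean {
  const projected: DraftState = {
    learnerChars: draft.learnerChars,
    aiChars: draft.aiChars + suggestion.length,
  };
  return aiAssistDensity(projected) <= AI_CAP;
}
```

For example, a draft with 750 learner characters and 150 AI characters sits at 150/900 ≈ 0.17 — under the cap — but accepting a further 200-character suggestion would project 350/1100 ≈ 0.32, so the meter blocks it.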
Five agents. One philosophy.
Each agent is a discrete edge function with a distinct mandate. Together they form the Human Learning Engine.
Reginald
The Socratic Mentor
Hints before answers. Reginald asks the question that unlocks the learner's next thought — and never writes the sentence for them.
Leonard
Evidence Architect
Maps every captured moment to the Level 7 Senior Leader KSB framework. Builds the EPA portfolio while the learner does the work.
Margaret
Authenticity Auditor
C2PA-aligned signatures on every submission. Margaret compares the learner's words to the baseline voice captured at enrolment and flags drift before an assessor ever sees it.
Dorothy
Standards Cartographer
Holds the Level 7 Senior Leader KSB framework in working memory. Dorothy keeps the cohort's progress map honest — no orphan evidence, no double-counted hours.
Lenny
Critical Friend
Refuses surface-level reflection. Lenny pushes the learner one layer deeper — the question they didn't want to be asked, asked kindly.
Gateway-ready, calculated continuously.
A composite score derived from KSB coverage, evidence density, voice authenticity, and assessor sign-offs. Hover any component to reveal its weighting.
Recalculated live each time a learner submits new evidence. Gateway recommended at 80+.
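A weighted composite like this can be sketched in a few lines of TypeScript. The four component names come from the description above; the weights below are illustrative assumptions, not the platform's published weighting:

```typescript
// Illustrative gateway-readiness composite. Each component is normalised
// to 0..1; the output is a 0..100 score. Weights are assumed for this
// sketch — the real weightings are revealed per-component in the dashboard.
interface ReadinessComponents {
  ksbCoverage: number;       // share of KSBs with mapped evidence
  evidenceDensity: number;   // normalised evidence volume
  voiceAuthenticity: number; // voiceprint match against baseline
  assessorSignoffs: number;  // share of required sign-offs complete
}

const WEIGHTS = {
  ksbCoverage: 0.35,
  evidenceDensity: 0.2,
  voiceAuthenticity: 0.25,
  assessorSignoffs: 0.2,
}; // assumed weights; must sum to 1

function readinessScore(c: ReadinessComponents): number {
  const raw =
    c.ksbCoverage * WEIGHTS.ksbCoverage +
    c.evidenceDensity * WEIGHTS.evidenceDensity +
    c.voiceAuthenticity * WEIGHTS.voiceAuthenticity +
    c.assessorSignoffs * WEIGHTS.assessorSignoffs;
  return Math.round(raw * 100); // gateway recommended at 80+
}
```

Under these assumed weights, a learner at 90% KSB coverage, 0.8 evidence density, 0.94 voiceprint match, and 70% sign-offs would score 85 — past the 80+ gateway threshold.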
Signed at the keystroke. Sealed at submission.
Every interaction is cryptographically signed using a C2PA-aligned scheme. Margaret compares each submission against the baseline voiceprint captured at enrolment — anomalies are surfaced, not hidden.
Captured during the enrolment task. Becomes the reference signal for every future submission.
{
"claim_generator": "cohort-360/1.0",
"signature_alg": "ed25519",
"voiceprint_match": 0.94,
"ai_assist_density": 0.18,
"ksb_tags": ["K3","S2","B2"],
"sealed_at": "2026-04-12T09:14:22Z"
}
Visible to assessors. Tamper-evident. Portable to the EPAO.
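The manifest's `signature_alg` field names ed25519, for which Node's built-in `crypto` module provides sign and verify primitives. A minimal sketch of how a detached signature makes a manifest like this tamper-evident — the key handling and canonicalisation here are illustrative assumptions, not the platform's actual C2PA-aligned scheme:

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// In practice the platform would hold the private key; assessors and the
// EPAO would only need the public key to check a manifest.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// The manifest is serialised to bytes before signing (a real scheme would
// use a canonical serialisation so byte order is deterministic).
const manifest = JSON.stringify({
  claim_generator: "cohort-360/1.0",
  signature_alg: "ed25519",
  voiceprint_match: 0.94,
  ai_assist_density: 0.18,
  ksb_tags: ["K3", "S2", "B2"],
  sealed_at: "2026-04-12T09:14:22Z",
});

// ed25519 signs the message directly, so the digest argument is null.
const signature = sign(null, Buffer.from(manifest), privateKey);

// Tamper-evidence: verification succeeds on the sealed bytes…
const ok = verify(null, Buffer.from(manifest), publicKey, signature);

// …and fails if even one field is altered after sealing.
const tampered = verify(
  null,
  Buffer.from(manifest.replace("0.18", "0.10")),
  publicKey,
  signature,
);
```

Here `ok` is `true` and `tampered` is `false`: changing the recorded `ai_assist_density` after sealing breaks verification, which is what makes the manifest safe to hand to an assessor or export to the EPAO.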
Outcomes the platform is designed to deliver.
Our pilot with Canterbury Christ Church University begins September 2026. These are the design targets the engine is built around — we'll publish measured results as the cohort reports.
Off-the-job hours captured ambiently from the work already happening.
Design target — pilot opens 2026
Voice-drift and KSB gaps surfaced weeks before gateway, not at it.
Design target — pilot opens 2026
Provenance manifests exportable on demand for every submission.
Design target — pilot opens 2026
Ready to see it in motion?
Twenty minutes. A live walkthrough tailored to your cohort. We bring the dashboard, you bring the questions.
