Last Tuesday, my youngest came home from school and casually mentioned that “the computer marked my English essay today.” He said it like it was completely normal, the same way he’d tell me what he had for lunch. I nearly choked on my tea. Not because I was horrified, but because nobody had told me. No letter home, no parents’ evening mention, no email. The school had quietly rolled out an AI marking tool, and my nine-year-old knew about it before I did.
I spent that evening doing what any self-respecting tech dad would do. I went down a rabbit hole. And what I found was genuinely surprising. Not alarming, necessarily, but eye-opening. AI tools are being adopted across UK and US schools at a pace that most parents simply aren’t aware of. Some of these tools are brilliant. Some raise legitimate questions. And almost none of them are being communicated to families in a way that feels transparent.
So I’ve put together the guide I wish someone had handed me at the school gates. Consider this your practical, no-panic briefing on what’s actually happening with AI in schools in 2026, what data is being collected about your kids, and the questions you should be asking at the next parents’ evening.
The AI tools schools are actually using right now
Let’s start with the landscape, because it’s broader than most parents realise. AI in education isn’t some future concept being trialled in a handful of Silicon Valley schools. It’s here, now, in mainstream state schools across the UK and public schools across the US.
The most common category is AI-assisted marking and feedback. Tools like Century Tech, Sparx Maths, and newer platforms like Graide are being used to mark everything from maths homework to short-answer English responses. The pitch is straightforward: teachers are drowning in workload, and AI can handle routine marking faster, freeing them up to actually teach. In many cases, the AI doesn’t just score the work. It provides personalised feedback, flagging specific areas where a student is struggling.
Then there are adaptive learning platforms. These use AI to adjust the difficulty and content of lessons in real time based on how a student performs. If your child breezes through fractions but stumbles on percentages, the platform serves up more percentage practice automatically. Oak National Academy in the UK has been integrating AI features, and platforms like Khan Academy’s Khanmigo (powered by GPT-4o) are gaining traction in US classrooms.
Classroom chatbots are the newest entrant. Some schools are experimenting with AI assistants that students can query during lessons, essentially a teaching assistant that never gets tired and is available to every child simultaneously. These range from simple Q&A bots to more sophisticated tools that can walk a student through a problem step by step.
Finally, there’s the less visible category: administrative AI. Schools are using AI to predict attendance issues, flag safeguarding concerns, identify students at risk of falling behind, and even help with timetabling. This stuff rarely makes the newsletter, but it’s arguably where the biggest data questions sit.
What data is being collected (and who has it)
This is where most parents’ ears perk up, and rightly so. When your child interacts with an AI learning platform, data is being generated. Sometimes a lot of it.
At a basic level, these platforms collect performance data: what questions your child got right, what they got wrong, how long they spent on each task, and how their performance trends over time. That’s fairly standard educational data, and most parents would consider it reasonable.
But some platforms go further. They may collect behavioural data like how often a child pauses, whether they change answers, what time of day they’re most engaged, and how they interact with the interface. More advanced tools use natural language processing to analyse written responses, which means the AI is reading and interpreting your child’s writing in ways that go beyond simple spelling and grammar checks.
In the UK, schools must comply with the UK GDPR and the Data Protection Act 2018. Any AI tool processing children’s data should have a Data Protection Impact Assessment (DPIA) completed, and the school should be able to tell you what data is collected, where it’s stored, and who can access it. The Information Commissioner’s Office (ICO) has published guidance specifically on AI in education, and it’s worth a read if you want the formal picture.
In the US, the landscape is patchier. FERPA (Family Educational Rights and Privacy Act) and COPPA (Children’s Online Privacy Protection Act) provide baseline protections, but enforcement varies by state. California tends to lead with stricter rules, while other states lag behind. The key question is always: is the data staying within the school’s ecosystem, or is it being shared with (or retained by) the third-party platform?
Here’s a practical table comparing some of the most common AI education tools and what we know about their data practices:
| Tool | Type | Primary Market | Data Collected | Data Stored Where | Parent Dashboard |
|---|---|---|---|---|---|
| Century Tech | Adaptive learning | UK | Performance, time on task, learning gaps | UK servers | Yes |
| Sparx Maths | Adaptive maths | UK | Performance, completion rates, time data | UK servers | Limited |
| Khanmigo (Khan Academy) | AI tutor/chatbot | US (growing in UK) | Conversations, performance, engagement | US servers | Yes |
| Graide | AI marking | UK | Written responses, grades, feedback | UK servers | No (teacher-facing) |
| MagicSchool AI | Teacher toolkit | US | Varies (primarily teacher-side) | US servers | No |
| Cognii | AI tutoring | US | Written responses, dialogue data | US servers | No |
The honest truth is that many schools have adopted these tools faster than their privacy documentation has kept up. That’s not necessarily malicious. It’s a resourcing problem. But it does mean parents need to be proactive.
The questions you should be asking teachers
I’m not suggesting you storm into school waving a GDPR printout. But there are some perfectly reasonable, non-confrontational questions that every parent should feel comfortable asking. I’ve road-tested these at my own kids’ school, and the responses were actually quite reassuring once I got past the initial “nobody told me” irritation.
1. “What AI tools is the school currently using, and for what purposes?” Simple, direct, and hard to dodge. You might be surprised how many tools are in play. Ask for a list.
2. “Has a Data Protection Impact Assessment been completed for each tool?” In the UK, this is a legal requirement for high-risk processing, and AI tools handling children’s data almost certainly qualify. In the US, ask whether the tool complies with FERPA and COPPA.
3. “Where is my child’s data stored, and is it shared with third parties?” This is the big one. Some platforms use anonymised data to train their models, which means your child’s work could be contributing to the AI’s development. Whether you’re comfortable with that is a personal call, but you should at least know about it.
4. “Can I see what data has been collected on my child?” Under UK GDPR, you have a right to a Subject Access Request. In the US, FERPA gives parents the right to inspect education records. Exercise these rights if you’re curious.
5. “How are teachers being trained to use these tools?” This matters more than people think. An AI marking tool is only as useful as the teacher who reviews its output. You want to know that teachers are checking the AI’s work, not blindly accepting it.
6. “Is there an opt-out option?” Some schools offer this, some don’t. It’s worth asking, even if you decide not to opt out. The existence of an opt-out tells you something about how thoughtfully the school has considered parental choice.
Staying informed without losing sleep
Here’s my honest take as a dad who works in tech and has three kids in the system: most of what’s happening with AI in schools is positive, or at the very least, well-intentioned. Teachers are stretched impossibly thin. If an AI tool can mark 30 maths homeworks in five minutes and give each child personalised feedback, that’s genuinely a good thing. My kids’ teachers are brilliant, but they’re human, and there are only so many hours in a day.
The risks are real but manageable. The biggest concern isn’t some dystopian scenario where AI replaces teachers. It’s the more mundane risk of data creep, where small amounts of data from multiple platforms accumulate into a surprisingly detailed profile of your child, their strengths, weaknesses, behaviours, and learning patterns. That profile has value, and not everyone who might want access to it has your child’s best interests at heart.
The other concern worth flagging is algorithmic bias. AI models are trained on data, and if that training data isn’t representative, the AI’s assessments can be skewed. There have been documented cases of AI tools marking students from certain backgrounds more harshly, or adaptive platforms funnelling certain demographics towards less challenging content. Schools should be asking vendors about bias testing, and parents should be asking schools.
But I want to be clear: the answer here is not to panic or to demand schools abandon AI altogether. The answer is to stay informed, ask good questions, and push for transparency. Most schools will welcome engaged parents. They’re navigating this for the first time too.
Hype Cycle check
Let’s run this through the reality filter:
LIKELY TO LAST: AI-assisted marking and adaptive learning platforms. The workload crisis in teaching is real, and these tools deliver measurable time savings. Expect them to become standard infrastructure within three to five years.
WATCH CLOSELY: AI chatbots as teaching assistants. The technology works, but the pedagogical evidence is still catching up. Early results from Khanmigo trials are promising, but we need more data on long-term learning outcomes, not just engagement metrics.
VAPOURWARE RISK: Fully autonomous AI tutors that replace human teaching. Despite what some edtech startups claim, the “AI teacher” that handles everything from lesson planning to pastoral care is nowhere near ready. Teaching is fundamentally a human relationship, and AI is a tool within that relationship, not a replacement for it.
What this means for CES 2027
Education technology has been a growing presence at CES, and I’d expect AI in education to get significant floor space at CES 2027. We’ll likely see new hardware designed specifically for AI-assisted learning: think tablets with built-in AI tutoring features and classroom displays with real-time adaptive content.
The bigger story will be around interoperability. Right now, schools are juggling multiple AI platforms that don’t talk to each other. Any company that cracks the unified AI education dashboard, giving teachers and parents a single view across all tools, will have a massive advantage. I’ll be watching for announcements from the major players like Google for Education and Microsoft Education, as well as ambitious newcomers.
Privacy will also be a headline theme. With the EU AI Act now in full effect and the UK’s own AI regulatory framework maturing, CES 2027 will be the first major tech show where “compliant by design” is a genuine selling point rather than an afterthought.
What to watch
1. The UK Department for Education’s AI framework. A comprehensive set of guidelines for AI use in schools is expected to be finalised by late 2026. This will set the standard for what tools schools can adopt and how they must handle data.
2. Khanmigo’s UK expansion. Khan Academy has been making noise about a broader UK rollout. If it lands in state schools at scale, it could be the most significant AI education deployment the UK has seen.
3. State-level AI education laws in the US. At least a dozen US states are expected to introduce or update legislation around AI in schools by the end of 2026. Watch California and New York for the pace-setters.
4. Parent coalition movements. Groups like Defend Digital Me in the UK and the Parent Coalition for Student Privacy in the US are increasingly influential. Their advocacy is shaping policy, and they’re excellent resources for staying informed.
If this kind of practical, no-nonsense tech parenting coverage is useful to you, I send out a weekly newsletter covering the stuff that actually matters for families navigating technology. No hype, no jargon, just honest analysis from a dad who’s living it. Sign up at https://techdadslife.beehiiv.com/ and I’ll see you in your inbox.
