The Human Standard: Why We Need Teachers, Not Just Algorithms, in the Classroom

We are being sold a powerful dream: a world where every child has a private, 1-on-1 tutor—a personalized AI that knows their every strength and weakness. It sounds like the ultimate educational equalizer, promising to unlock individual potential like never before. Indeed, AI offers incredible tools for practice, training, and adaptive reinforcement—areas where it can excel in supporting learning.

But our focus today is on the core act of teaching and foundational instruction, where human connection, verifiable truth, and shared context are paramount. There is a "ghost in the code" that we aren't talking about when we consider AI in this central pedagogical role.

As we rush to replace the "crowded classroom" with the "personalized screen" for primary instruction, we are trading a proven, human-led community for a black-box technology that is inconsistent, unmeasurable, and dangerously prone to hallucinations.

It's time to argue for "The Human Standard": the idea that our children are better off in a vibrant classroom with one amazing teacher for fundamental learning, than they are alone with a machine that can't tell the difference between a fact and a fabrication.

1. The Hallucination Hazard: When the Tutor is a Confident Liar

The greatest danger of the AI tutor isn't merely that it makes mistakes; it's that it makes mistakes with the unwavering confidence of an expert. When a human teacher doesn't know an answer, they say, "I'll look that up." When an AI doesn't know, it often "hallucinates"—it generates a plausible-sounding lie, delivered with authority.

In a classroom, if a teacher makes a mistake, thirty pairs of eyes are watching, and a vetted curriculum is there to catch it. But in the vacuum of 1-on-1 AI tutoring for core instruction, there is no "peer review." If the machine tells a child that the moon is made of cheese or that a historical event happened in the wrong century, that child has no reason to doubt it. We are effectively handing our children a "source of truth" that has no inherent relationship with the truth. This is how a "knowledge virus" spreads undetected.

Consider these scenarios, which are not hypothetical:

  • The Fictitious History: Researchers have found AI tools fabricating entire historical events. To a 12-year-old, it sounds like a fun fact; to an educator, it's a virus in the knowledge base.
  • The Phantom Citations: AI often invents books and academic papers to support its claims. One major newspaper recently discovered a "Summer Reading List" generated by AI where 10 out of 15 books did not exist, despite having realistic summaries.
  • The Manufactured Biography: An AI might confidently generate details about a historical figure's life, adding fictional anecdotes. For instance, a student researching Abraham Lincoln might encounter a detailed story about his secret hobby as a beekeeper—without external verification, this fabrication becomes "fact."

Critical to understand: even the largest language models cannot guarantee consistent, hallucination-free delivery of foundational concepts. Too often, AI startups rebrand this fundamental flaw as a "customized experience": the very instability that makes a model unreliable for primary instruction is marketed as adaptive personalization.

2. The Standardization Crisis: The "Black Box" Problem

Education isn't just about absorbing data; it's about establishing a shared foundation. It is a public trust. We have standards (like Common Core or IB) to ensure that every child, regardless of their background, receives a verified "baseline" of truth and knowledge. This is crucial for pedagogical integrity in the core curriculum.

✓ The Human Teacher:

Follows a standardized, transparent, and legally responsible curriculum. We know what they are teaching, and we can measure their performance against a collective standard.

✗ The AI Tutor:

Is a "black box." Every child's AI might explain a concept differently, omit crucial context, or introduce personal biases hidden in its training data.

How can we be a cohesive society if every child is being raised on a "personalized" version of reality that hasn't been vetted by a human board of ethics or education? We cannot standardize "responsibility" in a machine that changes its logic every time it's updated.

This leads to the problem of Unmeasurable Performance. How do you truly grade an AI's instructional impact? Because the AI generates dynamic, session-specific content, its quality cannot be reliably monitored. If the AI simplifies a concept so much that the student gets an "A" but hasn't actually mastered the underlying logic, the system looks like it's working. This is "simulated success." A human teacher can see the "glassy-eyed" look of a student who is mimicking an answer versus one who truly understands.

The Accountability Void: A human teacher who teaches something incorrect operates within a clear framework of oversight and remediation. But when an AI "hallucinates" a biased perspective, liability evaporates. The developer claims the model is only a tool, the school claims it did not author the content, and the AI itself cannot be held responsible. This black-box diffusion of blame makes meaningful accountability nearly impossible.

3. Support, Improvement, and Shared Context

This accountability contrast becomes even sharper when considering how educational systems support and improve human teachers. When a teacher struggles, schools can intervene constructively: mentoring can be offered, training adjusted, observations shared, and contextual knowledge about a student—learning differences, personal challenges, prior progress—can be responsibly communicated across teachers, counsellors, and administrators.

This shared, human-centered understanding allows educators to refine their approach and continuously improve outcomes for the student. With AI models, such coordination is largely impossible. Models do not meaningfully "learn" from institutional feedback, cannot participate in reflective improvement, and cannot safely share student-specific information across systems without colliding with data privacy constraints.

4. The Power of the "Crowd": Why Large Classrooms Are a Feature, Not a Bug

There is a myth that "large classrooms" (15-30 students) are a bug in the system, a failure to be engineered away. They aren't. They are a feature: a social laboratory essential for human development, a vital safety mechanism, and a crucible for growth.

  • Distributed Intelligence: In a room of 30, the "group mind" acts as a natural filter for errors. Students learn from each other's questions, mistakes, and insights, creating a more robust learning environment.
  • The Social Laboratory: The "friction" of other people—learning to wait your turn, hearing a peer's perspective, navigating different personalities—is precisely where social-emotional skills are built. A 1-on-1 AI tutor is a sterile environment. It's "frictionless," which means it's also "growth-less."
  • The Human Standard: A teacher doesn't just deliver content; they deliver context and character. They humanize data, providing a moral and social framework that a machine can only ever simulate. They represent a standard of adulthood that inspires and guides beyond mere information transfer.

An amazing teacher can move an entire room with a single story, sparking curiosity and fostering a love for learning that extends far beyond the curriculum. A machine, no matter how advanced, lacks a "soul."

5. What To Do? The Path Forward for Education Leaders

Still, if education leaders choose to integrate AI into core instruction regardless, there is one critical recommendation: stick with the major infrastructure providers only—such as Google and OpenAI.

Why? Only these companies have the resources required to be the first to achieve a reliable and safe model for educational use:

  • Scale of operations – Billions of users and interactions enabling real-time identification and correction of issues
  • Access to data – Massive datasets to train more accurate and reliable models
  • Ability to control model behavior – Safety and ethics teams at a scale that a private startup simply cannot recruit
  • Public accountability – High visibility that subjects them to intense press and regulatory scrutiny

Private startups, however promising they may be, lack the ability to guarantee the consistency, accuracy, and reliability required for foundational instruction. The bet on our children's future should be placed on players who can stand behind their promise.

The Hidden Costs: A Comparison

Here's a look at the trade-offs we're making when AI takes the lead in teaching:

  • Accuracy: The teacher offers vetted, accountable, peer-reviewed knowledge; the AI tutor is prone to hallucinations and confident errors.
  • Consistency: The teacher follows a standardized, stable curriculum; the AI tutor is inconsistent and changes with every update.
  • Measurement: The teacher's performance is transparent and holistic; the AI tutor is a "black box" that yields simulated success.
  • Social Growth: The large class's high social friction builds essential skills; the AI tutor's zero friction creates social isolation.
  • Accountability: The teacher operates within clear lines of responsibility; the AI tutor sits in an accountability void of diffused blame.
  • Inspiration: The teacher is authentically human and aspirational; the AI tutor is simulated and lacks genuine connection.

Note: The perspectives and concerns raised in this article reflect the state of AI in education as of December 2025. It is important to acknowledge that AI models, particularly the leading ones, are on a trajectory toward increased maturity and stability in core teaching functions. The coming years are poised to witness significant advancements that will undoubtedly redefine the educational landscape.

The promise of personalized AI tutoring is alluring, but we must look beyond the glossy marketing. While AI has a powerful role to play in supporting learning through practice and reinforcement, "The Human Standard" demands that we prioritize the verified, accountable, and deeply human elements of core instruction.

Our children deserve nothing less than the truth, and a truly human education.

Let's keep the conversation going 💬

I'd love to hear your take on this—whether you see things differently or if this aligns with your own experience. If you're reflecting on what to do now with these ideas or wondering how they might look in your specific situation, let's talk about it.

I'm always happy to trade thoughts or brainstorm how this applies to your world.

✉️ Drop me a note: [email protected]