AI Education

The Risks of AI in Schools Outweigh the Benefits, Report Says (npr.org)

This month saw results from a yearlong global study of the "potential negative risks that generative AI poses to students". The study (by the Brookings Institution's Center for Universal Education) also suggests how to prevent those risks and maximize the benefits: After interviews, focus groups, and consultations with over 500 students, teachers, parents, education leaders, and technologists across 50 countries, a close review of over 400 studies, and a Delphi panel, we find that at this point in its trajectory, the risks of utilizing generative AI in children's education overshadow its benefits.
"At the top of Brookings' list of risks is the negative effect AI can have on children's cognitive growth," reports NPR — "how they learn new skills and perceive and solve problems." The report describes a kind of doom loop of AI dependence, where students increasingly off-load their own thinking onto the technology, leading to the kind of cognitive decline or atrophy more commonly associated with aging brains... As one student told the researchers, "It's easy. You don't need to (use) your brain." The report offers a surfeit of evidence to suggest that students who use generative AI are already seeing declines in content knowledge, critical thinking and even creativity. And this could have enormous consequences if these young people grow into adults without learning to think critically...

Survey responses revealed deep concern that use of AI, particularly chatbots, "is undermining students' emotional well-being, including their ability to form relationships, recover from setbacks, and maintain mental health," the report says. One of the many problems with kids' overuse of AI is that the technology is inherently sycophantic — it has been designed to reinforce users' beliefs... Rebecca Winthrop, one of the report's authors and a senior fellow at Brookings, offers an example of a child interacting with a chatbot, "complaining about your parents and saying, 'They want me to wash the dishes — this is so annoying. I hate my parents.' The chatbot will likely say, 'You're right. You're misunderstood. I'm so sorry. I understand you.' Versus a friend who would say, 'Dude, I wash the dishes all the time in my house. I don't know what you're complaining about. That's normal.' That right there is the problem."

AI does have some advantages, the article points out: The report says another benefit of AI is that it allows teachers to automate some tasks: "generating parent emails ... translating materials, creating worksheets, rubrics, quizzes, and lesson plans" — and more. The report cites multiple research studies that found important time-saving benefits for teachers, including one U.S. study that found that teachers who use AI save an average of nearly six hours a week, and about six weeks over the course of a full school year...

AI can also help make classrooms more accessible for students with a wide range of learning disabilities, including dyslexia. But "AI can massively increase existing divides" too, warns Winthrop. That's because the free AI tools that are most accessible to students and schools can also be the least reliable and least factually accurate... "[T]his is the first time in ed-tech history that schools will have to pay more for more accurate information. And that really hurts schools without a lot of resources."

The report calls for more research — and makes several recommendations (including "holistic" learning and "AI tools that teach, not tell"). But this may be its most important recommendation: "Provide a clear vision for ethical AI use that centers human agency..."

"We find that AI has the potential to benefit or hinder students, depending on how it is used."
Comments:
  • Inevitable (Score:3, Insightful)

    by Anonymous Coward on Sunday January 25, 2026 @09:01AM (#65947800)

    As long as schools are narrowly focused on just results, I believe this will become the norm for human thinking. When these children graduate, they'll continue to have access to the tools that allow them to get the results they need for life. Their way of operating in the world will become ubiquitous and, sadly, most people won't care. It all seems so dystopian to me.

    • Re: (Score:3, Interesting)

      by test321 ( 8891681 )

      It's inevitable that it will be used in adult life, but school still teaches subjects that have been superseded by digital tools. We have had calculators for over 50 years, but we still teach hand and mental calculation. Same with handwriting, pencil and paint art, foreign languages.
      We must ensure that AI can be used in some classes, but not all. I see that schools still rely on paper and pencil, and that phones are being banned from classrooms, so many countries are on the right trajectory.

    • by cowdung ( 702933 )

      I've always said that one has to earn the right to automation.

      Once you understand the material well, then by all means automate. But don't make the tools a crutch in life.

  • Idiocracy (Score:5, Interesting)

    by LazLong ( 757 ) on Sunday January 25, 2026 @09:50AM (#65947822)

    Maybe AI is how Idiocracy truly comes about?

    • by Anonymous Coward
      Probably just sleep mode.
    • by gweihir ( 88907 )

      That would be very unsurprising.

    • Maybe AI is how Idiocracy truly comes about?

      I think what we need is (a conceptual model of) two modes of personal knowledge.

      One mode is your personal area of expertise. You could be a web app programmer, or biomedical researcher, or welder, or plumber, or whatever. You have all the knowledge you need to participate in your field without help.

      The other side is "everything else". You use AI to get you through the tasks you need to accomplish, because it's too difficult or onerous to go and read the documentation for everything.

      For example, just yesterday I

      • I mostly agree. The problem with the second mode is how LLMs present themselves, both in how they're marketed and in how they deliver responses. They are wildly incorrect at times, and they will happily state an incorrect answer exactly the same way as any other. Without domain knowledge, you won't catch it. As long as what you're using the LLM for is inconsequential, no worries, but they're being marketed as perfect for anything, and that's a bad assumption to go in with.
    • Maybe AI is how Idiocracy truly comes about?

      I'm pretty sure that was social media's mindfuck-sucking job.

      As if it really took much more than that to start global IQ on the decline...

    • Holy shit, you're right. It all makes sense now. It will only take a few generations and we're there.

      Step 1 - Invent LLMs
      Step 2 - Stop teaching how to think in schools
      Step 3 - Nobody knows how to maintain datacenters needed for LLMs
      Step 4 - Everything slowly decays around us and we don't know how to fix things
      Step 5 - Irrigation with salt water begins

  • Wanna bet that most schools will do it wrong and just use it to do more crappy teaching cheaper?

    • Yep. I'm wondering how something that's 3 years old already has "experts". Who are these experts? How did they become experts? I have an idea! I'm an expert. Pay me money and I'll tell you how to do things. When there is a demand for something, someone will show up to supply it. Funny how that works :-)
  • I already get lost in my own back yard without Google Maps, a result of my own failure to learn how to navigate and my complete dependence on an app. I'm very curious about what handicaps constant AI use will create in future generations.
  • A lot of scut work can definitely be done by AI, but you should not consider it more trustworthy than an intern. It should not be trusted with:

    1. Any legal work not reviewed thoroughly by a real human lawyer. This includes criminal and civil.
    2. Any child care.
    3. Anyone's money.
    4. Human health.
    5. The life of any animal you care about.
  • If you want to protect children from AI, you're going to have to educate them about how AI fails. Not accidentally, by letting them use it and hoping they experience those failures themselves (they might not run into them, or might not understand them), but with directed, age-appropriate education about how it works. Kids need to understand that getting into the van with the candy also has other consequences.

  • They were at the point where pen and paper were just being complemented with computers and the early internet, but they need to teach Gen Beta (that's what we're up to now) the ways of the pen, paper, and book, and not just some chatbot talking out its clanker ass. When moving on to computers, they should be on content properly peer-reviewed by expert humans, not whatever Silicon Valley dreams up. We had the same issue with people making things up on Wikipedia and Tumblr; now we just have another source of idiocy.
  • by joshuark ( 6549270 ) on Sunday January 25, 2026 @12:34PM (#65948018)

    And now for something completely different (allusion to Monty Python)... NVidia, Anthropic, OpenAI, etc. will now fund a study to release a report about how AI is a great asset in schools, etc. The benefits of AI/ML are reminiscent of the studies about coffee in the 1980s and 1990s. First coffee is bad for you, drink decaf. Then decaf is bad for you, drink coffee, and the band plays on...

    --JoshK.

  • New report says Coke tastes better than Pepsi.

  • by El Fantasmo ( 1057616 ) on Sunday January 25, 2026 @02:44PM (#65948252)

    From what I've read, nearly all end-user AI stories come down to two main things:
    1. AI is meant to replace a certain type of previously human-only task.
    2. AI supplements a knowledge worker's efficiency.

    To focus on point 2: you can't be truly efficient with AI if you are not knowledgeable enough in the source material to cut through poor or incomplete AI output or to write proper iterative prompts. Without that, you're likely spending more time cleaning up after AI and doing actual research/data gathering when you probably could have done it yourself from the outset.

  • China is doing the opposite: it's mandating AI in class. Their approach looks balanced. As I keep saying about AI: "The right tool for the right job". From Gemini:

    1. The National AI Curriculum (K-12)
    Starting in late 2025, China mandated AI literacy for all students from first grade through university. The curriculum is tiered based on cognitive development:
    - Primary School: Focuses on "AI Literacy." Students are introduced to basic concepts like voice recognition and image classification through interactive
    • by cowdung ( 702933 )

      You can't effectively monitor whether something was generated by AI.

      Also, I'm sure Chinese education is very exam-heavy, so it's less susceptible to AI cheating. But the sad part is that there are then fewer project-based evaluations.

  • Obviously there is an AI bubble that will burst, and the state will then be able to put less money into the education of the next generation.

  • Another way of looking at AI is that some students will use AI chatbots as a tutor to stimulate their own thinking and learning. Smart students use all the resources they have to obtain new ideas, generate questions, and then augment their personal understanding. These resources could be teachers, textbooks, classmates, etc. Now AI chatbots can be added to the set of resources.

    Then there are students that will copy and paste (or slightly modify) AI output for assignments and projects, bypassing personal understanding.

"Ask not what A Group of Employees can do for you. But ask what can All Employees do for A Group of Employees." -- Mike Dennison
