The Risks of AI in Schools Outweigh the Benefits, Report Says (npr.org)
This month saw results from a yearlong global study of "potential negative risks that generative AI poses to students". The study (by the Brookings Institution's Center for Universal Education) also suggests how to prevent risks and maximize benefits:
After interviews, focus groups, and consultations with over 500 students, teachers, parents, education leaders, and technologists across 50 countries, a close review of over 400 studies, and a Delphi panel, we find that at this point in its trajectory, the risks of utilizing generative AI in children's education overshadow its benefits.
"At the top of Brookings' list of risks is the negative effect AI can have on children's cognitive growth," reports NPR — "how they learn new skills and perceive and solve problems." The report describes a kind of doom loop of AI dependence, where students increasingly off-load their own thinking onto the technology, leading to the kind of cognitive decline or atrophy more commonly associated with aging brains... As one student told the researchers, "It's easy. You don't need to (use) your brain." The report offers a surfeit of evidence to suggest that students who use generative AI are already seeing declines in content knowledge, critical thinking and even creativity. And this could have enormous consequences if these young people grow into adults without learning to think critically...
Survey responses revealed deep concern that use of AI, particularly chatbots, "is undermining students' emotional well-being, including their ability to form relationships, recover from setbacks, and maintain mental health," the report says. One of the many problems with kids' overuse of AI is that the technology is inherently sycophantic — it has been designed to reinforce users' beliefs... Winthrop offers an example of a child interacting with a chatbot, "complaining about your parents and saying, 'They want me to wash the dishes — this is so annoying. I hate my parents.' The chatbot will likely say, 'You're right. You're misunderstood. I'm so sorry. I understand you.' Versus a friend who would say, 'Dude, I wash the dishes all the time in my house. I don't know what you're complaining about. That's normal.' That right there is the problem."
AI did have some advantages, the article points out: The report says another benefit of AI is that it allows teachers to automate some tasks: "generating parent emails ... translating materials, creating worksheets, rubrics, quizzes, and lesson plans" — and more. The report cites multiple research studies that found important time-saving benefits for teachers, including one U.S. study that found that teachers who use AI save an average of nearly six hours a week and about six weeks over the course of a full school year...
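The report's two figures are consistent with each other. A quick sanity check of the arithmetic, assuming a roughly 38-week US school year and a 40-hour working week (my assumptions, not figures from the report):

```python
# Check: ~6 hours saved per week should add up to roughly six
# working weeks over a full school year.
hours_saved_per_week = 6
school_year_weeks = 38      # assumed length of a US school year
working_week_hours = 40     # assumed full-time working week

total_hours_saved = hours_saved_per_week * school_year_weeks  # 228 hours
weeks_equivalent = total_hours_saved / working_week_hours     # 5.7 weeks
print(f"{total_hours_saved} hours saved, about {weeks_equivalent:.1f} working weeks")
```

Under those assumptions the savings come to about 5.7 working weeks, which matches the report's "about six weeks" figure.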
AI can also help make classrooms more accessible for students with a wide range of learning disabilities, including dyslexia. But "AI can massively increase existing divides" too, [warns Rebecca Winthrop, one of the report's authors and a senior fellow at Brookings]. That's because the free AI tools that are most accessible to students and schools can also be the least reliable and least factually accurate... "[T]his is the first time in ed-tech history that schools will have to pay more for more accurate information. And that really hurts schools without a lot of resources."
The report calls for more research and makes several recommendations (including "holistic" learning and "AI tools that teach, not tell"). But this may be its most important recommendation: "Provide a clear vision for ethical AI use that centers human agency..."
"We find that AI has the potential to benefit or hinder students, depending on how it is used."
Inevitable (Score:3, Insightful)
As long as schools are narrowly focused on results, I believe this will become the norm for human thinking. When these children graduate, they'll continue to have access to the tools that let them get the results they need for life. Their way of operating in the world will become ubiquitous and, sadly, most people won't care. It all seems so dystopian to me.
Re: (Score:3, Interesting)
It's inevitable that it will be used in adult life, but school still teaches subjects that have been replaced by digital tools. We have had calculators for over 50 years, but we still teach hand and mental calculation. Same with handwriting, pencil and paint art, foreign languages.
We must ensure that AI can be used in some classes, but not all. I see that schools still rely on paper and pencil, and that phones are being banned from classrooms, so many countries are on the right trajectory.
Re: (Score:2)
I've always said that one has to earn the right to automation.
Once you understand the material well, then by all means automate. But don't make the tools a crutch for life.
Idiocracy (Score:5, Interesting)
Maybe AI is how Idiocracy truly comes about?
Re: (Score:3)
That would be very unsurprising.
Two modes (Score:2)
Maybe AI is how Idiocracy truly comes about?
I think what we need is (a conceptual model of) two modes of personal knowledge.
One mode is your personal area of expertise. You could be a web app programmer, or biomedical researcher, or welder, or plumber, or whatever. You have all the knowledge you need to participate in your field without help.
The other side is "everything else". You use AI to get you by the tasks you need to accomplish, because it's too difficult or onerous to go and read the documentation for everything.
For example, just yesterday I
Re: (Score:2)
Maybe AI is how Idiocracy truly comes about?
I'm pretty sure that was social media's mindfuck-sucking job.
As if it really took much more than that to start global IQ on the decline...
Re: (Score:2)
Holy shit, you're right. It all makes sense now. It will only take a few generations and we're there.
Step 1 - Invent LLMs
Step 2 - Stop teaching how to think in schools
Step 3 - Nobody knows how to maintain datacenters needed for LLMs
Step 4 - Everything slowly decays around us and we don't know how to fix things
Step 5 - Irrigation with salt water begins
Such a surprise (Score:2, Troll)
Wanna bet that most schools will do it wrong and just use it to do more crappy teaching cheaper?
Re: (Score:3, Insightful)
It is not about large companies or not. It is about students versus educators, and how to deal with omnipresent tools for cheating that are hard to detect and that reliably solve homework-level exercises. Either you learn the game or you lose; the decision whether to play has already been made by the students.
We could discuss how much we dislike it, but that's pointless, so we discuss the fact that it is there, that it is used, and that we cannot reliably prevent it, and talk about how to be
Re: AI will come to education (Score:2)
AC is fundamentally correct here.
One main thing to realize is that these things are real tools available across society *today*. They are not going away. There is no job where they will not be available when these students graduate. They are not cheats. Treating them as such is doing the students a disservice. Yes it is hard, because we don't yet know how (and what) to teach well in the presence of these tools, but really *that* is the thing to solve for here, not how to 'catch', or even discourage use
Re: AI will come to education (Score:2)
How do you use ChatGPT for a real essay where you need to cite sources for all your work? My kid is in the biological sciences and I asked him about using AI in the field. He said he can't really, because AI doesn't ever show the work, and that's the point of a publication. And the studies are to find out things that aren't known yet, but AI can only work with what is known. It may be able to interpolate between known things, but it can never go beyond them.
Re: (Score:2)
Sadly, what it means is going backwards from project-based evaluations to exam-based evaluations.
This isn't great. But I don't know how else to solve the problem.
AI/LLM is the equivalent of an Intern (Score:2)
A lot of scut work can definitely be done by AI, but you should not consider it more trustworthy than an intern. It should not be trusted with:
Kids should be learning about how it's bad (Score:5, Insightful)
If you want to protect children from AI, you're going to have to educate them about how AI fails. Not accidentally, by letting them use it and experience those failures themselves, because they might not run into them or might not understand them, but with directed, age-appropriate education about how it works. Kids need to understand that getting into the van with the candy also has other consequences.
Millennial teachers should know better (Score:2)
Now for... (Score:3)
Now for something entirely different (an allusion to Monty Python)... NVidia, Anthropic, OpenAI, etc. will now fund a study to release a report about how AI is a great asset in schools, etc. The benefits of AI/ML are reminiscent of the studies about coffee in the 1980s and 1990s. First it was bad for you, drink de-caf. Then de-caf was bad for you, drink regular coffee, and the band plays on...
--JoshK.
report says (Score:2)
New report says Coke tastes better than Pepsi.
AI + Knowledge (Score:3)
From what I've read, nearly all end-user AI stories come down to 2 main things.
1. AI is meant to replace a certain type of previously human only task
2. AI supplements a knowledge worker's efficiency
To focus on point 2, you can't be truly efficient with AI if you are not knowledgeable enough in the source material to cut through poor or incomplete AI output, or to write proper iterative prompts. Without that, you're likely spending more time cleaning up after AI and doing the actual research and data gathering, when you could probably have done it yourself from the outset.
Meanwhile, far away... (Score:1)
1. The National AI Curriculum (K-12)
Starting in late 2025, China mandated AI literacy for all students from first grade through university. The curriculum is tiered based on cognitive development:
- Primary School: Focuses on "AI Literacy." Students are introduced to basic concepts like voice recognition and image classification through interactive
Re: (Score:2)
you can't effectively monitor if something was generated by AI.
also, I'm sure Chinese education is very exam-heavy, so it's less susceptible to AI cheating. But the sad part is that there are then fewer project-based evaluations.
AI financial bubble (Score:2)
Obviously there is an AI bubble that will burst, and the state will then be able to put less money into educating the next generation.
Another viewpoint (Score:2)
Another way of looking at AI is that some students will use AI chatbots as a tutor to stimulate their own thinking and learning. Smart students use all the resources they have to obtain new ideas, generate questions, and augment their personal understanding. These resources could be teachers, textbooks, classmates, etc. Now AI chatbots can be added to that set of resources.
Then there are students that will copy and paste (or slightly modify) AI output for assignments and projects, bypassing personal
Re: Another viewpoint (Score:2)
That's like saying sitting in front of a TV can promote thinking.