I know this is unpopular as hell, but I believe that LLMs have the potential to do more good than bad for learning, as long as you don’t use them for critical things. So no health-related questions, or questions that are totally unacceptable to get wrong.
The ability to learn about most subjects in a really short time from a “private tutor” makes it an effective, but flawed, tool.
Let’s say it gets historical facts wrong 10% of the time: is the world better off if people learn a lot more, even with some errors here and there? Most people seem to know almost no history at all.
Currently people know very little about critical topics that are important to a society. This ignorance is politically and societally very damaging, maybe a lot more damaging than the source being 10% wrong. If you ask it about social issues, you get more empathetic answers and views than in the mainstream political discourse. “Criminals are criminals for societal reasons”, “Human rights are important”, etc.
Yes, I know the truth can be manipulated, so the model has to be neutral, which some LLMs probably aren’t or won’t be.
Am I totally crazy for thinking this?
It’s not only that 10% is wrong; it’s knowing which 10% is wrong, which is more important than it seems at first glance. I feel strongly that AI is contributing to people’s inability to really perceive reality. If you direct all your learning through a machine that lies 10% of the time, soon enough your entire world-view will be on shaky ground. How are you going to have a debate with someone when you don’t know which parts of your knowledge are true? Will you automatically concede your knowledge to others, who may be more convincing and less careful about repeating what they’ve learned through AI?
I think all that AI really needs to do is translate natural language requests (“What factors led to WW2?”) into normal destinations for further learning. Letting AI try to summarize those destinations seems like a bad idea (at least where the technology is right now).
AI is a great tool for pissing off people who hate AI. That’s reason enough for me.
AI is a good learning tool only for Wimp Lo, who is taught wrong on purpose, as a joke.
If people bring critical thinking skills to the table, AI is great! But “if” is doing some heavy lifting.
Don’t know about other countries and generations, but as an American GenX dude, I only had two teachers through the end of high school that made the effort to teach critical thinking. I’d go as far as to say that lack is driving ALL of our political and societal issues.
You would be bugfuck crazy to agree with me on everything, but FFS, most people can’t argue a point with facts. For that matter they can’t tell the difference between a fact and an opinion, and that’s where AI stands to hurt us.
OTOH, AI can’t be worse than social media. For all the bitching about AI around here, LLMs get facts right far more often than whatever Facebook and Twitter are feeding us. I doubt the average lemming has really delved into those sites lately, me included! But it’s hard to overstate how poisonous they are to the average citizen.
Look at Grok. LOL, that thing spits facts against its own creator, and he’s in total control!
I don’t think this is an unpopular opinion, and that’s what scares me.
AI is wrong far more often than just 10% of the time, and it’s wrong in ways that can be very easy to miss even if you are knowledgeable about a subject, because the whole design of it is to produce textually coherent responses, responses that score well on whatever metric it has for quality.
And the underlying mechanics of LLM design are such that it literally cannot have any metric for true vs false, only metrics for how “related” the chunks of sentences/words/etc are.
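To make that concrete, here’s a toy sketch (nothing like a real LLM under the hood, just an illustration of the idea): a “model” built from nothing but word co-occurrence counts rates a true continuation and a false one as equally likely, because truth never enters the calculation anywhere.

```python
# Toy illustration only, not a real LLM: a bigram "model" scores continuations
# purely by how often words followed each other in its training text.
# Nothing in the scoring distinguishes a true statement from a false one.
from collections import Counter

# Tiny "training corpus" containing a correct date and a wrong one.
corpus = [
    "the western roman empire fell in 476",
    "the western roman empire fell in 1066",  # wrong, but phrased identically
]

# Count bigrams: how often each word follows another.
bigrams = Counter()
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[(prev, nxt)] += 1

def next_word_probs(prev):
    """Probability of each next word, given only the previous word."""
    candidates = {nxt: c for (p, nxt), c in bigrams.items() if p == prev}
    total = sum(candidates.values())
    return {w: c / total for w, c in candidates.items()}

# After "in", the true and false dates are equally "coherent" to the model.
print(next_word_probs("in"))  # {'476': 0.5, '1066': 0.5}
```

Real LLMs are enormously more sophisticated than a bigram counter, but the training objective is the same kind of thing: reward plausible continuations, not verified facts.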
There is a benefit to being able to search and ask questions using natural language, like when you don’t know the actual terms you’re looking for, or when it’s the type of thing where search terms might not capture the full nuance (try searching Google for info on moving things from on-premise Exchange to Exchange Online with other caveats to consider. Even with the right terms, you get results for entirely different contexts).
The fact that it generates responses, rather than going “these pages seem related”, is where I think everyone overlooks the danger.
I’ve used Microsoft Copilot for assistance with removing the last server from my workplace’s hybrid Exchange deployment while we still had objects being synced from AD to Azure. Everything I asked about has official Microsoft documentation in the training data. I set up my system/starting prompts very carefully. I was just looking for a way to ask questions about this shit in natural language rather than having to play find-the-needle-in-the-haystack in the Microsoft docs. Especially as I didn’t know certain terms or where to begin looking into some of the issues we encountered (and Google/DDG/etc weren’t fruitful because what I was looking for was worded so similarly to completely unrelated shit). I often wasn’t even asking the high-level, generic “solve it all for me” shit that could arguably be entirely unique to my corporate environment. I asked it very specific things like “Which properties synced from On-Premise Active Directory to Azure Entra ID are required for mailbox generation in Exchange Online on a licensed account?”
This would be one of the places I’d expect it to shine, with minimal issues.
I think I got two whole answers out of it that were useful as is without any errors. Out of probably 30 queries. Everything else needed multiple rounds of follow up and cross checking with other sources.
But they were all very distinctly errors that I would never have caught if I weren’t the fucking one-man army behind my workplace’s Active Directory and Exchange user lifecycle automations. I’m intimately familiar with a lot of this shit, just not the specifics of what makes a hybrid Exchange setup tick.
If one of my junior team members had been on this project I shudder to think of how many man hours would have been wasted putting out fires due to the incorrect information.
There’s a theory/“law”/rule or something like that (Gell-Mann amnesia, I think) about how people could read a newspaper article about their field of study or work and be annoyed/astounded/shocked at just how much info it got incorrect, then move on to a different article about something they weren’t familiar with and take it at face value.
The issues with LLM AI chatbots are not new. But they have been intensified to an absurd and hard to quantify degree due to the magic of technology.
All while economic incentives and greed drive unsustainable amounts of money and resources into them for results that are laughably bad.
Outside of those economic and ecological impacts (including how it’s being used to devalue all sorts of work), I don’t think AI spells doom for us all (the ecological impact might, though). But I do feel that its use, its overutilization, and the complete lack of anyone pumping the brakes (besides people who stand to benefit from the proliferation doing constant criti-hyping) to critically evaluate its effects and best use is going to make a lot of societal and education problems a hell of a lot worse.
I don’t think that getting a summary overview of something and learning it are exactly the same thing.
The famous example is Rome and how it fell.
If you ask for a basic summary you’ll get 100s of different answers, if not 1000s, because that’s how many factors were in play over the extremely long course of the fall of Rome. Skipping over those details only leads to… well… skipping over details…
Even if AI were 100% perfect and always right, there’s no way a summary can substitute for actually learning something.
Can’t you just continue digging to learn more? Follow-up questions, as you would with a private tutor?
If someone gave you an incorrect starting line, digging further can’t get you to the correct ending.
I think the problem is having time to learn. Almost everything I do is results-driven. I think AI as collaboration is good for getting results and learning. It’s just like any collaboration: you don’t assume the other person is right about everything, but you discuss. If people treat it as an oracle or guru, then they will get themselves into trouble.
As a pedagogue:
So the problem is the inherent, larger picture of learning. When you sit in school, the teacher, classroom, and system are in general designed to teach you the basic skills of life.
For that to work, it kinda needs to be trusted and transparent, not to mention critical thinking, which is supposed to be part of all subjects.
AI can’t think. AI can’t hold a story together for long without dropping the point.
LLMs could potentially be useful in engaging students in specific materials before a lesson - especially for critical thinking.
I’d say it’s in that regard more important to use it when it gets things wrong: start a class with “AI work” and let the kids work on a subject you’ll tell them the details of afterwards. The problem is of course that not all kids will listen to you, and some will still mislearn from the bot.
TL;DR: yes, AI can potentially be good, but we already know they’re bad - especially the reduction in learning in kids currently exposed to them. So no, LLMs will be mostly bad for learning.
Teachers are sometimes wrong.
Teachers sometimes have personal issues (divorce, bills, etc) and consciously or not take it out on the kids.
Teachers have finite time and patience and can’t divine the individual learning styles of all 20+ kids in the classroom.
Teachers should be looking at using AI tools to save their jobs the exact same way software engineers are right now.
Teachers already are, and kids are already seeing drastic reductions in the quality of material they are being given to work with.
You’re right that these are not new problems, but they are now happening at far greater scale/pace.
Generally I agree that it can be an incredible tool for learning, but a big problem is one needs a baseline ability to think critically, or to understand when new information may be flawed. That often means having at least a little bit of existing knowledge about a particular subject. For younger people with less education and life experience, that can be really difficult if not impossible.
The 10% of information that’s incorrect could be really critical or contextually important. Also (anecdotally) it’s often way more than 10%, or that 10% is distributed such that 9 out of 10 prompts are flawless, and the 10th is 100% nonsense.
And then you have people out there creating AI chat bots with the sole intention of spreading disinformation, or more commonly, with the intention of keeping people engaged or even emotionally dependent on their service — information accuracy often isn’t the priority.
The rate at which AI-generated content is populating the internet is increasing exponentially, and that’s where most LLM training data comes from currently, so it’s hard to see how the accuracy problem improves going forward.
All that said, like most things, when AI is used in moderation by responsible people, it’s a fantastic tool. Unfortunately, the people in charge are incentivized to be unscrupulous and irresponsible, and we live in a decadent society that doesn’t exactly promote moderation, to massively understate things…
(yeah, I used an em-dash, you wanna fight bro? 😘)
Good point. As an adult who grew up long before LLMs and social media, I feel that it’s an incredible tool; I just don’t trust it fully. Critical thinking and fact-checking are a reflex at this point, though I must admit I don’t always fact-check unless something seems shocking or unexpected to me. The accuracy problem is something I doubt they can fix short term.