I know this is unpopular as hell, but I believe that LLMs have the potential to do more good than bad for learning, as long as you don’t use them for critical things. So no health-related questions, or any question where being wrong is totally unacceptable.

The ability to learn about most subjects in a really short time from a “private tutor” makes it an effective, but flawed, tool.

Let’s say it gets historical facts wrong 10% of the time. Is the world better off if people learn a lot more, even with some errors here and there? Most people seem to know almost no history at all.

Currently, people know very little about critical topics that are important to a society. This ignorance is politically and societally very damaging, maybe far more damaging than a source that’s 10% wrong. And if you ask an LLM about social issues, you get more empathetic answers and views than in mainstream political discourse: “criminals are criminals for societal reasons”, “human rights are important”, etc.

Yes, I know the truth can be manipulated, so the model has to be neutral, which some LLMs probably aren’t or never will be.

Am I totally crazy for thinking this?

  • HubertManne@piefed.social · 1 day ago

    I think the problem is having time to learn. Almost everything I do is results-driven. I think AI as collaboration is good for getting results and learning. It’s just like any collaboration: you don’t assume the other person is right about everything, but you discuss. If people treat it as an oracle or guru, then they will get themselves into trouble.