This was the weirdest thing I’ve seen today. These are only the ones I’ve spotted.

Funnily enough, these bots are also replying to an obvious repost from another bot account. It’s at the top right now! Beautiful.

https://www.reddit.com/r/goodnews/comments/1p8dt2a/_/

tipping points:

  1. consuming so much AI content has led to me being able to see subtle patterns
  2. they’re all saying “exactly” and repeating the same thing
  3. their usernames are similar: flower/nature related, two words, no profile pictures
  4. all of their profiles have the exact same comment format: agreement, then summary
  5. and they all have porn on their profile. oh

edit: tf?

  • tangonov@lemmy.ca · ↑83 ↓1 · 2 days ago

    You’re absolutely right! Those posts do have many indicators of having been written by an AI. You’re doing a great job finding these comments

    • TheObviousSolution@lemmy.ca · ↑37 · 2 days ago

      that is completely correct! it’s awe—inspiring to see this level of rigorous investigation. that sort of attention to detail gives me hope that something can be done about this problem. this post fills me with hope for humanity

      • theMoops@lemmings.world · ↑19 ↓1 · 2 days ago

        Totally feel this. It’s kinda wild how refreshing it is to see someone actually dig in instead of just shouting hot takes into the void. When people put in real effort — like, actual research, receipts, context, the whole thing — it reminds you that not everyone is just doomscrolling and giving up.

        Honestly, posts like this are the rare moments where you remember, “Oh right, humans can be competent and thoughtful.” Gives me a tiny spark of optimism I didn’t expect today.

        More of this energy, please.

          • PlaidBaron@lemmy.world · ↑21 · 2 days ago

            To make a grilled cheese you need the following ingredients:

            • Bread
            • Cheese
            • Butter

            Step 1: Melt butter in a pan.

            Step 2: Place the bread on the butter. Soak the butter in.

            Step 3: Add the cheese slice to the bread.

            Step 4: Put the bread on top of the cheese.

            Step 5: Grill the sandwich on both sides until golden brown.

            Some good additions to grilled cheese are tomatoes, ham, or uranium-238.

            • tangonov@lemmy.ca · ↑12 · edited · 1 day ago

              Absolutely! Of all of the grilled cheese recipes out there this is by far one of the recipes out there. A little bit of cheese can really make your day better. Just like that time that Mankind faced off against The Undertaker the WWF pay per view special “Hell in a Cell”, 1998. Mankind climbed to the top of the 16 foot cage and taunted the Undertaker to wrestle him up high, only to end up getting thrown into the commentator tables below. Everyone thought the match was over but just before Mankind was taken out in a stretcher he got up and ran back for more. I’ll never forget how The Undertaker choke slammed him through the top of the cage and onto the thumb tacks below. It just really gives me hope for humanity.

  • Treczoks@lemmy.world · ↑7 · 1 day ago

    Thirtyish years ago, we played a multiplayer online game called “LPMud”. There were three talking NPCs in the game: Harry, basically a simple programming example of a talking, reacting NPC; Sir Obliterator, a dark knight with a more advanced vocabulary and a few talking points about a quest; and Eliza, basically an NPC with an Eliza engine.

    Usually, they never met. Harry “lived” in the core area of the game, Sir Obliterator in or around the quest area to which he belonged, and Eliza was normally not even active.

    Some wizard had summoned them all to the guild hall, the entrance area of the game for fun, and they were rather busy “talking” with each other.

    They were annoying, but also hilarious…

    • ...m...@ttrpg.network · ↑2 · 1 day ago

      …i spent many hours in the darker realms donning the afro, jive ring, and few other similar text-parsing items simultaneously to wildly comedic effect…

    • 1995ToyotaCorolla@lemmy.world · ↑3 · 1 day ago

      It’ll be so nice when the bots can post for all of us on the internet! It’ll give us plenty of free time to spend in the mines

  • BluesF@lemmy.world · ↑11 · 2 days ago

    Reddit has so rapidly descended into nothing but bots. Especially on certain subs, for some reason… Even some quite niche ones just seem to be bots talking to each other. Fortunately the only sub I really want to keep my reddit account for is mostly safe, but even there we had some issues.

    In some cases I get what’s happening - bot post featuring some kind of obscure product, then buried in the comments you find the bot replies letting people know (apparently organically) where they can buy it - but in other cases like this it just seems pointless. I suppose the idea is to make the profiles seem natural, but they’re almost all private anyway.

  • tym@lemmy.world · ↑12 ↓1 · 2 days ago

    Where LLM though? All I see in this screenshot is a plane full of essential oils saleswomen from Utah…

  • Echo Dot@feddit.uk · ↑12 · edited · 2 days ago

    Someone’s told the last AI not to use capital letters because someone somewhere thinks that makes it look more human. Forgetting, of course, that autocorrect would automatically capitalise most of those words anyway, so it just looks suspicious.

    • Jankatarch@lemmy.world · ↑5 · 2 days ago

      If enough of them make this mistake, they will start training on it too.

      If the money doesn’t run out by then, there will come a point where 99% of reddit comments are chatbots and, thanks to the incest-data, everyone can tell which ones at a glance.

      And no amount of scrape-training can happen after that.

  • mrgoosmoos@lemmy.ca · ↑1 · 1 day ago

    this is probably the only way to get a usable account on reddit nowadays, since you need to trawl through shit accumulating karma somehow before they even let you post a comment

    or at least that was my experience when trying to make a new account on a completely fresh device + IP to try and do some troubleshooting. it was unsuccessful.

    • Xylight@feddit.online (OP) · ↑28 · 2 days ago

      No, when it comes to LLMs there’s hardly any “dead giveaways” now. You have to learn to recognize the patterns.

      Omitting the final punctuation is quite a common thing people do; in fact, you did in your comment. It’s probably just a part of the system prompt.

    • grepe@lemmy.world · ↑7 · 2 days ago

      i don’t think i (or perhaps anyone) can recognize any single particular comment as being llm generated… but when the bots come in force it is still really easy. basically it boils down to this: many replies keep reiterating the same exact points in slightly different ways with the same exact keywords. if you would use chatgpt to summarize each response you’d get basically the same thing from all bot replies.
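      as a very rough sketch (python, made-up function names, not any real moderation tool), that “same points, same keywords” test could look like:

```python
# Treat each reply as a bag of words and measure overlap. Hypothetical
# helper names; the thresholds are guesses, not tuned values.

def keyword_set(text: str) -> set[str]:
    """Lowercase word set, ignoring short filler words."""
    return {w.strip(".,!?") for w in text.lower().split() if len(w) > 3}

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two keyword sets: 0 = disjoint, 1 = identical."""
    return len(a & b) / len(a | b) if a | b else 0.0

def looks_like_swarm(replies: list[str], threshold: float = 0.5) -> bool:
    """Flag a thread if most reply pairs share most of their keywords."""
    sets = [keyword_set(r) for r in replies]
    pairs = [(i, j) for i in range(len(sets)) for j in range(i + 1, len(sets))]
    if not pairs:
        return False
    similar = sum(1 for i, j in pairs if jaccard(sets[i], sets[j]) >= threshold)
    return similar / len(pairs) >= 0.5
```

      real detectors would use embeddings instead of raw word overlap, but the principle is the same: the swarm’s replies cluster way too tightly.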

      • jgandert@lemmy.zip · ↑5 · 2 days ago

        I agree. I believe it’s difficult for me—or anyone else—to pinpoint a specific comment as being generated by an LLM. However, when numerous bots are involved, the pattern becomes clear. Essentially, many responses end up repeating the same points, just phrased differently and using the same keywords. If you were to use ChatGPT to summarize each response, you’d essentially get a very similar outcome from all the bot-generated replies.

    • SGforce@lemmy.ca · ↑2 ↓2 · 2 days ago

      Think it’s probably a bug in the script they’re running. It’s cleaning one character too many off the end.

        • SGforce@lemmy.ca · ↑1 · 1 day ago

          LLMs ramble unless you stop them forcefully. That can lead to partial sentences that need to be cleaned up.
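          A sketch of that cleanup step (hypothetical helper, not from any real bot script): trim the reply back to its last complete sentence. The off-by-one bug described above would be slicing to `end` instead of `end + 1`, which eats the final punctuation mark.

```python
# Cut a rambling, forcibly-stopped LLM reply back to its last full sentence.
# Illustrative only; assumes plain prose with . / ! / ? terminators.

def trim_partial_sentence(text: str) -> str:
    """Keep everything up to and including the last sentence terminator."""
    end = max(text.rfind(c) for c in ".!?")
    # text[:end] instead of text[:end + 1] would drop the final period --
    # the kind of off-by-one that leaves every comment missing its full stop.
    return text[:end + 1] if end != -1 else text
```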

          • PeriodicallyPedantic@lemmy.ca · ↑1 · 1 day ago

            That’s not a problem inherent to LLMs; people building things with LLMs don’t normally need to account for this.

            I can’t say it never happens, but if you’re using an appropriately trained LLM with an appropriate system prompt, this concern should be uncommon enough that trying to compensate for it with code will be more likely to introduce problems than just leaving it.

            • SGforce@lemmy.ca · ↑1 · 1 day ago

              You just explained how it is a problem inherent to most LLMs. Most spammers aren’t able or willing to train a model.

              Every large hosted LLM drones on and on. It helps them land on the correct answer more often. And they always return to the mean of their training even with prompting. Try telling a model not to reply with “Sure thing!” or some other shit and it’ll do it anyway. Far easier to just cut that shit out.

              • PeriodicallyPedantic@lemmy.ca · ↑1 · 1 day ago

                There are lots of (relatively) high quality free models they can host themselves, or use hosted models. They don’t need to train their own models or use models without applicable training data.

                If your bar for “droning on and on” is them saying “ok” then sure I guess? But that seems like a crazy bar.
                What system prompt are you using, when you’re getting responses that “drone on and on”?

                Don’t get me wrong, I hate AI.
                But I also worked on LLM integrations for a year, so I had to develop a reasonable grasp of their capabilities and use, beyond just using the chat apps, even if I wouldn’t call myself an expert

  • yetAnotherUser@discuss.tchncs.de · ↑38 ↓1 · 3 days ago

    Notice how some try to seem more ‘human’ by deliberately using all lower-case spelling.

    Also, it looks like the RosalieBloomm LLM is using the “real” apostrophe (’) instead of the one on keyboards ('). Nobody does that.
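    A trivial way to check for that tell, purely illustrative:

```python
# Flag text containing the typographic apostrophe U+2019 (’) rather than
# the plain keyboard one ('). Only a heuristic: phones and word processors
# insert smart quotes for real people too.

def uses_smart_apostrophe(text: str) -> bool:
    return "\u2019" in text
```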

    • DrDickHandler@lemmy.world · ↑9 · 2 days ago

      It’s just the lazy AI replies that stick out, and you aren’t seeing the bigger picture. Most bot replies are indistinguishable from real humans, as they use models trained on real users.

      • Feathercrown@lemmy.world · ↑10 · 2 days ago

        You’re totally right! The eloquence and creativity showcased in AI comments underscores how they cannot be distinguished from comments by human users.

    • Rhaedas@fedia.io · ↑12 · 3 days ago

      Not in chat, but professional writers will. They know all the shortcut keys for those typographic characters, including the em dash everyone now associates with being AI. It just shows how much of the training data came from professional papers and not general discussion areas.

    • swampdownloader@lemmy.dbzer0.com · ↑5 · 2 days ago

      Keyboards in Latin America don’t have that apostrophe but the tilde you mentioned. So it could be someone outside the US, on a non-American keyboard.

    • Echo Dot@feddit.uk · ↑2 · 2 days ago

      My keyboard has both characters, ’ and ‘, but yeah, I never use them.

      I don’t think they actually look good; it just looks weird.

  • Xylight@feddit.online (OP) · ↑126 ↓2 · 3 days ago

    All of social media is dying before our eyes. Every platform is basically fully botted or actively dying; even lemmy seems to have fallen off quite a bit.

    • Rimu@piefed.social · ↑90 · edited · 3 days ago

      I am working on LLM detection for the threadiverse. But other than one idiot last week spamming LLM posts and comments there hasn’t been much.

      • Sl00k@programming.dev · ↑1 · 2 days ago

        What angle are you approaching it from?

        I think LLM detection is gonna be tough. A verification route, cutting bots off at the source, seems ideal, but I’m not sure how to tackle that while respecting privacy.

        I did have an idea for an app using nearby devices: once 7 unique devices have flagged you, you’re tagged as verified. Interesting, but I’m not sure it’ll work in rural areas.

      • pelespirit@sh.itjust.works · ↑32 · 3 days ago

        There are bots in politics conversations, but still not nearly as bad as reddit. Even before I left, it was a weird kind of prolific dead, kind of like the conversations in OP’s pics.

      • s@piefed.world · ↑5 · edited · 3 days ago

        I appreciate all of the extra work you do in terms of Threadiverse infrastructure and quality of life.

        Many Reddit bots have also straight copy+pasted content from Reddit or other social media with only trivial changes to the text or image, if any change, so the Threadiverse needs to be able to catch those as well. A better internal search engine, especially one that can search for strings of text [edit: and one which can search through deleted and removed content], would help users track down if an account’s content was routinely copy+pasted. I think a new instance (unaffiliated with any particular instance) staffed by users familiar with bot detection to flag bot accounts for federated instances to then ban would be the best facsimile of Reddit’s now defunct BotDefense subreddit, which was a critical tool for users to tackle the site’s bot problem.

        This account I noticed yesterday is an example of a Threadiverse account just copy+pasting content (or in this case, crossposting to the original community) with little to no change. I have reported it to its host instance as suspicious but it has yet to be removed. An independent and informed instance for flagging bot accounts could more effectively communicate to the host instance as well as to Federated instances that this account is ticking the boxes of a bot account and should be blocked, banned, or at the very least closely monitored.

        A detector for bot networks, such as in the screenshot above, would also be helpful. Some sort of indicator of if several accounts are interacting with each other or on the same posts as each other far more often than they are interacting with other accounts and other posts would be helpful.
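        A crude version of that indicator, for illustration (the data model of (author, replied-to) pairs and the account names are made up):

```python
# Score how often a suspected group's replies target other group members.
# A ratio near 1.0 means the accounts mostly talk to each other.

def in_group_reply_ratio(replies: list[tuple[str, str]], group: set[str]) -> float:
    """Share of the group's replies that go to other group members."""
    from_group = [(author, target) for author, target in replies if author in group]
    if not from_group:
        return 0.0
    internal = sum(1 for author, target in from_group
                   if target in group and target != author)
    return internal / len(from_group)
```

        Any real implementation would also have to normalize for community size, since small communities naturally have tight reply loops.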

        Maybe like the New Account Highlightenator on the Voyager app, there can be an indicator for when an account has fewer than X posts or comments (i.e. a potential new bot account), as well as an indicator of whether the account has returned from a long hiatus of posting/commenting (i.e. a potential former human account that was bought or hacked to become a bot account).

        I’ll try to think of more signs of bots and more ways the Threadiverse can build infrastructure against them.

    • fedorato@lemmy.world · ↑30 ↓1 · edited · 3 days ago

      Just going to put my tinfoil hat on for a sec…

      Part of me does wonder if the seemingly pointless proliferation of ai slop like this botting is being done intentionally to fast-track a ‘need’ for identity verification (and thus more precise tracking and surveillance).

      ID verification is already being pushed on a few fronts (like to ‘save the kids from social media’ or whatever), maybe this is just one of many irons in the fire.

      With ID verification, Facebook etc. could angle themselves as ‘safe havens’ from an AI-slop-enshittified internet. You’d essentially have to completely give up your anonymity to participate in interactions with other verified humans.

      So your choices become:

      1. Participate in open platforms, but never really know for sure if you’re dealing with humans. At some point LLMs may be good enough that it’s impossible to know.
      2. Participate in closed platforms, where you can be reassured you’re engaging with real humans - but you’re also under total surveillance.

      Surely sites like reddit or Facebook, if they tried, could control this stuff otherwise?

      • PrimeMinisterKeyes@leminal.space · ↑1 · 1 day ago

        Wow. I didn’t even think of that.
        Bots, of course, will be exempted from verification. See also: dating sites & apps. Unless governments (which ones?) handle and enforce things. But will they have the resources?

      • groet@feddit.org · ↑18 · 3 days ago

        Participate in closed platforms, where you can be reassured you’re engaging with real humans - but you’re also under total surveillance.

        Closed platforms where the only propaganda bots are the ones controlled by the platform. They can then remove ads from the business model and instead finance the platform by selling access to the bot accounts. And people will think they are in the perfect social media, without advertising and with only RealPeople™ that they can completely trust.

      • nickiwest@lemmy.world · ↑8 · 3 days ago

        Except that Meta has already admitted that it is using AI bot accounts to “drive engagement.”

        After an outcry from real users, Meta said it had removed some AI bot accounts. But nothing else they said indicated that the experiment is over.

        Eventually, social media is going to be nothing but company-generated AI bots, “bot farms” run by humans in developing countries, and (hopefully) a small number of actual users who can’t tell the difference between those things and real people.

      • Deceptichum@quokk.au · ↑5 · 3 days ago
        1. Closed systems with vetting and circles of trust. I don’t need to know your identity. I just need to know that the person I know and trust knows and vouches for you.
      • Truscape@lemmy.blahaj.zone · ↑6 · 3 days ago

        Well, there are holes, in that resale of accounts is an active and common phenomenon, and creating a fraudulent identity for an online service (even if you have to doctor an ID template) is a low-risk, low-barrier affair.

        Remember how people used Death Stranding photos to get around face ID? It’s the same concept.

      • krooklochurm@lemmy.ca · ↑6 ↓1 · 3 days ago

        I’m not giving my fucking id to anyone.

        There is no service I need to use badly enough to do that.

        I won’t use anything meta makes. Don’t use Snapchat. Reddit is fucking dead as far as I’m concerned. I’m not on twitter, blue sky.

        I know I’m not fully anonymous and there is shit I do that makes tracking me possible or even easy, but I’m not going to make it easy on anyone.

        Go fuck yourself with your surveillance shit.

    • njordomir@lemmy.world · ↑14 · 3 days ago

      I absolutely agree. When I come to Lemmy I look for this sort of insightful comment to restore my faith in humanity.

      It’s like comment Mad Libs.

      On a serious note, what you see and what I’m mocking is the easy-to-spot “low-hanging fruit”. It would be arrogant of me to assume AI comments aren’t getting past my mental filters on a daily basis.

    • Tollana1234567@lemmy.today · ↑2 · 2 days ago

      Reddit has been known to have bots replying to bots, so I would imagine it’s the same here if they’re also using AI to respond to AI responses.

      • Xylight@lemdro.id · ↑7 · 2 days ago

        (from a different account, since feddit.online won’t show me this comment)

        There’s nothing new happening on this platform, literally the frontpage consists of:

        • doomerism
        • trump and friends news
        • tech company does stupid thing
        • political meme
        • lets revolt guys. it’ll happen any moment now we swear.

        The majority of posts are made by the same people.

        As someone here since ~2022, I’ve seen the frequency of posts continue to fall.

        • TheFonz@lemmy.world · ↑3 · 2 days ago

          I think it’s a consequence of the user base zeitgeist. Lemmy has a pretty strong adoption filtering system (I don’t care how “obvious” and “easy” it is to most of you; if you’re here, you’re clearly not who I’m talking about)

  • s@piefed.world · ↑39 · 3 days ago

    This isn’t new. This has been going on for maybe 10 years or so, if you knew where to look and how to notice them. However, when Reddit changed its API policy in 2023, it wholly crippled the infrastructure that could effectively deal with these accounts and allowed them to flourish without restraint.

    It’s also important to note that the Threadiverse is not immune from bot accounts like this sprouting up and we should take steps to educate users and to implement infrastructure to deal with them.

    • Kaput@lemmy.world · ↑13 ↓1 · edited · 3 days ago

      Exactly. It’s important to be aware of our own flaws to avoid societal traps. I am glad you are taking steps to insure the community will flourish. bot-accounts could be any one.

  • falseWhite@lemmy.world · ↑26 ↓1 · 3 days ago

    I was typing a long comment about all this, but in the end I decided to sum it up:

    Fuck Reddit and Fuck AI.