🍹Early to RISA 🧉@sh.itjust.works to Greentext@sh.itjust.works · 12 hours ago
Anon finds a bot (sh.itjust.works) · 43 comments · +677 / −0
atthecoast@feddit.nl · +5 · 9 hours ago
If you then train new bots on the generated content, the models will degrade, yes?
frog@feddit.uk · +3 · 4 hours ago
If you look at a lot of new posts, they are actually highly upvoted old posts. So the bots probably stay the same.
PrimeMinisterKeyes@leminal.space · +1 / −3 · 7 hours ago
The bots know what is bot content and what is not. Actual users don’t.
Lvxferre [he/him]@mander.xyz · +6 · 6 hours ago
> The bots know what is bot content and what is not.
Probably not. It’s way easier to generate bot content than to detect it. Unless they’re coming from the same group, but I find this unlikely.