It’s possible it reduces the probability of things like wrongly answered Stack Overflow questions being used, so it might actually work a bit.
Kinda like how with image generation, you get vastly better results by adding a negative prompt such as “low quality, jpeg artifacts, extra fingers, bad hands” etc, because the datasets from boorus actually do include a bunch of those tags, and using them steers the generation away from outputs with features that match them.
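For anyone curious about the mechanics: in diffusion samplers the negative prompt typically replaces the empty “unconditional” prompt in classifier-free guidance, so the update pushes the prediction *away* from whatever the negative prompt describes. A toy numpy sketch of that guidance step (the arrays and scale here are purely illustrative, not real model outputs):

```python
import numpy as np

def cfg_step(cond, neg, scale):
    # Classifier-free guidance: start from the negative-prompt prediction
    # and step toward the prompt-conditioned prediction, which pushes the
    # result away from features the negative prompt describes.
    return neg + scale * (cond - neg)

# Toy "noise predictions" for illustration only.
cond = np.array([1.0, 0.0])  # conditioned on the actual prompt
neg = np.array([0.0, 1.0])   # conditioned on "low quality, ..." instead of ""

guided = cfg_step(cond, neg, scale=7.5)
```

With an empty negative prompt, `neg` is just the unconditional prediction; swapping in “low quality, jpeg artifacts, …” makes the same step actively steer away from those tags.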
“Make no mistakes” gives big “do not hallucinate” energy.
“Generate an image with no dog in it.”
But these remarks seem to increase the quality of LLM outputs.
The most relevant words in that sentence.
I guess that’s fitting for nearly everything AI related 😅