Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations, called drift, in the technology's ability…
It just occurred to me that one could purposely seed it with incorrect information to break its usefulness. I'm anti-AI, so I would gladly do this. I might try it myself.
Luddite.
The Luddites were right, you know.
Outliers are easy to work around.
How?
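None of the commenters spell out what "working around" outliers would involve, but one standard approach is to filter suspicious data points before they influence anything downstream. A minimal sketch using Tukey's IQR fence, a common outlier filter; the `ratings` list and the fence multiplier `k` here are made up for illustration:

```python
import statistics

def filter_outliers(values, k=1.5):
    """Drop values outside k * IQR of the middle quartiles (Tukey's fence)."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

# A deliberately wrong rating hidden among consistent ones
ratings = [0.97, 0.98, 0.96, 0.99, 0.98, 0.97, 0.02]
print(filter_outliers(ratings))  # 0.02 falls outside the fence and is dropped
```

A single poisoned point like this is trivial to catch; the harder case is coordinated seeding that shifts the whole distribution, which a simple fence like this won't detect.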