The only thing AI writing seems to be useful for is wasting real people’s time.
I'd be in trouble if this were a thing. My writing naturally resembles the output of a ChatGPT prompt when I'm not giving joke answers.
We found the source
They never did, they never will.
Why tho? Or are you trying to be vague on purpose?
Because you’re training a detector on something that is designed to emulate natural language as closely as possible, and human speech has so much variability that it’s almost impossible to identify whether something was written by a person or by an AI.
You can maybe detect the typical generic ChatGPT-style outputs, but you can steer a conversation with ChatGPT, or with any of the other much better local models (privacy and control are what make them better), and after doing that you can get radically human-seeming outputs that are totally different from anything stock ChatGPT will produce.
In short, given a static block of text, it’s going to be nearly impossible to detect whether it came from an AI. It’s just too difficult a problem, and any solution is going to be immediately obsolete the next time someone fine-tunes their own model.
Because AIs are (partly) trained by making AI detectors. If an AI can be distinguished from a natural intelligence, it’s not good enough at emulating intelligence. If an AI detector can reliably distinguish AI from humans, the AI companies will use that detector to train their next AI.
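For what it's worth, the loop being described here is essentially a GAN setup. Below is a minimal sketch in PyTorch of what "use the detector to train the next AI" could look like, with toy random vectors standing in for text embeddings; all the names, shapes, and data are illustrative, not anything an AI company has published:

```python
# Toy sketch: if a reliable "AI detector" existed, it could serve as the
# discriminator in a GAN-style loop, training the generator until the
# detector can no longer tell generated samples from the real thing.
import torch
import torch.nn as nn

DIM = 16  # stand-in for a text embedding dimension

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, DIM))
detector = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def human_batch(n):
    # Stand-in for embeddings of human-written text.
    return torch.randn(n, DIM) + 2.0

for step in range(2000):
    # Train the detector to separate "human" from generated samples.
    real = human_batch(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(detector(real), torch.ones(64, 1)) + \
             loss_fn(detector(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator to fool the detector.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(detector(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Once this converges, the detector's edge is gone, which is the point:
# a working detector becomes training signal for the next model.
```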
I’m not sure I’m following your argument here; you keep switching between talking about AI and AI detectors. The bullets below correspond, in order, to the sentences of your previous reply:
- Can you provide any articles or blog posts from AI companies for this or point me in the right direction?
- Agreed
- Right…
I’m having trouble finding support for your claim.
Because generative neural networks always have some random noise. Read more about it here.
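Concretely, in a text model that randomness typically enters at decoding time: the network outputs a probability distribution over next tokens, and the decoder samples from it. A generic toy illustration (the logits are made up; this is not any vendor's real sampler):

```python
# Toy illustration of where randomness enters a text generator: the model
# emits logits over next tokens, and the decoder samples from the softmax
# distribution, so the same prompt can produce different text on each run.
import numpy as np

rng = np.random.default_rng()

def sample_next_token(logits, temperature=0.8):
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)  # the random draw

fake_logits = [2.0, 1.5, 0.3, -1.0]
print([sample_next_token(fake_logits) for _ in range(5)])  # varies per run
```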
Isn’t that article about GANs?
Isn’t GPT not a GAN?
It almost certainly has some GAN-like pieces.
GANs are part of the NN toolbox, like CNNs and RNNs and such.
Basically all commercial algorithms (not just NNs, everything) are what I like to call “hybrid” methods, which means you keep throwing different tools at it until things work well enough.
The findings were for GAN models, not GAN-like components, though.
It doesn’t matter. Even the training process makes it pretty much impossible to tell these things apart.
And if we do find a way to distinguish, we’ll immediately incorporate that into the model design in a GAN-like manner, and we’ll soon be unable to distinguish again.
It’s not even about diffusion models. Adversarial networks are basically obsolete.
I know a couple of teachers (college level) who caught several GPT papers over the summer. It’s a great cheating tool, but as with all cheating in the past, you still basically have to learn the material (at least for narrative papers) to proofread GPT output properly. It doesn’t get jargon right, it makes things up, and it makes no attempt to adhere to reason when it’s making an argument.
Using translation tools is extra obvious: have a native speaker proofread your paper if you attempt to use an AI translator on a paper for credit!
AI company says their AI is smart, but other companies are selling snake oil.
Got it.
They tried training an AI to detect AI, too, and failed.
Typical for generative AI. I think that during training of the model, they must have developed another model that detects whether GPT produces natural-sounding language, and that other model may have reached the point where it couldn’t flag GPT output with an acceptable false-positive rate.
Regardless of whether they do or don’t, surely it’s in the interest of the people making the “AI” to claim that their tool is so good it’s indistinguishable from humans?
Depends on whether they’re more researchers or a business, imo. Generally speaking, scientists are very cautious about making shit claims, because if they get called out, that’s their career, really.
I have to hand in a short report.
I wrote parts of it and asked ChatGPT for a conclusion.
So I read that, adjusted a few points, added another couple of points…
Then rewrote it all in my own wording. (ChatGPT gave me 10 lines out of 10 pages.)
We are allowed to use ChatGPT, though, because we would always have internet access for our job anyway. (Computer science.)
Just need to get AI on that.
This is the best summary I could come up with:
In a related FAQ, they also officially admit what we already know: AI writing detectors don’t work, despite frequently being used to punish students with false positives.
In July, we covered in depth why AI writing detectors such as GPTZero don’t work, with experts calling them “mostly snake oil.”
That same month, OpenAI discontinued its AI Classifier, which was an experimental tool designed to detect AI-written text.
Along those lines, OpenAI also addresses its AI models’ propensity to confabulate false information, which we have also covered in detail at Ars.
“Sometimes, ChatGPT sounds convincing, but it might give you incorrect or misleading information (often called a ‘hallucination’ in the literature),” the company writes.
Also, some sloppy attempts to pass off AI-generated work as human-written can leave tell-tale signs, such as the phrase “as an AI language model,” which means someone copied and pasted ChatGPT output without being careful.
The original article contains 490 words, the summary contains 148 words. Saved 70%. I’m a bot and I’m open source!
Aren’t there very few student-priced AI writers? And isn’t the writing done on their servers? And aren’t they saving all the outputs?
Can’t the AI companies sell schools the ability to check paper submissions against recent outputs?
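That check could, in principle, be as crude as word n-gram overlap against a log of recent outputs. A rough sketch of the idea; the logged-output store and threshold are hypothetical, and light paraphrasing would already defeat exact matching:

```python
# Rough sketch: compare a submitted paper against recently logged model
# outputs using word n-gram overlap. "logged_outputs" is a hypothetical
# store; a real system would need fuzzier matching than exact n-grams.
def ngrams(text, n=5):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, logged_output, n=5):
    a, b = ngrams(submission, n), ngrams(logged_output, n)
    if not a or not b:
        return 0.0
    # Fraction of shared n-grams relative to the shorter text.
    return len(a & b) / min(len(a), len(b))

def flag_submission(submission, logged_outputs, threshold=0.3):
    # Return every logged output the submission overlaps heavily with.
    return [out for out in logged_outputs
            if overlap_score(submission, out) >= threshold]
```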
Terence Tao just did a thread on Mathstodon talking about how ChatGPT helped him program an algorithm to look for numbers.
Couldn’t you just ask ChatGPT whether it wrote something specific?
No. The model doesn’t have a record of everything it wrote.