GANs are part of the NN toolbox, like CNNs, RNNs, and so on.
Basically all commercial algorithms (not just NNs, everything) are what I like to call “hybrid” methods, which means you keep throwing different tools at the problem until things work well enough.
It doesn’t matter. The training process alone already makes it pretty much impossible to tell these things apart.
And if we do find a way to distinguish them, we’ll immediately incorporate that into the model design in a GAN-like manner, and we’ll soon be unable to distinguish them again.
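For what it’s worth, here’s roughly what “incorporate the detector” looks like: a minimal sketch, assuming PyTorch, where whatever detector you have is plugged in as the discriminator and the generator is trained to defeat it. Every module, size, and the stand-in data source here is made up for illustration, not taken from any real system.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 128

# Toy generator and "detector" (discriminator); architectures are placeholders.
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))
detector = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(detector.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=32):
    # Stand-in for real data; in practice this would come from a dataloader.
    return torch.randn(n, data_dim) + 2.0

for step in range(1000):
    # Detector step: learn to label real samples 1 and generated samples 0.
    real = real_batch()
    fake = generator(torch.randn(32, latent_dim)).detach()
    d_loss = bce(detector(real), torch.ones(32, 1)) + bce(detector(fake), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: push generated samples toward the detector's "real" label,
    # i.e. optimize away whatever signal the detector was using.
    g_loss = bce(detector(generator(torch.randn(32, latent_dim))), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The detector only stays useful until the generator catches up, which is exactly the point above.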
Because generative neural networks always have some random noise. Read more about it here.
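Concretely (a made-up toy example, not from the linked article): the noise is an explicit input, so the same trained network gives a different sample on every call.

```python
import torch
import torch.nn as nn

# Toy GAN-style generator; the latent size and layers are arbitrary.
generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))

z1, z2 = torch.randn(1, 16), torch.randn(1, 16)   # two independent draws from N(0, I)
print(torch.allclose(generator(z1), generator(z2)))  # False: the noise drives the variation
```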
Isn’t that article about GANs?
But GPT isn’t a GAN, is it?
It almost certainly has some GAN-like pieces.
The findings were for GAN models, not GAN-like components, though.
It’s not even about diffusion models. Adversarial networks are basically obsolete.