• sebi@lemmy.world · 1 year ago

    Because generative neural networks always involve some random noise, so repeated runs can produce different outputs. Read more about it here
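
    To make the noise point concrete, here is a toy sketch (the three-token distribution and its scores are made up for illustration): a generative model samples from a probability distribution, so unseeded runs differ, while fixing the seed makes them reproducible.

```python
import numpy as np

# Hypothetical scores for 3 tokens; softmax turns them into probabilities.
logits = np.array([2.0, 1.0, 0.5])
probs = np.exp(logits) / np.exp(logits).sum()

# Unseeded generator: fresh entropy, so the samples vary run to run.
rng_a = np.random.default_rng()
sample = rng_a.choice(len(probs), size=20, p=probs)

# Seeded generator: the same seed always yields the same samples.
rng_b = np.random.default_rng(42)
fixed = rng_b.choice(len(probs), size=20, p=probs)
```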

      • PetDinosaurs@lemmy.world · 1 year ago

        It almost certainly has some GAN-like pieces.

        GANs are part of the NN toolbox, like CNNs and RNNs and such.

        Basically all commercial algorithms (not just NNs, everything) are what I like to call “hybrid” methods, which means you keep throwing different tools at the problem until things work well enough.

          • PetDinosaurs@lemmy.world · 1 year ago

            It doesn’t matter. Even the training process makes it pretty much impossible to tell these things apart.

            And if we do find a way to distinguish them, we’ll immediately incorporate it into the model design in a GAN-like manner, and we’ll soon be unable to distinguish them again.
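
            That feedback loop is exactly the GAN training dynamic: a discriminator (the "detector") learns to separate real from generated data, and the generator is then updated to fool it. A minimal 1-D numpy sketch of that loop, where everything (the Gaussian "real" data, the linear generator, the logistic discriminator) is a toy assumption, not any real detection system:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "real" data: samples from N(4, 1). The generator starts far from it.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

gw, gb = 1.0, 0.0   # generator g(z) = gw*z + gb
da, dc = 0.0, 0.0   # discriminator d(x) = sigmoid(da*x + dc)
lr, n = 0.05, 64

for step in range(1000):
    z = rng.normal(0.0, 1.0, n)
    fake = gw * z + gb
    real = real_batch(n)

    # Discriminator step: push d(real) -> 1 and d(fake) -> 0
    # (manual gradients of binary cross-entropy).
    pr, pf = sigmoid(da * real + dc), sigmoid(da * fake + dc)
    da -= lr * (np.mean((pr - 1.0) * real) + np.mean(pf * fake))
    dc -= lr * (np.mean(pr - 1.0) + np.mean(pf))

    # Generator step: push d(fake) -> 1, i.e. fool the current detector.
    z = rng.normal(0.0, 1.0, n)
    fake = gw * z + gb
    pf = sigmoid(da * fake + dc)
    g_fake = (pf - 1.0) * da          # gradient of -log d(fake) w.r.t. fake
    gw -= lr * np.mean(g_fake * z)
    gb -= lr * np.mean(g_fake)
```

            After training, the generator's offset `gb` has drifted toward the real mean: each improvement in the detector is immediately converted into pressure that makes the generated data harder to detect, which is the commenter's point.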