Google’s DeepMind unit is unveiling today a new method it says can invisibly and permanently label images that have been generated by artificial intelligence.
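DeepMind has not published how its watermark is embedded, so the following is only a toy illustration of the general idea of invisible image watermarking, using a naive least-significant-bit (LSB) scheme: hide one bit per pixel in the value's lowest bit, which is imperceptible to the eye. Unlike a robust watermark, this naive version is destroyed by re-encoding or resizing, which is exactly the weakness the thread below argues about.

```python
# Toy illustration of invisible image watermarking. NOT DeepMind's method
# (their embedding is unpublished); a naive least-significant-bit scheme
# hides one bit in the lowest bit of each pixel value (0-255).

def embed(pixels, bits):
    """Hide one bit per pixel by overwriting its least significant bit."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract(pixels, n):
    """Read back the first n hidden bits."""
    return [p & 1 for p in pixels[:n]]

image = [200, 131, 54, 77, 90, 18]   # toy grayscale pixel values
mark = [1, 0, 1, 1, 0, 0]
stamped = embed(image, mark)
assert extract(stamped, 6) == mark
# Each pixel changed by at most 1 out of 255 -> visually identical.
assert all(abs(a - b) <= 1 for a, b in zip(image, stamped))
```

A production watermark would instead spread the signal redundantly across frequency or feature space so it survives compression and cropping; the LSB sketch only shows why "invisible" and "permanent" are separate engineering problems.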

  • Sethayy@sh.itjust.works
    1 year ago

You heard of Stable Diffusion? It has one-line installs nowadays; then all you have to do is enter a prompt and go.

It's entirely open source, so anyone can improve the model (or not), and it'd be perfectly legal to release a non-watermarked version (if a watermarked version ever even appeared).

I saw down the chain it was compared to Denuvo, which I'd argue is a bad analogy - because who's going to run a rootkit on their PC just to create an image, especially when there are a million options not to (unlike games, which are generally unique)?

    • Puzzle_Sluts_4Ever@lemmy.world
      1 year ago

      Stable Diffusion is exactly what I was thinking of when I talked about removing “all the limiters”. Holy shit that was a dark ass weekend…

But we are talking orders of magnitude more complexity and power. If all anyone needed to run a Bard/ChatGPT/whatever-level LLM was a somewhat modern computer, then everyone would be retraining near-constantly (rather than once every couple of months… maybe) and the SAG/WGA strikes would already be over, because “Hollywood” would be making dozens of movies a week with AI media generation.

      Almost everything we are seeing right now is about preparing for The Future. People LOVE to fixate on “AI can’t draw hands” or whatever nonsense and that is very much a limited time thing (also, humans can’t draw hands. There is a reason four fingers are so common. It is more the training data than anything else). And having the major companies embed watermarking, DRM, and asset tracking in at a low level is one big “key” to that.

      Like, I expect an SD level tool to be part of Cortana for Windows 12 or whatever. “Hey Cortana, make me a meme picture of Pikachu snorting coke off a Squirtle while referencing a Ludacris song. You know, the one where he had the big gloves” and it working. But that won’t be the kinds of deep fakes or media generation that stuff like this is trying to preemptively stop.

      • Sethayy@sh.itjust.works
        1 year ago

I see what you mean, yes, but of course those large resources are required to train the model - not to run it. So as long as a bunch of users can pool resources to compete with big tech, there will always be an ‘un-watermark-able’ crowd out there, making the watermarks essentially useless, because they only cover half the picture.

And training these models is insanely parallel, so if a project (ideally a FOSS one) pops up allowing users to donate compute time to train the model as a whole, users could collectively have more computational power than the big tech companies.
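The "insanely parallel" claim above usually means data parallelism: each worker computes gradients on its own shard of data, the gradients are averaged, and one shared update is applied. As a minimal sketch (a toy 1-parameter regression, not any real volunteer-compute project's protocol):

```python
# Minimal sketch of data-parallel training: workers compute gradients on
# their own data shards independently; an averaging step (the "all-reduce")
# combines them into one shared update. Toy model: y = w * x, squared loss.

def gradient(w, shard):
    # Gradient of mean squared error over this worker's shard.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def train_step(w, shards, lr=0.01):
    grads = [gradient(w, s) for s in shards]   # computed independently
    avg = sum(grads) / len(grads)              # averaging / all-reduce step
    return w - lr * avg

# Two "volunteers", each holding part of the data for the true rule y = 3x.
shards = [[(1, 3), (2, 6)], [(3, 9), (4, 12)]]
w = 0.0
for _ in range(200):
    w = train_step(w, shards)
print(round(w, 2))  # converges to 3.0
```

The catch for volunteer training is the averaging step itself: with consumer internet links, moving gradients around can cost more than computing them, which is why real distributed-training systems lean heavily on gradient compression and fault tolerance.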

        • Puzzle_Sluts_4Ever@lemmy.world
          1 year ago

The resources to train these models are such that even Google/Amazon/MS do it selectively. Their clouds are some of the biggest compute resources on the planet, and these companies get that compute as close to “free” as it is ever going to get - and even they ration it.

A bunch of kids on 4chan won't even make a drop in the bucket. And in the act of organizing, the ringleaders will likely get caught violating copyright law - because unless the open source model really is the best (and it won't be), they are using proprietary code that they modified to remove said watermarks.

          As for “We’ll folding@home it!”: Those efforts are largely dead because people realized power matters (and you are almost always better off just donating some cash so they can buy some AWS time). But, again, you aren’t getting a massive movement out of people who just need to make untraceable deepfakes.

Also, this ignores all of the shady crap being done to get that training data. Like… there is a reason MS bought GitHub.

          • Sethayy@sh.itjust.works
            1 year ago

I think you're mixing together a couple of angles here to try and make a point.

‘Unless the open source model is the best… they're using proprietary code’ - you're talking about a hypothetical program hypothetically being stolen and treating it as a certainty?

As for the companies: of course they only commit certain resources - they're companies; they need returns to exist. A couple million down the drain could be some CEO's next bonus, so they won't do anything they're not sure they'll get something from (even if only short term).

As for the 4chan bit: was that a coincidence, or are you referencing Unstable Diffusion? Because they did do almost exactly that (before it got mismanaged, of course - the NSFW industry has always been a bit ghetto).

And like, sure, Folding@home it or donate for AWS time - same end result; it really doesn't matter which the users are comfortable with.

And finally, sure, MS bought GitHub, but do you think Stable Diffusion bought the internet? Courts have ruled that web scraping is legal…

I know this is a wall of text, but like I said, these arguments all feel like a bunch of tangentially related thoughts.

            • Puzzle_Sluts_4Ever@lemmy.world
              1 year ago

I am covering “a couple angles” because this falls apart under even the most cursory examination.

But sure. If we reach a point where there is sufficient publicly available training data, a FOSS product performs comparably to the flagship products of hundred-billion-dollar companies, and training costs have dropped to where a couple of kids on 4chan can train up a new model over the course of a few days: sure.

              Until we reach that point? Actually, even after we reach that point, it would still be unlikely. Because if training is that cheap then you can bet said companies have funded the development of new technologies that allow them to take advantage of their… advantages.