China has released a set of guidelines on labeling internet content that is generated or composed by artificial intelligence (AI) technology, which are set to take effect on Sept. 1.

  • Jin@lemmy.world
    3 minutes ago

    China, oh you. I remember something about “going green” and blah blah, yet they continue to build coal plants.

    The Chinese government has been caught using AI for propaganda while claiming it was real. So I don’t see these rules being applied within the Chinese government itself.

  • 2xsaiko@discuss.tchncs.de
    2 hours ago

    Will be interesting to see how they actually plan on controlling this. It seems unenforceable to me as long as people can generate images locally.

    • umami_wasabi@lemmy.ml
      2 hours ago

      That’s what they want. When people generate things locally, the government can discredit anything as AI generated. The point isn’t enforceability, but whether this can be a tool to control the narrative.

  • some_dude@lemm.ee
    3 hours ago

    This is a smart and ethical way to integrate AI into everyday use, though I hope the watermarks are not easily removed.

    • umami_wasabi@lemmy.ml
      1 hour ago

      Think a layer deeper about how it can be misused to control narratives.

      You read some wild allegation with no AI marks (they’re required to be visible), so it must have been written by a person, right? What if someone, even the government, jumps out and says an illegal AI was used to generate the text? The question suddenly shifts from verifying whether the alleged events happened to whether the allegation itself is real. Public sentiment will likely be overwhelmed by “Is this fake news?” or “Is the allegation true?” Compound that with trusted entities, and discrediting anything becomes easier.

      Let me give you a real example. Before Covid spread globally, there was a Chinese whistleblower who worked in a hospital and got infected. He posted a video online about how bad things were, and it was quickly taken down by the government. What if that happened today, with this regulation in full force? The government could claim the video was AI generated, that the whistleblower doesn’t exist, and that none of the content is real. Three days later they arrest a guy, claiming he spread fake news using AI. They already have a very efficient way to control narratives, and this piece of garbage just gives them an express lane.

      You think this is only a China thing? No, every entity, including governments, is watching, especially the self-proclaimed friend of Putin and Xi and the absolute free speech lover. Don’t assume it’s too far away to reach you.

      • LadyAutumn@lemmy.blahaj.zone
        12 minutes ago

        It’s still a good thing. The alternative is people posting AI content as though it is real content, which is a worldwide problem destroying entire industries. All AI content should by law have to be clearly labeled.

    • jonne@infosec.pub
      2 hours ago

      It will be relatively easy to strip that stuff off. It might help a little bit with internet searches or whatever, but anyone spreading deepfakes will probably not be stopped by that. Still better than nothing, I guess.
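
To illustrate how easy "stripping that stuff off" can be if the label is carried as invisible metadata rather than baked into the pixels (an assumption; the guidelines also mandate visible labels), here is a stdlib-only Python sketch. It builds a minimal 1×1 PNG carrying a hypothetical “AI-generated” text chunk, then rewrites the file keeping only the image chunks:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def iter_chunks(png: bytes):
    """Yield (type, data) pairs from a PNG byte string."""
    assert png.startswith(PNG_SIG), "not a PNG"
    pos = len(PNG_SIG)
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        yield ctype, png[pos + 8:pos + 8 + length]
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC

def strip_text_metadata(png: bytes) -> bytes:
    """Drop the chunk types where textual labels are stored."""
    out = PNG_SIG
    for ctype, data in iter_chunks(png):
        if ctype in (b"tEXt", b"zTXt", b"iTXt"):
            continue  # the hypothetical AI label lives here
        out += chunk(ctype, data)
    return out

# A minimal 1x1 grayscale PNG carrying a hypothetical AI label.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1, 8-bit gray
idat = zlib.compress(b"\x00\xff")                     # filter byte + 1 pixel
labeled = (PNG_SIG + chunk(b"IHDR", ihdr)
           + chunk(b"tEXt", b"Comment\x00AI-generated")
           + chunk(b"IDAT", idat) + chunk(b"IEND", b""))

clean = strip_text_metadata(labeled)
print(b"AI-generated" in labeled, b"AI-generated" in clean)  # True False
```

Every pixel survives untouched; only the label disappears, which is why metadata-only schemes mostly help honest indexing (searches, feeds) rather than stopping deepfakes.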

  • puppinstuff@lemmy.ca
    2 hours ago

    Having some AIs that do this and some not will only muddy the waters of what’s believable. We’ll get gullible people seeing the ridiculous and thinking “Well there’s no watermark so it MUST be true.”

  • Magister@lemmy.world
    2 hours ago

    Me: “Hey <AI name>, remove the small text at the bottom right of this picture.”

    AI: “Done, here is the picture cleaned of the text”

  • Lexam@lemmy.world
    2 hours ago

    This is a bad idea. It creates a stigma and bias against innocent Artificial beings. This is the equivalent of forcing a human to wear a collar. TM watermark

  • henfredemars@infosec.pub
    3 hours ago

    Would it be more effective to have something where cameras digitally sign the photos? Then, it also makes photos more attributable, which sounds like China’s thing.

    • Dem Bosain@midwest.social
      2 hours ago

      No, I don’t want my photos digitally signed and tracked, and I’m sure no whistleblower wants that either.

      • henfredemars@infosec.pub
        2 hours ago

        Of course not. Why would they? I don’t want that either. But we are considering the actions of an authoritarian system.

        Individual privacy isn’t relevant in such a country. However, it’s an interesting choice that they implement it this way.

      • umami_wasabi@lemmy.ml
        1 hour ago

        That’s a different thing. C2PA proves a photo came from a real camera, along with its full editing trail, all in a cryptographic manner. The scheme in this article tries to prove that what’s not real is not real, by self-declaration. You can add the watermark, remove it, add another AI’s watermark, or whatever you want. You could even forge it outright, because as far as I can see no cryptographic proof like a digital signature is required.

        Btw, the C2PA data can be stripped if you know how, just like any watermark or digital signature.
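
The gap between self-declaration and a cryptographic claim can be shown with a toy sketch. Real C2PA uses X.509 public-key signatures; the HMAC and the signer key below are simplified stand-ins, but the point survives: anyone can edit a self-declared label, while a keyed tag cannot be recreated without the key.

```python
import hashlib
import hmac

SIGNER_KEY = b"hypothetical-signer-key"  # assumption: only the trusted signer holds this

def self_declared(content: bytes, label: str) -> dict:
    # The regulation's scheme: the producer just asserts a label.
    return {"content": content, "label": label}

def signed(content: bytes, label: str, key: bytes = SIGNER_KEY) -> dict:
    # Toy stand-in for C2PA: bind the label to the content with a keyed MAC.
    tag = hmac.new(key, content + label.encode(), hashlib.sha256).hexdigest()
    return {"content": content, "label": label, "tag": tag}

def verify(claim: dict, key: bytes = SIGNER_KEY) -> bool:
    expect = hmac.new(key, claim["content"] + claim["label"].encode(),
                      hashlib.sha256).hexdigest()
    return hmac.compare_digest(expect, claim.get("tag", ""))

photo = b"example pixel data"

# Forging the self-declared label is just editing a string:
fake = self_declared(photo, "not AI")  # nothing can dispute this

# Forging the signed claim fails without the key:
real = signed(photo, "AI-generated")
forged = dict(real, label="not AI")    # keep the old tag, swap the label
print(verify(real), verify(forged))    # True False
```

Without something like the second scheme, a label is a statement anyone can make, alter, or deny, which is exactly the narrative-control worry above.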

        • snooggums@lemmy.world
          45 minutes ago

          Stripping C2PA simply removes the reliability part, which is fine if you don’t need it. It is something that is effective when present and not when it isn’t.

    • floofloof@lemmy.ca
      2 hours ago

      Apart from the privacy issues, I guess the challenge would be how you preserve the signature through ordinary editing. You could embed the unedited, signed photo into the edited one, but you’d need new formats and it would make the files huge. Or maybe you could deposit the original to some public and unalterable storage using something like a blockchain, but it would bring large storage and processing requirements. Or you could have the editing software apply a digital signature to track the provenance of an edit, but then anyone could make a signed edit and it wouldn’t prove anything about the veracity of the photo’s content.
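
The third option (signing each edit to track provenance, roughly what C2PA manifests do) can be sketched as a hash chain: every edit step records a hash of its resulting content plus the hash of the previous step, making the history tamper-evident even though, as noted, it proves nothing about whether the original content is true. All names here are illustrative:

```python
import hashlib
import json

def step_hash(entry: dict) -> str:
    """Stable hash of a provenance entry."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def record_step(prev_hash: str, action: str, content: bytes) -> dict:
    """One provenance entry: what was done, to what, after which step."""
    return {"prev": prev_hash,
            "action": action,
            "content_hash": hashlib.sha256(content).hexdigest()}

def chain_valid(steps: list) -> bool:
    """Each step must point at the hash of the step before it."""
    for earlier, later in zip(steps, steps[1:]):
        if later["prev"] != step_hash(earlier):
            return False
    return True

original = b"raw sensor data"
cropped = b"cropped pixels"

s1 = record_step("", "capture", original)
s2 = record_step(step_hash(s1), "crop", cropped)
history = [s1, s2]
print(chain_valid(history))  # True

# Quietly rewriting history breaks the chain:
s1["action"] = "generate-with-AI"
print(chain_valid(history))  # False
```

This keeps files small (only hashes travel with the image), but it inherits the weakness raised in the parent comment: a valid chain only tells you who signed each edit, not whether the first entry was honest.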

      • henfredemars@infosec.pub
        2 hours ago

        Hm, that’s true: there’s no way to distinguish between edits made in editing software and photos that have been completely generated. It only helps if you want to prove a photo is unmodified. And of course, I’m assuming here that China doesn’t care very much about privacy.