The tech giant is among companies pushing out AI tools while promising to build more tools to protect against their misuse

WP gift article expires in 14 days.

https://ghostarchive.org/archive/5UW77

  • @wahming
    47 points · 10 months ago

    Seems a lot of people are misinterpreting this.

    The goal is not to protect the general public from misinformation. The goal is to prevent the pool of new training data from getting TOO contaminated with AI-generated images, which would make it worthless for training new AI.
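
    To make the incentive concrete, the filtering on their side could look something like the sketch below. This is purely hypothetical: detect_watermark is a made-up stand-in, since the real detector isn't public.

    ```python
    from pathlib import Path

    def detect_watermark(image_path: Path) -> bool:
        """Hypothetical stand-in for whatever detector Google actually ships.
        Should return True if the image carries the invisible watermark."""
        raise NotImplementedError("the real detector isn't public")

    def filter_training_corpus(src_dir: Path, dst_dir: Path) -> None:
        """Copy only images that do NOT carry the watermark into the training
        pool, so marked AI output never contaminates the new dataset."""
        dst_dir.mkdir(parents=True, exist_ok=True)
        for img in src_dir.glob("*.png"):
            if not detect_watermark(img):
                (dst_dir / img.name).write_bytes(img.read_bytes())
    ```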

    • @jana@leminal.space
      18 points · 10 months ago

      The article itself makes the connection:

      As the 2024 presidential campaign ramps up, concern is quickly rising that such images might be used to spread false information.

      Though, I guess shame on us for expecting better journalism these days.

      • @wahming
        4 points · 10 months ago

        Who knows, probably a paid article

    • @Mirodir@lemmy.fmhy.net
      5 points · 10 months ago

      I don’t think that:

      The tool embeds a digital “watermark” directly into the image that can’t be seen by the human eye but can be picked up by a computer that’s been trained to read it.

      Is gonna be helpful for keeping AI-generated images out of training sets. It would require the people who make the models to actually implement that tool in their models.

      I don’t think most researchers not affiliated with Google will choose to do that.
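
      For what it's worth, the quoted part describes the same general idea as classic image steganography: bits hidden where the eye can't see them but a program can read back. Here's a toy least-significant-bit sketch of that idea. It is emphatically not Google's actual method, which is learned, robust to edits, and not public.

      ```python
      import numpy as np
      from PIL import Image

      # 40-bit tag to hide; a real system would embed something far more robust
      MARK = np.unpackbits(np.frombuffer(b"AIGEN", dtype=np.uint8))

      def embed(src: str, dst: str) -> None:
          """Hide the tag in the least significant bit of the first 40 red pixels."""
          px = np.array(Image.open(src).convert("RGB"))
          red = px[..., 0].flatten()
          red[:MARK.size] = (red[:MARK.size] & 0xFE) | MARK
          px[..., 0] = red.reshape(px.shape[:2])
          Image.fromarray(px).save(dst)  # must stay lossless (PNG), or the bits are gone

      def detect(path: str) -> bool:
          """Read the same pixel positions back and compare them to the tag."""
          red = np.array(Image.open(path).convert("RGB"))[..., 0].flatten()
          return bool(np.array_equal(red[:MARK.size] & 1, MARK))
      ```

      A JPEG re-encode or a crop destroys something that naive instantly, and even a robust version only shows up in images from generators that opted in, which is exactly your point.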

      • @wahming
        6 points · 10 months ago

        Most major developers of AI-generated imagery, at least the corporate ones, will do it, since they share a common interest in not polluting their sample data. Open-source image generators might make it optional, but the functionality will be implemented. Either the PRs will be submitted by one of the corporations, or marketing like this article will convince the devs to add it.

        Remember, they don’t need EVERYBODY to implement it. As long as this reduces the amount of unmarked AI-generated images by a reasonable percentage, it’s worth doing for them.