I’m finding it harder and harder to tell whether an image has been generated or not (the main giveaways are disappearing). This is probably going to become a big problem in like half a year’s time. Does anyone know of any proof of legitimacy projects that are gaining traction? I can imagine news orgs being the first to be hit by this problem. Are they working on anything?

  • @perestroika@slrpnk.net

    Negative proof: the AI company signs it with their watermark.

    Positive proof: the photographer signs it with their personal key, providing a way to contact them. Sure, it could be a fake identity, but you can attempt to verify it and draw that conclusion yourself.

    Cumulative positive and negative proof: on top of the photographer, news organizations add their signatures and remarks (e.g. BBC: “we know and trust this person”, Guardian: “we verified the scene”, Reuters: “we tried to verify this photo, but the person could not be contacted”).

    The photo, in the end, would not be just a bitmap, but a container file containing the bitmap (possibly with a steganographically embedded watermark) and various signatures granting or withdrawing trust.
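
    One way such a container could look, as a rough Python sketch: the layout and field names below are made up for illustration (not an existing standard), and it assumes the `cryptography` package for Ed25519 signatures.

```python
# Rough sketch (not an existing standard): a container bundling the bitmap
# with signed trust statements from the photographer and from reviewers.
# Assumes the `cryptography` package (pip install cryptography).
import base64
import hashlib
import json

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sign_statement(key: Ed25519PrivateKey, image_bytes: bytes, statement: str) -> dict:
    """Sign the image hash together with a free-form trust statement."""
    payload = json.dumps({
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "statement": statement,
    })
    raw_public_key = key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )
    return {
        "payload": payload,
        "signature": base64.b64encode(key.sign(payload.encode())).decode(),
        "public_key": base64.b64encode(raw_public_key).decode(),
    }


image_bytes = open("photo.jpg", "rb").read()
photographer_key = Ed25519PrivateKey.generate()  # the photographer's personal key
newsroom_key = Ed25519PrivateKey.generate()      # e.g. a news organisation's key

container = {
    "bitmap": base64.b64encode(image_bytes).decode(),
    "signatures": [
        sign_statement(photographer_key, image_bytes,
                       "Taken by me; contact: photographer@example.org"),
        sign_statement(newsroom_key, image_bytes,
                       "We know and trust this person"),
    ],
}
```

    Anyone else (a news desk, a fact-checker) could append further signed statements to the same list without touching the bitmap.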

    • @Korhaka@sopuli.xyz

      Isn’t that more like trusting your source, though? That’s something media companies either do or don’t do already.

      • @perestroika@slrpnk.net

        It would be a method of representing trust or distrust in a structured way that’s automatically accessible to the end user.

        The user could right-click an image, pick “check trust” from a menu, and be shown the metainfo: who originally signed it, and what the various parties have concluded about it.
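
        Continuing the made-up container layout from the sketch above, a “check trust” handler could boil down to verifying each signature against the bitmap and listing who stated what (again a rough sketch, assuming the `cryptography` package):

```python
# Rough sketch of the "check trust" action for the container sketched earlier
# in the thread: verify every signature against the bitmap and report who
# stated what. Assumes the `cryptography` package.
import base64
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def check_trust(container: dict) -> list[dict]:
    image_bytes = base64.b64decode(container["bitmap"])
    actual_digest = hashlib.sha256(image_bytes).hexdigest()
    report = []
    for entry in container["signatures"]:
        payload = json.loads(entry["payload"])
        public_key = Ed25519PublicKey.from_public_bytes(
            base64.b64decode(entry["public_key"])
        )
        try:
            public_key.verify(base64.b64decode(entry["signature"]),
                              entry["payload"].encode())
            # The signed hash must match *this* bitmap, not some other image.
            valid = payload["sha256"] == actual_digest
        except InvalidSignature:
            valid = False
        report.append({"statement": payload["statement"], "valid": valid})
    return report


# `container` is the dict built in the earlier sketch.
for row in check_trust(container):
    print("OK " if row["valid"] else "BAD", row["statement"])
```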