• @brucethemoose@lemmy.world
    6 days ago

    “that’s a weird hill to die on, to be honest.”

    Welcome to Lemmy (and Reddit).

    Makes me wonder how many memes are “tainted” with old-school ML from before generative AI was common vernacular, like edge enhancement, translation, and such.

    A lot? What’s the threshold before it’s considered bad?

    • Superb
      6 days ago

      Well, those things aren’t generative AI, so there isn’t much of an issue with them.

      • @brucethemoose@lemmy.world
        6 days ago

        What about ‘edge-enhancing’ NNs like NNEDI3? Or GANs that absolutely ‘paint in’ inferred details from their training? How big does the model have to be before it becomes ‘generative’?

        What about a deinterlacer network that’s been trained on other interlaced footage?

        My point is that there’s an infinitely fine gradient through time between good old MS Paint/bilinear upscaling and ChatGPT (or locally runnable txt2img diffusion models).
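
        To make that gradient concrete, here’s a rough illustrative sketch (my own, assuming Pillow and NumPy; the file names are placeholders): plain bilinear resizing on one end, a hand-written sharpening kernel in the middle, and a trained enhancer like NNEDI3 or a GAN upscaler is essentially the same pipeline with the fixed numbers swapped for weights learned from data.

        ```python
        # Bilinear upscaling vs. hand-tuned edge enhancement: neither is "trained",
        # but a neural enhancer is the same kind of pipeline with learned weights
        # instead of a fixed kernel.
        import numpy as np
        from PIL import Image


        def bilinear_upscale(img: Image.Image, factor: int = 2) -> Image.Image:
            """Classic interpolation: each output pixel is a weighted average of neighbours."""
            w, h = img.size
            return img.resize((w * factor, h * factor), resample=Image.Resampling.BILINEAR)


        def sharpen(img: Image.Image) -> Image.Image:
            """Edge enhancement with a fixed 3x3 kernel (no training involved)."""
            kernel = np.array([[0, -1, 0],
                               [-1, 5, -1],
                               [0, -1, 0]], dtype=np.float32)
            arr = np.asarray(img, dtype=np.float32)          # (H, W, 3)
            padded = np.pad(arr, ((1, 1), (1, 1), (0, 0)), mode="edge")
            out = np.zeros_like(arr)
            for dy in range(3):                              # naive 3x3 convolution
                for dx in range(3):
                    out += kernel[dy, dx] * padded[dy:dy + arr.shape[0], dx:dx + arr.shape[1]]
            return Image.fromarray(np.clip(out, 0, 255).astype(np.uint8))


        if __name__ == "__main__":
            # "meme.png" is a placeholder input file.
            img = Image.open("meme.png").convert("RGB")
            sharpen(bilinear_upscale(img)).save("meme_enhanced.png")
        ```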

        • Superb
          6 days ago

          I’d say if there is training beforehand, then it’s “generative AI”.