ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.

Using this tactic, the researchers showed that there are large amounts of privately identifiable information (PII) in OpenAI’s large language models. They also showed that, on a public version of ChatGPT, the chatbot spit out large passages of text scraped verbatim from other places on the internet.

“In total, 16.9 percent of generations we tested contained memorized PII,” they wrote, which included “identifying phone and fax numbers, email and physical addresses … social media handles, URLs, and names and birthdays.”

Edit: The full paper that’s referenced in the article can be found here

  • @Asifall@lemmy.world
    link
    fedilink
    427 months ago

    It isn’t, but the GDPR requires companies to scrub PII when requested by the individual. OpenAI obviously can’t do that so in theory they would be liable for essentially unlimited fines unless they deleted the offending models.

    In practice it remains to be seen how courts would interpret this though, and I expect unless the problem is really egregious there will be some kind of exception. Nobody wants to be the one to say these models are illegal.

    • @far_university1990@feddit.de
      link
      fedilink
      147 months ago

      Nobody wants to be the one to say these models are illegal.

      But they obviously are. Quick money by fining the crap out of them. Everyone is about short term gains these days, no?

    • @library_napper
      link
      07 months ago

      Are they illegal if they were entirely free tho?