It just feels too good to be true.

I’m currently using it for formatting technical texts and it’s amazing. It doesn’t generate them properly on its own, but if I give it the bulk of the info, it makes them pretty af.

Also just talking to it and asking for advice on the most random kinds of issues. It gives seriously good advice. But it makes me worry about whether I’m volunteering my personal problems and innermost thoughts to a company that will misuse them.

Are these concerns valid?

  • @Nonameuser678@aussie.zone

    It’s not conscious or self-aware. It’s just putting words together that don’t necessarily have any meaning. It can simulate language, but meaning is a lot more complex than putting the right words in the right places.

    I’d also be VERY surprised if it isn’t harvesting people’s data in the exact way you’ve described.

    • @Reborn2966@feddit.it

      You don’t need to be surprised; their ToS states pretty plainly that anything you write to ChatGPT can be used to train it.

      Nothing you write in that chat is private.

    • @lloram239@feddit.de

      > It’s not conscious or self-aware.

      That’s correct; its whole experience is limited to a ~2000-word text prompt (that includes your questions as well as previous answers). Everything else is a static model with a bit of randomness sprinkled in so it doesn’t just repeat itself. It doesn’t learn. It doesn’t have long-term memory. Every new conversation starts from scratch.
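
      Roughly what that looks like at the API level (a minimal sketch assuming the OpenAI Python client; the model name, temperature value, and the `ask` helper are just illustrative): the model holds no state between calls, so the chat front end has to resend the whole conversation on every turn, and the "bit of randomness" is just a sampling temperature.

      ```python
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      history = []  # the only "memory" the model sees is whatever we resend here

      def ask(question: str) -> str:
          history.append({"role": "user", "content": question})
          response = client.chat.completions.create(
              model="gpt-3.5-turbo",   # placeholder model name
              messages=history,        # the full conversation is resent on every call
              temperature=0.7,         # sampling randomness so replies don't repeat verbatim
          )
          answer = response.choices[0].message.content
          history.append({"role": "assistant", "content": answer})
          return answer

      # Start over with an empty `history` and the model "remembers" nothing from before.
      ```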

      User data might be used to fine-tune future models, but it has no relevance for the current one.

      > It’s just putting words together that don’t necessarily have any meaning. It can simulate language, but meaning is a lot more complex than putting the right words in the right places.

      This is just wrong, despite being frequently parroted. It obviously understands a lot; having a little bit of conversation with it should make that very clear. You can’t generate language without understanding the meaning; people have tried before and never got very far. The only problem it has is that its understanding is only of language: it doesn’t know how language relates to other sensory inputs (GPT-4 has a bit of image stuff built in, but it’s all still a work in progress). So don’t ask it to draw pictures or graphs; the results won’t be any good.

      That said, it’s surprising how much knowledge it can extract just from text alone.