• @takeda@lemmy.dbzer0.com
          10 days ago

          I’m not involved in LLMs, but apparently the way it works is that the sentence is broken into words, and each word is assigned a unique number; that’s how the information is stored. So the LLM never sees the actual word.
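          A toy sketch of that idea (real LLMs use subword tokenizers like BPE, and the names here are made up for illustration, but the word-to-number mapping is the gist):

          ```python
          # Assign each new word the next unused number; the model only ever
          # sees these numbers, never the spelled-out words.
          def build_vocab(sentences):
              vocab = {}
              for sentence in sentences:
                  for word in sentence.split():
                      if word not in vocab:
                          vocab[word] = len(vocab)
              return vocab

          def encode(sentence, vocab):
              return [vocab[w] for w in sentence.split()]

          vocab = build_vocab(["the cat sat", "the dog sat"])
          print(vocab)                          # {'the': 0, 'cat': 1, 'sat': 2, 'dog': 3}
          print(encode("the dog sat", vocab))   # [0, 3, 2]
          ```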

          • @CosmicTurtle0@lemmy.dbzer0.com
            10 days ago

            Adding to this, each word and the words around it are given a statistical probability. In other words, what are the odds that word 2 follows word 1? Scale that out for every word in a sentence and you can see that LLMs are just huge math equations that put words together based on their statistical probability.
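            A minimal sketch of "odds that word 2 follows word 1" using bigram counts (this is an illustrative stand-in; real LLMs learn these probabilities with a neural network over tokens, not a lookup table, but the statistical idea is the same):

            ```python
            from collections import defaultdict

            # Count how often each word follows each other word, then
            # normalize the counts into probabilities.
            def bigram_probs(corpus):
                counts = defaultdict(lambda: defaultdict(int))
                for sentence in corpus:
                    words = sentence.split()
                    for w1, w2 in zip(words, words[1:]):
                        counts[w1][w2] += 1
                probs = {}
                for w1, nexts in counts.items():
                    total = sum(nexts.values())
                    probs[w1] = {w2: n / total for w2, n in nexts.items()}
                return probs

            p = bigram_probs(["the cat sat", "the dog sat", "the cat ran"])
            print(p["the"])  # P('cat'|'the') = 2/3, P('dog'|'the') = 1/3
            ```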

            This is key because, and I can’t emphasize this enough, AI does not think. We (humans) anthropomorphize them, giving them human characteristics when they are little more than number crunchers.