• @SpaceNoodle@lemmy.world
    5 months ago

    It won’t hallucinate less with additional training input.

    An LLM is good at making sentences that seem convincing, but has no ability to reason.

    • @msage@programming.dev
      5 months ago

      Thanks for ignoring the same argument over and over again; it makes you look very stuck-up.

      Intelligence does not require perfection (you are an example). You also hallucinate random output, but you can learn to stop specific hallucinations - for instance, by reading a Wiki page.

      LLMs aren’t different in that regard - they were trained on inputs, and if you extend their training sets, they will be more accurate in those areas.

      The ability to reason is a very hard concept to specify, and we don’t have any foolproof test (that I know of) that would definitively say whether LLMs can reach that stage.

      I will fight you if you try to tell me that all humans are smarter than any current AI - because there are some really dumb people walking this earth, mindlessly reproducing, unable to process the basic concepts their lives depend on.

      None of this changes the fact that there is intelligence here - natural language is an incredibly hard thing to code deterministically - and as such it deserves the ‘AI’ label without a doubt.