• @Jakeroxs@sh.itjust.works · 11 days ago

    I don’t see how that’s different, honestly. Then again, I’m not usually asking LLMs for absolute truth; it’s more for explaining concepts I can’t fully grasp by restating them in another way, or for small coding stuff where I can check essentially immediately whether it works or not lol.

    • @Nalivai@lemmy.world · 11 days ago

      See, this is the problem I’m talking about. You think you can gauge whether the code works or not, but even for small pieces (and in some cases, especially for small pieces) there is a world of very bad, very dangerous shit that lies between “works” and “doesn’t work”.
      And it’s just as dangerous when you trust it to explain something to you. It’s by definition something you don’t know, and therefore can’t check.

      • @Jakeroxs@sh.itjust.works · 10 days ago
        I mean, I literally can test it immediately lol. A Node-RED JS function isn’t going to be dangerous.
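As a sketch of the kind of small, instantly-testable snippet being described (the specific function here is hypothetical, not from the thread): a Node-RED function node is just a short JS body that receives a `msg` object and returns it, which can be exercised outside Node-RED by wrapping it in a plain function.

```javascript
// Hypothetical example of a tiny Node-RED-style function node body,
// wrapped in a plain function so it can run standalone.
// It converts a sensor reading from °F to °C, rounded to one decimal.
function functionNode(msg) {
  msg.payload = Math.round((msg.payload - 32) * (5 / 9) * 10) / 10;
  return msg; // Node-RED passes the returned msg to the next node
}

console.log(functionNode({ payload: 72 }).payload); // 22.2
```

A wrong result is visible the moment the flow is deployed, which is the “check it immediately” property the comment relies on.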

        Or an AHK script that displays a keystroke on screen, or cleaning up a docker run command into Docker Compose — simple shit lol
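The Docker Compose cleanup mentioned above might look like this (the container and image here are illustrative assumptions, not from the thread): a one-off `docker run` command restated as a `docker-compose.yml`, where either version can be started and checked immediately.

```yaml
# Original one-off command (hypothetical example):
#   docker run -d --name grafana -p 3000:3000 \
#     -v grafana-data:/var/lib/grafana grafana/grafana
# The same thing as a Compose file, started with `docker compose up -d`:
services:
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
volumes:
  grafana-data:
```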