It’s not that LLMs can’t know truth, that’s obvious but beside the point. It’s that the user can’t really tell where the lies are, not to the degree you can when getting info from a human.
So you really need to check everything, every claim, every word, every sound. You can’t assume good intentions, there are no intentions in the real sense of the word, and you can’t extrapolate or interpolate. Every word of the output might be a lie with the same probability as any other word.
Checking properly takes so much effort that you either skip some of it or spend more time than you would have without the layer of lies.
I don’t see how that’s different from humans, honestly. Then again, I’m not usually asking LLMs for absolute truth, more for explaining concepts I can’t fully grasp by restating things another way, or for small coding stuff that I can check essentially immediately to see if it works or not lol.
See, this is the problem I’m talking about. You think you can gauge whether the code works or not, but even for small pieces (and in some cases, especially for small pieces) there’s a world of very bad, very dangerous shit that lies between “works” and “doesn’t work”. See the sketch after this post for the kind of thing I mean.
And it’s just as dangerous when you trust it to explain something to you. That’s by definition something you don’t know and therefore can’t check.
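A purely illustrative sketch (not from the thread, the function and bug are hypothetical): the kind of “small” helper an LLM might hand you. The obvious spot check passes, so it looks like it “works”, but the failure only shows up on inputs you didn’t think to try.

```python
def chunk(items, size):
    """Split items into consecutive chunks of length `size`."""
    # Subtle bug: the range stops before a trailing partial chunk would start.
    return [items[i:i + size] for i in range(0, len(items) - size + 1, size)]

# The obvious check passes, so a quick manual test says "works":
assert chunk([1, 2, 3, 4], 2) == [[1, 2], [3, 4]]

# ...but any trailing partial chunk is silently dropped:
print(chunk([1, 2, 3, 4, 5], 2))   # [[1, 2], [3, 4]] -- the 5 is gone
```

Nothing crashes, nothing warns; the data just quietly disappears, which is exactly the space between “works” and “doesn’t work”.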