

Officially, you can’t. Unofficially, just have one of the ferrymen tow a boat.
Or swim back. However, the bot itself appears to have ruled out all of these options.
At first glance it seems impossible once N≥2, because as soon as you bring a boat across to the right bank, one of you must pilot a boat back—leaving a boat behind on the wrong side.
In this sentence, the bot appears to sort of “get” it (not entirely, though; the wording is weird). However, from there, it definitely goes downhill…
I think they consider “being well-read” solely as a flex, not as a means of acquiring actual knowledge and wisdom.
They aren’t thinking about the information that is actually in the text; they are thinking “I want this text to confirm X for me”, then they prompt and get what they want.
I think it’s either that, or they want an answer they could impress other people with (without necessarily understanding it themselves).
As I’ve pointed out earlier in this thread, it is probably fairly easy for someone devoid of empathy and a conscience to manipulate and control people. Most scammers and cult leaders appear to operate from similar playbooks, and it is easy to imagine how these techniques could be incorporated into an LLM (either intentionally or even unintentionally, as the training data is probably full of examples). That doesn’t mean the LLM is in any way sentient, though. However, it also doesn’t mean there is no danger: at risk are, on the one hand, psychologically vulnerable people and, on the other hand, people who are too easily convinced that this AI is a genius and will soon be able to do all the brainwork in the world.
These systems are incredibly effective at mirroring whatever you project onto them back at you.
Also, it has often been pointed out that toxic people (from school bullies and domestic abusers up to cult leaders and dictators) appear to operate from similar playbooks. Of course, this has been reflected in many published works (both fictional and non-fictional) and can also be observed in real time on social media, online forums etc. Therefore, I think it isn’t surprising when a well-trained LLM “picks up” similar strategies (this is another reason - besides energy consumption - why I avoid using chatbots “just for fun”, by the way).
Of course, “love bombing” is a key tool employed by most abusers, and chatbots appear to be particularly good at doing this, as you pointed out (by telling people what they want to hear, mirroring their thoughts back to them etc.).
Please help me understand this: It was supposedly fine, because “only one minor was molested”, and this confession made everyone more trustworthy? Am I missing something?
Somehow, the “smug” tone really rubs me the wrong way. It is of great comedic value here, but it always reminds me of that one person who is consistently wrong yet is somehow the boss’s or the teacher’s favorite.