People are being fooled, but they're never actually given the problem: "one of these users is a bot; which one is it?"
What they get instead is a looser problem, only similar to the Turing test: "zero or more of these users may be bots; have fun in the discussion forum."
But there's no test or evaluation to check whether any user successfully identified a bot. There's no field recording which users are actually bots, are partially bot-assisted, or aren't using bots at all, and no field capturing each user's opinion about whether the others are bots (a rough sketch of what such records might look like is below).
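To make the gap concrete, here is a minimal Python sketch of the kind of record such an evaluation would need; the names (`BotStatus`, `ParticipantRecord`, `score_detection`) and the structure are hypothetical, not anything an existing forum collects.

```python
from dataclasses import dataclass, field
from enum import Enum

class BotStatus(Enum):
    HUMAN = "human"        # not using a bot at all
    ASSISTED = "assisted"  # partially using a bot
    BOT = "bot"            # fully automated account

@dataclass
class ParticipantRecord:
    user_id: str
    actual_status: BotStatus  # ground truth: the field that never gets collected
    # this user's opinion about every other user's status: the other missing field
    guesses: dict[str, BotStatus] = field(default_factory=dict)

def score_detection(records: list[ParticipantRecord]) -> float:
    """Fraction of guesses that match the guessed user's actual status."""
    truth = {r.user_id: r.actual_status for r in records}
    correct = total = 0
    for r in records:
        for target, guess in r.guesses.items():
            if target in truth:
                total += 1
                correct += guess == truth[target]
    return correct / total if total else 0.0
```

With records like these you could at least say whether anyone in a thread actually spotted the bots; without them, "people are fooled" is an impression, not a measurement.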
Then there's the fact that the Turing test has always said as much about the gullibility of the human evaluator as it has about the machine. ELIZA was good enough to fool normies, and current LLMs are good enough to fool experts. It's just that their alignment keeps them from trying very hard.