I feel like there is some contingent of people who are really bent on downplaying the achievements of AI as of late. It's objectively insane, yet somehow every discussion is still sprinkled with some form of "It told me 8x8=60 so I closed the window and never used it again"
True, but the problem is that they never know. They don't interact with the real world and have no way of verifying their training data's accuracy. Perhaps they could assign a confidence level to each response? But then, if they assign high confidence to an incorrect answer, the failure compounds: not only were they wrong, they vouched for their wrong answer.
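For what it's worth, a crude version of this already exists: many model APIs expose per-token log-probabilities, and averaging them gives a rough "how sure was the model of its own wording" score. A minimal Python sketch, API-agnostic, with made-up logprob values for illustration (real ones would come from whatever API you use):

    import math

    def confidence_from_logprobs(token_logprobs):
        """Geometric-mean-of-token-probabilities confidence proxy.

        token_logprobs: per-token log-probabilities (floats <= 0).
        Measures how sure the model was of its wording, NOT whether
        the answer is true; a fluent hallucination can score high.
        """
        if not token_logprobs:
            return 0.0
        avg = sum(token_logprobs) / len(token_logprobs)
        return math.exp(avg)  # maps average logprob into (0, 1]

    # Made-up values for illustration:
    print(confidence_from_logprobs([-0.05, -0.2, -0.01]))  # ~0.92, "confident"
    print(confidence_from_logprobs([-1.6, -2.3, -0.9]))    # ~0.20, "uncertain"

Which is exactly your point: a score like this tracks fluency, not truth, so a confidently wrong answer still comes out looking confident.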
Google search was incredibly valuable immediately, even if most of the links were rubbish. I can't say the same about current LLMs.
It is an incredible achievement that LLMs produce human-like output (e.g., I wouldn't know a GPT bot was answering me unless we were discussing a topic where precision/accuracy matter), but they hallucinate: they are confident BS generators.
The hype is that LLMs can solve any problem and replace humans (jobs). They can't.
It may depend on what you do, but I find it easier/faster to do the work myself than to spot and fix [a possibly subtle] error in AI output. Some of the specifics will improve in time, though, and you can find tasks where AI is useful even today.
I don't see how the models can improve at general tasks (AGI) without becoming an existential threat to humans (not just to jobs).