I think AI's lack of friction is a real problem.
AI models' output is always overly confident. And when you correct them, they will almost always come back with something like "Ah, you're totally right" and flip their answer (unless safeguards or deep research are involved).
AI doesn't push back, so more often than not you never end up second-guessing your own thoughts. That pushback is, in essence, the most valuable part of discussions with other humans.