> I do not use AI for engineering work and never will
> Once someone outsources their brain they are unlikely to keep learning or evolving from that point
It doesn't piss me off, it makes me feel sorry for you. Sorry that you're incapable of imagining those with curiosity who use AI to do exactly what you're claiming they don't - learning. The uncurious were uncurious before AI and remain so. They never asked why, and they still don't. Similarly, the curious still ask why as much as ever. Rather than Google, though, they now have a more reliable source to ask, one that explains things much better and more quickly.
I hear you grunt! Reliable? Hah! It's a stochastic parrot hallucinating half the time!
Note that I said more reliable than Google, which was the status quo. Google is unreliable. Yes, even when consulting three separate sources. Not that one has the time for that, not in reality.
You've got it the wrong way around. LLMs do the exact opposite - they widen the gap between the curious and the nots, accelerating the difference in learning rate between them. The nots... they're in for a tough time. LLMs are so convenient that they'll cruise through life copy-pasting the answers, until they're asked to demonstrate competence in a setting where no LLM is available and everything falls apart.
If you still find this hard to imagine, here's how it goes. In your mind, LLM usage by definition looks like this - and for the uncurious, that is indeed how it goes.
User: Question. LLM: Answer. End of conversation.
By the curious, it's used like this.
User: Question. LLM: Answer. User: Why A? Why B? LLM: Answer. User: But C. What's the tradeoff? LLM: Answer. User: Couldn't also X? LLM: Answer. User: I'm not familiar with Y, explain.
A junior using an (open source, open weight, and locally hosted) LLM to learn a new, well-established, well-understood industry skill is something I could tolerate, provided I can see they are clearly improving in ways I can easily validate when pairing and interacting with them while the LLMs are not available.
That said, for me personally, almost all of the work I do is on things that have never been done before in security engineering and supply chain security, and the entire body of relevant public research an LLM could have trained on comes from maybe 10 people, most of whom I am in frequent touch with and whose work I know very well.
LLMs in general are very, very bad at threat modeling and security engineering because there is so little training data on the -right- way to do things. They can often produce code that -works-, but most people would be unable to spot when it is wildly insecure.
There are many, many cases where the overwhelming majority of the examples in open source training data do things completely wrong, and that wrong way is what an LLM will reproduce on average.
Honestly, the only way I could maybe see using an LLM to teach the type of security engineering and auditing work I do is to have LLMs generate code examples, and train humans to spot the security flaws the LLM confidently overlooked because it cannot reason or threat model.
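To make that concrete, here is a hypothetical sketch of such a training exercise (my illustration, not the commenter's material): a snippet in the style an LLM might confidently produce. It runs and appears to work, and the exercise is for a human reviewer to find the flaws noted in the comments.

```python
# Hypothetical training exercise: code in the style an LLM might confidently
# produce. It runs and "works", but a reviewer should catch the flaws below.
import hashlib
import random

def make_reset_token() -> str:
    # Flaw 1: `random` is not a CSPRNG; security tokens should use `secrets`.
    return "%032x" % random.getrandbits(128)

def store_password(password: str) -> str:
    # Flaw 2: unsalted MD5 is trivially brute-forced; use a real KDF
    # (scrypt, argon2, bcrypt) instead.
    return hashlib.md5(password.encode()).hexdigest()

def check_token(supplied: str, expected: str) -> bool:
    # Flaw 3: `==` short-circuits and leaks timing; use hmac.compare_digest.
    return supplied == expected
```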
The problem with current LLMs is that they are so sycophantic that they don't tell you when you're asking the wrong questions. I've been down the path of researching something with an LLM, and I came to a conclusion. When I later revisited the subject, I instead read a longer-form authoritative source, and it became clear that "interrogating" the matter had caused me to miss important information.
Which models have you been using, and through which interface? Try using GPT-5 through the API, without a harness. Whatever model you use, the key phrase is "Why or why not? If so, explain why. If not, explain why not."
Recognizing when you're asking the wrong questions is also a skill that the curious, such as yourself, will develop in no time through experience. Are you saying this didn't give you a shred of intuition about how to do so in the future?
> User: Question. LLM: Answer. User: Why A? Why B? LLM: Answer. User: But C. What's the tradeoff? LLM: Answer. User: Couldn't also X? LLM: Answer. User: I'm not familiar with Y, explain.
In practice it’s more like:
User: Question. LLM: Answer. User: Why A? LLM: Actually, you’re right, it’s not A, it’s B. User: Why B? LLM: Actually, you’re right, it’s not B, it’s C. User: Why C? LLM: Actually, you’re right, it’s not C, it’s D.
I’ve had this happen pretty much every time I asked an LLM a non-trivial question.
Change your question-asking strategy, and potentially your models. GPT-5 is much less prone to this. That's the easiest change to make, so try it out. I'm not saying this to promote OpenAI; have a look at my profile if you must. Qwen is also less likely to do this. Sonnet and GPT-4o are famously sycophantic, and DeepSeek has clearly been training on Claude outputs, so it is also prone to it.
Of course, never use the consumer chat interfaces, only the straight API, as the system prompts on the consumer versions induce sycophancy because that is what the uncurious like. For this kind of conversation, don't use things like Claude Code or other harnesses either. Just have a conversation through API calls.
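For what "just a conversation through API calls" can look like in practice, here is a minimal sketch (my illustration, with assumptions: an OpenAI-compatible chat completions endpoint, an API key in the OPENAI_API_KEY environment variable, and whatever model name you actually have access to in place of "gpt-5"). No system prompt, no harness, just the running message history.

```python
# Minimal sketch of a raw API conversation, assuming an OpenAI-compatible
# /v1/chat/completions endpoint and a key in OPENAI_API_KEY.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

messages = []  # no system prompt, no harness - just the running conversation

def ask(question: str, model: str = "gpt-5") -> str:
    messages.append({"role": "user", "content": question})
    resp = requests.post(API_URL, headers=HEADERS,
                         json={"model": model, "messages": messages})
    resp.raise_for_status()
    answer = resp.json()["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": answer})
    return answer

print(ask("Question. Why or why not? If so, explain why. If not, explain why not."))
print(ask("Why A? What's the tradeoff against C?"))
```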
You also changed the questions in the scenario in a way that makes that failure mode more likely. Keep them open-ended; the curious reader's most-used phrase should probably be "Why or why not? If so, explain why. If not, explain why not."