
I do not use books for engineering work and never will, because doing the work of thinking for myself is how I maintain the neural capacity for forming my own original thoughts and ideas no writer has seen before.

If anyone gives me an opinion from a book, they disrespect me and themselves to the point that they are dead to me in an engineering capacity. Once someone outsources their brain, they are unlikely to keep learning or evolving from that point, and unlikely to have a future in this industry, as they are so easily replaceable.

If this pisses you off, ask yourself why.

(You can replace AI with any resource and it sounds just as silly :P)





Yes, if you find a book that is as bad as AI advice, you should definitely throw it away and never read it. If someone is quoting a known-bad book, you should ignore their advice (and as a courtesy, tell them their book is bad).

It's so strange that pro-AI people don't see this obvious fact and keep trying to compare AI with things that are actually correct.


It's so strange that anti-AI people think the advice and information you can get from a good model (if you know how to operate it well) is less valuable than the advice and information you can get from a book.

That "a good model (if you know how to operate it well)" is doing a lot of heavy lifting. To be sure, there are plenty of bad books, and you can get actively bad advice from them, but a book has fixed content that can gain and lose a reputation, whereas a model (even a good one!) produces highly variable output depending on whether you know how to operate it well.

So when someone or some group that I respect recommends a book, I can read it with some trust that the content is valuable. When someone quotes a model's response without any commentary or affirmation, it inspires no such trust. It just indicates that the person has abdicated their thought process.

I agree that quoting a model's answer to someone else is bad form - you can get a model to say ANYTHING if you prompt it to, so a screenshot of a ChatGPT conversation to try and prove a point is meaningless slop.

I find models vastly more useful than most technical books in my own work because I know how to feed in the right context and then ask them the right questions about it.

There isn't a book on earth that could answer the question "which remaining parts of my codebase still use the .permission_allowed() method and what edge-cases do they have that would prevent them from being upgraded to the new .allowed() mechanism?"


And as long as you don't copy-paste its advice into comments, that's fine.

No one really cares how you found all those .permission_allowed() calls to replace - was it grep, intense staring, or an AI model. All that matters is that you stand behind it and act as its author. The original post said it very well:

> ChatGPT isn’t on the team. It won’t be in the post-mortem when things break. It won’t get paged at 2 AM. It doesn’t understand the specific constraints, tech debt, or your business context. It doesn’t have skin in the game. You do.


Further, grep (and any of its siblings) works just fine for such a task: it's deterministic, it won't feed you bullshit, and it doesn't charge you tokens to do a worse job than existing free tools already do well. Better yet, given the dithering pace of LLMs, in my experience you'll get your answer quicker, too.
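
Concretely, the deterministic version is a one-liner (assuming the code lives under src/ - adjust the path and pattern to taste):

    # list every remaining call site, with file name and line number
    grep -rn '\.permission_allowed(' src/

That nails the "which parts still use it" half of the question exactly; judging the edge-cases still takes a human reading the hits.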

> There isn't a book on earth that could answer the question "which remaining parts of my codebase still use the .permission_allowed() method and what edge-cases do they have that would prevent them from being upgraded to the new .allowed() mechanism?"

You're so close to realising why the book counter argument doesn't make any sense!


> anti-AI people think the advice and information you can get from a good model (if you know how to operate it well) is less valuable than the advice and information you can get from a book

Those people exist and they’re wrong.

More frequently, however, I find I’m judging the model less than its user. If I get an email that smells of AI, I ignore it. That’s partly because I have the luxury to do so. It’s largely because engaging has commonly proven fruitless.

You see a similar effect on HN. Plenty of people use AI to think through problems. But the comments that quote it directly are almost always trash.


"But the comments that quote it directly are almost always trash."

Because the output is almost always trash. The only way it isn't is when someone re-does the work themselves and then claims it came from the LLM.

These tools are being sold to me as something like a junior engineer, but that's not at all true: I would fire a junior who came to me with this kind of bullshit and needed such significant hand-holding as often as I see it from an LLM.


So if anyone with an IQ below 120 gives you their opinion, that's disrespectful, because they are stupid?

---

It’s interesting that we have to respect human “stupid” opinions but anything from AI is discarded immediately.

I'd advocate for respecting any opinion, and considering it good, or at least offered in good will.


Of course I respect humans, I am a human myself! And I learned a lot from others, asking them (occasionally stupid) questions and listening to their explanations. Doing the same for others is just being fair. Explain a thing and make someone more knowledgeable! Maybe next time _they_ will help you!

This does not apply to AI, of course. In most cases, if a person has made an AI PR or comment once, they will keep making them, so your explanation will be forgotten the next time they clear the context. Might as well not waste your time and dismiss it right away.


[flagged]


The same, you say?

What was the reason before and what’s the reason now?

Seems like it's "our" superiority.


Congratulations on misunderstanding and misrepresenting the point. (This is sarcasm, btw.)

It’s not the source that matters. It’s not the source that he’s complaining about. It’s the nature of the interaction with the source.

I’m not against watching video, but I won’t watch TikTok videos, because they are done in a way that is dangerously addictive. The nature of engagement with TikTok is the issue, not “I can’t learn from electrical devices.”

Each of us must beware of the side effects of using tools. Each kind of tool has its hazards.


What is this new breed of interactive books that give you half-baked opinions and incorrect facts in response to a prompt?

It's called a "scam". You're welcome.

Yeah, except it's not quite the same thing, is it?

The fact that you're presenting this as a comically absurd comparison tells me you know full well that it's an absurd comparison.


At least you could counter with an argument. You just seem to agree that both are absurd.

Nah, I thought OP was spot on. A book isn't in the same class of things as an automated bullshit generator.


