
> while quoting an HR executive at a Fortune 100 company griping: "All of these copilots are supposed to make work more efficient with fewer people, but my business leaders are also saying they can't reduce head count yet."

I'm surprised McKinsey convinced someone to say the quiet part out loud



I find it all quite strange:

- AI companies will of course try to sell you on the idea that you can reduce headcount with AI.

- CEOs will parrot this talking point without ever taking a closer look.

- Everyone lower down on the org chart, minus the engineers, is wondering why the change hasn't started yet.

- Meanwhile, engineers are ripping their hair out because they know that AI in its current state will likely not replace any workers.

Pretty soon we will have articles like "That time CEOs thought AI could replace workers".


The incentive structure for managers (and literally everyone up the chain) is to maximize headcount. The more people you manage, the more power you have within the organization.

No one wants to say on their resume, "I manage 5 people, but trust me, with AI, it's like managing 20 people!"

Managers also don't pay people's salaries. The Tech Tools budget is a different budget from the People salaries budget.

Also keep in mind that for any problem space, there is an unlimited number of things to do. 20 people working 20% more efficiently won't reach infinity any faster than 10 people.


> The incentive structure for managers (and literally everyone up the chain) is to maximize headcount. The more people you manage, the more power you have within the organization

Ding ding ding!

AI can absolutely reduce headcount. It already could 2 years ago, when we were just getting started. At the time I worked at a company that did just that, successfully automating away thousands of jobs that couldn't be automated pre-LLMs. The reason it "worked" was that it was outsourced headcount, so there was very limited political incentive to keep those people if they were replaceable.

The bigger and older the company, the more ossified the structures that want to keep headcount at least level, and ideally grow it. This is by far the biggest cause of all these "failed" AI projects. It's super obvious once you notice that jobs that were outsourced, or done by temp/contracted workers, are being replaced much more rapidly. The same goes for the fact that tech startups are hiring much less than before. I'm not talking about YC-and-co startups here; those are global exceptions, affected a lot by ZIRP and whatnot. I'm talking about the 99.9% of startups that don't get big VC funds.

A lot of the narrative on HN that it isn't happening and that AI is all a scam comes, IMO, out of reasonable fear.

If you're still not convinced, think about it this way. Before LLMs were a thing, if I asked you what the success rate of software projects at non-tech companies was, what would you have said? 90% failure rate? To my knowledge, the numbers are indeed close. And what's the biggest reason? Almost never "this problem cannot be technically solved". You'd probably name other, more common reasons.

Why would this be any different for AI? Why would those same reasons suddenly disappear? They don't. All the politics, all the enterprise salesmen, the lack of understanding of actual needs, the personal KPIs to hit - they're all still there. And the politics are even worse than with trad. enterprise software now that the premise of headcount reduction looms larger than ever.


Yes, and it’s instructive to see how automation has reduced head count in oil and gas majors. The reduction comes when there’s a shock financially or economically and layoffs are needed for survival. Until then, head count will be stable.

Trucks in the oil sands can already operate autonomously in controlled mining sites, but wide adoption is happening slowly, waiting for driver turnover and equipment replacement cycles.


> The bigger and older the company, the more ossified the structures that want to keep headcount at least level, and ideally grow it.

I don't know; most of the companies doing regular layoffs whenever they can get away with it are pretty big and old. Be it in tech - IBM/Meta/Google/Microsoft - or in physical things - car manufacturers, shipyards, etc.


Through top-down, hard mandates directly by the exec level, absolutely! They're an unstoppable force, beating those incentives.

The execs aren't the ones directly choosing, overseeing and implementing these AI efforts - or, in the preceding decades, the software efforts. 9 times out of 10, they know very little about the details. They may "spearhead" it insofar as that's possible, but there are tons of layers in between, each with their own incentives, whose cooperation is required to actually make it work.

If the execs say "Whole office full-time RTO from next month 5 days a week", they really don't depend on those layers at all, as it's suicide for anyone to just ignore it or even fake it.


> At the time I worked at a company that did just that, successfully automating away thousands of jobs that couldn't be automated pre-LLMs.

Which company is this? Surely they would've made a big splash for doing something no one else has been able to do.


Did you not see the backlash the Duolingo CEO got, and how hard he backtracked? Coming out and saying "We're replacing a big bunch of people with LLMs" is about the worst PR you can get in 2025. It's really an awful idea for anyone but maybe pure B2B companies that are barely hanging on and super desperate for investor cash.

This was a big, traditional non-tech company.

Also, as implied, these were cheap offshore contracting jobs being replaced. Still orders of magnitude more expensive than LLMs, making it very "worth it" from a company perspective. But not prime earnings call material.

Everyone in the industry also knows that it's not particularly unique - far from something no one has been able to do. Go look at the job markets for translation, data entry, and customer support compared to 2 years ago. And as mentioned, even junior web devs.


Maybe 40 years ago or in some cultures, but I've always focused on $ / person. A smaller team that can generate $2M in ARR per developer is far superior to one generating $200K. The problem is that once you have 20 people doing the job, nobody thinks it's possible to do it with 10. You're right that "there is an unlimited number of things to do", and there are really obvious things that must be done and must not be done, but the majority IME are things that should or could be done, and in every org I've experienced it's a challenge to constrain the # of parallel initiatives, which is the necessary first step to reducing active headcount.


Exactly, it’s much easier with a new organization.

In my previous company, we would speculate about where to use AI and we were never sure.

In the new company we use AI for everything and produce more with substantially fewer people.


Do you have any examples of the types of tasks you've found the most success with using AI?


We use AI (LLMs) to improve the recall and precision of our classification models for content moderation. Our human moderators can only process so many items per day, at a high cost.

The LLMs act as a pre-filter, auto-approving or auto-rejecting items before they reach the humans for review.
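
For the curious, here's a minimal sketch of what such a pre-filter can look like. It assumes the OpenAI Python client; the model name, the one-word verdict protocol, and the routing labels are my own illustrative choices, not necessarily the parent's actual stack:

    # Hedged sketch: an LLM pre-filter in front of a human moderation queue.
    # Assumptions: OpenAI Python client, "gpt-4o-mini" model, and a
    # one-word APPROVE/REJECT/REVIEW protocol - all illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "You are a content-moderation pre-filter. Reply with exactly one word: "
        "APPROVE if the text clearly complies with policy, REJECT if it clearly "
        "violates policy, or REVIEW if you are not sure."
    )

    def prefilter(text: str) -> str:
        """Route one item: returns 'approve', 'reject', or 'human_review'."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            temperature=0,  # keep the labels as deterministic as possible
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": text},
            ],
        )
        verdict = (resp.choices[0].message.content or "").strip().upper()
        # Only act on clear-cut calls; anything ambiguous still lands in
        # the human queue, which is what keeps precision high.
        if verdict == "APPROVE":
            return "approve"
        if verdict == "REJECT":
            return "reject"
        return "human_review"

The design point is that the model only makes the clear-cut calls at either end; everything ambiguous still goes to the human moderators, so their limited daily throughput is spent on the genuinely hard items.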


Does anyone want what you're producing though?

I don't mean to be dismissive and crappy right out of the gate with that question; I'm merely drawing on my experience with AI and the broader trends I see emerging: AI is leveraged when you need knowledge products for the sake of having products, not when they're for anything in particular. I've noticed a very strange phenomenon where middle managers will generate long, meandering report emails to communicate what is, frankly, not complicated or terribly deep information, and send them to other people, who then paradoxically use AI to summarize those emails, likely into something quite similar to what was prompted in the first place.

I've also noticed it being leveraged heavily in spaces where a product existing - a news release, article, social media post, etc. - is in itself the point, and its quality is a highly secondary notion.

This has led me to conclude that AI is best leveraged in cases where nobody, including the creator of a given thing, really... cares much what the thing is, whether it's good, or whether it does its job well. It exists because it should exist, and its existence performs the function far more than anything to do with the actual thing that exists.

And in my organization at least, our "cultural opinion" on such things would be... well, if nobody cares what it says, and nobody is actually reading it... then why the hell are we generating it and then summarizing it? Just skip the whole damn thing, send a short list email of what needs communicating, and be done.


I've spent hours each week on Sora 2 and ChatGPT. I clearly have been enjoying what AI has offered. ChatGPT has largely replaced my google searching.


> Does anyone want what you're producing though?

He's either lying or hard-selling. The company in his profile, "neofactory.ai", says they "will build our first production line in Dallas, TX in Q3." Well, we just entered Q4, so not that. Beyond that, it has no mentions online and the website is just a "contact us" form.


The anthropologist David Graeber wrote a book called "Bullshit Jobs" that explored the subject. It shouldn't be surprising that a prodigious bullshit generator could find a use in those roles.


> for any problem space, there is an unlimited number of things to do.

That's what I've wondered. We don't just run out of work, products, features, etc. We can just build more, but so can the competition, right?


I am still of the conviction that "reducing employee head count" with AI should start at the top of the org chart. The current iterations of AI already talk like the C-suite, and deliver approximately the same value. It would provide additional benefits, in that AIs refuse to do unethical things and generally reason acceptably well. The cost cutting would be immense!

I am not kidding. In any large corps, the decision makers refuse to take any risks, show no creativity, move as a flock with other orgs, and stay middle-of-the-road, boring, beige khaki. The current AIs are perfect for this.


Not a crazy idea. Sergey at Google said it's best at replacing managers, FWIW.


why isn't he doing it then?


> I am still of the conviction that "reducing employee head count" with AI should start at the top of the org chart. The current iterations of AI already talk like the C-suite

That is exactly what it can't do. We need someone to hold liable for key decisions.


Right, because one really widely-known fact about CEOs is that whenever anything goes wrong at a company, they take the full blame, and if it's criminal, they go to jail!

....hey, wait a sec....


"Wow this AI both writes and reads email? That's about 90 percent of my job and -- I presume -- 90 percent of what happens around here!"


AND it can sit in meetings all day and not forget any decisions; that's the other 90 percent of a manager's day.


And just like senior managers, every time you ask it a question, it starts a new context.


Can it turn simple yes-or-no questions, or "hey who's the person I need to ask about X?" into scheduled phone calls that inexplicably invite two or three other people as an excuse to fill up its calendar so it looks very busy?


It's not the top IME, but the big fat middle of the org chart (company age seems to mirror physical age, maybe?) where middle to senior managers can hide out, deliver little demonstrable value, and ride with the tides. Some of these people are far better at surfing the waves than at performing the tasks of their job title, and they will outlast you - both your political skills and your tolerance for BS.


One could argue that they deliver a better value than meat leaders.


> In any large corps, the decision makers refuse to take any risks, show no creativity, move as a flock with other orgs, and stay middle-of-the-road, boring, beige khaki.

It's hard to take this sentiment seriously from a source that doesn't have direct experience with the c-suite. The average person only gets to see the "public relations" view of the c-suite (mostly the CEO), so I can certainly see why an "LLM-based mouthpiece" might be better.

The c-suite is involved in thousands of decisions that 90% of the rest of the world is not privy to.

FWIW - As a consumer, I'm highly critical of the robotic-like external personas the c-suite take on so I can appreciate the sentiment, but it's simply not rooted in any real experience.


AI is most capable of replacing the humans who have the power to decide or influence the choice to replace humans with AI.

But managers will not obsolete themselves.

So right now AI should be used to monitor and analyze the workforce and find the efficiency that can be achieved with AI.


> AI in its current state will likely not replace any workers.

This is a puzzling assertion to me. Hasn’t even the cheapest Copilot subscription arguably replaced most of the headcount that we used to have of junior new-grad developers? And the Zendesks of the world have been selling AI products for years now that reduce L1 support headcount, and quite effectively too since the main job of L1 support is/was shooting people links to FAQs or KB articles or asking them to try restarting their computer.


> Pretty soon we will have articles like "That time CEOs thought AI could replace workers".

Yup, it's just the latest management fad. Remember Six Sigma? Or Agile (in its full-blown cultish form; some aspects can be mildly useful)? Or matrix management? Business leaders, as a class, seem almost uniquely susceptible to fads. There is always _some_ magic which is going to radically increase productivity, if everyone just believes hard enough.


I was working with a team on a pretty simple AI solution we were adding to our larger product. Every time we talk to someone, we're telling them "still need a human to validate this..."


> ripping their hair out

I mean, nah, we've seen enough of these cycles to know exactly how this will end... with a sigh and a whimper and the Next Big Thing taking the spotlight. After all, where are all the articles about "that time CEOs thought blockchain could replace databases", etc.?


Also strange that this executive is worried about how the business continues to function after the people are gone. That's not the McKinsey Way!


I think they can. IME LLMs have me working somewhat less and doing somewhat more. It's not a tidal wave, but I'm stuck a little less on bugs, and at some things, like regex or SQL, I'm much faster. It's something like 5-10% more productive. That level of slack is easy to take up by doing more, but theoretically it means being able to lose 1 out of every 10-20 devs.


How does it make sense to trade one group of labor (humans), who are generally loosely connected and have little collective power, for another (AI)? What you're really doing isn't making work more "efficient"; you're just outsourcing work to another party - one you have very little control over. A party that is very well capitalized, and that is probably interested in taking more and more of your margin once it figures out how your business works (and that's going to be really easy, because you're helping it train AI models to do your business).


It’s the same as robots in a factory.


Except that the people who make robots for factories aren't interested in making whatever that factory is making.


That's not required. All that is required is becoming a sole source of labor, or a source that is the only realistic choice economically.

If you ask me, that's the real long game on AI. That is exactly why all these billionaires keep pouring money in. They know the only way to continue growth is to start taking over large sections of the economy.


Yes, that's the difference between robot makers (tool makers for others) and AI, which is not only trying to be a tool for other companies but also to take over their businesses: acquiring their knowledge, then using a combination of capture (through lack of visibility) and (mis)use of the information gathered to compete directly.

Classic enshittification, combined with embedding itself in company operations to become indispensable.


Both make a lot of sense, but the biggest mistake they make is seeing people as capacity, or as a counter.

Each human can be a bit more productive; I fully believe 10-15% is possible with today's tools if we do it right. But each human has their own unique set of experience and knowledge. If we are a team of 10 and we all do our jobs 10% faster, that doesn't mean you can let one of us go. It just means we all do our jobs 10% faster - which we probably waste by drinking more coffee or taking longer lunch breaks.


Organizations that successfully adapt are those that use new technology to empower their existing workers to become more productive. Organizations looking to replace humans with robots are run by idiots and they will fail.


This part was never quiet...

The "quiet part out loud" phrase is overused.



