I wonder if this is related to the agency problem[1] and the rise of short-sightedness among the ruling class.
If you're just trying to make as much money as possible this quarter, with no real care about building long-term value, why wouldn't you deploy agents that mercilessly generate money at the expense of things like your brand and your people?
I also wonder how many of the authors of the piece are at public vs private companies.
The Professional Managerial Class (college -> management being the norm) gained a lot of steam in the '80s and had basically taken over the entire economy by the end of the '90s. My dad's career spanned the pre- and post-transition eras, with the latter coming as a very sudden shift due to a large merger. His description of the difference was... not flattering to the modern notion. Way, way more wasted time. Way more business trips that could have been an email (but how would the managers get to go party away from the family otherwise?). Lots more clueless management who don't understand WTF the business actually does or how any of it works, resulting in braindead leadership.
Deep professional understanding of the problem space a business solves is badly undervalued. Institutional knowledge, experience, and domain expertise have been devalued precisely because the managerial class (particularly executives and VPs) learns and lives by the idea that labor is always bad and to be minimized as much as possible.
This is what the AI boom is really about: removing more power from labor. It's why, in the majority of the discourse I've seen, AI hype largely markets itself as "how AI can replace or minimize X role" as opposed to "this is how you can use AI to empower your workforce".
> This is what the AI boom is really about: removing more power from labor. It's why, in the majority of the discourse I've seen, AI hype largely markets itself as "how AI can replace or minimize X role" as opposed to "this is how you can use AI to empower your workforce".
Arguably, AI is largely marketed that way because that's what corporate buyers care about, the same way every productivity-improving invention has been marketed to corporate buyers, even when a major actual effect is increasing the value of each labor hour and driving wages up. (Which is largely isomorphic to reducing the number of X roles needed to produce Y units of a good or service.)
It's also sold as a labor productivity increase to independent creators. And the two things are, after all, two sides of the same coin.
No, he wrote that it was marketed that way because that is what the “AI boom is really about”, in opposition to something else, which I also discuss in the post you excerpted this from. Not sure if you didn’t read the whole post and just knee-jerk reacted to the first part of the first sentence out of context, or if you just didn’t understand how sharply it differs from the claims in the post it responds to.
What is it really about, in contrast to what I assert? I'm looking at how it's being implemented, talked about, thought about, and introduced.
I'm happy to re-evaluate my stance in light of better evidence, but AI adoption has coincided with a lot of CEOs announcing layoffs while simultaneously doubling down on AI tools to replace the now-displaced workers, plus those LinkedIn stories from people saying they will never have to hire X or Y because AI will do it / does it.
> Professional Managerial Class (college -> management being the norm)
This isn't the norm in most STEM industries anymore.
Most of us started off as IC-level engineers before either being given progressively more responsibility and/or being sponsored by our employers to pursue a part-time MBA (PTMBA) at a school like Wharton, Booth, Fuqua, or Haas.
Networking and hustling did ofc play a role, but lacking domain experience would limit how high you could climb.
When I was in the policymaking world and was considering grad school/academia, an underlying theme in my research was that the principal-agent problem reflects misaligned incentives that push a stag-hunt dynamic toward its inefficient Nash equilibrium.
Long story short, incentives matter, and understanding how to align your initiatives with the incentives of veto players helps build the coalitions you need to get initiatives out the door. That said, these initiatives also need to be executed successfully, because organizational dynamics are inherently multi-agent games.
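To make the stag-hunt point concrete, here is a minimal Python sketch (the payoff numbers are illustrative, not drawn from any real organization) of why the safe, low-payoff equilibrium wins once each side doubts the other's commitment:

    # Illustrative stag-hunt payoffs: coordinating on the big project ("stag")
    # pays best, but only if both sides commit; going it alone ("hare") is safe.
    PAYOFFS = {  # (row_action, col_action) -> (row_payoff, col_payoff)
        ("stag", "stag"): (4, 4),
        ("stag", "hare"): (0, 3),
        ("hare", "stag"): (3, 0),
        ("hare", "hare"): (3, 3),
    }
    ACTIONS = ("stag", "hare")

    def is_nash(row, col):
        """Neither player gains by unilaterally deviating."""
        r, c = PAYOFFS[(row, col)]
        row_ok = all(PAYOFFS[(a, col)][0] <= r for a in ACTIONS)
        col_ok = all(PAYOFFS[(row, a)][1] <= c for a in ACTIONS)
        return row_ok and col_ok

    for row in ACTIONS:
        for col in ACTIONS:
            if is_nash(row, col):
                print("Nash equilibrium:", (row, col), "payoffs:", PAYOFFS[(row, col)])

Both (stag, stag) and (hare, hare) come out as equilibria. But if each player expects the other to commit only half the time, hare's expected payoff (0.5*3 + 0.5*3 = 3) beats stag's (0.5*4 + 0.5*0 = 2), so misaligned incentives pull everyone to the inefficient equilibrium.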
Essentially, I made sure to understand how to speak the language (i.e. understand the incentive structures) of multiple stakeholders (e.g. how to convince management and IC engineers, PMs, salespeople, customer success, and customers) and also how to execute successfully on initiatives (i.e. how to launch products, lead a round, land customers, or manage an M&A event).
This meant both building domain knowledge about each of the stakeholders' fields and building domain knowledge in a handful of fields I knew I could specialize in.
Basically, understanding incentive structures and being able to show how your interests and goals align with those incentives is critical.
For example, back when I was an IC-level engineer, if I wanted to get tech debt prioritized, I made sure to:
1. Show that it was tied to active issues for customers that matter - e.g. fixing a bug for a customer who spends $20k a year, at a company generating $100M a year in revenue, is a misallocation of resources in the eyes of EMs and PMs
2. Show that it is tied to speeding up feature delivery: this converts a conversation about "maintenance" into a conversation about adding new capabilities that are assumed to generate revenue, thus aligning Sales, PM, and Leadership (see the sketch below)
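As a toy illustration of point 1's arithmetic (the item names, numbers, and scoring heuristic are all hypothetical - one way to frame the pitch, not a real framework):

    # Hypothetical back-of-envelope scoring: weight each tech-debt item by the
    # revenue it touches per engineer-week, so a bug blocking a $2M/yr segment
    # outranks one annoying a single $20k/yr customer at a $100M/yr company.
    items = [
        {"name": "flaky checkout retry", "revenue_at_risk": 2_000_000, "eng_weeks": 3},
        {"name": "legacy report export bug", "revenue_at_risk": 20_000, "eng_weeks": 2},
    ]
    TOTAL_REVENUE = 100_000_000  # the $100M/yr company from the example above

    for item in sorted(items, key=lambda i: i["revenue_at_risk"] / i["eng_weeks"],
                       reverse=True):
        share = item["revenue_at_risk"] / TOTAL_REVENUE
        per_week = item["revenue_at_risk"] / item["eng_weeks"]
        print(f'{item["name"]}: {share:.2%} of revenue at risk, '
              f'${per_week:,.0f} per eng-week')

Framing the same fix as dollars per engineer-week turns a "maintenance" ask into a resource-allocation argument that EMs and PMs already know how to evaluate.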
A lot of people on HN reflexively don't care to understand how organizations work or how to make a case. A number of them assume that just because something is a technical problem, it must matter to the top line of a business; in most cases it does not, unless you can make a case for it. A number of them also don't care to leave a bad organization when they are in one (I have worked in two in my career, and made sure to leave).
I have no MBA; I just have an undergrad CS degree (and a secondary in Government). Even though my current day job doesn't demand it, I can still code, but I also taught myself how to do basic FP&A, marketing, user experience research, and other functions. If you want to survive and thrive in the tech industry nowadays, you will need to build industry-specific domain experience, technology-specific domain experience, and basic product management, sales, and user experience chops.
Interesting that the article mentions his speeds dropped. I wonder if someone at support knows this, but also knows there's nothing they can really do to fix the issue?
I think it makes sense in the same way we blot out our awareness of 90% of external stimuli -- there is just too much of it.
We have to choose what to 'deal with' and our capacity for that and awareness of it can change over time.
I also think this goes along with the author's concept of "you're not trying", since you can sometimes snap into awareness and then just do those things.
1: https://en.wikipedia.org/wiki/Principal%E2%80%93agent_proble...