> Microsoft has a similar problem where nobody gets promoted from fixing bugs or maintaining stuff, everyone gets rewarded for new innovative [thing] so every two-three years there's a completely new UI framework or similar.
Is there any big (or even medium-sized) company where this isn't true? I feel like it's just a rule of corporate culture that flashy overpromising projects get you promoted and regularly doing important but mundane and hard-to-measure things gets you PIP'd.
It's a matter of letting things degrade so that the maintenance becomes outright firefighting. I am currently working on a project where a processing pipeline has a maximum practical throughput of 1x, and a median day's throughput for said pipeline is... 0.95x. So any outage becomes unrecoverable. Getting that project approved six months earlier would have been basically impossible. Right now, it's valued at promotion-level difficulty instead.
At another job, at a financial firm, I got a big bonus after I went live on November 28th with an upgrade that let a system handle 10x their max throughput and scale linearly instead of being completely stuck at their 1x. Median number of requests per second received on December 1st? 1.8x... without the upgrade, the system would have failed under load, causing significant losses to the company.
Prevention is underrated, but firefighting heroics are so well regarded that sometimes it might even be worthwhile to be the arsonist.
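As a rough, illustrative sketch (Python; the function and the specific figures are made up to mirror the numbers in the comments above, not taken from either system), the arithmetic behind "any outage becomes unrecoverable" and "would have failed under load" is just that recovery time blows up as headroom goes to zero:

    # Back-of-the-envelope queueing arithmetic, assuming steady arrivals and a
    # fixed processing ceiling. Illustrative numbers only.

    def recovery_hours(capacity: float, load: float, outage_hours: float) -> float:
        """Hours needed to drain the backlog built up during an outage.

        Backlog accumulates at `load` while the pipeline is down and drains at
        (capacity - load) once it is back up; no headroom means no recovery.
        """
        backlog = load * outage_hours
        headroom = capacity - load
        return float("inf") if headroom <= 0 else backlog / headroom

    # Pipeline at 0.95x of a 1x ceiling: a single 1-hour outage takes ~19 hours
    # of flawless running to catch up on.
    print(recovery_hours(capacity=1.0, load=0.95, outage_hours=1.0))  # 19.0

    # Pre-upgrade system facing 1.8x demand against a 1x ceiling: the backlog
    # never drains at all.
    print(recovery_hours(capacity=1.0, load=1.8, outage_hours=1.0))   # inf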
Intuitively, "fixing life-or-death disasterss is more visible and gets better rewards than preventing them" doesn't seem like it should be a unique problem of software engineering. Any engineering or technical discipline, executed as part of a large company, ought to have the potential for this particular dysfunction.
So I wonder: do the same dynamics appear in any non-software companies? If not, why not? If yes, have they already found a way to solve them?
Outside of software, people designing technology are engineers. Although by no means perfect, engineers generally have more ability to push back against bad technical decisions.
Engineers are also generally enculturated into a professional culture that emphasizes disciplined engineering practices and technical excellence. Modern software development culture, on the other hand, actively discourages these traits: taking the time to do up-front design gets labeled "waterfall", YAGNI sentiment, opposition to algorithm interviews, opposition to "complicated" functional programming techniques, and so on.
That's a very idealistic black-and-white view of the world.
A huge number of roles casually use the "engineer" moniker, and a lot of people who actually have engineering degrees of some sort, even advanced degrees from top schools, are not licensed and don't necessarily follow rigid processes (e.g. structural analyses) on a day-to-day basis.
As someone who does have engineering degrees outside of software, I have zero problem with the software engineer term--at least for anyone who does have some education in basic principles and practices.
It's common to start constructing buildings before the design is even complete. And there can be huge "tech debt" disasters in civil engineering. Berlin Airport is one famous example.
I remember my very first day of studying engineering, the professor said: "Do you know the difference between an engineer and a doctor? When a doctor messes up, people die. When an engineer messes up LOTS of people die."
Yeah, but if you had a release target of December 15th and it crashed on December 1st, and you could have brought it home by the 7th, you would have been a bigger winner. Tragedy prevented is tragedy forgotten. No lessons were learned.
I spent a few weeks migrating and then fixing a bunch of bugs in a 20-year-old Perl codebase (cyber security had their sights set on it). It's basically used by a huge number of people to record data for all kinds of processes at work.
The original developer is long gone. Another guy and I are two of the only people (we aren't a tech company) who can re-learn Perl, upgrade multiple versions of Linux/Apache/MySQL, and make everything else (Kerberos, etc.) work...
Or maybe I'm one of the only people dumb enough to take it on.
Either way, nobody will get so much as an attaboy at the next department meeting. But, they'll know who to go to the next time some other project is resurrected from the depths of hell and needs to be brought up to date.
It seems endemic, especially everywhere that's not a product company. I think it was The Mythical Man-Month (maybe earlier) that pointed out that 90% of the cost of software is in maintenance, yet 50 years on this cost isn't accounted for in project planning.
Consultancies are by far the worst: a project is done and everyone moves on, yet the clients still expect quick fixes and the occasional added feature, but there's no one left familiar with the code base.
Developers don't help either; a lot move from greenfield to greenfield like locusts and never learn the lessons of maintaining something, so they make the same mistakes over and over again.
Facebook was pretty good about this on the infra teams. No, not perfect, but a lot better than the other big companies I was exposed to.
If anything, big companies are better about tech-debt squashing, and it's the little tiny companies and startups that are, on average, spending less time on it.
I think it is a bit tricky to get the incentives right (since the bookkeeping people like to quantify everything). If you reward finding and fixing bugs too much, you might push developers to write sloppier code in the first place, because then those who loudly fix their own mess get promoted, and those who quietly write solid code get overlooked.
Goodhart’s law at work, or “why you shouldn’t force information workers to chase after arbitrary metrics”. Basecamp has famously just been letting people do good work, on their terms, without KPIs.
I will preemptively agree that this isn’t possible everywhere; but if you create a good work environment where people don’t feel like puppets executing the PM’s vision, they might actually care and want to do a solid day’s work (which we’re wired for).
Is it only big companies? The fact that many companies in our industry need to run "bug squash" events because we are unable to prioritize bugs properly speaks volumes to me.
Top down decision making, typically by non-technical people who often have no idea what software development even involves.
Eventually things get so bad that there's no choice but to abandon feature work to fix them.
The business loses out multiple times. Feature work slows down as developers are forced to waste time finding workarounds for debt and bugs. The improvements/fixes take more time than they would have due to layers of crap being piled on top, and the event that forces a clean-up generally has financial or reputational consequences.
Collaborative decision making is the only way around this. Most engineers understand that improvements must be balanced with feature work.
I find it very strange that the industry operates the way it does, where the people with the most knowledge of the requirements and repercussions are so often stripped of any decision-making power.
This is pretty much a universal thing--whether it's software development or home maintenance. It's really tempting to kick the can down the road to the point where 1.) You HAVE to do something; 2.) It's not your problem any longer; or 3.) Something happens and the can doesn't matter any more.
I won't say procrastination is a virtue. But sometimes the deferred task really does cease to matter.