If someone hasn't written a blog post titled "Should we be worried about Cloudflare?" yet, I think it would be a good subject to explore. I find the idea that they could decide one day to ban you from their entire network pretty worrying. And if they did, how much fingerprinting are they doing, and would the ban extend far beyond just a random IP address?
My theory is that when an organization descends into a cycle of repetitive downsizings, it inevitably leads to people focusing more on protecting their jobs than on business value.
By the process of natural selection, the people who survive such rounds of firings are the ones with the political skills to survive them. Some of those guys will be star performers, but most will simply be politicians. Over time, this process will result in an increase in the ratio of politicians to star performers.
My company’s been through layoffs/reorgs every 3–6 months for three years. One thing is true: performance management happens faster. Many chronic low performers were laid off, and a few “too many cooks” problems were resolved. Those benefits are real.
But it’s a mistake to assume the remainder is automatically high‑performer‑only. Three patterns I’ve seen:
1) People with options leave first. If you can find a stable, supportive org at similar pay, you go. That’s often your top performers. We've lost some truly amazing people who left because they were simply not willing to tolerate working here anymore. Being absolutely ruthless in getting rid of low performers is honestly not worth it when you also lose the people who truly move the needle on creating new products. If you make a mistake and get rid of people who were talented high performers, trust is instantly gone. The remaining high performers now know that they may also be a target, so they won't trust you, and they'll leave whenever they can. And when you're axing 10k+ people, you're absolutely going to make mistakes.
2) The survivors change. Trust and empathy plummet. Incentives tilt toward optics and defensiveness, and managers start competing on visible ruthlessness. You can keep the lights on, but actually trying to innovate in this environment is too scary and risky.
3) In an atmosphere of fear, people who are willing to be genuinely dishonest and manipulative -- and who are good enough at it to get away with it -- have a serious competitive advantage. I've seen good, compassionate leaders go from a healthy willingness to make tough decisions on occasion to basically acting like complete psychopaths. Needless to say, that's extremely corrosive to meaningful output. While you could argue that skillful dishonesty is an individual advantage regardless of climate, an environment of repeat layoffs strongly incentivizes this behavior by reducing empathy, rewarding "do whatever it takes to win" behavior, etc.
Your comment made me wonder if there is a social/economic phenomenon tied to your characterization. I'd be really curious whether there is any academic work that elucidates it further.
Edit: Did some research with ChatGPT and found the following papers if anyone else is interested in the above concepts.
At companies where decimation is a given... IME, pattern 3 (or variations of it) is predominantly already in play.
The most nefarious kind I saw was using tenure capital to influence peers (above and below) into over-engineering complexity to improve one's longevity (or simply to flex on the basis of tenure). It's a game-able closed loop: the longer someone has been in a position, the better placed they are to stay even longer via influence, and no one questions it.
The up-levels explain this away as "trust", which is probably sloppiness/laziness or a pure lack of time, given how busy the up-levels are managing up the chain (and working towards their own longevity).
The below-levels are probably afraid to question or oppose strongly, for obvious reasons. This becomes worse if the tenured person in question is already a "celebrated hero" or "10x-er".
Unless you are in the position of Zuckerberg or the Google founders, everyone’s main motivation is to keep their jobs.
If you are just an employee, it’s beyond naive to prioritize anything but your own job. Of course I am not advocating backstabbing and throwing others under the bus. Getting ahead in BigTech is always about playing politics and getting on the right projects to show “impact”.
It's a matter of active vs. passive effort to "keep your job." Once you have to actively worry about the status of your job, something is wrong (be it with you or your employer).
First employees work for the customer, then they work for the company, then they work for themselves. Sounds like Amazon is between the 2nd and 3rd stage.
I honestly wonder if there is safety in the herd here. If you have a dedicated server in a rack somewhere and it goes down, it takes your site with it. Or maybe the whole data center has connectivity issues. Either way, as far as the customer is concerned, you screwed up.
If you are on AWS and AWS goes down, that's covered in the news as a bunch of billion dollar companies were also down. Customer probably gives you a pass.
> If you are on AWS and AWS goes down, that's covered in the news as a bunch of billion dollar companies were also down. Customer probably gives you a pass.
Exactly - I've had clients say, "We'll pay for hot standbys in the same region, but not in another region. If an entire AWS region goes down, it'll be in the news, and our customers will understand, because we won't be their only service provider that goes down, and our clients might even be down themselves."
My guess is their infrastructure is set up through clickops, making it extra painful to redeploy in another region. Even if everything is set up through CloudFormation, there's probably umpteen consumers of APIs that have their region hardwired in. By the time you get that all sorted, the region is likely to be back up.
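To illustrate the hard-wiring problem (a hypothetical boto3 snippet, not anyone's actual infrastructure code):

    import os
    import boto3

    # Hardwired: this client silently breaks any cross-region failover plan.
    s3 = boto3.client("s3", region_name="us-east-1")

    # Parameterized: failover becomes a config change instead of a code hunt.
    region = os.environ.get("AWS_REGION", "us-east-1")
    s3 = boto3.client("s3", region_name=region)

Multiply that by umpteen services and teams and you can see why "just redeploy in another region" rarely beats waiting for the region to come back.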
You can take advantage of this by having an "unplanned" service window every time a large cloud provider goes down. Then tell your client that you were the reason AWS went down.
Yeah, but not just that. I don't expect my mum to go find some high-end consumer GPU and install it on a home server in order to run her own local LLM. I expect that people will be throwing chat interfaces running remixed versions of open-weight models out on the internet so fast that it's impossible for anyone to monetise them in a reasonable way.
I also wonder whether, similar to Bitcoin mining, these things end up on specialist ASICs, and before we know it a medium-tier mobile phone is running your own local models.
I don't have a newer iPhone to run it on, so I don't have a specific app to recommend, but searching for "local LLM" in the App Store gives plenty of options, so you're not limited to Apple Intelligence models.
I don't think that AGI is necessary for LLMs to be revolutionary. I personally use various AI products more than I use Google search these days. Google became the biggest company in the world based on selling advertising on its search engine.
Yeah, the incentives there are obviously misaligned. I wonder if there is a way of making advertising click-through tracking follow the "I cut the cake, you choose the slice" model.
Some countries have property taxes where you declare the value yourself and the government retains the right to purchase the property for that value, for example.
My first thought was to make the advertising cost driven by revenue on the site. But that just reverses the incentive.
People will just pull ads if the ROAS isn't there. Performance marketing teams aren't fools.
Altering data would mess with everything. Why is unverified traffic increasing? What's wrong with the new marketing efforts? Marketing just requires fixed definitions: if you have 97% bots but the share remains constant, that's okay. I know I am spending $x to get $y conversions, so I can plan over time and increase or decrease spend accordingly. I won't be willing to pay as much as I would with 0% bots (I'll pay far, far less), but I can set my strategy on that basis.
It's not the x% of bots that is the problem. The growth team doesn't adjust strategy based on bot percentage; it adjusts strategy based on return on ad spend. 0% bots with no return is way worse than 5x ROAS with 99% bots.
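To make the arithmetic concrete, here's a minimal sketch with entirely made-up numbers (hypothetical spend, click counts, and conversion rates):

    # Sketch: a constant bot share just rescales what you can afford to bid.
    # All figures are hypothetical.

    spend = 10_000.0        # dollars spent on the campaign
    clicks = 100_000        # total clicks bought (CPC = $0.10)
    bot_share = 0.97        # 97% of clicks are bots and never convert
    human_cvr = 0.05        # 5% of human clicks convert
    value_per_conv = 40.0   # revenue per conversion

    human_clicks = clicks * (1 - bot_share)
    revenue = human_clicks * human_cvr * value_per_conv
    print(revenue / spend)  # ROAS = 0.6: unprofitable at this CPC

    # With a *stable* bot share, the strategy is simply to bid less:
    break_even_cpc = (1 - bot_share) * human_cvr * value_per_conv
    print(break_even_cpc)   # $0.06 per click instead of $0.10

The point being: a known, constant bot share is just a discount factor on the bid; an unknown or shifting one makes the whole plan unstable.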
In other situations you'd want, say, auditing or independent third-party verification.
In this case, perhaps an audit involving the deliberate injection of a mix of legitimate and bot traffic to see how much of the bot traffic is accurately detected by the ad platform. Billed rates on total traffic could then be adjusted accordingly.
This of course leads to more complications, including detection of the trial interactions themselves; see e.g. the 2015 VW diesel emissions scandal: <https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal>, or current AIs which can successfully identify when they're being tested.
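For what it's worth, a rough sketch of what that injection-audit arithmetic could look like (labels and numbers are invented; no real ad platform API is involved):

    # Sketch: estimate the platform's detection rates from injected
    # traffic with known labels, then correct the billed click counts.
    # All inputs are hypothetical.

    injected = [
        # (was_actually_bot, platform_flagged_as_bot)
        (True, True), (True, True), (True, False),      # 2 of 3 bots caught
        (False, False), (False, False), (False, True),  # 1 human wrongly flagged
    ]

    bot_flags = [flagged for is_bot, flagged in injected if is_bot]
    human_flags = [flagged for is_bot, flagged in injected if not is_bot]

    tpr = sum(bot_flags) / len(bot_flags)      # bots correctly flagged
    fpr = sum(human_flags) / len(human_flags)  # humans wrongly flagged

    # Correct an observed "clean" click count: it mixes real humans
    # with the bots that slipped past detection.
    def estimated_humans(observed_clean, assumed_bot_share):
        slipped_bots = assumed_bot_share * (1 - tpr)
        kept_humans = (1 - assumed_bot_share) * (1 - fpr)
        return observed_clean * kept_humans / (kept_humans + slipped_bots)

    print(f"detection rate {tpr:.0%}, false positives {fpr:.0%}")
    print(estimated_humans(observed_clean=10_000, assumed_bot_share=0.5))

With only six injected samples the estimates would be pure noise, obviously; you'd need a large, well-hidden injection to get rates you could actually bill against.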
On further reflection: I'd thought of raising the question of which situations cut/choose does work. Generally it seems to be where an allocation-division decision is being made, and the allocation is largely simultaneous with the division, with both parties having equal information as to value. Or so it seems to me, though I think the question's worth thinking about more thoroughly.
That's a subset of multi-party decisionmaking situations, though it's a useful one to keep in mind.
I vaguely remember someone winning a Nobel prize in economics for coming up with ways to apply cut/choose to financial transactions, but I couldn't find it with a quick Google. It may have been nearly 20 years ago, though.
Basic ad ops has ad buyers buy ads from different vendors, track which converts (via attribution, which has flaws but is generally a decent signal), and allocate spend via return on ad spend. So bot traffic hurts the vendor at least as much as the buyer, by inflating the cost per action and damaging ROAS.
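In code, that allocation step might look roughly like this (a hedged sketch with invented vendor figures, not a real ad ops system):

    # Sketch: shift budget toward vendors with better measured ROAS.
    # Vendor names and figures are invented for illustration.

    vendors = {
        # vendor: (attributed_revenue, spend)
        "vendor_a": (50_000.0, 10_000.0),  # ROAS 5.0
        "vendor_b": (12_000.0, 10_000.0),  # ROAS 1.2
        "vendor_c": (2_000.0, 10_000.0),   # ROAS 0.2, possibly bot-heavy
    }
    total_budget = 30_000.0

    roas = {v: rev / sp for v, (rev, sp) in vendors.items()}

    # Naive proportional reallocation; a real team would cap the shifts
    # and account for attribution noise and diminishing returns.
    total = sum(roas.values())
    next_budget = {v: total_budget * r / total for v, r in roas.items()}

    for v, b in sorted(next_budget.items(), key=lambda kv: -kv[1]):
        print(f"{v}: ROAS {roas[v]:.1f} -> next budget ${b:,.0f}")

A vendor sending bot-heavy traffic shows up as low ROAS and gets its budget cut, which is the feedback loop described above.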
I've seen people buy CPC campaigns and only place ads that don't convert, so they get the benefit of the branding instead.
I guess more modern auction algorithms factor this in.
I've had good results doing something similar. My spelling and grammar have always been a challenge, and even when I put the effort into checking something, I go blind to things like repeated words or phrases when I try to restructure sentences.
I sometimes also ask for a justification of why I should change something, which I hope, longer term, rubs off and helps me improve on my own.
I think, if you take jacquesm's posting history here into consideration, it was probably a joke. Maybe not his best work, but I don't think he was serious.
It's right behind our office in Holborn. I walk past it often but somehow haven't quite built up the bravery to walk in. Despite having seen most of Tim's work on YouTube...
I'll do that! Although I have to say you've made me chuckle. I can totally imagine some pithy arcade game poking fun at how our industry is quickly moving from writing code to middle management of a group of AI agents.
We used to use it for coding interviews during COVID. It never struck me as anything special, and I'm not convinced that saddling it with AI improves its prospects. In fact, it might make them worse, given its general applicability to learning, which will surely be suppressed by worries of an AI takeover.