I think that, in general, the term "innovation" is perceived as meaning something new that results in a meaningful improvement. Change for the sake of change that brings no real benefit fails to clear this bar.
For example, if someone released a new car model with an exterior made of wood, it would generate a lot of buzz, but it would be unlikely to be considered an innovation.
A lot of Cybertruck features fall into this category of being novel, but not necessarily innovative.
I don't agree that this is surprising. To be "dominant" in this space means more than raw performance or value. One must also dominate the details. It has taken AMD a long time to iron out a large number of these details (drivers, firmware, chipsets, and more) to reach real parity with Intel.
The good news is that AMD has, finally, mostly achieved that, and in some ways they are now superior. But that has taken time: far longer than it took AMD to beat Intel at benchmarks.
One thing to remember is that the enterprise space is very conservative: AMD needed to have server-grade CPUs with all of the security and management features on the market long enough for the vendors to certify them, promise support periods, etc., and they needed to get the enterprise software vendors to commit as well.
The public clouds help a lot here by trivializing testing and locking in enough volume to get all of the basic stuff supported, and I think that's why AMD has been more successful now than it was in the Opteron era.
Server companies have long-term agreements in place... waiting for those to expire before moving to AMD is not unexpected. This was the final outcome many expected.
Intel did an amazing job of holding on to what they had: enterprise sales connections, which AMD had very little of from 2017 to 2020; then bundling other items, essentially a discount without lowering the list price; and finally some heavy discounting.
On the other hand, AMD has been very conservative with its EPYC sales forecasts.
Servers are used for a long time; Dell/HP/Lenovo/Supermicro have to deliver them, and then customers have to buy them. This is a space with very long lead times. Not surprising.
The first two gens of EPYC didn't sell that much compared to Intel because companies didn't want to make huge bets on AMD until there was more confidence that they would stick around near the top for a while. Also, server upgrade cycles are lengthening (probably more like 5-7 years now) since CPUs aren't gaining per-core performance as quickly.
Complicated. Performance per watt was better for Intel, which matters way more when you're running a large fleet. It doesn't matter so much for workstations or gamers, where all that matters is raw performance. Also, the certification, enterprise management story, etc. wasn't there.
Maybe recent EPYC has caught up? I haven't been following too closely, since it hasn't mattered to me. But both companies' roadmaps suggested AMD would pass Intel by.
Not surprising at all though, anyone who's been following roadmaps knew it was only a matter of time. AMD is /hungry/.
You're thinking strictly about core performance per watt. Intel has been offering a number of accelerators and other features that make perf/watt look a lot better when you can take advantage of them.
AMD is still going to win a lot of the time, but Intel is better than it seems.
That is true, but the accelerators are disabled in all cheap SKUs and they are enabled only in very expensive Xeons.
For most users it is as if the accelerators do not exist, even though they increase the die area and the cost of all Intel Xeon CPUs.
This market segmentation policy is exactly as stupid as the removal of AVX-512 from Intel's consumer CPUs.
All users hate market segmentation, and it is an important reason for preferring AMD CPUs. AMD differentiates only on quantitative features, like the number of cores, clock frequency, or cache size, not on qualitative features like Intel does; with Intel CPUs you must deploy different program variants depending on the cost of the CPU, which may or may not provide the features required to run the program.
Intel's marketing has always hoped that by showcasing nice features available only in expensive SKUs, it would entice customers into spending more on the top models. However, many wise customers have preferred to buy from the competition instead of choosing between cheap crippled SKUs and complete but overpriced SKUs.
I think Intel made a strategic mistake in recent years by segmenting its ISA variants. E.g., the many flavors of AVX-512.
Developers can barely be bothered to recompile their code for different ISA variants, let alone optimize it for each one.
So often we just build for one or two of the most common baseline versions of an ISA (e.g., the x86-64-v2 or x86-64-v3 microarchitecture levels).
It probably doesn't help that (IIRC) ELF executables for the x86-64 System V ABI have no way to indicate precisely which ISA variants they support. So it's not easy at program-load time to notice if you're going to have a problem with unsupported instructions.
(It's also a good argument for using open source software: you can compile it for your specific hardware target if you want to.)
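To make that concrete: since the loader won't catch it, the dispatch has to happen in your own code. Here's a minimal sketch in C (the function names are made up for illustration; in a real build the AVX-512 variant would live in a translation unit compiled with -mavx512f):

    #include <stdio.h>

    /* Two variants of the same routine. The "AVX-512" one is a plain-C
       stand-in here so the sketch compiles anywhere with GCC or Clang. */
    static double dot_fallback(const double *a, const double *b, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++) s += a[i] * b[i];
        return s;
    }

    static double dot_avx512(const double *a, const double *b, int n) {
        return dot_fallback(a, b, n); /* imagine a hand-vectorized kernel */
    }

    /* Runtime dispatch via CPUID: one binary carries both paths. */
    double dot(const double *a, const double *b, int n) {
        if (__builtin_cpu_supports("avx512f"))
            return dot_avx512(a, b, n);
        return dot_fallback(a, b, n);
    }

    int main(void) {
        double a[] = {1, 2, 3, 4}, b[] = {5, 6, 7, 8};
        printf("dot = %g (avx512f %s)\n", dot(a, b, 4),
               __builtin_cpu_supports("avx512f") ? "present" : "absent");
        return 0;
    }

GCC's __attribute__((target_clones("avx512f","default"))) automates the same pattern via an ifunc resolver, but either way the check happens at run time in the program, not at load time in the ELF loader.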
Wise customers buy the thing that runs their workload with the lowest TCO, and for big customers on some specific workloads, Intel has the best TCO.
Market segmentation sucks, but people buying 10,000+ servers do not do it based on which vendor gives them better vibes. People seem to generally be buying a mix of vendors based on what they are good at.
Intel can offer a low TCO only to the big customers you mention, who buy 10,000+ servers and have the leverage to negotiate big discounts from Intel, buying the CPUs at prices several times lower than their list prices.
On the other hand, for small businesses or individual users, who have no choice but to buy at list prices or above, the TCO of Intel server CPUs has become unacceptably bad. Before 2017, up to the Broadwell Xeons, the TCO of Intel server CPUs could be very good, even when bought at retail for a single server. Starting with the Skylake Server Xeons, however, the prices of the non-crippled Xeon SKUs have increased so much that they are no longer a good choice, except for the very big customers who buy them much more cheaply than the official prices.
The fact that Intel must discount its server CPUs so heavily for the big customers likely explains a good part of its huge financial losses in recent quarters.
Intel does a lot of work developing SDKs to take advantage of its extra CPU features, and it works with the open source community to integrate them so they are actually used.
Their acceleration primitives work with many TLS implementations, nginx, and SSH, among many others.
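For a rough sense of how that integration surfaces to applications: Intel's QAT OpenSSL engine registers under the id "qatengine", so anything linked against OpenSSL can route crypto through it with a few ENGINE calls. A minimal sketch (assumes the QAT_Engine package and driver are installed; note the ENGINE API is deprecated in OpenSSL 3.0 in favor of providers):

    #include <openssl/engine.h>
    #include <stdio.h>

    int main(void) {
        ENGINE_load_builtin_engines();

        /* "qatengine" is the id used by Intel's QAT_Engine project. */
        ENGINE *e = ENGINE_by_id("qatengine");
        if (!e || !ENGINE_init(e)) {
            fprintf(stderr, "QAT engine unavailable; using software crypto\n");
            return 1;
        }

        /* Route all supported operations (RSA, ciphers, ...) through the
           engine; subsequent TLS handshakes in this process get the offload. */
        ENGINE_set_default(e, ENGINE_METHOD_ALL);

        /* ... normal OpenSSL/TLS usage here ... */

        ENGINE_finish(e);
        ENGINE_free(e);
        return 0;
    }

This is why "works with many TLS implementations" mostly means "works with anything that links OpenSSL": the application code itself doesn't change.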
But those accelerators are also available for AMD platforms - even if how they're provided is a bit different (often on add-in cards instead of a CPU "tile").
And things like the MI300A mean that isn't really a requirement now either.
QAT is an integrated offering from Intel, but there are competing products delivered as add-in cards for most of the things it does, and they have more market presence than QAT. As such, QAT provides much less of an advantage to Intel than Intel's marketing makes it seem. Yes, Xeon (including QAT) is better than bare EPYC, but EPYC plus a third-party accelerator beats it handily, especially on cost; the appearance of QAT seems to have spooked the vendors, and prices came down a lot.
I've only used a couple of QAT accelerators and I don't know that field much... What relatively easy-to-use and not-super-expensive accelerators are available?
Performance per watt was lost by Intel with the introduction of the original EPYC in 2017. AMD overtook it in outright performance with Zen 2 in 2019 and hasn't looked back.
idk go look at the xeon versus amd equivalent benchmarks. they've been converging, although amd's datacenter offerings were always a little behind their consumer parts
this is one of those things where there's a lot of money on the line, and people are willing to do the math.
the fact that it took this long should tell you everything you need to know about the reality of the situation
AMD has had the power efficiency crown in data center since Rome, released in 2019. And their data center CPUs became the best in the world years before they beat Intel in client.
The people who care deeply about power efficiency could and did do the math, and went with AMD. It is notable that AMD sells much better to the hyperscalers than to small and medium businesses.
> idk go look at the xeon versus amd equivalent benchmarks.
They all show AMD with a strong lead in power efficiency for the past 5 years.
I know what the benchmarks are like; I wish you would go and update your knowledge. If we take cloud pricing as a comparison, it's cheaper to use AMD. Think they're doing some math?
If you stop doing a thing that has a positive effect on you, the positive effect goes away. Like, if you stop exercising, you lose the muscle you gained by exercising. Does that make exercising a bad thing?
I like how this artistic concept still includes the Windows logo animation freezing up during the boot process. It's such a staple of Windows at this point!
Does this also apply to Chrome-based browsers like Arc or Brave? Could they keep manifest v2 around on their own without relying on Google here?
Edit: I've googled around, and it seems that yes, this applies to all Chromium-based browsers; the clones are all planning to solve the problem by rolling out their own ad blocking, either native or Manifest V3-based.
This does not fully apply to Brave. It will affect MV2 extensions on Brave, generally speaking, but it will not affect Brave's Shields, which are built in. See my response to another comment here: https://news.ycombinator.com/item?id=41181701
It will apply to all the other Chromium-based browsers that are wrappers rather than forks.
I think the hacker specifically targeted the two players with the largest audiences, who were both live streaming, for maximum lulz.
I also think that he specifically did it in a way that would minimize the risk of those players being accused of cheating, because he respects them and doesn't want to ruin their careers.