Aissen's comments

During the mid-2000s, an experiment in the Montparnasse metro station in Paris transformed a moving sidewalk into an acceleration ramp going from 3 to 9 km/h. It was (most of the time) slower than the 1900 expo's 10 km/h. And there was always a "slower" sidewalk (3 km/h, the default) next to it. The goal was to go up to 11 km/h (it did at some point). And yet it failed and was removed 15 years ago. Only the slow options remain.

https://fr.wikipedia.org/wiki/Trottoir_roulant_rapide#/media...


You can see it in action here: https://youtu.be/FBzlEG3tMuw?t=399

You need 100 servers. Now you only need to buy 99. Multiply that by a million, and the economies of scale really matter.

1% is less than the difference between negotiating with a hangover or not.

What a strange comparison.

If you're negotiating deals worth billions of dollars, or even just millions, I'd strongly suggest not doing so with a hangover.


> If you're negotiating deals worth billions of dollars, or even just millions, I'd strongly suggest not doing so with a hangover.

...have you met salespeople? Buying lap dances is a legitimate business expense for them. You'd be surprised how much personal rapport matters and facts don't.

In all fairness, I only know about 8- and 9-figure deals; maybe at 10 and 11 figures salespeople grow ethics...


I strongly suspect ethics are inversely proportional to the size of the deal.

That's more an indictment of sales culture than a critique of computational efficiency.

Well sure, because you want the person trying to buy something from you for a million dollars to have a hangover.

Sounds like someone never read Sun Tzu.

(Not really, I just know somewhere out there is a LinkedInLunatic who has a Business Philosophy based on being hungover.)


Appear drunk when you are sober, and sober when you are drunk

- Sun Zoo


A few hours ago, just a few comments: https://news.ycombinator.com/item?id=45642051


If you email the mods they’ll merge the duplicate discussions. Footer contact link.


Why wouldn't this just be the result of a different seed? Is Gemini behaving deterministically by default?


As an infrastructure engineer (amongst other things), hard disagree here. I realize you might be joking, but a bit of context: a big chunk of the success of Cloud in more traditional organizations is the agility that comes with it: (almost) no need to ask anyone for permission, ownership of your resources, etc. There is no reason baremetal shouldn't provide the same customer-oriented service, at least for the low-level, give-me-a-VM-now IaaS needs. I'd even argue this type of self-service (and accounting!) should be offered by any team providing internal software services.


The permissions and ownership part has little to do with the infrastructure – in fact I've often found it more difficult to get permissions and access to resources in cloud-heavy orgs.


This could be due to the bureaucratic parts of the company being too slow initially to gain influence over cloud administration, which results in teams and projects that use the cloud being less hindered by bureaucracy. As cloud is more widely adopted, this advantage starts to disappear. However, there are still certain things like automatic scaling where it still holds the advantage (compared to requesting the deployment of additional hardware resources on premises).


I also think this was only a temporary situation caused by the IT departments in these organisations being essentially bypassed. Once it became a big, important thing, they basically started to take control of it and you get the same problems (in fact potentially more so, because the expense means there's more pressure to cut down resources).


"No need to ask permission" and "You get the same bill every month" kinda work against one another here.


I should have been more precise… Many sub-orgs have budget freedom to do their job, and not having to go through a central authority to get hardware is often a feature. Hence why Cloud works so well in non-regulatory-heavy traditional orgs: the budget owner can just accept the risks and let people do the work. My comment was more of a warning to would-be infrastructure people: they absolutely need to be customer-focused and build automation from the start.


I'm at a startup and I don't have access to the terraform repo :( and console is locked down ofc.


don't underestimate the ability of traditional organisations to build that process around cloud

you keep the usual BS to get hardware, plus now it's 10x more expensive and requires 5x the engineering!


This is my experience, though the lead time for 'new hardware' on cloud is only 6-12 weeks of political knife fighting instead of 6-18 months of that plus waiting.


That's a cultural issue. Initially, at my workplace, people needed to ask permission to deploy their code. The team approving the deployments got sick of it and built a self-service deployment tool with security controls built in, and now deployment is easy. All that matters is a culture of trusting fellow employees, a culture of automating, and a culture of valuing internal users.


Agreed, that's exactly what I was aiming at. I'm not saying that it's the only advantage of Cloud, but that orgs with a dysfunctional resource-access culture were a fertile ground for cloud deployments.

Basically: some manager gets fed up with weeks/months of delays for baremetal or VM access -> takes risks and gets cloud services -> successful projects in less time -> gets promoted -> more cloud in the org.


Well yeah, it's more that I frame it as a joke, but I do mean it.

I don't argue there aren't special cases for using fancy cloud vendors, though. But classical datacentre rentals almost always get you there for less.

Personally I like being able to touch and hear the computers I use.


> no need to ask anyone for permission, ownership of your resources, etc

In a large enough org that experience doesn’t happen though - you have to go through and understand how the org’s infra-as-code repo works, where to make your change, and get approval for that.


You also need to get budget, a few months earlier, and sometimes even legal approval. Then you have security rules, "preferred" services, and the list goes on...



RIP Mugen and Bid For Power. Don't forget to make backups of those fan games!


I've still got burned CDs of some of the Bid For Power alphas downloaded from KaZaA (I think?).

My memory was that despite the enormous time-sink required to get it working, it actually wasn't very good. Like once you get over that initial thrill of being able to levitate in 3D space and blast a crude Kamehameha at people, the rest of the experience was pretty clunky.


It definitely lacked balance and game polish. But it was fun for fans, and unheard of at the time (a bit less so today).


It does not matter if you lose control of the number; the new person will be able to register. The 7-day period is for you to get control of the number back, or to make sure all your contacts know about the issue.


You conveniently side-stepped the argument that YouTube already knows how to serve DRM-ized videos, and that this is widely deployed in its Movies & TV offering, available on the web and other clients. They chose not to escalate to all videos, probably for multiple reasons. It's credible that one reason could be that they want the downloaders to keep working; they wouldn't want those to suddenly gain the ability to download DRM-ized videos (software that does this exists, but it's not as well maintained and circulated).


It seems more credible to me that forcing Widevine DRM would cut off a sizable portion of their viewers.

Or is it something different you are thinking about?

What benefits does DRM even provide for public, ad-supported content that you don't need to log in for in order to watch?

Does DRM cryptography offer solutions against ad blocking, or downloading videos you have legitimate access to?

Sorry that I'm too lazy to research this, but I'd appreciate if you elaborate more on this.

And also, I think they're playing the long game and will be fine putting up a login wall, aggressively blocking scraping, and forcing ID. Like Instagram.

Would be glad if I'm wrong, but I don't think so. They just haven't reached a sufficient level of monopolization for this and at the same time, the number of people watching YouTube without at least being logged in is probably already dwindling.

So they're no longer waiting to become profitable; they already are, through ads and data collection.

But they have plenty of headroom left to truly start boiling the frog, and become a closed platform.


Why don't they just deploy a per-platform .exe equivalent per video?


Because the LLM craze has rendered last-gen Tensor accelerators from NVIDIA (& others) useless for all those FP64 HPC workloads. From the article:

> The Hopper H200 is 47.9 gigaflops per watt at FP64 (33.5 teraflops divided by 700 watts), and the Blackwell B200 is rated at 33.3 gigaflops per watt (40 teraflops divided by 1,200 watts). The Blackwell B300 has FP64 severely deprecated at 1.25 teraflops and burns 1,400 watts, which is 0.89 gigaflops per watt. (The B300 is really aimed at low precision AI inference.)
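
A quick sanity check of those ratios, using only the numbers quoted above (a rough sketch, not spec-sheet data):

    fn gflops_per_watt(tflops_fp64: f64, watts: f64) -> f64 {
        // 1 TFLOPS = 1,000 GFLOPS
        tflops_fp64 * 1000.0 / watts
    }

    fn main() {
        println!("H200: {:.1} GF/W", gflops_per_watt(33.5, 700.0));  // ~47.9
        println!("B200: {:.1} GF/W", gflops_per_watt(40.0, 1200.0)); // ~33.3
        println!("B300: {:.2} GF/W", gflops_per_watt(1.25, 1400.0)); // ~0.89
    }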


Do cards with intentionally handicapped FP64 actually use anywhere near their TDP when doing FP64? It's my understanding that FP64 performance is limited at the hardware level--whether by fusing off the extra circuits, or omitting them from the die entirely--in order to prevent aftermarket unlocks. So I would be quite surprised if the card could draw that much power when it's intentionally using only a small fraction of the silicon.


It's really to save die space for other functions, AFAIU there is no fusing to lock the features or anything like this.


I'm finding conflicting info on this. It seems to be down to the specific GPU/core/microarchitecture. In some cases, the "missing" FP64 units do physically exist on the dies, but have been disabled--likely some of them were defective in manufacturing anyway--and this disabling can't be undone with custom firmware AFAIK (though I believe modern nVidia cards will only load nVidia-signed firmware anyway). Then, there are also dies that don't include the "missing" FP64 units at all, and so there's nothing to disable (though manufacturing defects may still lead to other components getting disabled for market segmentation and improved yields). This also seems to be changing over time; having lots of FP64 units and disabling them on consumer cards seems to have been more common in the past.

Nevertheless, my point is more that if FP64 performance is poor on purpose, then you're probably not using anywhere near the card's TDP to do FP64 calculations, so FLOPS/watt(TDP) is misleading.


In general: consumer cards with very bad FP64 performance have it fused off for product segmentation reasons, datacenter GPUs with bad FP64 performance have it removed from the chip layout to specialize for low precision. In either case, the main concern shouldn't be FLOPS/W but the fact that you're paying for so much silicon that doesn't do anything useful for HPC.


This theory only makes sense if consumer cards share dies with enterprise/datacenter cards. If the consumer card SKUs are on their own dies, they're not going to etch something into silicon only to then fuse it off after the fact.

Regardless, there are "tricks" you can use to sort of extend the precision of hardware floating point: using a pair of, e.g., FP32 numbers to implement something that's "almost" an FP64. Well known among numerics practitioners.
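
A minimal sketch of the idea in Rust, using a pair of f32s ("float-float" arithmetic, essentially Knuth's TwoSum plus a renormalization step); real double-double libraries are considerably more careful, use FMA for products, and assume no fast-math reassociation:

    // Knuth's TwoSum: returns the rounded sum and the exact rounding error.
    fn two_sum(a: f32, b: f32) -> (f32, f32) {
        let s = a + b;
        let bb = s - a;
        let err = (a - (s - bb)) + (b - bb);
        (s, err)
    }

    // Add two "float-float" numbers (hi, lo pairs), then renormalize.
    fn ff_add(a: (f32, f32), b: (f32, f32)) -> (f32, f32) {
        let (s, e) = two_sum(a.0, b.0);
        let e = e + a.1 + b.1;
        let hi = s + e;
        let lo = e - (hi - s);
        (hi, lo)
    }

    fn main() {
        // 1e-9 is lost when added to 1.0 in plain f32,
        // but survives in the low word of the pair.
        let x = ff_add((1.0f32, 0.0), (1e-9f32, 0.0));
        println!("hi = {}, lo = {:e}", x.0, x.1); // hi = 1, lo ~ 1e-9
    }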


Until recently, consumer, workstation, and datacenter GPUs would all share a single core design that was instantiated in varying quantities per die to create a product stack. The largest die would often have little to no presence in the consumer market, but fundamentally it was made from the same building blocks. Now, having an entirely separate or at least heavily specialized microarchitecture for data center parts is common (because the extra design costs are worth it), but most workstation cards are still using the same silicon as consumer cards with different binning and feature fusing.


Consumer cards don't share dies with datacenter cards, but they do share dies with workstation cards (the former Quadro line); e.g., the GB202 die is used by both the RTX PRO 5000/6000 Blackwell and the RTX 5090.


I know some consumer cards have artificially limited FP64, but the AI-focused datacenter cards physically have fewer FP64 units. Recently, the GB300 removed almost all of them, to the point that a GB300 actually has lower FP64 TFLOPS than a 9-year-old P100. FP32 is the highest precision used during training, so it makes sense.


A 53×53 bit multiplier is more than 4× the size of a 24×24 bit multiplier.
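
Back-of-the-envelope, assuming multiplier area grows roughly with the number of partial products (significand width squared): FP64 has a 53-bit significand and FP32 a 24-bit one, so 53² = 2809 vs 24² = 576, a ratio of about 4.9. That's where the "more than 4×" comes from, before even counting the wider adders and registers around the multiplier.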


It's only for Rust binaries that are built with the -linux-musl* (instead of -linux-gnu*) toolchains, which are not the default and are usually used to make portable/static binaries.


Unless you're on a distro like Alpine where musl is the system libc. Which is common in, e.g., containers.


It's still possible to build Rust binaries with jemalloc (or another allocator) if you need the performance. Also, it will heavily depend on the use case; for many use cases, Rust will in fact pressure the heap less, precisely because it tracks ownership, and passing or returning structs by value (on the stack) is often "free" if you pass the ownership as well.
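
For example, a minimal sketch assuming the tikv-jemallocator crate is added to Cargo.toml (it doesn't cover MSVC targets); swapping the global allocator is just a couple of lines:

    // Cargo.toml: tikv-jemallocator = "0.5"
    use tikv_jemallocator::Jemalloc;

    // Route every heap allocation in this binary through jemalloc
    // instead of the default system allocator.
    #[global_allocator]
    static GLOBAL: Jemalloc = Jemalloc;

    fn main() {
        let v: Vec<String> = (0..4).map(|i| format!("allocation {i}")).collect();
        println!("{}", v.join(", "));
    }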

