Hacker News | MaKey's comments

The comment from phil21 directly above yours calls IPv6 unreliable.

It is for many.

Stiff bristles also damage your gums more easily and can lead to gum recession. I needed gum grafts because of this and an incorrect brushing technique. For me even medium stiffness is too hard.

My problem is that soft bristles don't remove much food/plaque from the teeth and I end up having to brush way too hard.

For $1000 I'd definitely risk it and kick up a fuss about it if they locked me out.

It seems wild to me to just accept a loss of $1000 for something that isn't your fault. I'd be persistent in each contact with Amazon and if you're really not getting anywhere I'd go to small claims court or do a chargeback.

Like, I know there are some really rich people around, obviously you see them driving around in fancy cars and living in big houses, but you kinda forget that some people can just lose $1000 and ignore it like it's nothing. Crazy.

> But also, google spent a mountain of money advertising chrome.

That money was also used to grow the user base via drive-by installations. E.g., while installing Adobe Reader you had to deselect the Chrome installation, otherwise you'd find yourself with a new default browser afterwards.


Maybe this incident will make people rethink blindly putting Cloudflare in front of every website.


In theory even a single company's service could be distributed so that only a fraction of websites would be affected; it doesn't have to be a single point of failure. So I still don't like the argument "you see what happens when over half of the internet relies on Cloudflare". And yes, I'm writing this as a Cloudflare user whose blog is down right now because of this. Cloudflare is still convenient and accessible for many people; no wonder it's so popular.

But, yeah, it's still a horrible outage, much worse than the Amazon one.


The "omg centralized infra" cries after every such event kind of miss the point. Hosting with smaller companies (shared, VPS, dedi, colo, whatever) will likely result in far worse downtime, individually.

Ofc the bigger perception issue here is many services going out at the same time, but why would (most) providers care if their annual downtime does or doesn't coincide with others? Their overall reliability is no better or worse had only their service gone down.

All of this can change, ofc, if this becomes a regular thing; the absolute hours of downtime do matter.


Exactly.


I think you're being overly dramatic. In practice I've seen complexity (which HA setups often introduce) cause downtime far more often than a service being hosted on only a single instance.


You'll have planned downtime just for upgrading the MongoDB version or rebooting the instance. I don't think that's something you'd want. Running MongoDB in a replica set is really easy, and much easier than running Postgres or MySQL in an HA setup.

No need for SREs. Just add 2 more Hetzner servers.
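To illustrate how little is involved: a minimal three-member setup looks something like this in mongosh. The hostnames are placeholders, and a real deployment would also need keyfile/TLS auth between the members.

```js
// Start mongod on each of the three servers with the same replica set name:
//   mongod --replSet rs0 --bind_ip localhost,db1.internal ...
// Then, from mongosh on any one member, initiate the set:
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "db1.internal:27017" },
    { _id: 1, host: "db2.internal:27017" },
    { _id: 2, host: "db3.internal:27017" }
  ]
})
```

After that, rolling upgrades and reboots can happen one member at a time while the set keeps serving.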


The sad part of that is that 3 Hetzner servers are still less than 20% of the price of equivalent AWS resources. This was already pretty bad when AWS started, but now it's reaching truly ridiculous proportions.

From the Hetzner "Serverbörse": an i7-7700 with 64 GB RAM and a 500 GB disk.

37.5 euros/month

The equivalent AWS resources: ~8 vCPUs + 64 GB RAM + 512 GB disk.

585 USD/month

It gets a lot worse if you include any non-negligible internet traffic. How many machines does your company need before a team of SREs is worth it? I think it has actually dropped to 100.
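As a sanity check on the "less than 20%" figure: the per-server prices below are the ones quoted above, and the EUR-to-USD rate is an assumption.

```python
# Back-of-the-envelope: 3 Hetzner Serverboerse boxes vs. one equivalent
# AWS instance. Prices are from the comment above; the exchange rate is
# an assumed value, not a quoted one.
hetzner_eur = 37.5      # one Serverboerse box, EUR/month
aws_usd = 585.0         # one equivalent AWS instance, USD/month
eur_to_usd = 1.05       # assumed exchange rate

three_hetzner_usd = 3 * hetzner_eur * eur_to_usd
ratio = three_hetzner_usd / aws_usd

print(f"3 Hetzner servers: ${three_hetzner_usd:.2f}/month")
print(f"1 AWS instance:    ${aws_usd:.2f}/month")
print(f"ratio: {ratio:.1%}")
```

At parity (1 EUR = 1 USD) the ratio lands around 19%, so the "less than 20%" claim holds give or take the exchange rate.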


Sure, I'm not against Hetzner, it's great. I just find that running something in HA mode is important for any service that is vital to customers. I'm not saying you need HA for a website. I also run many applications NOT in HA mode, but those are single-customer applications where it's totally fine to do maintenance at night or on the weekend. For SaaS this is probably not a very good idea.


Yes, any time someone says "I'm going to make a thing more reliable by adding more things to it" I either want to buy them a copy of Normal Accidents or hit them over the head with mine.


How bad are the effects of an interruption for you? Would you lose millions of dollars a minute, or would you just have to send an email to customers saying "oops"?

Risk management is a normal part of business - every business does it. Typically the risk is not brought down all the way to zero, but to an acceptable level. The milk truck may crash and the grocery store will be out of milk that day - they don't send three trucks and use a quorum.

If you want to guarantee above-normal uptime, feel free, but it costs you. Google has servers failing every day just because they have so many, but you are not Google, and you most likely won't experience a hardware failure for years. You should have a backup, because data loss is permanent, but you might not need redundancy for your online systems, depending on what your business does.



HA can be hard to get right, sure, but you at least have to have a (TESTED) plan for what happens when it fails.

"Run a script to deploy a new node and load the last backup" can be enough, but then you have to plan what to tell customers when the last few hours of their data are gone.


Why would you get this when a Ryzen AI Max+ 395 with 128 GB is a fraction of the price?


Theoretically it has slightly better memory bandwidth, you're supposed to get the Nvidia AI software ecosystem support out of the box, and you can use the 200G NIC to link two of them together more efficiently.

Practically, if the goal is 100% about AI and cloud isn't an option for some reason, both options are likely "a great way to waste a couple grand trying to save a couple grand" as you'd get 7x the performance and likely still feel it's a bit slow on larger models using an RTX Pro 6000. I say this as a Ryzen AI Max+ 395 owner, though I got mine because it's the closest thing to an x86 Apple Silicon laptop one can get at the moment.


Because the ML ecosystem is more mature on the Nvidia side. Software-wise the CUDA platform is more advanced, and it will be hard for AMD to catch up. It is good to see competition, though.


But the article shows that the Nvidia ecosystem isn't that mature either on the DGX Spark with ARM64. I wonder if Nvidia is still ahead for such use cases, all things considered.


On the DGX Spark, yes. On ARM64, Nvidia has been shipping drivers for years now. The rest of the Linux ecosystem is going to be the problem, most distros and projects don't have anywhere near the incentive Nvidia does to treat ARM like a first-class citizen.


CUDA


WOULDA

SHOULDA


Complete computer with everything working.


The complete Framework Desktop with everything working (including said Ryzen AI Max 395+ and 128 GB of RAM) is 2500 EUR. In Europe the DGX Spark listings are at 4000+ EUR.


It's a different animal. The Ryzen wins on memory bandwidth and has an "AI" accelerator (my guess: matrix multiplication). The Spark has lower bandwidth but much better and more generic compute. Add to that the CUDA ecosystem with its libs and tools. I'm not saying the Ryzen is bad; it's actually a great poor man's Mac substitute. It's $2K for the 128 GB version on Amazon right now.


The Macs are indeed the best consumer hardware out there, but they have a big downside: macOS only.

The reason we use Ryzens is that we run Linux on them with almost no problems.


Framework doesn't sell in Europe and they are sponsoring the wrong kind of folks nowadays.


Framework does absolutely sell in several countries in Europe.


MediaMarkt, Coolblue, FNAC, Saturn, Publico, ... where?


Online only at https://frame.work AFAIK. I don't think people shelling out 2-4k for an AI training machine are concerned whether or not they can find it at a hardware store locally or online, but I may be wrong.


The vast majority of Ryzen AI Max+ 395s (by volume at least) are sold as complete system offerings as well. About as far as you can go the other way is getting one without an SSD, as the MB+RAM+CPU are an "all or nothing" bundle anyways.


Including a Linux distribution with working drivers?


Fortunately, AMD upstreams its changes so no custom distro is required for Strix Halo boxes. The DGX is the platform more at risk of being left behind on Linux - just like Jetson before it, which also had a custom, now-abandoned distro.


This right here. Jetson is abandoned, while Strix Halo is x86 and will run new Linux distributions for years (decades?).


Does NVIDIA really not have a defined support lifetime/cycle?



Needing a customized spin of Ubuntu to have working video drivers is an Nvidia thing. One can also choose a Windows option, if they like, and run AI from there as it's just a standard x86 PC. That might actually be the best option for those worried about pre-installed OSs for AI tinkering.

The userspace side is where AI is difficult with AMD. Almost all of the community is built around Nvidia tooling first, others second (if at all).


I cannot overstate how much I despise this "old Ubuntu needed" state of affairs with the AI stuff.


AMD works with recent kernels OOB. The DGX runs on a custom Ubuntu with a year-old kernel.


It is not what the Romc experience tells.


Does Romc=ROCm, or something else? If the former, ROCm is just a userspace compute library for the in-kernel amdgpu driver. The "kernels" it runs are GPU compute programs, not customized Linux kernels.


> Same meme would work for Aws today.

Not really, there are enough alternatives.


How many just run on AWS underneath, though?

And it's not like there aren't other brands of chocolate either…


What's the reason for not considering Proxmox?


They seriously need to invest in a well-engineered multi-node cluster filesystem. VMFS made VMware into the behemoth it is.

Without that, your options for HA shared storage are Ceph (which Proxmox makes decently easy to run) or NFS.


My 2 cents: Proxmox is too rigid. For example:

1. Proxmox can't even join a network using DHCP; it requires manual IP configuration.

2. Disk encryption is hell, instead of a checkbox in the installer.

3. Wi-Fi - no luck (rarely used for real servers, but frequently for r&d racks)

Of course, it is a Debian core underneath and a lot of things are possible given enough time and persistence, but other solutions have them out of the box.


My Proxmox seems to use DHCP just fine by putting "iface eno1 inet dhcp" in /etc/network/interfaces
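For anyone who wants the full context, the stanza in /etc/network/interfaces can look like the following — either DHCP directly on the NIC as in the line quoted, or on Proxmox's default vmbr0 bridge. The interface and bridge names here are examples and will differ per machine.

```
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

# DHCP on the default Proxmox bridge instead of a static address:
auto vmbr0
iface vmbr0 inet dhcp
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```

Putting DHCP on the bridge keeps guest networking on vmbr0 working the way the installer sets it up.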


[flagged]


If you had been more polite, you could have made a good entry into the discussion.

Yes, Proxmox is built on Debian, so anything Debian can do, Proxmox VE can mostly do as well without major issues.


Proxmox wasn’t considered because of the audience (leadership) and Proxmox’s perceived market (SMBs/homelabs). I couldn’t even get them to take Virtuozzo seriously, so Proxmox was entirely a non-starter, unfortunately.

FWIW, I use Proxmox at home. It’s a bit obtuse at times, but it runs like a champ on my N100-based NUCs.

