There is no problem with that, as Bitcoin itself will never reach the transaction volume of regular cash - it couldn't even handle that many transactions (nor does it make sense to store your coffee payment from any given day in the blockchain for a lifetime). That's where layer 2 solutions such as Lightning come into play - for everyday, more privacy-friendly, smaller transactions that are not put on the blockchain. And on layer 2, you can go even smaller in denominations.
To reiterate and paraphrase: "we need more coffee shops to accept bitcoin" "but it's not divisible enough" "that's fine because we don't need coffee shops to accept bitcoin"
Well, to be fair, Lightning IS bitcoin - and inevitably tied to it. It is just a different protocol, or a different means of accounting for and transferring ownership of bitcoin.
Just imagine gold were used as money, but you quickly realize that weighing and dividing gold is cumbersome. So someone creates a layer 2, prints green paper bills in arbitrary denominations, and that very someone guarantees you can exchange those green paper bills at a fixed rate for ounces of gold. You exchange green paper bills, the green paper bills have value, but the underlying asset is still gold. Until that someone decides to stop exchanging those green sheets of paper for gold, and basically performs the first rug-pull in history. That is where bitcoin steps in, as its decentralization guarantees there is no such rug-pull. And Lightning is the green bills that can be exchanged for the underlying bitcoin at any time, without relying on the promise of a third party.
There's an interview with Torvalds where he says his daughter told him that in the computer lab at her college he is better known for Git than for Linux.
The missing 10% is event normalization and event bubbling (Preact bubbles through the DOM, React follows the vDOM). Choose what you need, but 100% compatibility with 3rd-party libs is why I stick with React over Preact.
Maybe unrelated, but yesterday I went to pick up my package from an Amazon Locker in Germany, and the display said "Service unavailable". I'll wait until later today before I go and try again.
I wonder why a package locker needs connectivity to give you a package. Since your package can't be withdrawn again from a different location, partitioning shouldn't be an issue.
Generally speaking, it's easier to have computation (logic, state, etc.) centralized. If the designers didn't prioritize scenarios where decentralization helped, then centralization would've been the better option.
Just a couple of days ago in this HN thread [0] there were quite a few users claiming Hetzner is not an option because its uptime isn't as good as AWS's, hence the higher AWS pricing is worth the investment. Oh, the irony.
As a data point, I've been running stuff at Hetzner for 10 years now, in two datacenters (physical servers). There were brief network outages when they replaced networking equipment, and exactly ONE outage for hardware replacement, scheduled weeks in advance, with a 4-hour window and around 1-2h duration.
It's just a single data point, but for me that's a pretty good record.
It's not because Hetzner is miraculously better at infrastructure, it's because physical servers are way simpler than the extremely complex software and networking systems that AWS provides.
Well, the complexity comes not from Kubernetes per se, but from the fact that the problem it wants to solve (a generalized solution for distributed computing) is very hard in itself.
Only if you actually have a system complex enough to require it. A lot of systems that use Kubernetes are not complex enough to require it, but use it anyway. In that case Kubernetes does indeed add unnecessary complexity.
Except that k8s doesn't solve the problem of generalized distributed computing at all. (For that you need distributed fault-tolerant state handling which k8s doesn't do.)
K8s solves only one problem - the problem of organizational structure scaling. For example, when your Ops team and your Dev team have different product deadlines and different budgets. At this point you will need the insanity of k8s.
I am so happy to read that someone views Kubernetes the same way I do. For many years I have been surrounded by people who "kubernetes all the things", and that is absolute madness to me.
Yes, I remember when Kubernetes hit the scene and it was only used by huge companies who needed to spin-up fleets of servers on demand. The idea of using it for small startup infra was absurd.
As another data point, I run a k8s cluster on Hetzner (mainly for my own experience, as I'd rather learn on my pet projects vs production), and haven't had any Hetzner related issues with it.
So Hetzner is OK for the overly complex as well, if you wish to do so.
I think it sounds quite realistic especially if you’re using something like Talos Linux.
I’m not using k8s personally but the moment I moved from traditional infrastructure (chef server + VMs) to containers (Portainer) my level of effort went down by like 10x.
Yes, and mobile phones existed before smartphones - what's the point? So far, in terms of scalability nothing beats k8s. And from OpenAI and Google we also see that it even works for high-performance use cases such as LLM training with huge numbers of nodes.
On the other hand, I had the misfortune of having a hardware failure on one of my Hetzner servers. They got a replacement hard drive in fairly quickly, but it still meant complete data loss on that server, so I had to rebuild it from scratch.
This was extra painful because I wasn't using one of the OSes blessed by Hetzner, so it required a remote install. Remote installs require a system that can run their Java web plugin and that has a stable and fast enough connection to not time out. The only way I have reliably gotten them to work is by having an ancient Linux VM that was also running in Hetzner, with the oldest Firefox version I could find that still supported Java in the browser.
My fault for trying to use what they provide in a way that is outside their intended use, and props to them for letting me do it anyway.
That can happen with any server, physical or virtual, at any time, and one should be prepared for it.
I learned a long time ago that servers should be an output of your declarative server management configuration, not something that is the source of any configuration state. In other words, you should have a system where you can recreate all your servers at any time.
In your case, I would indeed consider starting with one of the OS base installs that they provide. Much as I dislike the Linux distribution I'm using now, it is quite popular, so I can treat it as a common denominator that my Ansible setup can start from.
They allow netbooting to a recovery OS from which the disks can be provisioned via an ssh session too, for custom setups.
Likely there are cases that require the remote "keyboard", but I wanted to mention that.
Do you monitor your product closely enough to know that there weren't other brief outages? E.g. something on the scale of unscheduled server restarts, and minute-long network outages?
I personally do, through status monitors hosted at larger cloud providers with 30-second resolution, and I have never noticed downtime. They will sometimes drop ICMP though, even though the host is alive and kicking.
Actually, why do people block ICMP? I remember in 1997-1998 there were some Cisco ICMP vulnerabilities and people started blocking ICMP then and mostly never stopped, and I never understood why. ICMP is so valuable for troubleshooting in certain situations.
Security through obscurity, mostly. I don't know who continues to push the advice to block ICMP without a valid technical reason; at best, if you tilt your head and squint, you could almost maybe see a (very new) script kiddie being defeated by it.
I've rarely actually seen that advice anywhere, more so 20 years ago than now but people are still clearly getting it from circles I don't run in.
I do. Routers, switches, and power redundancy are solved problems in datacenter hardware. Network outages rarely occur because of these systems, and if any component goes down, there's usually an automatic failover. The only thing you might notice is TCP connections resetting and reconnecting, which typically lasts just a few seconds.
I do for some time now, on the scale of around 20 hosts in their cloud offering. No restarts or network outages. I do see "migrations" from time to time (vm migrating to a different hardware, I presume), but without impact on metrics.
To stick to the above point, this wasn't a minute-long outage.
If you care about seconds- or minutes-long outages, you monitor. Running on AWS, Hetzner, OVH, or a Raspberry Pi in a shoe box makes no difference.
Yes, but those days are numbered. For many years AWS was in a league of its own. Now they’ve fallen badly behind in a growing number of areas and are struggling to catch up.
There’s a ton of momentum associated with the prior dominance, but between the big misses on AI, a general slow pace of innovation on core services, and a steady stream of top leadership and engineers moving elsewhere they’re looking quite vulnerable.
Can you throw out an example or two, because in my experience, AWS is the 'it just works' of the cloud world. There's a service for everything and it works how you'd expect.
I'm not sure what feature they're really missing, but my favorite is the way they handle AWS Fargate. The other cloud providers have similar offerings but I find Fargate to have almost no limitations when compared to the others.
You’ve given a good description of IBM for most of the 80s through the 00s. For the first 20 years of that decline “nobody ever got fired for buying IBM” was still considered a truism. I wouldn’t be surprised if AWS pulls it off for as long as IBM did.
I think that the worst thing that can happen to an org is to have that kind of status ("nobody ever got fired for buying our stuff" / "we're the only game in town").
It means no longer being hungry. Then you start making mistakes. You stop innovating. And then you slowly lose whatever kind of edge you had, but you don't realize that you're losing it until it's gone
Unfortunately I think AWS is there now. When you talk to folks there they don’t have great answers to why their services are behind or not as innovative as other things out there. The answer is basically “you should choose AWS because we’re AWS.” It’s not good.
I couldn't agree more; there was clearly a big shift when Jassy became CEO of Amazon as a whole and Charlie Bell left (which is also interesting, because it's not like Azure is magically better now).
The improvements to core services at AWS haven't really happened at the same pace post-COVID as they did prior, but that could also have something to do with the overall maturity of the ecosystem.
Although it's also largely the case that other cloud providers have realized it's hard for them to compete against the core competency of other companies, when they'd still be selling the infrastructure those services run on anyway.
Given recent earnings and depending on where things end up with AI it’s entirely plausible that by the end of the decade AWS is the #2 or #3 cloud provider.
AWS' core advantage is price. No one cares if they are "behind on AI" or "the VP left." At the end of the day they want a cheap provider. Amazon knows how to deliver good-enough quality at discount prices.
That story was true years ago but I don’t know that it rings true now. AWS is now often among the more expensive options, and with services that are struggling to compete on features and quality.
Simultaneously too confused to be able to make their own UX choices, but smart enough to understand the backend of your infrastructure well enough to know why it doesn't work and excuse you for it.
The morning national TV news (BBC) was interrupted with this as breaking news, and about how many services (specifically snapchat for some reason) are down because of problems with "Amazon's Web Services, reported on DownDetector"
I thought we didn't like when things were "too big to fail" (like the banks being bailed out because if we didn't the entire fabric of our economy would collapse; which emboldens them to take more risks and do it again).
A typical manager/customer understands just enough to ask their inferiors to make their f--- cloud platform work, why haven't you fixed it yet? I need it!
In technically sophisticated organizations, this disconnect simply floats to higher levels (e.g. CEO vs. CTO rather than middle manager vs. engineer).
I only got €100.000 tied to a year, then a 20% discount on spend in the next year.
(I say "only" because that certainly would be a sweeter pill, €100.000 in "free" credits is enough to make you get hooked, because you can really feel the free-ness in the moment).
> In the end, you live with the fact that your service might be down a day or two per year.
This is hilarious. In the 90s we used to have services which ran on machines in cupboards which would go down because the cleaner would unplug them. Even then a day or two per year would be unacceptable.
On one hand it allows you to shift the blame, but on the other hand it shows a disadvantage of hyper-centralization - if AWS is down, too many important services are down at the same time, which makes it worse. E.g. when AWS is down it's important to have communication/monitoring services UP so engineers can discuss and coordinate workarounds and have good visibility, but Atlassian was (is) significantly degraded today too.
100%. When AWS was down, we'd say "AWS is down!", and our customers would get it. Saying "Hetzner is down!" raises all sorts of questions your customers aren't interested in.
I've run a production application off Hetzner for a client for almost a decade and I don't think I have ever had to tell them "Hetzner is down", apart from planned maintenance windows.
Hosting on second- or even third-tier providers allows you to overprovision and have much better redundancy, provided your solution is architected from the ground up in a vendor agnostic way. Hetzner is dirt cheap, and there are countless cheap and reliable providers spread around the globe (Europe in my case) to host a fleet of stateless containers that never fail simultaneously.
Stateful services are much more difficult, but replication and failover is not rocket science. 30 minutes of downtime or 30 seconds of data loss rarely kill businesses. On the contrary, unrealistic RTOs and RPOs are, in my experience, more dangerous, either as increased complexity or as vendor lock-in.
Customers don't expect 100% availability and no one offers such SLAs. But for most businesses, 99.95% is perfectly acceptable, and it is not difficult to stay under roughly 4h/year of downtime.
The point seems to be not that Hetzner will never have an outage, but rather that they have a track record of not having outages large enough for everyone to be affected.
Seems like large cloud providers, including AWS, are down quite regularly in comparison, and at such a scale that everything breaks for everyone involved.
> The point seems to be not that Hetzner will never have an outage, but rather that they have a track record of not having outages large enough for everyone to be affected.
If I am affected, I want everyone to be affected, from a messaging perspective
Okay, that helps for the case when you are affected. But what about the case when you are not affected and everyone else is? Doesn't that seem like good PR?
Take the hit of being down once every 10 years, compared to being up the other nine times that everyone else is down.
> An Amazon Web Services outage is causing major disruptions around the world. The service provides remote computing services to many governments, universities and companies, including The Boston Globe.
> On DownDetector, a website that tracks online outages, users reported issues with Snapchat, Roblox, Fortnite online broker Robinhood, the McDonald’s app and many other services.
That's actually a fairly decent description for the non-tech crowd and I am going to adopt it, as my company is in the cloud native services space and I often have a problem explaining the technical and business model to my non-technical relatives and family - I get bogged down in trying to explain software defined hardware and similar concepts...
I asked ChatGPT for a succinct definition, and I thought it was pretty good:
“Amazon Web Services (AWS) is a cloud computing platform that provides on-demand access to computing power, storage, databases, and other IT resources over the internet, allowing businesses to scale and pay only for what they use.”
For us techies, yes, but to regular folks that is just as good as our usual technical gobbledygook - most people don't differentiate between a database and a hard drive.
> access to computing power, storage, databases, and other IT resources
could be simplified to: access to computer servers
Most people who know little about computers can still imagine a giant mainframe they saw in a movie with a bunch of blinking lights. Not so different, visually, from a modern data center.
You can argue about Hetzner's uptime, but you can't argue about Hetzner's pricing, which is hands down the best there is. I'd rather go with Hetzner and cobble together some failover than pay AWS extortion.
I switched to netcup for even cheaper private vps for personal noncritical hosting. I'd heard of netcup being less reliable but so far 4 months+ uptime and no problems. Europe region.
Hetzner has the better web interface and supposedly better uptime, but I've had no problems with either. Web interface not necessary at all either when using only ssh and paying directly.
I used netcup for 3 years straight for some self hosting and never noticed an outage. I was even tracking it with smokeping so if the box disappeared I would see it but all of the down time was mine when I rebooted for updates. I don't know how they do it but I found them rock solid.
I've been running my self-hosting stuff on Netcup for 5+ years and I don't remember any outages. There probably were some, but they were not significant enough for me to remember.
netcup is fine unless you have to deal with their support, which is nonexistent. Never had any uptime issues in the two years I've been using them, but friends had issues. Somewhat hit or miss I suppose.
Exactly. Hetzner is the equivalent of the original Raspberry Pi. It might not have all the fancy features, but it delivers, and at a price that essentially unblocks you and allows you to do things you wouldn't be able to do otherwise.
> I'd rather go with Hetzner and cobble up together some failover than pay AWS extortion.
Comments like this are so exaggerated that they risk moving the goodwill needle back to where it was before. Hetzner offers no service that is similar to DynamoDB, IAM or Lambda. If you are going to praise Hetzner as a valid alternative during a DynamoDB outage caused by DNS configuration, you would need to a) argue that Hetzner is a better option regarding DNS outages, and b) argue that Hetzner is a preferable option for those who use serverless offerings.
I say this as a long-time Hetzner user. Hetzner is indeed cheaper, but don't pretend that Hetzner lets you click your way into a highly available NoSQL data store. You need a non-trivial amount of your own work to develop, deploy, and maintain such a service.
> but don't pretend that Hetzner lets you click your way into a highly available NoSQL data store.
The idea you can click your way to a highly available, production configured anything in AWS - especially involving Dynamo, IAM and Lambda - is something I've only heard from people who've done AWS quickstarts but never run anything at scale in AWS.
Of course nobody else offers AWS products, but people use AWS for their solutions to compute problems and it can be easy to forget virtually all other providers offer solutions to all the same problems.
>The idea you can click your way to a highly available, production configured anything in AWS - especially involving Dynamo, IAM and Lambda
With some services I'd agree with you, but DynamoDB and Lambda are easily two of their 'simplest' to configure and understand services, and two of the ones that scale the easiest. IAM roles can be decently complicated, but that's really up to the user. If it's just 'let the Lambda talk to the table' it's simple enough.
S3/SQS/Lambda/DynamoDB are the services that I'd consider the 'barebones' of the cloud. If you don't have all those, you're not a cloud provider, you're just another server vendor.
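To illustrate the "let the Lambda talk to the table" case being the simple end of the spectrum, here is a minimal sketch of a handler reading from DynamoDB via boto3 (the table name, key schema and environment variable are made up for illustration, and the execution role is assumed to already allow dynamodb:GetItem):

```python
import os
import boto3

# Hypothetical table and key names; the Lambda's role is assumed to permit GetItem.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("TABLE_NAME", "orders"))

def handler(event, context):
    # Look up a single item by its partition key taken from the request path.
    order_id = event["pathParameters"]["order_id"]
    response = table.get_item(Key={"order_id": order_id})
    item = response.get("Item")
    if item is None:
        return {"statusCode": 404, "body": "not found"}
    return {"statusCode": 200, "body": str(item)}
```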
Not if you want to build something production-ready. Even a simple thing like, say, static IP ingress for the Lambda is very complicated. The only AWS way you can do this is by using Global Accelerator -> Application Load Balancer -> VPC Endpoint -> API Gateway -> Lambda!
There are so many limits for everything that it is very hard to run production workloads without painful time wasted on re-architecting around them, and the support teams are close to useless when it comes to raising any limits.
Just in the last few months, I have hit limits on CloudFormation stack size, ALB rules, API gateway custom domains, Parameter Store size limits and on and on.
That is not even touching on the laughably basic tooling both SAM and CDK provide for local development if you want to work with Lambda.
Sure, Firecracker is great, the cold starts are not bad, and there isn't anybody even close in the cloud. Azure Functions is unspeakably horrible, Cloud Run is just meh. Most open-source stacks are either super complex like Knative, or it's just quite hard to get the same cold-start performance.
We are stuck with AWS Lambda with nothing better, yes, but oh so many times I have come close to just giving up and migrating to Knative despite the complexity and performance hit.
>Not if you want to build something production ready.
>>Gives a specific edge case about static IPs and doing a serverless API backed by lambda.
The most naive solution you'd use on any non-cloud vendor - just have a proxy with a static IP that then routes traffic wherever it needs to go - would also work on AWS.
So if you think AWS's solution sucks, why not just go with that? What you described doesn't even sound complicated when you think of the networking magic that will take place behind the scenes if you ever do scale to 1 million TPS.
Don’t know what you think "production ready" should mean, but for me that means:
1. Declarative IaC in either CloudFormation or Terraform
2. Fully automated recovery that can achieve RTO/RPO objectives
3. Being able to do blue/green, percentage-based, or other rollouts
Sure, I can write Ansible scripts, have custom EC2 images run HAProxy and multiple nginx load balancers in HA as you suggest, or host all that on EKS or a dozen other "easier" solutions.
At that point, why bother with Lambda? What is the point of being cloud native and serverless if you have to literally put a few VMs/pods in front and handle all the traffic? Might as well host the app runtime too.
> doesn’t even sound complicated.
Because you need a full-time resource who is an AWS architect, keeps up with release notes, documentation and training, and constantly works to scale your application - because every single component has a dozen quotas/limits and you will hit them - it is complicated.
If you spend a few million a year on AWS, then spending 300k on an engineer just to do AWS is perhaps feasible.
If you spend a few hundred thousand on AWS as part of a mix of workloads, it is not easy or simple.
The engineering of AWS, impressive as it may be, says nothing about the products being offered. There is a reason why Pulumi, SST and AWS SAM itself exist.
Sadly SAM is so limited I had to rewrite everything in CDK within a couple of months. CDK is better, but I am finding that I now have to monkey-patch around CDK's limits with SDK code; while possible, the SDK code will not generate CloudFormation templates.
> Don’t know what you think "production ready" should mean, but for me that means
I think your inexperience is showing, if that's what you try to mean by "production-ready". You're making a storm in a teacup over features that you automatically onboard if you go through an intro tutorial, and "production-ready" typically means way more than a basic run-of-the-mill CICD pipeline.
As is the case most of the time, the most vocal online criticism comes from those who have the least knowledge and experience of the topic they are railing against, and their complaints mainly boil down to criticising their own inexperience and ignorance. There are plenty of things to criticize AWS for, such as cost and vendor lock-in, but being unable and unwilling to learn how to use basic services is not it.
Try telling that to customers who can only make outbound API calls to whitelisted IP addresses.
When you are working with enterprise customers or integration partners (it doesn't even have to be regulated sectors like finance or healthcare), these are basic asks you cannot get away from.
People want to be able to whitelist your egress and ingress IPs, or pin certificates. It is not up to me to comment on the efficacy of these rules.
I don't make the rules of the infosec world, I just follow them.
This architecture[1] requires the setup of 2 NAT gateways (one in each AZ), a routing table, an Internet Gateway, 2 Elastic IPs and also the VPC. Since, as before, we cannot use Function URLs for Lambda, we still need the API Gateway to make HTTP calls.
The only part we are swapping is `GA -> ALB -> VPC` for `IG -> Router -> NAT -> VPC`.
Is it any simpler? Doesn't seem like it to me.
Going the NAT route means you also need intermediate networking skills to handle a routing table (albeit a simple one); half the developers of today have never used iptables or know what chaining rules are.
---
I am surprised at the amount of pushback on a simple point which should be painfully obvious.
AWS (Azure/GCP are no different) has become overly complex, with no first-class support for higher-order abstractions, and framework efforts like SAM or even CDK don't seem to be getting much love at all in the last 4-5 years.
Just because they offer and sell all these components independently doesn't mean they should not invest in and provide higher-order abstractions for people with neither the bandwidth nor the luxury to be a full-time "Cloud Architect".
There is a reason why today Vercel, Render, Railway and others are popular despite mostly sitting on top of AWS.
On Vercel the same feature would be[1] quite simple. They use the exact solution you suggest on top of an AWS NAT gateway, but the difference is that I don't have to know about or manage it; the large professional engineering team with networking experience at Vercel does.
There is no reason AWS could not have built Vercel like features on top of their offerings or do so now.
At some point, small to midsize developers will avoid AWS directly, by either setting up bare machines at Hetzner/OVH, or, with a bit more budget, colo with Oxide[3], or more likely just sticking to Vercel- and Railway-style platforms.
I don't know how that will impact AWS - we will all still use them - but a ton of small customers paying close to rack rate is definitely much, much higher margin than whatever Vercel is paying AWS for the same workload.
> With some services I'd agree with you, but DynamoDB and Lambda are easily two of their 'simplest' to configure and understand services, and two of the ones that scale the easiest. IAM roles can be decently complicated, but that's really up to the user. If it's just 'let the Lambda talk to the table' it's simple enough.
We agree, but also, I feel like you're missing my point: "let the Lambda talk to the table" is what quickstarts produce. To make a Lambda talk to a table at scale in production, you'll want to set up your alerting and monitoring to notify you when you're getting close to your service limits.
If you're not hitting service limits/quotas, you're not running even close to running at scale.
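As a concrete (and hedged) example of the kind of alerting meant above: a sketch that alarms on Lambda throttles, which is typically the first symptom of bumping into the account's concurrency quota. The function name, SNS topic ARN and thresholds are all made up:

```python
import boto3

# Sketch only: page the on-call when a function starts throttling for five
# consecutive minutes. Function name and SNS topic ARN are placeholders.
cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="orders-fn-throttles",
    Namespace="AWS/Lambda",
    MetricName="Throttles",
    Dimensions=[{"Name": "FunctionName", "Value": "orders-fn"}],
    Statistic="Sum",
    Period=60,              # one-minute evaluation windows
    EvaluationPeriods=5,    # ...for five consecutive minutes
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:eu-central-1:123456789012:ops-alerts"],
)
```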
> The idea you can click your way to a highly available, production configured anything in AWS - especially involving Dynamo, IAM and Lambda - is something I've only heard from people who've done AWS quickstarts but never run anything at scale in AWS.
I'll bite. Explain exactly what work you think you need to do to get your pick of service running on Hetzner with equivalent fault-tolerance to, say, a DynamoDB Global Table created with the defaults.
Are you Netflix? Because if not, there's a 99% probability you don't need any of those AWS services and just have a severe case of shiny-object syndrome in your organisation.
Plenty of heavy-traffic, high-redundancy applications exist without the need for AWS's (or any other cloud provider's) overpriced "bespoke" systems.
To be honest, I don't trust myself to run an HA PostgreSQL setup with correct backups without spending an exorbitant effort investigating everything (weeks/months) - do you? I'm not even sure what effort that would take. I can't remember the last time I worked with an unmanaged DB in prod where I did not have a dedicated DBA/sysadmin. And I've been doing this for 15 years now. AFAIK Hetzner offers no managed database solution. I know they offer a load balancer, so there's that at least.
At some point in the scaling journey bare metal might be the right choice, but I get the feeling a lot of people here trivialize it.
If you're not Netflix, then just sudo yum install postgresql and pg_dump every day, upload to S3. Has worked for me for 20 years at various companies, side projects, startups …
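For reference, a minimal sketch of that nightly-dump-to-S3 approach (bucket name, database name and paths are assumptions; in practice you would run this from cron and keep credentials and retention policy outside the script):

```python
import datetime
import subprocess
import boto3

# Nightly pg_dump shipped to S3 -- roughly the "not Netflix" strategy above.
# Bucket and database names are hypothetical.
BUCKET = "my-db-backups"
DATABASE = "appdb"

def backup():
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d")
    dump_file = f"/tmp/{DATABASE}-{stamp}.sql.gz"

    # Dump and compress; assumes the local postgres user can connect via peer auth.
    with open(dump_file, "wb") as out:
        dump = subprocess.Popen(["pg_dump", DATABASE], stdout=subprocess.PIPE)
        subprocess.run(["gzip", "-c"], stdin=dump.stdout, stdout=out, check=True)
        dump.wait()

    # Upload the compressed dump; rely on an S3 lifecycle rule for retention.
    boto3.client("s3").upload_file(dump_file, BUCKET, f"postgres/{DATABASE}/{stamp}.sql.gz")

if __name__ == "__main__":
    backup()
```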
> If you're not Netflix, then just sudo yum install postgresql and pg_dump every day, upload to S3.
Database services such as DynamoDB support a few backup strategies out of the box, including continuous backups. You just need to flip a switch and never bother about it again.
> Has worked for me for 20 years at various companies, side projects, startups …
That's perfectly fine. There are still developers who don't even use version control at all. Some old habits die hard, even when the whole world moved on.
That doesn’t give you high availability; it doesn’t give you monitoring and alerting; it doesn’t give you hardware failure detection and replacement; it doesn’t solve access control or networking…
Managed databases are a lot more than apt install postgresql.
If you're doing it yourself, learn Ansible, you'll do it once and be set forever.
You do not need "managed" database services. A managed database is no different from apt install postgesql followed by a scheduled backup.
Genuinely no disrespect, but these statements really make it seem like you have limited experience building an HA scalable system. And no, you don't need to be Netflix or Amazon to build software at scale, or require high availability.
Backups with wal-g and recurring pg_dump are indeed trivial. (Modulo an S3 outage lasting so long that your WAL files fill up the disk and you corrupt the entire database.)
It's the HA part, especially with a high-volume DB, that's challenging.
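A small guard against exactly that failure mode, as a sketch (the path and threshold are assumptions, and a real setup would page rather than print):

```python
import shutil
import sys

# Hypothetical path/threshold: warn when the volume holding pg_wal drops below
# 20% free, well before a stalled archive_command can fill the disk.
WAL_PATH = "/var/lib/postgresql/16/main/pg_wal"
MIN_FREE_FRACTION = 0.20

def check_wal_volume() -> int:
    usage = shutil.disk_usage(WAL_PATH)
    free_fraction = usage.free / usage.total
    if free_fraction < MIN_FREE_FRACTION:
        # In a real setup this would hit PagerDuty/SNS/etc. instead of stderr.
        print(f"WAL volume only {free_fraction:.1%} free", file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(check_wal_volume())
```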
But that's the thing - if I have an ops guy who can cover this, then sure, it makes sense - but who does at an early stage? As a semi-competent dev I can set up a Terraform infra and be relatively safe with RDS. I could maybe figure out how to do it on my own in some time - but I don't know what I don't know - and I don't want to spend a weekend debugging a production DB outage because I messed up the replication setup or something. Maybe I'm getting old but I just don't have the energy to deal with that :)
> Are you Netflix? Because if not, there's a 99% probability you don't need any of those AWS services and just have a severe case of shiny-object syndrome in your organisation.
I think you don't even understand the issue you are commenting on. It's irrelevant whether you are Netflix or some guy playing with a tutorial. One of the key traits of serverless offerings is how they eliminate the need to manage and maintain a service, or even worry about whether you have enough computational resources. You click a button to provision everything, you configure your clients to consume that service, and you are done.
If you stop to think about the amount of work you need to invest just to arrive at a point where you can actually point a client at a service, you'll see where the value of serverless offerings lies.
Ironically, it's the likes of Netflix who can put together a case against using serverless offerings. They can afford to have their own teams managing their own platform services at the service levels they are willing to pay for. For everyone else, unless you are in the business of managing and tuning databases or you are heavily motivated to save pocket change on a cloud provider bill, the decision process is neither that clear cut nor does it favour running your own services.
> Plenty of heavy-traffic, high-redundancy applications exist without the need for AWS's (or any other cloud provider's) overpriced "bespoke" systems.
And almost all of them need a database, a load balancer, maybe some sort of cache. AWS has got you covered.
Maybe some of them need some async periodic reporting tasks. Or to store massive files or datasets and do analysis on them. Or transcode video. Or transform images. Or run another type of database for a third party piece of software. Or run a queue for something. Or capture logs or metrics.
And on and on and and on. AWS has got you covered.
This is Excel all over again. "Excel is too complex and has too many features, nobody needs more than 20% of Excel. It's just that everyone needs a different 20%".
You're right, AWS does have you covered. But that doesn't mean that's the only way of doing it. Load balancing is insanely easy to do yourself, databases even easier. Caching, ditto.
I think a few people who claim to be in devops could do with learning the basics of how things like Ansible can help them, as there are a fair few people who seem to be under the impression that AWS is the only, and the best, option, which, unless you're FAANG, is rarely the case.
You can spin up a redundant database setup with backups and monitoring and automatic fail over in 10 mins (the time it takes in AWS)? And maintain it? If you've done this a few times before and have it highly automated, sure. But let's not pretend it's "even easier" than "insanely easy".
Load balancing is trivial unless you get into global multicast LBs, but AWS have you covered there too.
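To give a concrete sense of what the "10 minutes on AWS" path looks like for the database part, here is a minimal boto3 sketch; the identifier, instance class and credentials are placeholders, and a real setup would add subnet groups, parameter groups and monitoring on top:

```python
import boto3

# Sketch only: provision a Multi-AZ Postgres instance with automated backups.
# RDS manages the standby, backups and failover once MultiAZ is enabled.
rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="app-db",          # placeholder name
    Engine="postgres",
    DBInstanceClass="db.t4g.medium",
    AllocatedStorage=100,
    MultiAZ=True,                           # synchronous standby + automatic failover
    BackupRetentionPeriod=7,                # daily snapshots + point-in-time recovery
    MasterUsername="app",
    MasterUserPassword="change-me",         # use Secrets Manager in real life
)
```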
> You're right, AWS does have you covered. But that doesn't mean that's the only way of doing it. Load balancing is insanely easy to do yourself, databases even easier. Caching, ditto
I think you don't understand the scenario you are commenting on. I'll explain why.
It's irrelevant if you believe that you are able to imagine another way to do something, and that you believe it's "insanely easy" to do those yourself. What matters is that others can do that assessment themselves, and what you are failing to understand is that when they do so, their conclusion is that the easiest way by far to deploy and maintain those services is AWS.
And it isn't even close.
You mention load balancing and caching. The likes of AWS allow you to set up a global deployment of those services with a couple of clicks. In AWS it's a basic configuration change. And if you don't want it any more, you just tear everything down with a couple of clicks as well.
Why do you think a third of all the internet runs on AWS? Do you think every single cloud engineer in the world is unable to exercise any form of critical thinking? Do you think there's a conspiracy out there to force AWS to rule the world?
If you need the absolutely stupid scale DynamoDB enables, what is the difference compared to running, for example, FoundationDB on your own using Hetzner?
> The key thing you should ask yourself: do you need DynamoDB or Lambda? Like "need need" or "my resume needs Lambda".
If you read the message you're replying to, you will notice that I singled out IAM, Lambda, and DynamoDB because those services were affected by the outage.
If Hetzner is pushed as a better or even relevant alternative, you need to be able to explain exactly what you are hoping to say to Lambda/IAM/DynamoDB users to convince them that they would do better if they used Hetzner instead.
Making up conspiracy theories over CVs doesn't cut it. Either you know anything about the topic and you actually are able to support this idea, or you're an eternal September admission whose only contribution is noise and memes.
TBH, in my last 3 years with Hetzner, I never saw downtime on my servers other than my own routine maintenance for OS updates. Location: Falkenstein.
You really need your backup procedures and failover procedures though; a friend bought a used server and the disk died fairly quickly, leaving him sour.
Younger guy with ambitions but little experience. I think my point was that used servers with Hetzner are still used, so if someone has been running disk-heavy jobs you might want to request new disks, or multiple ones, and not just pick the cheapest options at the auction.
(Interesting that an anecdote like the above got downvoted)
> (Interesting that an anecdote like the above got downvoted)
Experts almost universally judge newbies harshly, as if the newbies should already know all of the mistakes to avoid. Things like this are how you learn which mistakes to avoid.
"hindsight is 20/20" means nothing to a lot of people, unfortunately.
What is the Hetzner equivalent for those in Windows Server land? I looked around for some VPS/DS providers that specialize in Windows, and they all seem somewhat shady with websites that look like early 2000s e-commerce.
I work at a small / medium company with about ~20 dedicated servers and ~30 cloud servers at Hetzner. Outages have happened, but we were lucky that the few times it did happen, it was never a problem / actual downtime.
One thing to note is that there are some scheduled maintenances where we needed to react.
We've been running our services on Hetzner for 10 years, never experienced any significant outages.
That might be datacenter-dependent of course, since our root servers and cloud services are all hosted in Europe, but I really never understood why Hetzner is said to be less reliable.
> 99.99% uptime infra significantly cheaper than the cloud.
I guess that's another person that has never actually worked in the domain (SRE/admin) but still wants to talk with confidence on the topic.
Why do I say that? Because 99.99% is frickin easy
That's almost one full hour of complete downtime per year.
It only gets hard in the 99.9999+ range ... And you rarely meet that range with cloud providers either as requests still fail for some reason, like random 503 when a container is decommissioned or similar
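The arithmetic behind those numbers, for anyone who wants to check (plain back-of-the-envelope, assuming nothing beyond a 365-day year):

```python
# Downtime budget per year for a given availability target.
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.999, 0.9995, 0.9999, 0.999999):
    budget = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.4%} -> {budget:8.1f} minutes/year ({budget / 60:.2f} h)")

# 99.99% works out to ~52.6 minutes/year (just under the hour mentioned above);
# 99.9999% leaves only ~32 seconds/year, which is where it gets hard.
```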
The only hard dependency I am still aware of is write operations to the R53 control plane. Failover records and DNS queries would not be impacted. So business workflows would run as if nothing happened.
(There may still be some core IAM dependencies in USE1, but I haven’t heard of any.)
We don't know that (yet) - it's possible that this is simply a demonstration of how many companies have a hard dependency on us-east-1 for whatever reason (which I can certainly believe).
We'll know when (if) some honest RCAs come out that pinpoint the issue.
I had a problem with an ACME cert Terraform module. It was calling R53 to add the DNS TXT record for the ACME challenge and then querying the change status from R53.
R53 seems to use Dynamo to keep track of the syncing of the DNS across the name servers, because while the record was there and resolving, the change set was stuck in PENDING.
After DynamoDB came back up, R53's API started working.
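For context, the dependency shows up in code that looks roughly like this (a sketch, not the parent's actual Terraform module; the zone ID and record are hypothetical). The `get_change` status is what stayed stuck in PENDING during the outage:

```python
import time
import boto3

# Sketch: create the ACME DNS-01 TXT record, then poll the change until
# Route 53 reports it as synced to all name servers. Zone/record are made up.
route53 = boto3.client("route53")

resp = route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "_acme-challenge.example.com.",
                "Type": "TXT",
                "TTL": 60,
                "ResourceRecords": [{"Value": '"token-from-acme-server"'}],
            },
        }]
    },
)

change_id = resp["ChangeInfo"]["Id"]
while route53.get_change(Id=change_id)["ChangeInfo"]["Status"] != "INSYNC":
    time.sleep(5)  # during the outage this loop never left PENDING
```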
>Just a couple of days ago in this HN thread [0] there were quite a few users claiming Hetzner is not an option because its uptime isn't as good as AWS's, hence the higher AWS pricing is worth the investment. Oh, the irony.
That's not necessarily ironic. Seems like you are suffering from recency bias.
I'm not affiliated and won't be compensated in any way for saying this: Hetzner are the best business partners ever. Their service is rock solid, their pricing is fair, their support is kind and helpful.
Going forward I expect American companies to follow this European vibe; it's like the opposite of enshittification.
Stop making things up. As someone who commented on the thread in favour of AWS, there is almost no mention of better uptime in any comment I could find.
I could find one or two downvoted or heavily criticized comments, but I can find more people mentioning the opposite.
I don't know how often Hetzner has similar outages, but outages at the rack and server level, including network outages and device failures, happen for individual customers. If you've never experienced this, it is probably just survivorship bias.
AWS/cloud has similar outages too, but with more redundancy and automatic failovers/migrations that are transparent to customers. You don't have to worry about DDoS and many other admin burdens either.
YMMV; I'm just saying sometimes AWS makes sense, other times Hetzner does.
It's less about company loyalty and more about protecting their investment into all the buzzwords from their resumes.
As long as the illusion that AWS/clouds are the only way to do things continues, their investment will keep being valuable and they will keep getting paid for (over?)engineering solutions based on such technologies.
The second that illusion breaks down, they become no better than any typical Linux sysadmin, or teenager ricing their Archlinux setup in their homelab.
I'm a tech worker, and have been paid by a multi-billion dollar company to be a tech worker since 2003.
Aside from Teams and Outlook Web, I really don't interact with Microsoft at all, haven't done since the days of XP. I'm sure there is integration on our corporate backends with things like active directory, but personally I don't have to deal with that.
Teams is fine for person-person instant messaging and video calls. I find it terrible for most other functions, but fortunately I don't have to use it for anything other than instant messaging and video calls. The linux version of teams still works.
I still hold out a healthy suspicion of them from their behaviour when I started in the industry. I find it amusing the Microsoft fanboys of the 2000s with their "only needs to work in IE6" and "Silverlight is the future" are still having to maintain obsolete machines to access their obsolete systems.
Meanwhile the stuff I wrote to be platform-agnostic 20 years ago is still in daily use, still delivering business benefit, with the only update being a change from "<object" to "<video" on one internal system when flash retired.
AWS and Cloudflare are HN darlings. Go so far as to even suggest a random personal blog doesn't need Cloudflare and get downvoted with inane comments as "but what about DDOS protection?!"
The truth is no one under the age of 35 is able to configure a webserver any more, apparently. Especially now that static site generators are in vogue and you don't even need to worry about php-fpm.
Well, we have a naming issue (Hetzner also has Hetzner Cloud; it looks like people still equate "cloud" with the three biggest public cloud providers).
In any case, in order for this to happen, someone would have to collect reliable data (not all big cloud providers like to publish precise data; usually they downplay the outages and use weasel words like "some customers... in some regions... might have experienced" just to avoid admitting they had an outage) and present stats comparing the availability of Hetzner Cloud vs the big three.
Thanks for that link. It seems with that introduction, they also lowered prices on the dedicated-core on their vservers - at least I was paying 15€/month and now they seem to offer it for 12€/month. I will try to see if shared performance is an option for the future.
> A lot of Apple hardware is impressive on paper, but I will never buy a Mac that can't run Linux.
They actually run Linux very well - have you ever tried Parallels or VMware Fusion? Parallels especially ships with good software drivers for 2D/3D/video acceleration, suspend, and integration into the host OS. If that is not your thing, the new native container solution in Tahoe can run containers from Docker Hub and co.
> I simply don't want to live in Apple's walled garden.
And what walled garden would that be on macOS? You can install what you want, and there is homebrew at your fingertips with all the open and non-open software you can ask for.
Last I looked... extensive telemetry and a sealed boot volume that makes it impractical to turn off even if theoretically possible. There are other problems of course.
You can disable SIP and even disable immutable kernel text, load arbitrary drivers, enable/disable any feature, remove any system daemon, use any restricted entitlements. The entire security model of macOS can be toggled off (csrutil from recoveryOS).
Stopping/disabling a service should be a command, like it is on Windows or Linux. Not configured on a read-only volume bundled with other security guarantees.
It's pretty simple to keep these two things separate, like everywhere else in the present and history of the industry.
No, I meant to reply to you. I was curious about your practical use case for disabling code signing (which I think is what you refer to by telemetry) and messing with the boot volume.
From what I checked, disabling SIP/AMFI/whatever it is now means I can't run iOS applications on macOS. The fact that there are restrictions on what I can run when doing that makes macOS more restrictive.
Also, what if I want to run eBPF on my laptop on bare metal, to escape the hypercall overhead from VMs or whatever? Ultimately, a VM is not the same as a native experience. I might want to take advantage of acceleration for peripherals that aren't available unless I'm bare metal.
That point is often brought up, but it is kind of invalid, because you can't run iOS on your Linux or Windows installation either. So saying that because of that use case you are switching the OS is kind of a spite reaction, not based on reason.
As in: "I can't run iOS on my macOS installation, so I am going to use a different OS where I can't run iOS either".
Well, it's less of a feature argument, and more of a "I philosophically don't support using an OS that prevents me from using parts of it, because I oppose losing control over the software my system runs."
I switched from pixel to iPhone in large part because pixel removed the rear fingerprint reader, headphone jack, and a UI shortcut I used multiple times a day. It’s not like the iPhone had those things but now neither did the pixel.
How does Asahi fare these days? For home use I am fine with my Fedora machine but as a former (Tiger-SL era) Mac user who's never used macOS, I am somewhat curious about this.
Remember Asahi works properly only on M1 and M2. More work is required to make it run well on later chips (it's not just a faster ARM chip - it's a new graphics card each time, motherboard chipset, every laptop peripheral changes from time to time, BIOS/UEFI, etc., and they all need reverse-engineered drivers to work).
... or UTM. I have run windows and Linux on my M1 MB Pro with plenty of success.
Windows - because I needed it for a single application.
Linux - has been extremely useful as a complement to the small ARM SBCs that I run. E.g. compiling a kernel is much faster there than on (say) a Raspberry Pi. Also, USB device sharing makes working with vfat/ext4 filesystems on small memory cards a breeze.