
Or DigitalOcean, which also offers many of the big-cloud perks like managed databases, object storage, load balancers, and Kubernetes.


There's one critical thing that DigitalOcean, Linode, Vultr, etc. don't provide though: multiple data centers in the same region ("availability zones") with automated cross-zone failover and private networking between zones.


You were pre-conditioned to believe that it's a feature. Traditionally, the datacenter itself is supposed to provide HA for both power and network.

"Availability zones" exist for AWS convenience, not for yours. It allows for cheaper and simpler DC ops, removes the need for redundant generators, simplifies network design, and makes "under-cloud" maintenance easier. It's a feature for them, a headache for you, and a (brilliantly addressed) challenge for AWS product marketing.


I don’t know about you, but I’ve been in several DC power failures where the fault was in the transfer switch.

It sure is nice to have separate failure domains with low enough latency between them to pretty much ignore it in application architectures.


Colos will rarely lose power, but having your line cut by a backhoe is pretty common. Even in top-tier facilities I observed some loss of service every 6-12 months; add in some misconfiguration risk and colo failure becomes a frighteningly common affair.

This can be mitigated through redundant service providers, careful checks on shared interconnects and other measures - but having "hard" failure isolation at the facility level will also get you there with less chance of someone doing something dumb.


This kind of thinking is how you end up in a newspaper article where you're in a building in New York babysitting a generator during a hurricane while everyone sane is serving from Atlanta.

You're doing it wrong. Plan to lose sites. If you plan to never lose a building, you're just setting yourself up for pain by optimizing for the wrong kind of redundancy.


I disagree. AZs are completely independent data centers kilometres apart. For any business that needs low latency but still wants full HA (e.g. finance systems), it's a blessing. This requirement cannot be fully covered by separate regions (they are too far apart for the latency), and with a single data center, something like an airplane crash would still take everything out.


BinaryLane in Australia has VPCs that span datacenters.


Not gonna lie, I have been scared to deploy to DigitalOcean after that debacle [0] posted here last year. It's worth the premium to me not to worry about that at night. Yes, AWS could shut me down too, but the probability seems lower.

[0] "Digital Ocean Killed Our Company", Hacker News, May 2019. https://news.ycombinator.com/item?id=20064169


Yep, now that DO has managed DB instances it’s become my go-to provider for most setups.

Droplets (VPS) have generally worked out cheaper than EC2 (with more resources) for me.

The bit that initially sold me on it a while ago was not having to worry about CPU credits on the small instances (T2), i.e. consistently maxing the CPU out at 100% and then getting throttled.
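If you do end up on burstable instances, you can at least see the throttling coming. A rough sketch using the AWS CLI (the instance ID is a placeholder), querying the CPUCreditBalance CloudWatch metric that T2/T3 instances drain while bursting:

    # Placeholder instance ID; the balance drops while the instance
    # bursts above its baseline, and throttling follows when it hits
    # zero (on standard-mode instances).
    aws cloudwatch get-metric-statistics \
      --namespace AWS/EC2 \
      --metric-name CPUCreditBalance \
      --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
      --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
      --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
      --period 300 --statistics Average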

Also the web interface isn’t a mishmash of 1000 services with different UX for each section.

If DO isn’t cheap enough for people, there’s also Vultr which works out even cheaper ($2.50 a month for 1cpu/512mb) if you want something similar (not bare metal).


If you are extremely cheap like me, only have to run cronjobs at certain times, or only need your instances running at predictable times, you can also use the [AWS Instance Scheduler](https://aws.amazon.com/solutions/implementations/instance-sc... ). At my last job we were running SAP on EC2, and we were able to lower our bill by about 50% by only running the instance 9-5 Mon-Fri. Now I use the Instance Scheduler for running cronjobs every day on a T3a instance and it costs less than $1 USD per month. You can also configure your cronjob to stop the instance once it ends; that way the scheduler only needs to start it and you'll save the most (./myscript && shutdown -h now).
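As a concrete sketch of that pattern (paths and schedule invented for illustration): the scheduler powers the instance on, cron fires the job, and the job halts the machine when it finishes:

    # /etc/crontab entry, hypothetical paths: run the job at 03:00,
    # then power off so the scheduler only ever has to start the box.
    0 3 * * * root /opt/jobs/myscript.sh && /sbin/shutdown -h now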


Are bottom-of-the-barrel VMs really what AWS is aiming for, though? Or is it the auto-scaling, variable, HA workloads...


Depending on the amplitude of your load cycle, it may be cheaper to just stay fully provisioned all the time on another provider.
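The back-of-the-envelope math is simple. With purely illustrative numbers, say a $40/month fixed server versus on-demand capacity at $0.10/hour:

    # Break-even utilization: fixed price / hourly rate, as a share
    # of a ~730-hour month. Above this, the fixed box wins.
    echo "scale=3; 40 / 0.10 * 100 / 730" | bc   # ~54.8%

So if your instances need to be up much more than half the time, always-on capacity elsewhere can already be the cheaper option.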


> Vultr which works out even cheaper ($2.50 a month for 1cpu/512mb)

I can see that price on the pricing page, but when I go to the Deploy New Server page I can't see it; the minimum is $5.


New York and Atlanta DCs only.

I wrote down pros/cons of various ~$5 and below VPS services in a Gist I have been maintaining for a couple of years: https://gist.github.com/frafra/4688b146ca6d55accb768c3557939...


Hope you can add Linode here. I've been a long-time user and it's a pretty good & popular option imo.


Not if you need any CPU performance to speak of. Given the misleading "vCPUs" that we are sold, I took some time to benchmark several major cloud providers and the results were… worrying: https://jan.rychter.com/enblog/cloud-server-cpu-performance-...
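The linked article uses its own workload, but even something quick like sysbench makes the gap visible (numbers vary a lot by provider and instance type):

    # Single-threaded run: what one "vCPU" is actually worth.
    sysbench cpu --threads=1 --time=30 run
    # All advertised vCPUs: what they deliver under sustained load.
    sysbench cpu --threads="$(nproc)" --time=30 run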

BTW, I run my SaaS on real servers from Hetzner. I figured I wouldn't need instant auto-scaling anyway, and if you provision with Ansible it doesn't really matter whether it's an EC2 instance or a real server. What does matter is price and performance: I get servers that are significantly faster than anything you can get on AWS, and at a much better price.
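To illustrate the portability point (inventory names are hypothetical): the playbook stays the same and only the inventory changes, so a Hetzner box and an EC2 instance are interchangeable targets:

    # Identical roles and tasks; only the host list differs.
    ansible-playbook -i inventories/hetzner.ini site.yml
    ansible-playbook -i inventories/ec2.ini site.yml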

To be honest, I do not understand the drive towards AWS. It makes sense in micro-deployments (Lightsail and/or Lambda, when your usage is sporadic) and in large deployments that need dynamic scaling. But it does not make much sense anywhere in the middle.


Good article, thank you for it. I've only heard good things about Hetzner from many people, including about their networking setup.

As an owner of an iMac Pro (the 10c/20t CPU variant) I have to tell you that you are slightly mistaken -- the Xeon W-2150B is a pretty strong CPU even compared to the bursty high-end desktop i9 CPUs. Sure, I mostly do parallel compilation and work with (and produce) highly parallel programs, and there it really shines.

But even for the occasional huge JS compilation it still performs pretty well. (Turbo Boost puts it at about 4.3 GHz; that's the max I've seen.)

The iMac Pro's only real downside is that it has only 4 memory channels, so as lightning-fast as its NVMe SSD is, and as efficient as the CPU with its huge caches is, the machine could likely perform anywhere from 30% to 80% faster if it had 8 memory channels.

But in general, the iMac Pro is a very, and I really mean very, solid machine for developers, designers, and professional software users.

I mean, all of that doesn't matter much in a world where the Threadripper 3960X / 3970X / 3990X now exist, but still. The iMac Pro is still the best Mac you can buy (the Mac Pro 2019 uses the very last possible high-end desktop Xeons, and I don't think Intel will be making many more of these; AMD is definitely on their tail and I don't think Apple will produce that many Mac Pros).

That being said, I am looking forward to buying a Threadripper 3990X monster machine somewhere in the next 1-2 years, with dual 4K or 5K displays. Hopefully the Linux community will finally get its act together and implement proper sub-pixel font anti-aliasing by then...


> As an owner of an iMac Pro (the 10c/20t CPU variant) I have to tell you that you are slightly mistaken -- the Xeon W-2150B is a pretty strong CPU even compared to the bursty high-end desktop i9 CPUs.

I do not think I am mistaken. I haven't yet encountered a Xeon that can beat the desktop i9 in single-core performance. Looking at Geekbench scores (https://browser.geekbench.com/mac-benchmarks), my iMac is 10.5% faster in single-core than your iMac Pro, and your iMac Pro is about 15% faster in multi-core. For development work, I will take single-core performance any day.


Use cases and optimizing for them, I suppose.

I do mostly parallel work, and for me the Xeon has proved itself a better CPU than several desktop- and laptop-grade CPUs I also tried.


Words of sense are rarely heard.

There have been a lot of comparisons and benchmarks, and there will be even more.

At some point monthly bills grow to the scale where it makes perfect sense to invest money and time into your own infra. Yet people will still be throwing money into the hype oven.


I use Linode, but it was $10 or more; I've just seen there's also a $5 VPS called "Nanode". I added it to the list and will test it more in the future. Thank you!


Your list is missing quite a few major providers in the "low end" VPS space, such as BuyVM, Ramnode, Virmach, HostHatch, and probably others I'm forgetting.


Thanks, I read your comment on the gist too; I added them, but I have no direct experience. My gist was not intended to be complete, just a list of mostly well-known EU VPS providers. A proper Git repository with a nice Markdown table would probably be better.


I personally use RamNode (a very small, 128M instance as a backup DNS in their Amsterdam DC) and can't really fault them. They've been rock solid so far.


Heard good stuff 'bout Ramnode.



Ah OK, I am using a $5 DO server as a file server in Singapore. I thought I might save a few more bucks by moving to Vultr, but it's OK, $5 is pretty cheap.


This would work much better as an online spreadsheet. Thank you for the effort though!


CloudSigma is another player in that space. (No affiliation)


You are right, I had never heard of them, thanks. I just added them to the list.


Do you host your entire stack on DO, or split compute and DB (and eat the latency hit)?


They also offer spamming services, phishing & Trojan hosting, and will block abuse reports for weeks at a time :)


Put a different way: they won't assume your service is abusive based on a few reports, and will allow you some time to address complaints yourself.


Is there a better way to handle such incidents (serious question, no sarcasm)? I feel like being too receptive to abuse reports would allow anyone to take down your service by submitting fraudulent abuse reports.



