February 28, 2017. S3 went down and took down a good portion of AWS and the Internet in general. For almost the entire time that it was down, the AWS status page showed green because the up/down metrics were hosted on... you guessed it... S3.
I used to work at a company where the SLA was measured as the percentage of successful requests on the server. If the load balancer (or DNS or anything else network) was dropping everything on the floor, you'd have no 500s and 100% SLA compliance.
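A toy illustration of that gap (numbers entirely made up): requests that the load balancer or DNS drops never reach the server logs at all, so the server-side number literally can't see them.

```python
# Hypothetical numbers, just to show the gap between the two views.
server_log = [200] * 9_900 + [500] * 100                    # what the app servers saw
client_view = [200] * 6_930 + [None] * 2_970 + [500] * 100  # None = dropped before the server

def success_rate(statuses):
    ok = sum(1 for s in statuses if s is not None and s < 500)
    return ok / len(statuses)

print(f"server-side SLA: {success_rate(server_log):.2%}")   # ~99% -- looks compliant
print(f"client-side SLA: {success_rate(client_view):.2%}")  # ~69% -- what users actually saw
```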
I’ve been a customer of at least four separate products where this was true.
I can’t explain why Saucelabs was the most grating one, but it was. I think it’s because they routinely had 100% downtime for 1% of customers, and we were in that one percent about twice a year. <long string of swears omitted>
I spent enough time ~15 years back to find an external monitoring service that did not run on AWS and looked like a sustainable business rather than a VC-fueled acquisition target - for our belts-n-braces secondary monitoring tool, since it's not smart to trust CloudWatch to be able to send notifications when it's AWS's own shit that's down.
Sadly, while I still use that tool a couple of jobs/companies later, I no longer recommend it because it migrated to AWS a few years back.
(For now, my out-of-AWS monitoring tool is a bunch of cron jobs running on a collection of various inexpensive VPSes plus my and other devs' home machines.)
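Nothing fancy, roughly a probe like this, fired from crontab every few minutes (the URL and webhook below are placeholders, not any real service):

```python
#!/usr/bin/env python3
# check_site.py -- minimal sketch of an out-of-AWS uptime probe.
# Run from cron, e.g.:  */5 * * * * /usr/bin/python3 /home/me/check_site.py
import json
import urllib.request

TARGET = "https://example.com/healthz"                # hypothetical endpoint
ALERT_WEBHOOK = "https://hooks.example.com/notify"    # hypothetical alert hook

def alert(message: str) -> None:
    # Post a plain JSON payload to whatever notification hook you use.
    body = json.dumps({"text": message}).encode()
    req = urllib.request.Request(
        ALERT_WEBHOOK, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req, timeout=10)

try:
    urllib.request.urlopen(TARGET, timeout=10)        # raises on 4xx/5xx responses
except Exception as exc:                              # DNS failure, timeout, TLS, 5xx...
    alert(f"{TARGET} looks down: {exc}")
```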
Interestingly, the reason I originally looked for and started using it was an unapproved "shadow IT" response to an in-house Nagios setup that was configured and managed so badly it had _way_ more downtime than any of the services I'd get shouted at about if customers noticed them down before we did...
(No disrespect to Nagios, I'm sure a competently managed installation is capable of being way better than what I had to put up with.)
You need to send a photo of your FSB / KGB ID to be recognized as a true conservative from the USA, plus you need to post the propaganda of the day.
> Also many families try to push elderly to hospital for few days.
And rightly so. As people age and their health deteriorates, often a few days' monitoring and nursing care can forestall downward spirals, or catch sudden downturns so family can react appropriately, instead of the usual sudden crises and desperate scrambles. This is nursing, and hospitals used to do it, which I think is what your comment implies. But these days, without the right golden-key "diagnosis", nothing happens, and those diagnoses are certainly deployed "strategically".
There's no other accessible institution that really provides this kind of care, which dumps the responsibility on the beleaguered "community", as the social workers call it. If you happen to lack the wealth, or a willing and empowered family, to take care of this, I strongly suggest not getting sick. Or old.
Yeah, I heard about when ChatGPT removed, I think it was 4o? Apparently there's an entire community devoted to "role playing" with, or rather dating, a character powered by the model. GPT-5 broke their entire relationship. A few friends and I agreed it might have been worth spinning up a GPT-4o-based dating service so those people could pay to migrate their AI 'companions' to it. Azure still has these "abandoned" models.
On another note, it also helps OpenAI because they don't have to manage setting up all that infrastructure just to let others use the model.
At the end of the day it is not the CEO who invents / designs / builds the stuff.
I mean, they could recognize that Nvidia has the better ecosystem and provide something similar (the CEO's job is to set that direction), but saying "just make better chips" is kind of funny.
I mean, I've worked at companies where the CEO couldn't figure this out, but come on. It's AMD.
This is just the storage cost. That is, they will keep your data on their servers, nothing more.
Now if you want to do something with the data, that's where you need to hold on to your wallet. Either you use their compute ($$$ for Amazon) or you send it to your own data centre (egress means $$$ for Amazon).
When you start to do the math, hard drives are cheap when you go for capacity rather than performance.
0.00099 * 1000 is 0.99, so about $12 a year per TB. Now extrapolate over a 5- or 10-year period and you get to $60 to $120 per TB. Even at 3x to 5x redundancy, those numbers start to add up.
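Roughly, in code (the $0.00099/GB-month figure is the one quoted above; the ~$15/TB drive price is my own assumption for large capacity HDDs):

```python
# Back-of-the-envelope math from the comment above.
price_per_gb_month = 0.00099       # $/GB-month, the quoted storage price
gb_per_tb = 1000

monthly = price_per_gb_month * gb_per_tb   # ~$0.99 per TB-month
yearly = monthly * 12                      # ~$11.88 per TB-year
print(f"1 TB stored: ${monthly:.2f}/month, ${yearly:.2f}/year")
print(f"5 years: ${yearly * 5:.0f}/TB, 10 years: ${yearly * 10:.0f}/TB")

# Compare with buying capacity drives yourself (~$15/TB assumed, not from the comment).
drive_cost_per_tb = 15
for copies in (3, 5):
    print(f"{copies}x copies on your own drives: ~${drive_cost_per_tb * copies}/TB, one-time")
```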
S3 does not spend 3x the drives to provide redundancy. Probably 20% more drives, or something like that. They split data into chunks and use erasure coding to store them across multiple drives with little overhead.
AFAIK geo-replication between regions _does_ replicate the entire dataset. It sounds like you're describing RAID configurations, which are a common way to provide redundancy and increased performance within a given disk array. They definitely do that too, but within a zone.
You have a 100-byte file. You split it into 10 chunks (data shards) and add an 11th chunk (parity shard) computed as the XOR of all 10 chunks. Now you store every chunk on a separate drive. So you have 100 bytes and you spent 110 bytes to store them all. Now you can survive one drive death, because you can recompute any missing chunk as the XOR of all the surviving chunks.
That's a very primitive explanation, but it should be easy to understand.
In reality S3 uses a different algorithm (probably Reed-Solomon codes) and some undisclosed number of shards (probably different for different storage classes). Some say they use 5 of 9 (so 5 data shards + 4 parity shards, which makes for 80% overhead), but I don't think that's official information.
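A toy version of the XOR scheme from the example above, just to make it concrete (10 data shards + 1 parity shard, ~10% overhead; any single lost shard can be rebuilt by XORing the rest):

```python
from functools import reduce

data = bytes(range(100))                                   # the "100-byte file"
shards = [data[i * 10:(i + 1) * 10] for i in range(10)]    # 10 data shards
# Parity shard: byte-wise XOR across all data shards.
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*shards))
shards.append(parity)                                      # 11th shard; 110 bytes stored total

lost = 3                                                   # pretend drive 3 died
survivors = [s for i, s in enumerate(shards) if i != lost]
# XOR of the 10 surviving shards reproduces the missing one.
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
assert rebuilt == shards[lost]
```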
Mate, this is better than an entire nation's data getting burned.
Yes, it's pricey, but possible.
Now it's literally impossible.
I think AWS Glacier should have been the preferred option at that scale. They had their own in-house data too, but they still should have wanted an external backup, and they're literally the government, so they of all people shouldn't worry about prices.
Have secure, encrypted backups in AWS and elsewhere too, and try to design the system around the threat model, in the sense of absolutely filtering THE MOST important stuff out of those databases. But that would require labelling it, which I suppose would draw even more attention from anyone trying to exfiltrate / send it to the likes of North Korea or China, so it's definitely a mixed bag.
My question, as I've said multiple times: why didn't they build a backup in South Korea only, using some other datacentre in South Korea as the backup, so they wouldn't have to worry about the encryption thing? I don't really know. IMO it would actually make more sense for them to have a backup in AWS and not worry about encryption, personally, since I find the tangents about breaking encryption a bit unreasonable; if that's the case, then all bets are off and the servers would get hacked too, and that was the point of the Phrack piece about the advanced persistent threat, and so much more...
Are we all forgetting that Intel has a proprietary OS, MINIX, running in the most privileged state, which can even take Java bytecode over the network and execute it, and it's all proprietary? That is a bigger security threat to me personally, if they are indeed using that, which I suppose they might be.
If the server didn't work, the tool to measure it didn't work either! Genius.