It looks from the public writeup like the thing programming the DNS servers didn't acquire a lease to prevent concurrent access to the same record set. I'd love to see the internal details in that COE.
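Roughly the kind of guard I'd expect, as a toy Python sketch (the lease store and every name here are invented for illustration, not anything from the actual system):

```python
import time
import uuid

class LeaseStore:
    """Hypothetical in-memory lease table keyed by record set."""

    def __init__(self):
        self._leases = {}  # record_set -> (holder, expiry)

    def try_acquire(self, record_set: str, holder: str, ttl_s: float) -> bool:
        now = time.monotonic()
        current = self._leases.get(record_set)
        if current is None or current[1] < now:          # free or expired
            self._leases[record_set] = (holder, now + ttl_s)
            return True
        return current[0] == holder                      # re-entrant for the same holder

def update_record_set(leases: LeaseStore, record_set: str, plan: dict) -> None:
    holder = str(uuid.uuid4())
    # Without this guard, two planners can interleave writes to the same record set.
    if not leases.try_acquire(record_set, holder, ttl_s=30.0):
        raise RuntimeError(f"another writer holds the lease on {record_set}")
    print(f"applying {plan} to {record_set}")  # stand-in for the actual DNS write
```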
I think an extended outage exposes the shortcuts. If you have 100 systems, and one or two can't start fast from zero, and they're required to get back to running smoothly, well, you're going to have a longer outage. How would you deal with that? You'd uniformly subject your teams to start-from-zero testing. I suspect, though, that many teams are staring down a scaling bottleneck, or at least were for much of Amazon's life, and so scaling issues (how do we handle 10x usage growth in the next year and a half, which are the soft spots that will break) trump cold start testing. Then you get a cold start event, with the last one being 5 years ago, and 1 or 2 of your 100 teams fall over and it takes multiple hours, all hands on deck, to get them started.
I actually prefer a game where the rules mostly come from the DM. I think it is better if there is no players' handbook. The characters develop along their story arc, and at some point your character acquires new powers. Say your character has been spending a lot of time developing new combat moves: they kind of level up, and now the DM explains a new mechanic. Your character has become adept at disarming opponents and now gets such-and-such a bonus to attempt a disarm.
This is a lot to place on the DM, but I like the anarchy of a system like Dungeon Crawl Classics. You expect some of your characters to die. In one adventure my character, in a last-ditch effort to save himself, drank a potion of unknown origin; that potion turned him into a mithral statue. It was a fitting end to his short but eventful life.
Another character, played by a different player, managed through a long process involving books and negotiations with his patron to construct a demonic sentient flying dog through whom he could cast spells and see.
This kind of exploration, I think, encourages players to see their characters much more as characters than as machines to be min-maxed, and it is way more fun.
Give the DM total control to decide the dice rolls that determine the outcome of the shenanigans. If you try to hire an army of peasants, you're going to be dealing with appointing sergeants, logistics, mutiny, and desertion, all before you try to line them up to throw a ladder at some dude, which in the end is probably like a 1d20 >= AC for a chance of 1d4 damage, with of course crit tables, where on a critical success the dude might be tangled up in the ladder and fall over or something.
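For fun, that ladder attack as a few lines of Python (the doubled damage on a crit is just my placeholder for whatever the DM's crit table actually says):

```python
import random

def ladder_attack(ac: int) -> int:
    """Roll 1d20 against AC; on a hit deal 1d4 damage."""
    roll = random.randint(1, 20)
    if roll == 20:
        # Critical success: the DM consults the crit table
        # (tangled in the ladder, knocked prone, ...).
        # Doubling the damage here is just a stand-in.
        return 2 * random.randint(1, 4)
    return random.randint(1, 4) if roll >= ac else 0

print(ladder_attack(ac=12))
```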
People say "don't reinvent the wheel," usually in a business context, because writing from scratch is usually a lot more work than using existing technologies. Sure, reusing technologies is also a lot more work than you would expect, because most things suck (to different degrees), but so will your newly minted wheel. Only after a lot of hard lessons will it suck less, if at all.
That said, there are also contexts in which the existing system sucks so badly that rewriting it is usually a boon; even if the new wheel sucks, it sucks less from the start.
At a minimum you should engage with the existing wheels and their users to find the ways in which they do and don't work.
In your own time I think it is great to tinker, pull things apart, and assemble your own. Every Jedi makes her own lightsaber, right?
Especially at work, I find existing solutions often lacking. We tend to overestimate the complexity of reinventing many things, and underestimate the cost of ill-fitting abstractions.
In particular, Google-scale frameworks are usually harmfully over-engineered for smaller companies. Their challenge is solving Google's scale, not the problem at hand. At the same time, their complexity suggests that even a small-scale solution would require a hundred-programmer team to implement.
But I find that far from the truth. In fact, many problems benefit from a purpose-built, small-scale solution. I've seen this work out many times, and result in much simpler, easier-to-debug code. Google-scale frameworks are a very bad proxy for estimating the complexity of a task.
Am I alone in thinking that all the stuff I get for free (in exchange for some amount of targeted advertising) from Google is pretty cool, and that these attempts to break up big tech are going to be very bad for consumers and the economy, and are just punishing successful companies that produce products customers want to use? You can all use Mosaic/Edge if you want to.
You get nice stuff for free, right up until the moment Google decide that they've done enough. Then you get nothing. And the unfair funding and disparity in features means no competitors can ever provide a superior alternative.
And then it might not be for free. It's very tempting to rent-seek when you have a captive user base. That's bad for consumers and bad for the economy.
Focusing on what we get today is myopic, and it's not by mistake that Google gives it to us.
You are certainly not alone. I’d say you’re in the vast, vast majority, just not necessarily in our little corner of the Internet (Hacker News), though realistically probably the majority here as well.
Games are much easier than real work and provide more consistent dopamine hits with their graphics, sound effects, and feeling of progression. Factorio, while fun, is a long way from real work.
There seem to be a bunch of folks for whom shaking their legs is an important part of the process. It can be a bit distracting to others in a team workspace. It makes me wonder whether they should have bicycle desks.
Yeah, I'm the same. I can visualize my house. When debugging, if there are a large number of values across systems to keep track of, arranging the systems on a piece of paper just to quickly find the numbers associated with each one helps. But beyond that, when thinking about code it is all maths with no spatial or visual component, just logical statements and reasoning. E.g. when I think of a shuffle shard I don't visualize the sets, I just think: subsets of size k.
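To make the "subsets of size k" framing concrete, here's a toy Python sketch of shuffle sharding (purely illustrative: it enumerates every combination, which a real implementation wouldn't do, and the names are made up):

```python
import hashlib
import itertools

def shuffle_shard(customer_id: str, workers: list[str], k: int) -> list[str]:
    """Deterministically assign a customer a subset of k workers.

    With N workers there are C(N, k) possible shards, so two customers
    rarely share their entire subset, and a noisy neighbour only hurts
    the customers whose shards overlap with its own.
    """
    digest = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16)
    combos = list(itertools.combinations(sorted(workers), k))
    return list(combos[digest % len(combos)])

workers = [f"w{i}" for i in range(8)]   # 8 choose 3 = 56 possible shards
print(shuffle_shard("customer-42", workers, k=3))
```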
Only if the machine is directly connected to the internet and the malicious packet doesn't hit a firewall somewhere along the path.
Most laptops connected to Wi-Fi are indeed behind an AP or a SOHO router that does NAT, so the attacker won't be able to reach them directly, and direct reachability is a requirement for this to work.
Sure, but security isn’t about being 100% protected, which is impossible; it’s about lowering your attack footprint. Unless you have a ton of people hooking into your LAN regularly, this still greatly lowers your chances of getting hit with this particular security flaw by people on the WAN.
A useful target might be university networks, although IIRC our university printers weren’t available for discovery. Instead we would send our documents to a special email address that forwarded them to a local print server so we could get charged for them.
I can't help but wonder whether the major problem is actually APIs changing from version to version of the software, and keeping everything compatible.
If the build language is Lua, doesn't it support top-level variables? It probably just takes a few folks manipulating top-level variables before the build steps and build logic are no longer hermetic, but instead plagued by side effects.
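I haven't checked how this particular tool scopes things, but the general hazard looks something like this (Python standing in for the build language; everything here is made up):

```python
# A hypothetical build file: one shared top-level variable and two rules.
opt_level = "O2"   # top-level mutable state visible to every rule below

def cc_library(name: str, srcs: list[str]) -> dict:
    # This rule reads the global, so its output depends on whatever
    # happened to run (and mutate opt_level) before it.
    return {"name": name, "srcs": srcs, "flags": [f"-{opt_level}"]}

def tune_for_debug() -> None:
    global opt_level
    opt_level = "O0"   # a side effect: every later rule silently changes

lib_a = cc_library("a", ["a.c"])
tune_for_debug()
lib_b = cc_library("b", ["b.c"])           # same call shape, different output
print(lib_a["flags"], lib_b["flags"])      # ['-O2'] ['-O0']
```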
I think you need to build inside very effective sandboxes to stop build side effects and then you need your sandboxes to be very fast.
Anyway, nice to see attempts at more innovation in the build space.
I imagine a kind of merging between build systems, deployment systems, and running systems: somehow a manageable sea of distributed processes running on a distributed operating system. I suspect Alan Kay thought that Smalltalk might evolve in that direction, but there are many things to solve, including billing, security, and somehow making the sea of objects comprehensible. It has the hope of everything being data driven, i.e. structured, schema'd, versioned, JSON-like data rather than the horrendous mess that is Unix configuration files and system information.
There was an interesting talk on Developer Voices, perhaps about a merger of OCaml and Erlang, that moved a little in that direction.