Who deserves to receive help (and, by contrast, who is undeserving of even basic decency) should never, ever be the decision of a few plutocrats. The state should decide on such matters and be the upholder of equality. Otherwise, misery becomes a contest over who can be the most compelling, most attractive of the miserable to the elites.
I think the whole point of EA is to avoid the misery contest and funnel philanthropy dollars in the most effective way, though there's a lot of disagreement inside the movement on what constitutes effectiveness.
On the radical statism -- I guess maybe? At least in the US right now, we certainly don't have a government that's good at directing money to the most needy. See e.g. the whole USAID fiasco. Even if we had a government willing to fund international aid, I still think there is room in the world for people donating to causes they care about. If nothing else, this is how new causes rise to prominence and eventually get recognized by official channels.
There's no such thing as "the state" except as a societal arrangement. When you argue that "the state" should decide on who gets what, what you're really saying is that this should be the decision of a few power-crazed career bureaucrats, with no accountability whatsoever or any "skin in the game". At least the plutocrat is paying for the aid out of pocket: he will care somewhat that the money is not outright misused. Why should a random government bureaucrat be trusted to make good decisions?
There's no such thing as "society" except for societal arrangement. And societies of any decent size for the past several thousand years have arranged for governments who decide all sorts of things. Why would state aid be decided by a few "power-crazed" career bureaucrats in representative democracies? This sounds like a libertarian screed against the state doing anything but the minimal.
Because career bureaucrats are the only way of running a large organizational arrangement that can even reach a semblance of "deciding all sorts of things". A few hundred representatives can't go it alone. You get career bureaucrats in private enterprise too, of course, but the idea is that they should at least be kept on a short leash to whatever extent is feasible. That fails completely when you're dealing with an actual government at any scale bigger than a small village or HOA.
> Oxide employees bear responsibility for the artifacts we create, whatever automation we might employ to create them.
Yes, allow the use of LLMs, encourage your employees to use them to move faster by rewarding "performance" regardless of risks, but make sure to place responsibility for failure upon them, so that when it happens, the company culture can't be blamed.
Airbus is not immune to design & manufacturing issues with fatal consequences, they're just not top-of-mind these days. A similar issue seems to have 'cropped up' on this flight: https://en.wikipedia.org/wiki/Qantas_Flight_72
> Temporary inconsistency between the measured speeds, likely as a result of the obstruction of the pitot tubes by ice crystals, caused autopilot disconnection and [flight control mode] reconfiguration to "alternate law (ALT)".
- The crew made inappropriate control inputs that destabilized the flight path.
- The crew failed to follow appropriate procedure for loss of displayed airspeed information.
- The crew were late in identifying and correcting the deviation from the flight path.
- The crew lacked understanding of the approach to stall.
- The crew failed to recognize the aircraft had stalled, and consequently did not make inputs that would have made recovering from the stall possible.
It's often easy to blame the humans in the loop, but if the UX is poor or the procedures too complicated, then it's a systems fault even if the humans technically didn't "follow procedure".
Both unsophisticated lay observers and capital/owners tend to fault operators ... for different reasons.
Accident studies and, in particular, books like _Normal Accidents_[1] push back on these assumptions:
"... It made the case for examining technological failures as the product of highly interacting systems, and highlighted organizational and management factors as the main causes of failures. Technological disasters could no longer be ascribed to isolated equipment malfunction, operator error, or acts of God."
It is well accepted - and I believe - that there were a multitude of operator errors during the Air France 447 flight, but none of them were unpredictable or exotic, and the system the crew were tasked with operating was poorly designed and unhelpfully hid layers of complexity that suddenly re-emerged under tremendous "production pressure".
But don't take my word for it - I appeal to authority[2]:
"Automation dependent pilots allowed their airplanes to get much closer to the edge of the envelope than they should have ..."[3].
or:
@ 14:15: "... we see automation dependent crews, lacking confidence in their own ability to fly an airplane, are turning to the autopilot ..."[4].
The relief second officer basically pulled up when the stall protection had been disabled, and by the time the other pilot and the captain realized what was happening, it was too late to save the plane.
There is a design flaw though: the sidesticks in modern Airbus planes are independent, so the other pilot didn’t get any tactile feedback when the second officer was pulling back.
You do get an audible "DUAL INPUT DUAL INPUT" warning and some lights though [1]. It is never allowable to make sidestick inputs unless you are the single designated "pilot flying", but people can sometimes break down under stress of course.
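For anyone curious what "no tactile feedback" means in practice, here's a toy sketch in Python (illustrative only, not Airbus code; the summing-and-clipping behavior is as publicly documented, the numbers are made up) of how dual sidestick inputs are combined:

    MAX_DEFLECTION = 1.0  # normalized full stick travel (made-up scale)

    def combined_pitch_command(capt: float, fo: float) -> float:
        """Sum both sidestick inputs, clipped to single-stick authority."""
        total = capt + fo
        return max(-MAX_DEFLECTION, min(MAX_DEFLECTION, total))

    def dual_input_alert(capt: float, fo: float, threshold: float = 0.05) -> bool:
        """The aural 'DUAL INPUT' warning fires when both sticks are deflected."""
        return abs(capt) > threshold and abs(fo) > threshold

    # One pilot pushing while the other pulls can cancel out entirely:
    print(combined_pitch_command(-0.5, +0.5))  # 0.0 - the aircraft sees no net input
    print(dual_input_alert(-0.5, +0.5))        # True - the warning does sound

The key point: neither stick moves in response to the other, so the only cross-check is the aural warning and the lights.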
The reality is that CRM is still the most important factor required to have a reasonable chance of turning what would otherwise be a catastrophic aviation incident into something that people walk away from. Systems do fail, when they do it's up to the crew to enact memory items as quickly as possible and communicate with each other like they are trained to.
Unfortunately, sometimes they also fail in ways from which even a trained crew isn't able to recover the aircraft. That could be a failure that wasn't anticipated, training that was inadequate, design flaws, the human element, you name it. The crew's actions being put in an accident report isn't an assignment of blame, it's a statement of facts - the recommendations that come from those facts are all that matters.
This is one of those situations where I think it'd be fun to be a flight simulator "operator", finding new ways to force pilots to figure out how to overcome whatever the plane is doing to them. Any pilot who ever comes out of a simulator thinking "like that would ever happen" instead of "that was an interesting situation to keep in mind as a possibility" should have their wings clipped.
Take it with a grain of salt since it's from a movie, but one of the things about Sully setting the plane down in the river was that it took not just his experience with the aircraft itself but also the situational awareness to realize he was too low to safely divert to an airport. He instinctually "skipped" several steps in the procedures to engage the APU, which turned out to be pretty key. The implication being that the procedure was so long that they might not have gotten to the APU in time going step by step.
Faulting the crew is a common thing in almost all air incidents. In this case the crew absolutely could have saved the plane, but the plane did not help them at all.
Part of the sales pitch of the Airbus is that the computer does A LOT of handholding for the pilots. In many configurations, including the one that the plane was flying in at the start of the incident, the inputs that caused the crash would have been harmless.
In that incident the airspeed feed was lost to the computer and it literally changed the flight controls and turned off the safety limits, and none of the three people in the cockpit noticed. When an Airbus changes flight control modes, the same inputs no longer produce the same response. Something harmless under one set of "laws" could crash the plane under another set of laws. In this case, what the pilot with the working control stick was doing would not have caused a crash, except that the computer had taken off the training wheels without anyone noticing.
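To make the "same input, different law" point concrete, here's a toy sketch (illustrative pseudologic, not real flight software; the alpha-protection threshold is made up):

    ALPHA_MAX = 15.0  # assumed angle-of-attack protection limit, degrees

    def pitch_response(stick_back: float, alpha: float, law: str) -> float:
        """Map a pitch-up command to an elevator demand under a given control law."""
        if law == "normal":
            # Normal law: alpha protection fades the command and then overrides it.
            if alpha >= ALPHA_MAX:
                return 0.0
            return stick_back * (1 - alpha / ALPHA_MAX)
        if law == "alternate":
            # Alternate law: protections are gone; the pull goes straight through.
            return stick_back
        raise ValueError(law)

    # Same full-back stick, same dangerously high angle of attack:
    print(pitch_response(1.0, alpha=16.0, law="normal"))     # 0.0 - harmless
    print(pitch_response(1.0, alpha=16.0, law="alternate"))  # 1.0 - deepens the stall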
As a result of the change to the primary controls, one pilot was able to unintentionally place the plane in an unrecoverable state without the other pilots even noticing that he was making control inputs.
Tack on that, at a certain point, the computer intentionally disregarded the AOA sensor's readings as erroneous and silenced the stall warning without alerting the pilots that the plane was stalled. You are taught from day one of flight training that if you hear the stall alarm, you push the power in and push the nose down until the alarm stops. In this case the stall warning came on, and then, as the stall got worse, it turned itself off, with the computer under the mistaken belief that the plane could not actually be that far stalled. So the one alarm that they are trained to respond to in a specific way to recover from a stall was silenced. If I were flying and heard the stall alarm, then heard it stop, I would assume that I was no longer stalled, not that the plane was so far stalled that the stall warning was convinced it had broken itself.
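Here's the perverse part of that warning logic as a toy sketch (my reading of the BEA report; the AoA threshold is illustrative, though the report does describe a ~60 kt airspeed validity cutoff):

    AOA_STALL = 10.0        # assumed stall-warning angle of attack, degrees
    SPEED_VALID_MIN = 60.0  # below this measured airspeed, AoA data is rejected

    def stall_warning(measured_airspeed_kt: float, aoa_deg: float) -> bool:
        if measured_airspeed_kt < SPEED_VALID_MIN:
            return False  # AoA deemed invalid: silence, however deep the stall
        return aoa_deg > AOA_STALL

    print(stall_warning(180.0, 12.0))  # True  - sounds at stall onset
    print(stall_warning(45.0, 40.0))   # False - deeply stalled, but silent
    # Pitching down (the correct recovery) raised the measured speed and
    # re-armed the warning, punishing the right input with more alarms:
    print(stall_warning(70.0, 35.0))   # True

So the deeper the stall got, the quieter the cockpit became, and the correct recovery input brought the alarm back.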
So yes, the pilots flew the aircraft into the ground, but the computer suffered a partial failure and then changed how the primary flight controls operated.
Imagine if the brake pedal, steering wheel, and accelerator all started responding to inputs differently when your car had a sensor issue. That causes the cruise control to fail. Add in that the cruise control failure turns off ABS, auto-brakes, lane assist, and stability control for some reason.

Oh yeah, there's a steering control on the other side of the car on the armrest, and the person sitting there can now make steering inputs, but it won't give feedback in your steering wheel; also, your steering wheel can still be manipulated while the other guy is steering, but it is completely disconnected from the tires while he is steering. All of the controls are also more sensitive now, and allow you to do things that wouldn't have been possible a few seconds ago. Also, it's a storm in the middle of the night, so you don't have a good visual reference for speed.

So now your car is slipping, at night, in a storm, lights are flashing everywhere, and nothing makes sense since the instruments are not reading correctly. However, the car is working exactly as described in the manual. When the car ends up in a ditch, the investigation will find that the cause of the crash was driver error, since the car was operating exactly as it was designed.
Worth noting that Boeing (and just about every other aircraft on earth) has linked flight controls between the two pilots' positions that always behave in the exact same way, so this type of failure could never have happened on a 737, for example.
At the end of the day, this was pilot error, but more in a "You're holding it wrong, I didn't design it wrong" kind of way. After all, there were three people with a combined 20k flying hours, including thousands of hours in that design.
If three extremely qualified pilots that have literal years of experience in that cockpit, who are rigorously trained and tested on a regular basis for emergencies in that cockpit, can fly the thing into the ground due to a cascade from a single human error... maybe the design of the user interface needs a look.
You also conveniently skipped over the parts of the Wikipedia article where the manufacturer was charged with manslaughter, where dozens of similar incidents were documented, and the entire section outlining the Human-Computer Interface concerns.
No, that would be https://1000hv.seiya.me/en/
Easy mistake to make though; it's the 3rd such guide posted in a week, so the subject definitely seems to have a lot of traction.
Not all devs have very deep knowledge of the ecosystem; some will get to use a tool only if it's part of the default set of tools they're provided.
Plus, it saves on memorizing a name if you only use Python once in a long while.
There just isn't much of a reason not to.
I fully expect that the intention is to force you into opening up the laptop to install the RAM. RAM is so easy to install that there's basically no risk of the customer messing up, and it exposes you to how easy it is to open up your laptop and how high quality the build is. Worked very well for me; before I even powered it on for the first time, I knew I would not accept buying anything of a lower standard.
If it's actually an advertising expense then it's likely priced wrong. They should give the real price in the business section (where people don't want to have to install RAM themselves because they are buying multiples, and where that price is obscured from the consumer) and have an even higher price for the fully assembled one (and a slightly lower price for the unassembled one). Now, if 80%+ of their consumer business is the DIY option right now, then I'm wrong, but I doubt I am.
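With made-up numbers: if the business-catalog price for a pre-installed configuration is $1,000, the consumer DIY option could sit at $970 and the consumer pre-installed one at $1,050 - the assembled option covers its own labor, while the DIY discount still reads as a deal.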
Shale oil is extremely unreliable, and its producers have barely ever been able to make any margins on it because the required investments are heavy and constant. Wells reach maturity after weeks or months, and then they need to be replaced. If we have not yet reached the peak for all oils combined, it will probably happen in the coming years.
Conventional oil has already peaked, and it is possible that producers will even decide to stop extracting nonconventional oil because it is too expensive and the price of a barrel fluctuates too much (IEA 2020).
Read it too, and I had a similar feeling. To me it was the thought that we will probably never again see a place like Bell Labs - a temple to knowledge, gathering great minds and letting them work on whatever they think might have interesting outcomes, no matter how long it takes to obtain results and without having to worry about short-term financial issues.
Now researchers - in my country, anyway - are forced to spend most of their time researching ways to obtain funding, doing actual research almost as a side gig.
Google had (has?) a similar platform, but it had nowhere near the same success as Bell Labs did. They did launch some products, but a lot of them failed.