I'm not so sure it's about knowing his own limitations; rather, it's about building a reliable process and trusting that process more than either technology or people.
Any process that relies on 100% accuracy from either people or technology will eventually fail. It's just a basic matter of statistics. However, there are processes that CAN, at least in theory, be 100% effective.
So following that strange logic, if a dumb person knows he's dumb, he suddenly becomes intelligent? Or is that impossible by your peculiar definition of intelligence?
Wisdom would be knowing not to try to exceed those limits.
Intelligence would be knowing they exist. (I know that I cannot fly by flapping my arms; it took intelligence to deduce that. Wisdom tells me not to jump from a height and flap my arms to fly. Further intelligence can be applied, deducing that there are artificial means by which I can attain flight.)
Knowing your limits has to be a sign of intelligence.
"Dumb" people (FTR the description actually refers to something rather than that which you think it does...) run around on the internet getting mad because they haven't thought things through...
It's an interesting question though. I know quite a few "smart" people who lack self-awareness to an almost fatal degree yet can outdo the vast majority of the population at solving logic puzzles. It tends to be a rather frustrating condition to deal with.
I love the simplicity of the System 1/2 breakdown - but is there any actual evidence behind it? It seems like such a classic pop-psychology observational deduction of how something might work with no science to prove it.
In cognitive psychology there's all sorts of evidence that we have two distinct processes, but I don't think anyone has really mapped it to a physical system yet.
Modeling it as two physical systems is pretty interesting though, because dementia ends up looking like a clear failure of System 2. Really neat idea generator, even if imperfect.
This is exactly the sort of breaking change that I really struggle to see the value of — maintaining the deprecated method seems incredibly unlikely to be a notable maintenance burden when it is literally just:
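(Something like the following, I'd guess; a minimal Python sketch with made-up names `old_method`/`new_method`, since the actual library and method aren't shown here:)

```python
import warnings

class Widget:
    def new_method(self, x):
        """The current, supported API."""
        return x * 2

    def old_method(self, x):
        """Deprecated alias: warn once per call site, then delegate to the new name."""
        warnings.warn(
            "old_method() is deprecated; use new_method() instead",
            DeprecationWarning,
            stacklevel=2,
        )
        return self.new_method(x)
```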
Like sure — deprecate it, which might have _some_ downstream cost, rather than having two non-deprecated ways to do the same thing, just to make it clear which one people should be using; but removing it has a much more significant cost on every downstream user, and the cost of maintenance of the old API seems like it should be almost nothing.
(I also don't hate the thought of having a `DeprecationWarning` subclass like `IndefiniteDeprecationWarning` to make it clear that there's no plan to remove the deprecated function, which can thus be ignored/be non-fatal in CI etc.)
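A rough sketch of that idea, assuming Python and treating `IndefiniteDeprecationWarning` as a hypothetical name (it isn't in the standard library):

```python
import warnings

class IndefiniteDeprecationWarning(DeprecationWarning):
    """Deprecated, but with no planned removal; safe to silence indefinitely."""

# A test suite or CI config that escalates warnings to errors could still
# opt out of just this category:
warnings.filterwarnings("ignore", category=IndefiniteDeprecationWarning)
```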
There is value for the person maintaining this library, because they want it that way.
If you develop a useful library and give it away for free then all power to you if you want to rearrange the furniture every 6 months. I'll roll with it.
But the fact that they made a new release with it undeprecated shows they _do_ care about their users (direct and indirect), and at least from my point of view (both from the Python ecosystem and the browser ecosystem) this was a pretty foreseeable outcome.
Why can't we predict how big or how often those events would be? We have a clear understanding of the probability distributions for all kinds of weather scenarios - see, for example, 1-in-50/100/1000-year floods and droughts.
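(For anyone who wants the return-period framing spelled out, a quick back-of-envelope in Python; the 100-year/30-year numbers are purely illustrative:)

```python
# A "1-in-T-year" event has annual exceedance probability p = 1/T.
# The chance of at least one such event over n years is 1 - (1 - p)**n.
def prob_at_least_one(return_period_years: float, n_years: int) -> float:
    p = 1.0 / return_period_years
    return 1.0 - (1.0 - p) ** n_years

# e.g. a 1-in-100-year flood has roughly a 26% chance of occurring
# at least once in any given 30-year window.
print(round(prob_at_least_one(100, 30), 2))  # ~0.26
```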
I'm not saying we cannot do it, just that we cannot always get it right, and there is plenty of empirical evidence for that.
The second point is that the distribution has a long tail, especially when we consider the possibility of multiple independent incidents overlapping in time, to the point where it becomes infeasible to suppose that we could be prepared to continue operating as if nothing had happened in all conceivable scenarios, regardless of how accurately we could predict their likelihood.
I do not understand your argument. We also cannot reliably predict the failures of fossil fuel generation: sometimes multiple plants have outages that coincide and we have blackouts. Shit happens, and will continue to happen. Meanwhile we can make statistically rational plans.
We have coal fired plants in Australia with <90% uptime (often unscheduled), but somehow they're considered baseload rather than intermittent.
And I cannot figure out why you are saying this, as nothing I have said previously either contradicts what you say here or is contradicted by it. If you could say what you think I am saying in my posts in this thread, we can sort it out.
EDIT: I see the problem starts with the first sentence of your first post here: “Why can't we predict how big or how often those events would be?” - which is completely beside the point in my response to rgmerk, who wrote “It's not clear (yet) what a 100% clean energy powered world would use to cover the last couple of percent of demand when loads peak and/or variable generation troughs for extended periods.” My response to this and the follow-up is this: a) if we are talking about two percent, we can overbuild the renewable capacity, and b) if we are considering all eventualities, there inevitably comes a point where we say that we are not going to prepare for uninterrupted service in this event.
No you didn't; you pointed out why it is not, in itself, a significant issue in the first place (which rgmerk tacitly seems to recognize in his first response, through pivoting away from the 2% claim.) My position on this has been that if the issue really is over ~2%, there is a simple solution.
I'll state it plainly: to get to the same level of reliability as the existing grid with just wind, solar, and batteries requires unacceptable amounts of overprovisioning of these at high latitude (or unacceptably high transmission cost).
Fortunately, use of different long duration storage (not batteries) can solve the problem more economically.
"Creative" re/misinterpretation is becoming quite a thing here - what I actually did was agree that rgmerk had a more defensible position after he pivoted away from his original ~2% claim to a more reasonable one.
I'll state it plainly: rgmerk's subsequent pivot in his stated claims does not retroactively make my response to his original claim wrong! (Not even if the subsequent claim more accurately reflects what he really meant to say.) I am having trouble figuring out why anyone would think otherwise.
We can and do, and there are detailed plans based on those weather scenarios (eg for the Australian east coast grid there is AEMO’s Integrated System Plan).
Things in the US are a bit more of a mixed bag, for better or worse, but studies there suggest you can get to very high renewables levels cost-effectively, though not to 100% without new technology (eg “clean firm” power like geothermal, new nuclear being something other than a clusterfumble, long-term storage like iron-air batteries, etc etc etc).
The best technologies there are (IMO) e-fuels and extremely low capex thermal.
There are interesting engineering problems for sources that are intended to operate very infrequently and at very low capacity factor, as might be needed for covering Dunkelflauten. E-fuels burned with liquid oxygen (and water to reduce temperature) in rocket-like combustors might be better than conventional gas turbines for that.
It's mostly something I thought about myself. The prompting idea was how to massively reduce the capex of a turbine system, even if that increases the marginal cost per kWh when the system is in use, together with the observation of the incredibly high power density of rockets (they're the highest power density heat engines humanity makes). So: get rid of the compressor stage of the turbine, run open cycle so there's no need to condense steam back to water, and operate at higher pressure (at least an order of magnitude higher than combustion turbines) so the entire thing can be smaller.
You'd have to pay for storage of water and LOX (and for making the LOX), so this wouldn't make sense for prolonged usage. On the plus side, using pure LOX means no NOx formation, so you also lose the catalytic NOx destruction system a stationary gas turbine would need to treat its exhaust.
I vaguely recall some people in Germany were looking at something like this but I don't remember any details.
Hmm, is it actually that bad? Keep in mind R2 data is only stored in one region, which is chosen when the bucket is first created, so that might be what you're seeing.
But I've never really looked too closely because I just use it for non-latency-critical blob storage.
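(For context, R2 exposes an S3-compatible endpoint, so that kind of boring blob-storage use looks roughly like this with boto3; the account ID, bucket name, keys, and object key below are all placeholders:)

```python
import boto3

# R2 speaks the S3 API; point boto3 at the account-specific endpoint.
# "<ACCOUNT_ID>", the credentials, and the bucket/key names are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
    aws_access_key_id="<ACCESS_KEY_ID>",
    aws_secret_access_key="<SECRET_ACCESS_KEY>",
)

# Non-latency-critical blob storage: write once, read occasionally.
s3.put_object(Bucket="my-bucket", Key="backups/2024-01-01.tar.gz", Body=b"...")
obj = s3.get_object(Bucket="my-bucket", Key="backups/2024-01-01.tar.gz")
print(obj["ContentLength"])
```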
Being aware of one's limitations is the strongest hallmark of intelligence I've come across...