Erythritol and xylitol do show a strong correlation with severe coronary disease and stroke risk under certain preconditions, though, according to newer studies.
They're also not really advisable for people with irritable bowel syndrome.
I thought these studies often control for additional factors like wealth, education, etc. I'm not sure about this one, but I'm genuinely curious whether I'm mistaken and science “did not figure it out” yet.
Those studies never 'control for those'. What they do is a crude statistical approximation (sometimes extremely crude; e.g., most studies 'control for education' by counting up 'years', which equates 4 years at Caltech with 4 years at a community college) and hope that not too much leaks through as "residual confounding" (https://journals.plos.org/plosone/article?id=10.1371/journal...). Unfortunately, because everything is correlated, there's residual confounding everywhere. This is why, every time someone does a Scandinavian population-registry study and compares, say, siblings within the same family, the correlation often just disappears then and there.
It's 100% unsurprising that this is true of fitness too. This is what always happens: you look at something like a corporate health/fitness plan and you find some correlate even after you 'control for' SES, prior health record, etc. Wonderful! Then you run a randomized experiment, and it turns out that the residual confounding was still larger than any causal effect that might be there: https://gwern.net/doc/statistics/causality/2022-wallace.pdf Ah well. Maybe next time you'll manage to 'control for' the confounders...
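To make the mechanism concrete, here's a toy simulation (all variable names and effect sizes invented for illustration): a latent confounder drives both a coarse proxy you 'control for' and the outcome, so even with a true causal effect of exactly zero, the adjusted association doesn't go away:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Latent confounder (think "SES"), which we never observe directly.
    ses = rng.normal(size=n)
    # Coarse proxy we *do* observe: years of education, a noisy function of SES.
    years_edu = np.round(12 + 2 * ses + rng.normal(size=n))
    # Exposure (e.g., joining a fitness plan) is driven by SES...
    exposure = (ses + rng.normal(size=n) > 0).astype(float)
    # ...and the outcome depends only on SES: the true causal effect is zero.
    outcome = 0.0 * exposure + ses + rng.normal(size=n)

    # "Control for education": residualize exposure and outcome on years_edu
    # (equivalent to including it as a covariate in an OLS regression).
    X = np.column_stack([np.ones(n), years_edu])
    resid_out = outcome - X @ np.linalg.lstsq(X, outcome, rcond=None)[0]
    resid_exp = exposure - X @ np.linalg.lstsq(X, exposure, rcond=None)[0]

    print("naive correlation:   %.3f" % np.corrcoef(exposure, outcome)[0, 1])
    print("after 'controlling': %.3f (true effect: 0)" %
          np.corrcoef(resid_exp, resid_out)[0, 1])

The adjusted correlation shrinks but stays comfortably nonzero, because years_edu only partially measures the thing actually doing the confounding; a within-family sibling comparison effectively controls for the latent variable itself, which is why those estimates so often collapse.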
Well, I guess the time scale is what determines the degree to which the distinction becomes opposition. AI is likely to persist in some form for tens or hundreds of thousands of years. Are any of today's nation-states built to last that long? I think we all know the answer.
If you have an AI in the service of an entity which proclaims itself the sole franchise of government authority over a given landmass, it is strictly incorrect to say that this AI is "for the country", because it's perfectly plausible (and on sufficiently long time scales, inevitable) that the country will want to evolve, replace, or deprecate that entity.
I agree that “AI for governments” is much more accurate; I'm just saying that “diametrically opposed” doesn't really capture the relationship between the two concepts well.
The only way we can do something is if these actions become deeply unpopular, which they currently aren't: still about 43% of the population is totally on board.
All we can do is wait for the empire to crumble enough that people without empathy become personally affected and turn sour. My guess is it won't take the full 4 years.
A problem is that those people would then need to identify these haphazard cuts and policies as the source of their dissatisfaction, a conclusion that will be directly contradicted by propaganda telling them this is actually proof we need to cut even more.
If government services are being intentionally crippled so that people are unhappy with them, then things are going exactly to plan, and there won't be any mass "realizations" of our mistakes.
Even if the current administration ISN'T intentionally trying to cause harm and trauma (despite direct quotes posted in this thread saying it is), the American people will end up with a choice: either cut away the last bits of broken government or pay (way) more in taxes and try to rebuild them.
I fear we will collectively pick the former and put our heads in the sand about the damage unregulated private companies will do when put in that position (think today's internet utilities, private power companies, the for-profit prison nightmare, health insurers denying 90% of claims using software they know is wrong, and the defense mega-contractors).
I wonder how far it will go. These folks seem to like following the cliché of taking cautionary tales and reading them as manuals, so maybe we'll have Snow Crash-style corporate 'burbs and private police armies soon.
I was looking at this the other day. I'm pretty sure OpenAI runs the internal reasoning through another model that purges it, which also makes it worse for training other models from.
I might be mistaken, but wasn't the reasoning originally fully hidden? Or maybe it was just far more aggressively purged. I agree that today the reasoning output seems higher quality than it did originally.
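For reference, the API today returns only summary items in place of the raw chain of thought. A minimal sketch using the Python SDK's Responses API (the reasoning/summary parameters here are as I remember them from recent SDK versions and may have changed):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    resp = client.responses.create(
        model="o4-mini",               # any reasoning-capable model
        reasoning={
            "effort": "medium",        # how much internal thinking to do
            "summary": "auto",         # ask for the summarized reasoning
        },
        input="Briefly: what is residual confounding?",
    )

    # The raw chain of thought is never returned; at most you get
    # "reasoning" output items carrying summary text.
    for item in resp.output:
        if item.type == "reasoning":
            for part in item.summary:
                print(part.text)

If that's right, the visible trace being a second model's summary would be consistent with both the original full hiding and the lower usefulness for distillation.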