Hacker News | stepanhruda's comments

Is there a way to use this with blosc?


Fighting noise pollution with light pollution


Wrong. Noise affects everyone around. The laser only affects the source of noise.


And the people the helicopter crashes into after the pilot is blinded.


Excessive helicopter activity is abusive. In an ethical society, it is best followed by behavior-changing consequences.

The solution must never involve lasers.

Each of these observations stands properly on its own. If we're making them compete, we've lost the thread somewhere.


I recommend using a mouthwash based on xylitol, which kills strains like Streptococcus mutans but does not impair these nitrate reducers.


According to newer studies, though, erythritol and xylitol do show a strong correlation with severe coronary disease and stroke risk under certain preconditions.

Also not really advisable for people with Irritable Bowel Syndrome.


That’s for actually eating 10-15 grams.

Can’t speak to IBS, ymmv there, but similarly: you are rinsing and spitting, so you're basically just microdosing.


I thought these studies often control for additional factors like wealth, education, etc. Not sure about this one, but I'm genuinely curious whether I'm mistaken and science “did not figure it out” yet.


Those studies never 'control for those'. What they do is a crude statistical approximation (sometimes extremely crude - eg most studies will 'control for education' by counting up 'years', which equates 4 years at Caltech with 4 years in community college), and hope that not too much leaks through as "residual confounding" (https://journals.plos.org/plosone/article?id=10.1371/journal...). Unfortunately, because everything is correlated, there's residual confounding everywhere. This is why every time someone does a Scandinavian population registry study and compares eg siblings within the same family, often the correlation just disappears then and there.

It's 100% unsurprising that this is true of fitness too. This is what always happens. You look at something like a corporate health fitness plan and you find some correlate even after you 'control for' SES, prior health record etc etc; wonderful! Then you do a randomized experiment and it turns out that the residual confounding was still larger than any causal effect which might be there: https://gwern.net/doc/statistics/causality/2022-wallace.pdf Ah well. Maybe next time you'll manage to 'control for' the confounders...
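The residual-confounding point can be illustrated with a toy simulation (my own sketch, not from the linked papers): a true confounder drives both "exposure" and outcome, the exposure has zero causal effect, yet "controlling for" a noisily measured proxy of the confounder still leaves a substantial correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

c = rng.normal(size=n)                 # true confounder (think: SES)
x = c + rng.normal(size=n)             # "exposure" driven by the confounder
y = c + rng.normal(size=n)             # outcome driven by the confounder; x has NO causal effect on y
c_proxy = c + rng.normal(size=n)       # crude measurement, like counting "years of education"

def residualize(v, control):
    # Remove the linear contribution of the control variable
    slope, intercept = np.polyfit(control, v, 1)
    return v - (slope * control + intercept)

# Partial correlation of x and y after "controlling for" the crude proxy:
r = np.corrcoef(residualize(x, c_proxy), residualize(y, c_proxy))[0, 1]
print(f"partial correlation after controlling: {r:.2f}")  # remains roughly 1/3, not 0
```

With a perfectly measured confounder the partial correlation would vanish; with the noisy proxy it stays near 1/3 here, which a naive reading would mistake for a causal effect of x on y.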


Diametrically opposed? They are distinct, but hardly opposed.


Well I guess the time scale is what determines the degree to which the distinction becomes opposition. AI is likely to persist for tens or hundreds of thousands of years in some form. Are any of today's nation states built to last that long? I think we all know the answer.

If you have AI which is in the service of an entity which proclaims itself to be the sole franchise of government authority over a given landmass, it is strictly incorrect to say that this AI is "for the country", because it's perfectly plausible (and on sufficiently long time scales, inevitable) that the country will want to evolve, replace, or deprecate that entity.


I agree that “AI for governments” is much more accurate; I'm just saying that “diametrically opposed” doesn’t really capture the relationship between the two concepts well.


It really depends.


Skip level is your manager’s manager; presumably skip level 2 is one step further up, etc.


The only way we can do something is if these actions become deeply unpopular, which they currently aren’t; still, about 43% of the population is totally on board.

All we can do is wait for the empire to tumble enough until people without empathy become personally affected and turn sour. My guess is it won’t take the full 4 years.


A problem is that those people would then need to identify these haphazard cuts and policies as the source of their dissatisfaction, which will be directly contradicted by propaganda telling them this is actually proof we need to cut even more.

If the government services are being intentionally crippled so that people are unhappy with them then things are going exactly to plan and won't lead to any mass "realizations" of our mistakes.

Even if the current administration isn't intentionally trying to cause harm and trauma (despite direct quotes posted in this thread saying it is), the American people will end up with a choice: either cut the last bits of broken government, or pay (way) more taxes and try to rebuild them.

I fear we will collectively pick the former and put our heads in the sand about the damage unregulated private companies will do when put in a position like that (like today's internet utilities, private power companies, the for-profit prison nightmare, health insurance companies denying 90% of claims using software they know is wrong, and the defense mega contractors).

I wonder how far it will go. These folks seem to like the cliché of reading cautionary tales as manuals, so maybe we'll have Snow Crash-style corporate 'burbs and private police armies soon.


They don’t hide reasoning output anymore?


I was looking at this the other day. I'm pretty sure OpenAI runs the internal reasoning through a model that purges it, which makes it worse to train other models from.

I might be mistaken, but originally the reasoning was fully hidden? Or maybe it was just far more aggressively purged. I agree that today the reasoning output seems higher quality than originally.


You could have another SQLite database with this global information related to users / sessions / passwords, etc.
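A minimal sketch of that idea using Python's stdlib sqlite3 (file and table names here are hypothetical): keep one shared database for users, and use SQLite's ATTACH so a per-tenant connection can join across both files.

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
shared_path = os.path.join(tmp, "global.db")
tenant_path = os.path.join(tmp, "tenant_42.db")

# Shared database: users / sessions live here once,
# instead of being duplicated into every per-tenant database.
shared = sqlite3.connect(shared_path)
shared.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
shared.execute("INSERT INTO users VALUES (1, 'alice')")
shared.commit()
shared.close()

# Per-tenant database holds only that tenant's own data.
tenant = sqlite3.connect(tenant_path)
tenant.execute("CREATE TABLE notes (user_id INTEGER, body TEXT)")
tenant.execute("INSERT INTO notes VALUES (1, 'hello')")

# ATTACH lets one connection query both databases together.
tenant.execute(f"ATTACH DATABASE '{shared_path}' AS shared")
rows = tenant.execute(
    "SELECT u.name, n.body FROM shared.users u JOIN notes n ON n.user_id = u.id"
).fetchall()
print(rows)  # [('alice', 'hello')]
```

The trade-off is that cross-database joins require attaching at query time and you lose foreign-key enforcement across the files, so the shared database works best for data that changes rarely relative to tenant data.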


You either die a hero or live long enough to see yourself become the villain

