senkora's comments

Yep. Ad viewability standards require only that a video ad be at least 50% onscreen for two continuous seconds in order to count as an impression. Google likely clears that bar even for skippable ads.

> Picture this: an advertiser pays premium rates for space on your site, but their carefully crafted creative sits unseen at the bottom of a page your readers never scroll to. Despite technically delivering the impression you promised, you've essentially sold empty air. This disconnect between ads served and ads seen is why viewability has emerged as the cornerstone metric in digital advertising's maturity.

> Video ads require at least two seconds of continuous play while 50% visible ... These seemingly arbitrary thresholds represent extensive research into human attention patterns.

https://www.playwire.com/blog/ad-viewability
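
For concreteness, here's a toy Python check of that rule; the sample format and function name are invented for illustration, not any ad platform's actual measurement API:

  def met_viewability_standard(samples, min_fraction=0.5, min_seconds=2.0):
      # samples: (timestamp_sec, fraction_of_ad_onscreen) pairs,
      # assumed densely sampled in time order.
      run_start = None
      for t, fraction in samples:
          if fraction >= min_fraction:
              if run_start is None:
                  run_start = t
              if t - run_start >= min_seconds:
                  return True  # counts as a viewable impression
          else:
              run_start = None  # visibility dipped; continuity resets
      return False

By that rule, an ad skipped after five seconds still usually counts, which is the point.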


I haven’t read any of the author’s other posts, so I don’t know if he is always this careful, but I do not mind the level of LLM assistance present in this post.

It becomes a problem when it is obvious that the LLM had a much bigger contribution to the writing, which we do see a lot in posts here.

This is the same distinction as between LLM-assisted PRs (generally fine) and LLM-authored PRs (harmful).


Yep, might as well go straight to the Mathematical Universe Hypothesis:

> Tegmark's MUH is the hypothesis that our external physical reality is a mathematical structure. That is, the physical universe is not merely described by mathematics, but is mathematics — specifically, a mathematical structure. Mathematical existence equals physical existence, and all structures that exist mathematically exist physically as well. Observers, including humans, are "self-aware substructures (SASs)". In any mathematical structure complex enough to contain such substructures, they "will subjectively perceive themselves as existing in a physically 'real' world".

https://en.wikipedia.org/wiki/Mathematical_universe_hypothes...


I really like this article.

> One could specify a smallest effect size of interest and compare the plausibility of seeing the reported p-value under that distribution compared to the null distribution. Maier and Lakens (2022) suggest you could do this exercise when planning a test in order to justify your choice of alpha-level.

Huh, I’d never thought to do that before. You pretty much have to choose a smallest effect size of interest to do a power analysis in the first place, to figure out how many samples to collect, so this is a neat next step: basing the significance level on it as well.
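
As a rough sketch of that first half (hypothetical numbers; standard statsmodels power API):

  from statsmodels.stats.power import TTestIndPower

  sesoi = 0.3  # smallest effect size of interest (Cohen's d), chosen a priori
  # Per-group n needed to detect the SESOI in a two-sample t-test:
  n = TTestIndPower().solve_power(effect_size=sesoi, alpha=0.05, power=0.9)
  print(n)  # ~235 per group

Maier and Lakens' suggestion is then to also justify the alpha used in that calculation against the same SESOI, instead of defaulting to 0.05.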


In a perfect world everybody would be putting careful thought into their desired (acceptable) type I and type II error rates as part of the experimental design process before they ever collected any data.

Given rampant incentive misalignments (the goal in academic research is often to publish something as much as—or more than—to discover truth), having fixed significance levels as standards across whole fields may be superior in practice.


The real problem is that you very often don't have any idea what your data are going to look like before you collect them; type I/II error rates depend a lot on how big the sources of variance in your data are. Even a really simple case -- e.g. do students randomly assigned to AM vs PM sessions of a class score better on exams? -- has a lot of unknown parameters: variance of exam scores, variance in baseline student ability, variance in the rate of change in scores across the semester, whether you can approximate scores as Gaussian or need a beta, ordinal, or some other model, etc.
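
A quick simulation (made-up numbers, minimal sketch) shows how much those guesses matter:

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)

  def rejection_rate(n, effect, sd, reps=2000, alpha=0.05):
      # Fraction of simulated AM-vs-PM experiments where a two-sample
      # t-test rejects H0, given an assumed score SD and true effect.
      rejections = 0
      for _ in range(reps):
          am = rng.normal(0.0, sd, n)
          pm = rng.normal(effect, sd, n)
          if stats.ttest_ind(am, pm).pvalue < alpha:
              rejections += 1
      return rejections / reps

  # Same 5-point effect, 50 students per session; power collapses if
  # the exam-score SD was guessed at half its true value:
  print(rejection_rate(50, effect=5.0, sd=10.0))  # ~0.70
  print(rejection_rate(50, effect=5.0, sd=20.0))  # ~0.24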

Usually you have to go collect data first, then analyze it, then (in an ideal world where science is well-incentivized) replicate your own analysis in a second wave of data collection, doing everything exactly the same. Psychology has actually gotten to a point where this is mostly how it works; many other fields have not.



He doesn’t mention it in this post, but in another post he talked about the toll of needing to frequently attend meetings in the middle of the night in his time zone.

Whatever his reasons for leaving, I hope that he finds a better balance in his new role.


This was the takeaway I had talking to a colleague about his time at Intel - they're a genuinely global company with engineering teams in practically all time zones who are still expected to collaborate with each other. No matter what time of day the meeting was scheduled for, it was the middle of the night for somebody, and no, just working on written docs async for everything didn't cut it, and they couldn't just fly people out all the time. So that's apparently just part of what it means to take a job at Intel these days.


You may be interested in Dusk OS, the 32-bit Forth-based operating system for the first stage of civilizational collapse: https://duskos.org/


Thanks for the mention. The philosophy behind Dusk is also eerily relevant to Chuck's problem at the moment. To quote my own manifesto[1]:

> When you operate a system, there is no problem that can arise that will make you powerless. Sure, you can have a hardware failure that hopelessly breaks your system, but at least you'll be able to identify that failure and know for sure that there is no software solution or workaround. That's control.

In this situation, of course Windows is to blame. But it could also happen with Linux, even if it's to a much much lesser degree.

If an update breaks your software in a way that is obscure enough to break only your software, then nobody else will fix your problem, and the system as a whole is too complex for you to dive in, making you powerless.

[1]: https://duskos.org/who.html


That is super cool.


Also featured in a Tom Scott video (which used TFA as its source): https://youtu.be/Ef93WmlEho0?si=4cWrnzKMTq04hDIh


It looks like the initial post that this was a response to ended up flagged.

I don’t mean to accuse you of anything, especially since the signs are relatively subtle here, but this post and the initial one both show signs of having been edited by AI.

As a cultural matter, HN prefers that you not do that. It would be much better to write your posts in your own voice.


The article covers a variety of approaches to dealing with high cocoa prices, but the Amsterdam brownie in the title uses a more heavily alkalized cocoa powder to maintain a similar taste with less cocoa:

> The former gets its punch from using more heavily “dutched,” or alkalized, cocoa. It’s also what made that magical brownie taste so chocolatey.


If you buy dutched cocoa for baking, you're told that it is actually significantly less flavorful but darker in color, and useful for e.g. dark baked goods, where you're mostly trying to create a certain color shade without adding a ton of chocolate flavor.


It's inaccurate to say that dutched cocoa is less flavorful. Chocolate flavor is composed of multiple notes and experiences. Dutched cocoa has less of the acidic or fruity notes, but more of the dark, rich, fudgy notes. Different, not less flavorful.


Signature taste of Oreos


Sold. I'm an Oreo slut.

