I know. Perhaps I should have said "one of those misanthropic online personalities whose public bad attitude kills a thousand open-source contributions you never knew you would have had." The fact that they have supporters is why Facebook wins and open systems lose.
One way to approach a piece like this is to understand its genre! We can determine genre in a few ways, but in this case the publication is the New Yorker, which tells us to expect magazine-style writing, specifically longer-form feature pieces.
Another important clue is that this is published in the New Yorker's "Books" section, suggesting that this is a book review. And, if you know much about the New Yorker's book reviews, they often include things such as a history of the field the book addresses, comparisons to other related books, and discussion of what the book's thesis might imply about our world today.
This longer-form kind of book review can introduce important context and enrich your understanding of the world! I encourage you to keep an open mind and continue to read pieces outside your usual genre.
You'll be interested to know that Safe Browsing was introduced in Firefox 2 in 2006, and the malware check feature was introduced in 2014! I suggest searching with your favorite search engine to see how this feature works while still preserving the privacy of your URLs and downloads. It uses hashing! Here's a good link; I suggest you scroll down to the Privacy section and read carefully: https://feeding.cloud.geek.nz/posts/how-safe-browsing-works-...
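To give a flavor of the hash-prefix trick, here's a toy sketch (not the real protocol, which also canonicalizes URLs and checks several variations of each one; the "deadbeef" entry below is a made-up placeholder):

```typescript
// Hypothetical local set of 4-byte SHA-256 prefixes of known-bad URLs,
// stored as hex strings for easy comparison.
const localBadPrefixes = new Set<string>(["deadbeef"]); // placeholder entry

// Hash the URL locally and keep only the first 4 bytes.
async function urlHashPrefix(url: string): Promise<string> {
  const bytes = new TextEncoder().encode(url);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest).slice(0, 4))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Only when the 4-byte prefix matches does the browser ask the server for
// the full hashes under that prefix -- so the server never sees the URL
// itself, just a short prefix shared by many unrelated URLs.
async function needsFullHashCheck(url: string): Promise<boolean> {
  return localBadPrefixes.has(await urlHashPrefix(url));
}

needsFullHashCheck("http://example.com/").then(console.log); // almost certainly false
```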
Yes, but the phrasing about being "more proactive" seems to suggest that perhaps this approach has now been adjusted.
However, according to Bugzilla [0], it seems to be about blocking HTTP downloads on all pages, where previously only HTTP downloads on HTTPS pages were blocked, and then someone tweaked the wording to add the sinister-sounding part about being "more proactive".
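For the curious, here's my reading of the change as a sketch (not Firefox's actual code; the function and parameter names are mine):

```typescript
// Old behavior: block http:// downloads only when the embedding page was
// https:// (mixed content). New behavior: block them regardless of the
// page's scheme.
function shouldBlockDownload(
  downloadScheme: string,
  pageScheme: string,
  newBehavior: boolean,
): boolean {
  if (downloadScheme !== "http") return false; // https downloads are fine
  return newBehavior ? true : pageScheme === "https";
}
```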
CZI and CZF are structured as a for-profit LLC and a non-profit arm, respectively. Depending on where the money came from, it might not be a problem at all, though it could potentially jeopardize Harvard's nonprofit status. I'll leave it up to you to figure the odds of the IRS revoking that designation.
Could it just be hidden behind a prompt to enable it on a given site?
I feel like they’re just using fingerprinting as an excuse to not implement functionality that people want. Of course, I don’t really understand the problem space, so it’s likely I’m missing something.
Most of the APIs listed there are already gated behind an explicit per-site opt-in in the browsers that implement them, and for at least some of them, that opt-in is required by the spec defining them.
I don't understand how this is a fingerprinting risk either, and I'm pretty sure I'm not missing anything.
> I feel like they’re just using fingerprinting as an excuse to not implement functionality that people want.
Do you actually believe this? Do you not default to more practical explanations, like maybe they don't consider it worthwhile to support given the engineering cost versus the number of people who would actually use it?
For example, I'd say 9 out of 10 people I know who aren't tech literate have no idea the Health app exists on their iPhone. This includes people with Apple Watches. Similarly, it should be obvious that basically nobody ever knew about or used that one feature you liked.
I think engineering/support cost might be an excellent argument against implementing MIDI support in a browser, but the claim I responded to was that MIDI support wasn't on the table due to fingerprinting concerns (which are not obviously well-founded, from my outsider's perspective).
When I said "functionality that people want", I didn't mean to imply that there was a critical mass of people that made MIDI support in a browser obviously worthwhile, I just meant that some people want it and they're being told it won't happen because of fingerprinting.
When you make a web browser engine, you don't get to choose whether a use case is marginal or not (FWIW I use WebMIDI frequently in Chrome). You implement the standards or you perish.
Nobody wants a web browser that "chooses" not to work on some % of websites. Users choose, browsers implement. They are welcome to gate this feature behind a per-site permission prompt if they think it's insecure.
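That's roughly how it already works for Web MIDI in the browsers that ship it: the page has to get through a permission gate before it can see any devices. A sketch of what that looks like from page code (exactly when the prompt appears varies by browser):

```typescript
// Sketch of per-site gating with Web MIDI: requestMIDIAccess() is subject
// to a permission check, and the page only sees device details after the
// user opts in.
async function listMidiInputs(): Promise<void> {
  if (!("requestMIDIAccess" in navigator)) {
    console.log("Web MIDI not supported in this browser");
    return;
  }
  try {
    const access = await navigator.requestMIDIAccess({ sysex: false });
    for (const input of access.inputs.values()) {
      console.log(`MIDI input: ${input.name} (${input.manufacturer})`);
    }
  } catch {
    console.log("User (or browser policy) denied MIDI access");
  }
}
```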
Firefox and other browsers use an optimization called the bfcache (back-forward cache) that is intended to do exactly that. However, lots of web developers write their pages in a way that defeats the bfcache optimization. Moreover, bloated pages (including large JS object graphs) make the bfcache more likely to evict entries.
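If you're curious whether a given navigation actually hit the bfcache, you can watch for it from page code; this sketch also shows one classic way pages accidentally opt out ("unload" handlers block the bfcache in several engines):

```typescript
// event.persisted is true when the page was restored from the bfcache
// instead of being reloaded from scratch.
window.addEventListener("pageshow", (event: PageTransitionEvent) => {
  console.log(event.persisted ? "restored from bfcache" : "fresh load");
});

// Registering an 'unload' handler is a well-known bfcache blocker in
// several engines; 'pagehide' conveys the same signal without the penalty.
// window.addEventListener("unload", () => { ... });  // avoid this
window.addEventListener("pagehide", (event: PageTransitionEvent) => {
  if (event.persisted) {
    console.log("page is being stored in the bfcache");
  }
});
```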
Compare using back/forward on a boring HTML site like Hacker News to something like, say, the Google Search results page. What's funny is that Google itself has a page on how to optimize your website for bfcache implementations, with some Chromium-specific tweaks, but the left hand doesn't talk to the right at Google, so we're stuck with lots of full-page refreshes on Google properties.