Feels like a lot of jumping to conclusions in the article. It just assumes causation as if it's been proven.
It also assumes the finding is negative, which is more subtle but similarly problematic. A decline in memory might just mean that our brain is reallocating capacity for something else. It could also mean that the nature of what we're trying to remember has changed and it's now more difficult (e.g. there's more entropy in the data, or the data is changing more often).
I'm not exactly sure why "big", but it's slow because it has a worse change-tracking and rendering model, which requires it to do more work to figure out what needs to be updated, unless you manually opt out when you know. Solid, Vue and other signals-based frameworks have granular change tracking, so they can skip a lot of that work.
But this mostly applies to subsequent re-renders, while the things mentioned in the article are more about initial render, and I'm not exactly sure why React suffers there. I believe React can't skip the VDOM on the server, while Vue or Solid use compiled templates that allow them to skip it and render directly to a string, so maybe it's partially that?
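To make "granular change tracking" concrete, here's a minimal hand-rolled sketch of the idea in TypeScript - not Solid's or Vue's actual internals - where a signal remembers which effects read it and re-runs only those when it changes:

```typescript
// Minimal sketch of signal-based granular change tracking.
type Effect = () => void;

let currentEffect: Effect | null = null;

function createSignal<T>(value: T): [() => T, (next: T) => void] {
  const subscribers = new Set<Effect>();
  const read = () => {
    // Whoever reads this signal inside an effect becomes a subscriber.
    if (currentEffect) subscribers.add(currentEffect);
    return value;
  };
  const write = (next: T) => {
    value = next;
    // Only the effects that actually read this signal re-run;
    // no component tree is re-rendered and no output is diffed.
    subscribers.forEach((fn) => fn());
  };
  return [read, write];
}

function createEffect(fn: Effect): void {
  currentEffect = fn;
  fn(); // the first run registers the effect's dependencies
  currentEffect = null;
}

// Usage: only this one effect re-runs when `count` changes.
const [count, setCount] = createSignal(0);
createEffect(() => console.log(`count is now ${count()}`));
setCount(1); // logs "count is now 1"; nothing else in the app is touched
```

React discovers the same "what changed" information by re-running the component and diffing the VDOM output, which is the extra work you pay for on every update unless you opt out with memoization.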
JS can be 100x or even 1000x more expensive to process than images. JS also blocks the main thread, while images can be processed in the background (and on the GPU).
This is a great write-up. I especially appreciate the focus on mobile, because I find it's often overlooked, even though it's the dominant device for accessing the web. The reality of phones is brutal, and delivering a good experience for most users in an SPA-style architecture is pretty hard.
"Slowness poisons everything."
Exactly. There's nothing more revealing than seeing your users struggle to use your system: waiting for content to load, rage-clicking while waiting for buttons to react, watching animations deliver 3 frames in 5 seconds.
Engineering for a P75 or P90 device takes a lot of effort, way beyond what frameworks offer you by default. I hope we'll see more focus on this from the framework side, because I often feel like I have to fight the framework to get decent results - even with something like Vue, which looks pretty great in this comparison.
Yea, this is pretty annoying, and it's not the only problem in this field. There's a bunch of theater and misunderstanding in the marketing space. I feel like marketing people just don't get it. They seem hopelessly incapable of accepting that matching people in whatever way possible is the exact practice that laws like GDPR are trying to target. You cannot get around it by hashing, fingerprinting, ad IDs, cookieless matching or whatever.
They're heavily incentivised not to get it, both internally through company KPIs that haven't kept pace with the reality of GDPR and externally through ad platforms that continue to demand excessive amounts of data without providing suitable alternatives.
Yea, companies are probably abusing this (as I noted in the sibling comment), but I think marketers themselves truly don't get it. I've been on the implementation side of this and it's always a frustrating debate. It's pretty clear that they think this is just about picking a different vendor with "GDPR" on its list of features, not realizing that the law fundamentally targets the metrics they want to use, and they just cannot do it "the old way" they are used to.
I don't think this is a problem limited to marketers, to be fair. How many developers are still building all these data collection and delivery pipelines? They should know better too, no?
I also think that many vendors in this space are abusing the fact that marketers are not technical people, so they just wave around "we're GDPR ready" and "anonymized data" slogans so that marketers feel they can tick the "GDPR" box and get all the metrics they are used to.
All while of course not realising that GDPR implementation is partially on them, and that some of those metrics are literally impossible to implement without crossing into GDPR territory. Any company saying they are "fully GDPR compliant" while also giving you retention and attribution metrics by default is probably confusing you in exactly this way.
I don't know, I run a social media platform for learning and bots are almost never a problem, apart from the occasional bandwidth spike. Most abuse comes from people, and we've definitely applied the principles from the article, because otherwise we would just get overwhelmed.
It is very different. I develop an app that has iOS, Android and web versions, and iOS is by far the most problematic platform in this regard. It takes a few minutes to deploy a fresh web app. It takes a few weeks of being bullied by Apple to ship an iOS app, and it's a very miserable experience.
I partially agree that the authority has its use and that restricting the freedom to publish apps in some way is reasonable to prevent problems.
The problem with Apple (and to a lesser extent Google) is that it goes way further than that. It dictates what technologies you can use, it dictates a ton of specific rules for how your app should behave, it gatekeeps your bug fixes, and it takes an absolutely obnoxious share of your revenue while providing just the bare minimum service, with decades-old bugs you have to work around. Many of those things also make the service worse for their users - it really feels like, as a developer for their platform, you're in a hostile relationship with them, and you pay for it.
> As Facebook would push for more engagement, some bands would flood their pages with multiple posts per day
The causation is the opposite, and it's the whole problem with chronological feeds, including RSS - chronological feeds incentivise spam-posting; posters compete on quantity to get attention. That's one of the main reasons FB and other sites implemented algorithmic feeds in the first place. If you take away the time component, posters compete on quality instead.
> The story we are sold with algorithmic curation is that it adapts to everyone’s taste and interests, but that’s only true until the interests of the advertisers enter the picture.
Yea, exactly, but as emphasized here: the problem is not curation, the problem is the curator. Feed algorithms are important; they solve real problems. I don't think going back to RSS and chronological feeds is the answer.
I'm thinking of something like "algorithm as a service," which would be aligned with your interests and tuned for your personal goals.
> I'm thinking of something like "algorithm as a service," which would be aligned with your interests and tuned for your personal goals.
RSS is just a protocol. You could make a reader now with any algorithm you want that displays feeds.
In fact, I can't imagine that no one is using the AI boom to say they will build a decentralized Twitter using RSS plus AI for the algorithm.
>RSS is just a protocol. You could make a reader now with any algorithm you want that displays feeds.
Your proposal to filter on the client side where the RSS reader runs can't do what the gp wants: algorithmic suggestions on the _server_ side.
The issue with an AI algorithm applied to client-side RSS is that it's limited to the closed set of items from the particular feed(s) that RSS happened to download from whatever websites the user pre-defined in the whitelist.
Example of inherent client-side limitation would be how Youtube works:
- a particular Youtube channel about power tools: you can use RSS to get a feed of that channel, then use further customized client-side filtering (a local AI/LLM) to ignore any videos that talk about politics instead of tools.
- the Youtube algorithm of suggested and related videos for discovering unknown channels or topics: RSS can't subscribe to this, so there's no ability to filter on the client side. Either Youtube itself would have to offer a "Suggested Videos as RSS feed" -- which it doesn't -- or a 3rd-party SaaS website would have to constantly scrape millions of Youtube videos and then offer them as an RSS feed. That's not realistic as Google would ban that 3rd-party scraper, but let's pretend it was allowed... getting millions of XML records only to filter them client-side and throw away 99% is not ideal. So you're still back to filtering on the server side to make the RSS feed manageable.
In the "explore-vs-exploit" framework, the "explore" phase is more efficiently accomplished with server-side algorithms. The "exploit" phase is where RSS can be used.
- "explore" : use https://youtube.com and its server-side algorithms to navigate billions of videos to find new topics and content creators. Then add interesting channel to RSS whitelist.
- "exploit" : use RSS to get updates of a particular channel
> Example of inherent client-side limitation would be how Youtube works:
> ...
I thought about this problem a long time ago but never did anything substantive with it. I guess I'll articulate it here, off-the-cuff:
People used to post a "blogroll" (and sometimes an OPML file) to their personal blogs describing feeds they followed. That was one way to do decentralized recommendations, albeit manually, since there was no well-known URL convention for publishing OPML files. If there were a well-known URL convention for publishing OPML files, a client could build a recommendation graph. That would be neat but would only provide feed-level recommendation. Article-level recommendation would be cooler.
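Feed-level could already work with something as simple as this TypeScript sketch, assuming a hypothetical well-known location like /.well-known/subscriptions.opml (no such convention actually exists, which is the gap):

```typescript
// Feed-level recommendations from published blogrolls: rank candidate
// feeds by how many of my existing subscriptions also list them.
// The well-known path is hypothetical; nothing publishes OPML there today.
const WELL_KNOWN_OPML = "/.well-known/subscriptions.opml";

async function fetchBlogroll(siteOrigin: string): Promise<string[]> {
  const res = await fetch(new URL(WELL_KNOWN_OPML, siteOrigin));
  if (!res.ok) return [];
  const doc = new DOMParser().parseFromString(await res.text(), "text/xml");
  // OPML lists feeds as <outline xmlUrl="..."> elements.
  return Array.from(doc.getElementsByTagName("outline"))
    .map((o) => o.getAttribute("xmlUrl"))
    .filter((url): url is string => url !== null);
}

async function recommendFeeds(mySubscriptions: string[]): Promise<string[]> {
  const counts = new Map<string, number>();
  for (const sub of mySubscriptions) {
    for (const feed of await fetchBlogroll(new URL(sub).origin)) {
      if (!mySubscriptions.includes(feed)) {
        counts.set(feed, (counts.get(feed) ?? 0) + 1);
      }
    }
  }
  // Most-co-followed unknown feeds first.
  return [...counts.entries()].sort((a, b) => b[1] - a[1]).map(([feed]) => feed);
}
```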
One of the various federated/decentralized/whatever-Bluesky-is "modern" re-implementations of Twitter/NNTP could be used to drive article-level recommendations. I could see my feed reader emitting machine-readable recommendation messages based on ratings I give while browsing articles. I would consume these recommendations from others, and then could have lots of fun weighting recommendations based on social graph, algorithmic summary of the article body, etc.
GGP does express interest in Algorithm-as-a-Service (AaaS), but I don't see why AaaS or server-side anything would be required to have non-chronological feed algorithms. Client-side is perfectly suitable for the near-universal case where feed servers don't overwhelm the client with spam (in which case you remove the offending server from your feed).
To your points about YouTube-style algorithmic discovery, I do agree that that would require the server to do things like you describe. So I think that there could be both client-side and server-side algorithms. In time, who knows? Maybe even some client-server protocol whereby the two could interact.
>, but I don't see why AaaS or server-side anything would be required to have non-chronological feed algorithms.
You assume gp's idea of a "non-chronological" feed means taking the already-small subset downloaded by RSS and running a client-side algorithm on it to re-order it. I'm not debating this point because this scenario is trivial and probably not what the gp is talking about.
I'm saying gp's idea of "non-chronological" feed (where he emphasized "curation is not the problem") means he wants the huge list of interesting but unknown content filtered down into a smaller manageable list that's curated by some ranking/weights.
The only technically feasible way to run a curation/filtering algorithm over the unexplored vastness out on the internet -- trillions of pages and petabytes of content -- is on servers. That's the reasonable motivation for why gp wants Algorithm-as-a-Service. The issue is that the companies wealthy enough to run the expensive datacenters to do that curation ... want to serve ads.
Maybe you're right about what they meant. I'll not debate that.
I will say that, for my purposes, I would definitely like an RSS reader that has more dynamic feed presentation. Maybe something that could watch and learn my preferences, taking into account engagement, time of day, and any number of other factors.
What's more, with primarily text-oriented articles, the total number of articles can be extremely high before overwhelming the server or the client. And a sufficiently smart client needn't be shy about discarding articles that the user is unlikely to want to read.
It's all subjective. There is no clear quantification of X Attention consumed = Y Value produced. So saying what the algo does is important is like saying astrology is important. Or HN is important ;) At the end of the day most info produced is just entertainment/placebo. 3-inch chimp brains have upper limits on how much they can consume and how many updates to their existing neural net are possible. Since there is nothing signaling these limits to people, people (both producers and consumers of info) live in their own lala land about what their own limits are or when those limits have been crossed; mostly everyone is hallucinating about the Value of Info.
The UN report on the Attention Economy says 0.05% of info generated is actually consumed. And that was based on a study 10-15 years ago.
I've patched miniflux with a different sorting algorithm that is less preferential to frequent posters. It did change my experience for the better (though my particular patch is likely not to everyone's taste).
It is a bit strange that RSS readers do not compete on that, and are, generally, not flexible in that respect.
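To give a sense of the direction (a sketch of the idea, not my actual patch): score items by recency, then discount by how much the same feed has been posting, so a monthly blogger isn't buried under a daily firehose.

```typescript
// Sketch of a sort that is less preferential to frequent posters.
// The constants are arbitrary knobs, not tuned values.

interface Item {
  feedId: string;
  publishedAt: Date;
  title: string;
}

function rank(items: Item[], now: Date = new Date()): Item[] {
  // How many items did each feed publish in the window being ranked?
  const perFeed = new Map<string, number>();
  for (const item of items) {
    perFeed.set(item.feedId, (perFeed.get(item.feedId) ?? 0) + 1);
  }

  const score = (item: Item): number => {
    const ageHours = (now.getTime() - item.publishedAt.getTime()) / 3_600_000;
    const recency = 1 / (1 + ageHours);   // newer is better...
    const flood = perFeed.get(item.feedId) ?? 1;
    return recency / Math.sqrt(flood);    // ...but flooding dilutes each item
  };

  return [...items].sort((a, b) => score(b) - score(a));
}
```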
Social media targets engagement, which is not a good target. Even a pure chronological sort is better.
> The causation is the opposite, and it's the whole problem with chronological feeds, including RSS - chronological feeds incentivise spam-posting; posters compete on quantity to get attention.
That doesn't make any sense. Quantity might make you more prominent in a unified Facebook feed, but an RSS reader will show it like this:
Sam and Fuzzy (5)
Station V3 (128)
They've always displayed that way. You never see one feed mixed into another feed. This problem can't arise in RSS. There is no such incentive. Quantity is a negative thing; when I see that I've missed 128 posts, I'm just going to say "mark all as read" and forget about them. (In fact, I have 174 unread posts in Volokh Conspiracy A† right now. I will not be reading all of those.)
† Volokh Conspiracy is hosted on Reason. Reason provides an official feed at http://reason.com/volokh/atom.xml . But Volokh Conspiracy also provides an independent feed at http://feeds.feedburner.com/volokh/mainfeed . Some of their posts go into one of those feeds, and the rest go into the other. I can't imagine that they do this on purpose, but it is what they do.
> They've always displayed that way. You never see one feed mixed into another feed. This problem can't arise in RSS.
All readers I know have the option to display all feeds chronologically, or an entire folder of feeds chronologically. In most, that's the default setting when you open the app/page.
I always use it like that. If I'd want to see all new posts from a single author, I might as well just bookmark their blog.
If you bookmark Dave's blog, you have to check his blog every day to see if there's something new, even if Dave only posts monthly. Or you check less often, and sometimes discover a new post long after the discussion in the comments has come and gone.
If you put Dave's blog in your RSS reader, one day "Dave (1)" shows up in your list of unread sources and you can read his new post immediately, and you didn't need to think about Dave's blog any other day.
I could use the "all articles" feed in my RSS reader (TT-RSS), but I would never do such a thing unless all the blogs I follow had similar posting frequencies that would mesh well together, which they don't. I never use the front page of Reddit for the same reason: the busy subs would drown out the ones that get a post a week.
> All readers I know have the option to display all feeds chronologically, or an entire folder of feeds chronologically. In most, that's the default setting when you open the app/page.
The option might exist. It was certainly not the default in mainstream readers in the past and it still isn't now. I never encountered it in Google Reader (as mainstream as it gets), or in Yoleo (highly niche), or in Thunderbird (also as mainstream as it gets).
Whether a bunch of unused projects make something strange the default doesn't really have an impact on the user experience. This is not something you can expect to encounter when using RSS.
> If I'd want to see all new posts from a single author, I might as well just bookmark their blog.
That approach will fail for two obvious reasons:
1. The bookmark is not sensitive to new posts. When there is no new post, you have to check it anyway. When there are several new posts, you're likely to overlook some of them.
2. Checking one bookmark is easy; checking 72 bookmarks is not.
It was the default view in Google Reader, the "All Items" view.
A mix of all feeds, ordered chronologically, is the default view in tt-rss, miniflux, inoreader, feedly, netnewswire, and all RSS readers I've ever seen.
> the act of selling something (such as a newspaper column or television series) for publication or broadcast to multiple newspapers, periodicals, websites, stations, etc.
>> the syndication of news articles and video footage
> This article provides a simple guide to using RSS to syndicate Web content.
Note that this is a guide to creating an RSS feed from the publisher's perspective. It is not possible for two feeds to be displayed together, or at all, on the publisher's end. How do you interpret the verb syndicate?
Yea, dismiss a whole argument based on your specific experience with your specific reader and your specific taste. Not to mention your argument proves the point - they already got your attention even though you didn't read the posts, and you even shared the name of the blog here. However the feed is arranged, posters who compete for attention will optimize for it and eventually bubble up. That's why "the algorithms" are complicated in practice - you're always fighting against Goodhart's law.
> I'm thinking of something like "algorithm as a service," which would be aligned with your interests and tuned for your personal goals.
I thought about this back in 2017 (within the context of LinkedIn signal-to-noise) [1]. I had hoped for a marketplace/app store for algos. For example:
"What if the filter could classify a given post as 30% advertorial 70% editorial, and what if you could set a threshold for seeing posts of up to 25% advertorial but no more?"
and
"What if the filter could identify that you’d already received 25 permutations of essentially the same thing this month, and handle it accordingly."
> If you take away the time component, posters compete on quality instead.
That is verifiably false simply by looking at the state of social media. What they compete on is engagement bait, and the biggest of them all is rage.
By your logic, social media would be a panacea of quality posts by now, but it’s going to shit with fast-paced lies. Quick dopamine hits prevail, not “quality”.
> I'm thinking of something like "algorithm as a service," which would be aligned with your interests and tuned for your personal goals.
So, another service dedicated to tracking you and mining your data. I can already smell the enshittification.
I meant quality in the sense of the quality-vs-quantity dilemma, not objective quality. In other words, posters will start optimizing individual posts instead of optimizing their volume.
[edit] and indeed, this only solves the problem of excessive posting; this is just the beginning.
I don't necessarily agree with the statement "chronological feeds incentivise spam-posting; posters compete on quantity to get attention" - if someone spam-posts, I am very likely to unsubscribe. This would be true for both chronological and algo feeds.
>> I'm thinking of something like "algorithm as a service," which would be aligned with your interests and tuned for your personal goals.
Now that is something I would be interested in. I believe some of the RSS aggregators are trying to offer this too, but mostly the SaaS ones, not self-hostable open-source ones.
I think these are good points, and also a reason why I never understood why people wanted Digg and Reddit to supply them with RSS feeds back in the heyday of RSS.
That's nonsense. If the problem truly was spam, then the "algorithm" would be a simple and transparent penalty proportional to the frequency of posts. The goal is not that (it's """engagement""") and the algorithm is not that either (it's a turbo-charged Skinner box attacking your mind with the might of ten thousand data centres).
But that's a bullshit excuse. Just like with email, the answer to spam posting is that the person gets unfollowed/unsubscribed.
When it's an algorithm, the user is incentivized to produce content in order to increase their chances of getting a hit. Secondarily, the loss of visibility increases the value of advertising on the platform. It's a lose-lose for users: first, they are forced to use the platform more for fear of missing something; second, they have to post more to get any reach. The platform wins on increased engagement, overall content depth, ad revenue, and the ability to sneak in a whole lot of shit the user was never interested in or never followed. Facebook & Instagram are now functionally high-powered spam engines.
Interestingly, the FT has an article today about a drop in social media usage ( https://www.ft.com/content/a0724dd9-0346-4df3-80f5-d6572c93a... ) - one chart titled "Social media has become less social" shows a 40% decline since 2014 in people using social media "to share my opinion" and "to keep up with my friends". In many ways, what is being referred to as social media has become anti-social media.
An algorithmic email feed would be useless, as would any sort of instant messenger, yet that's exactly what social media turned in to. Twitter/X is teetering in that direction. The chronological feed still works and is great. Anyone who posts a lot and doesn't balance out the noise with signal I just unfollow.
I might be wrong about this one, but one outcome of generative AI might be an engagement cliff. Some users will be very susceptible to viewing fake photos and videos for hours (the ones still heavily using FB likely are), but others may just choose to mentally disengage and view everything they see on FB, IG, Tiktok as fake.
Research good, article reasoning sloppy.