While this is from ByteDance, which is also behind TikTok, this algorithm is likely not the one behind TikTok.
Instead, it is likely a component that powers ByteDance's commercial recommender system solution, which they market to e-commerce companies: https://www.byteplus.com/en/product/recommend
This was mentioned in past discussions of the paper on HN.
And even if aspects of this are used for TikTok:
(a) it would be just one of many components of their recommendation system, and
(b) the TikTok recommendation system has changed a lot in the 2+ years since this was published.
So take what you see here with a grain of salt. After reading the paper and the code, you will NOT know how TikTok's recommendations work.
There's also a heavy element of manual curation in TikTok. They have people putting their fingers on the scales to decide what content gets promoted. Where are those people, and what's their agenda? Who knows.
Releasing the recommender on GitHub is a way to try to defuse that criticism. But it's just one piece of the puzzle that is TikTok's content distribution.
This is true for all social media algorithms. None of them are purely automated and for good reason. You need humans going in and tweaking the outcomes to ensure users have a good experience.
Of course, when the conversation is about TikTok, this often becomes accusations of propaganda.
But YouTube, Facebook, and Twitter all exert significant control over their algorithms and things like their Homepage, Trending Topics, etc. The conservative right often labels such curation as liberal propaganda.
Sure. HN is very actively moderated, and most people here probably agree that it’s worth it. (Those who don’t like it presumably don’t stay here.)
But at the massive scale of Meta or ByteDance, there is a difference between removing problematic content and actively promoting content. They’re two sides of the same coin, but the first is applied based on reactive guidelines (“we’ve previously decided this kind of content shouldn’t be here”) while the second is ultimately an in-the-moment opinion on whether more people should be seeing the content. The line is blurry, but these are not the same thing, and vibes-based content promotion is easier to manipulate.
Are there CCP agents working at ByteDance? Of course there are because it’s practically mandatory — just like American telecom companies have NSA wiretap rooms. Do those CCP agents get consulted on which foreign political candidate should get the viral boost? Perhaps not. But it appears they’ve built a system where this kind of thing is possible and leaves little paper trail because the curated boosting is so integral to the platform.
I explicitly did not mention Hacker News, as its homepage feed is primarily driven by user voting - neither an opaque algorithm nor pure chronology. Dang’s moderation is not comparable to other social media platforms’ feed curation.
> there is a difference between removing problematic content and actively promoting content
Again, there is ample evidence that all major social media platforms do exactly this, not just TikTok. Which is why I said:
>> The conservative right often labels such curation as liberal propaganda.
> where this kind of thing is possible and leaves little paper trail
Could you point to the paper trail that Meta, Google, or Twitter provide for their curation actions? Otherwise, this just proves my point: people blindly accuse Chinese platforms of shady activity while treating Western ones as paragons of virtue.
I really don’t agree with this - chronological sorting favors large/power users who spam content at all hours of the day over the smaller users you probably care more about.
If you don't have a way to manually push the algo, then you'd never be able to sell features like promoted posts and the like. And why would you not want a feature to sell?
Ads can be inserted into any kind of feed; there is no difference between chronological and algorithmic here. It’s usually a simple calculation of a target number of ads displayed per posts viewed, with per-user/advertiser caps.
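The ratio-with-caps logic described above can be sketched roughly like this (a minimal illustration, not any platform's actual implementation; the function name, parameters, and ad dict shape are all invented for the example):

```python
# Hypothetical sketch: interleave ads into an already-ranked feed
# (chronological or algorithmic - the insertion step doesn't care)
# at a target ratio of ads per posts viewed, with a per-user
# frequency cap on each advertiser.

def insert_ads(posts, ads, ads_per_posts=0.2, per_advertiser_cap=2):
    """Return a merged feed with roughly `ads_per_posts` ads per post."""
    shown = {}              # advertiser -> times shown to this user
    ad_queue = list(ads)    # candidate ads, already ranked/priced upstream
    feed, credit = [], 0.0
    for post in posts:
        feed.append(post)
        credit += ads_per_posts          # accrue a fractional ad slot per post
        while credit >= 1.0 and ad_queue:
            ad = ad_queue.pop(0)
            if shown.get(ad["advertiser"], 0) >= per_advertiser_cap:
                continue                 # capped advertiser: skip, keep slot open
            feed.append(ad)
            shown[ad["advertiser"]] = shown.get(ad["advertiser"], 0) + 1
            credit -= 1.0
    return feed
```

Note the key point of the argument: `posts` can come from any ranking at all, since the ad logic only counts posts viewed and enforces caps on top.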