It's extremely easy to install. The hard version of the installation is just plugging your phone in via USB, doing some things with the power and volume rocker buttons, and copy-pasting two or three terminal commands. The easy version is just plugging your phone into your computer and using the single-click installer button, which works in any Chromium-based browser.
As for how it works: it's been my daily-driver OS on my Google Pixel 6 nearly since the Pixel 6 came out, and I've never once had it crash on me. Ever. It's never bugged out or needed me to debug, fix, or maintain it in any way, either. Every app I've ever tried just works on it too, as if I were using stock Android; I honestly don't even notice the difference. Sometimes I even forget this isn't what my phone came with. Personally, my banking app (Discover) works, but I don't know about others, although I think they probably should, since it has Google Play services and the bootloader gets relocked once you're done installing.
LibreWolf does this, actually: it blocks websites from using WebGPU (and canvas) by default and then shows a popup letting you grant them permission.
The push to eliminate lifetimes in favor of loans (with this or Polonius), in pursuit of allowing a larger subset of correct programs to be expressed in Rust, makes sense at first blush (of course we want to be able to prove more correct programs correct! Who wants to fight the borrow checker over things we know are fine?), but I'm concerned that in the long run it will actually turn out to be a bad thing.
I vehemently disagree with the article that lifetimes are at all nebulous or hard to grasp. IMO they're a pretty straightforward concept, and they map really nicely onto single-ownership, move-based, RAII-style memory management and the underlying C-style memory concepts. Loans, OTOH, feel perhaps less nebulous, but they map more poorly onto the most comprehensible, concrete ways to think about low-level memory management (and less well onto concepts in other low-level languages like C++). Instead of seeing your program as a mostly 1D collection of mostly contiguous scopes that the program counter jumps around in, now you have to view it as a gigantic thicket of constraints.
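To illustrate the scope-shaped view I mean, here's a toy sketch (the comments mark what the checker is tracking; this compiles fine):

    fn main() {
        let s = String::from("hi"); // `s`'s lifetime begins here...
        {
            let r = &s;             // ...a borrow of `s` begins here...
            println!("{r}");        // ...and (with NLL) ends at its last use
        }
        println!("{s}");            // `s` is still alive and usable
    }                               // ...and `s` is dropped here, at end of scope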
So in essence, by making Rust's static analysis more powerful and less annoying up front, we're actually making it harder to fully grasp in the long run. It's sort of like the Haskell monad problem: the more powerful you make your compiler/language abstractions to allow proving more code, the less comprehensible everything gets. And I think in both cases, trading some power, past a certain point, for long-term comprehensibility via more straightforward concepts is better.
I like Rust a lot right now, but with Polonius, some of the unnecessary and weird syntactic sugar being added, and the fact that instead of full coroutines encompassing both async and generators (like Kotlin has) we're getting neutered coroutines for generators, with async as a separate-but-similar concept, I think the Rust designers have been making a lot of missteps lately. I know nothing's perfect, but it kind of sucks.
> we're getting neutered coroutines to do generators and async is a separate but similar concept
I'm not sure what's neutered about Rust's current plans for generators, and they aren't separate from async, they're the foundation that async desugars to.
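To make that concrete: an async fn conceptually compiles down to the same kind of state machine a generator does. A hand-rolled sketch (not the compiler's actual output; the names here are made up):

    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    // Roughly what `async fn two_steps(a: A) -> u32 { a.await; 42 }`
    // desugars to: an enum over the suspension points, resumed by `poll`
    // the same way a generator is resumed.
    enum TwoSteps<A: Future<Output = ()> + Unpin> {
        Awaiting(A),
        Done,
    }

    impl<A: Future<Output = ()> + Unpin> Future for TwoSteps<A> {
        type Output = u32;
        fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
            let this = self.get_mut();
            loop {
                let next = match this {
                    TwoSteps::Awaiting(a) => match Pin::new(a).poll(cx) {
                        Poll::Pending => return Poll::Pending,
                        Poll::Ready(()) => TwoSteps::Done,
                    },
                    TwoSteps::Done => return Poll::Ready(42),
                };
                *this = next; // advance to the next state and keep polling
            }
        }
    }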
I'm also not sure what your objection is to Polonius, which, so far, is still just a strictly more permissive version of the borrow checker, with nothing new to learn on the user end. AFAICT the notation in this blog post is not actually proposing any new Rust syntax, and is instead proposing syntax in the formal language that is being used internally to model Rust's type system.
> and they aren't separate from async, they're the foundation that async desugars to.
Yeah, I just looked it up again, and I don't know why I had it in my head that they were separate. You're correct: they're the same thing under the hood, so honestly that eliminates my biggest problem with them.
> I'm also not sure what your objection is to Polonius, which, so far, is still just a strictly more permissive version of the borrow checker, with nothing new to learn on the user end.
The entire model is different under the hood, though, since it switches from lifetimes+borrows to loans, so to fully understand its behavior the user really would have to change their mental model, and as I said above, I'm a huge fan of the lifetimes model and less so of the loans model. It just feels much more natural to treat the ownership of a memory object, and therefore the span of your code in which that object lives, as the fixed point, with borrows being wrong for outliving what they refer to, than to treat borrows as the fixed point, with objects being wrong for going out of scope and getting dropped before the borrow ends. The fundamental memory management model of Rust is single ownership of objects, moves, and scope-based RAII via Drop, so the lifetime of an object really is the more basic building block of the memory model; borrows conceptually orbit around that and naturally get adjusted to fit it, with the checker forcing you to adhere to it. The loan-based way of thinking would make more sense for an ARC-based language, where references actually are more basic, because objects really do live only as long as there are references to them.
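Concretely, take the classic error (this intentionally doesn't compile):

    fn main() {
        let r;
        {
            let s = String::from("hi");
            r = &s;
        }                // error[E0597]: `s` does not live long enough
        println!("{r}");
    }

In the lifetimes framing, `s`'s scope is the fixed point and the borrow `r` is at fault for outliving it; in the loans framing, the live loan held by `r` is the fixed point and dropping `s` is the conflicting action. Same rejected program, opposite mental anchor.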
> you can't pass values into resume and get them out from the yield statement in the coroutine
I think the linked comment is out of date and that this is supported now (hard to tell, because it hasn't been close enough to stabilization to be properly documented): https://github.com/rust-lang/rust/pull/68524
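A heavily hedged sketch of what resume arguments look like, since this is nightly-only and the names/syntax have shifted over time (generators were renamed to coroutines, and recent nightlies want a `#[coroutine]` attribute on the closure):

    #![feature(coroutines, coroutine_trait)]
    use std::ops::{Coroutine, CoroutineState};
    use std::pin::Pin;

    fn main() {
        // Each `yield` expression evaluates to whatever the caller
        // passes to the next `resume` call.
        let mut co = #[coroutine] |mut x: i32| {
            loop {
                x = yield x * 2;
            }
        };
        assert_eq!(Pin::new(&mut co).resume(3), CoroutineState::Yielded(6));
        assert_eq!(Pin::new(&mut co).resume(5), CoroutineState::Yielded(10));
    }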
As for Polonius changing the underlying mental model, I think this is a natural progression. Rust 1.0 tried to present a simple lexical model of borrowing, and then enough people complained that it has long since been replaced with non-lexical lifetimes, trading simplicity for "do what I mean". And since Rust isn't allowed to break any old code, if you want to keep treating borrowing as if it had the previous model, that shouldn't present any difficulties.
> They're neutered because they can't suspend and transfer control to a function other than the one that called them ("Note also that "coroutines" here are really "semicoroutines" since they can only yield back to their caller." https://lang-team.rust-lang.org/design_notes/general_corouti...)
Huh? At first glance wanting to do this sounds absolutely insane to me. As in, it sounds a lot like "imagine a function could return not just to its caller function, but to other functions as well! So if you call it, it might be that you won't even end up returning to the same function with which you started, but somewhere completely different".
What am I missing? This sounds absolutely insane at first glance, like a straight-up goto. What's the simplest use case for these weird yields to other functions?
That's the essence of coroutines vs. subroutines. If you've ever used a Unix pipe, you've used a limited form of coroutines. "ps | grep foo" tells process A, the shell, to fire up processes B (ps) and C (grep), and tells B to send results to C, not to its caller A. B runs a bit and yields, returning some data via stdout. C runs a bit, reading the result via stdin, then yields to B and waits for B to return more data. (On some systems pipes are even bidirectional, so it's possible for C to send results back to B when it yields, but off the top of my head I can't think of a real example.)
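A rough analogy in Rust, using threads and a channel; threads aren't coroutines, but the data-flow shape is the same: the producer hands results straight to the consumer, not back to whoever spawned it.

    use std::sync::mpsc;
    use std::thread;

    // Loose sketch of `ps | grep foo` (the "process" names are made up).
    fn main() {
        let (tx, rx) = mpsc::channel();
        let producer = thread::spawn(move || {
            for line in ["init", "foo-daemon", "bash"] {
                tx.send(line).unwrap(); // like `ps` writing a line into the pipe
            }
            // `tx` drops when this thread ends, closing the channel
            // and ending the consumer's loop (like EOF on a pipe).
        });
        let consumer = thread::spawn(move || {
            for line in rx {            // like `grep` reading from the pipe
                if line.contains("foo") {
                    println!("{line}");
                }
            }
        });
        producer.join().unwrap();
        consumer.join().unwrap();
    }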
I suppose conceptually they're similar, but you're oversimplifying how Unix pipes work to the point that it's an apples-to-oranges comparison with async/coroutines.
Do you have examples of "weird syntactic sugar"? I don't agree with every syntax decision in Rust, but my gripes have nothing to do with syntactic sugar, so I'm curious what you mean.
Counterpoint: my day-to-day programming is in GC languages. In those, how long something lives is basically anyone's guess; it could be the entire length of the program, or it could be two seconds from now.
This isn't something that really causes headaches; it's desirable. So long as there's a way to unwind things when you come into conflict with the borrow checker, I don't see the harm.
My assumption is that the lifetime explanation, while incorrect, will still work (I can't imagine it wouldn't, as that would break too much). So if you do run into these compiler problems, you can still revert to the old, simpler mental model and move forward.
Every language I'm aware of has value types (though not necessarily user-defined ones). Typically, numerics or "primitives" are the value types in GCed languages.
Those types don't have any sort of explicit lifetime different from a regular object type's. If you put an `int` on a `Foo` object, that `int` lives as long as the `Foo` does, for example.
Being a value type simply means that the memory representation of the value is used instead of a pointer/object reference.
That means that even when you do have user-defined value types, the same rules apply. How long these things "live" depends entirely on the context of what they're associated with. If you have a `Bar` value type on a `Foo` object, then `Bar` will live as long as `Foo` does, which means until `Foo` is garbage-collected.
The ones I listed have classic C- and C++-style value types available, with stack allocation or global memory segments, and mechanisms for deterministic resource management.
Instead you decided to point out the philosophical meaning of value types.
I think he's pointing out that value types and "does this language require/support explicit lifetime management" are actually unrelated. Fortran has value types but it doesn't have built-in memory management features. Perhaps you could substitute "has heap allocation" for "has value type"?
As I said and covered, value types are not deterministic resource management. Those are orthogonal concepts.
And in fact, confusing the two can lead to problems. C#, for example, does not necessarily store value types on the stack. [1] They can be allocated on the heap. C# doesn't even give a guarantee of how long a value type will live. Its only guarantee is the one I outlined earlier: that the thing is represented as a block of memory rather than a pointer, and that when you pass it around, it's done as a copy.
If your assumption is that "this thing is freed immediately when it's unused", that's a faulty assumption. Being a value type imparts no guarantee about how long something will live.
This is more than just a philosophical definition.
If you want deterministic resource management in C#, you use the `using` statement. If you're in Java, it's `try-with-resources`. If you're in C++, you rely on RAII. If you're in C... good luck. None of those things have anything to do with value types, because lifetime and type aren't related in those languages.
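For what it's worth, Rust sits in the C++ camp here: deterministic cleanup comes from `Drop` (RAII), not from value-ness. A toy sketch with a made-up type:

    // Runs its cleanup deterministically at end of scope, like a C++ destructor.
    struct TempFile(String);

    impl Drop for TempFile {
        fn drop(&mut self) {
            // Hypothetical cleanup; a real version would delete the file.
            println!("deleting {}", self.0);
        }
    }

    fn main() {
        let f = TempFile("/tmp/scratch".into());
        println!("using {}", f.0);
    } // `f` goes out of scope here; drop() runs now, not at some future GC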
> Once you abandon entirely the crazy idea that the type of a value has anything whatsoever to do with the storage, it becomes much easier to reason about it.
This is an ancient article. Pretty much only Rust has `.drop()` on such strong steroids. Either way, `struct` in C# means something very specific, and it is the same as `struct` in C. You can cast a malloc'd pointer in C to a struct and use it as such; you can do the same in C# (not that you should, but you can).
As for the article's contents: structs absolutely do go on the stack when you declare them as locals, except in select scenarios: async and iterator methods, both of which can capture variables that live across yields or awaits into a state machine (whether that's a struct or a class is a debug/release difference), and also ValueTask, which does not allocate when it completes synchronously.
If you care about memory lifetimes, the compiler will helpfully error out when you return a `ref` to a local scope, violating the lifetime (it has rudimentary lifetime analysis underneath, hence the scoped and unscoped semantics).
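Rust's borrow checker is the full-strength version of that same check; the equivalent code is rejected outright (intentionally non-compiling sketch):

    fn dangle<'a>() -> &'a String {
        let s = String::from("local");
        &s // error[E0515]: cannot return reference to local variable `s`
    }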
It notes that the spec does not say where structs go. And yet all existing implementations today have more or less identical behavior (and by "all" I mean .NET (CoreCLR), Mono (the dotnet/runtime flavor), Mono (the Unity flavor), and .NET Framework).
With that said, nothing but respect for the article's content and, of course, Eric Lippert. For the context of this discussion, however, it may be misleading. C# has gained quite a few low-level primitives since it was written, too.
It is no accident that the language designers of D, Chapel, Swift, Haskell, OCaml, and Ada looked at Rust's lifetimes and the current state of the borrow checker and decided that, while a good idea, they would rather keep the productivity of automatic memory management, with just enough lifetimes, than take the Rust approach.
Also, we should take into consideration that while these concepts are somewhat hard to use in Rust compared to other languages, in Cyclone they were even more complex; Rust is actually already the more ergonomic version, despite its complexity.
I dream of the day someone takes up the effort and develops a garbage-collected front-end for Rust. The language in itself is really nice: it's functional and has nice algebraic sum types. I also like the syntax a lot. One could perhaps even reuse crates, since they've already been proven correct by the Rust compiler.
But then why not use Scala, OCaml, Haskell, etc.? Rust is only interesting and novel in that it can target the very niche area where GCs are generally not allowed.
Try out Kotlin, or just spend more time with Rust to get comfortable with memory management. Once you get used to it, in so many cases the difference boils down to wrapping certain code in a pair of curly braces to ensure things get dropped.
I use manual lifetimes very infrequently. I honestly used to use them more when trying to put reference fields in structs, but I usually find myself reaching for Arc<Mutex<T>> now.
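i.e., something in this shape instead of a struct with a lifetime parameter (toy example):

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Shared ownership plus interior mutability: no lifetime
        // parameters, at the cost of refcounting and locking.
        let shared = Arc::new(Mutex::new(Vec::<i32>::new()));
        let handles: Vec<_> = (0..4)
            .map(|i| {
                let shared = Arc::clone(&shared);
                thread::spawn(move || shared.lock().unwrap().push(i))
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        println!("{:?}", shared.lock().unwrap());
    }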
It seems to redirect you away if you're in Firefox, I think? I tried to visit it in Firefox on my phone and it instantly redirected me to about:blank, but when I opened it in Chromium (also on my phone), it didn't. But I agree: I'm going to skim what he has to say, but the childish and arbitrary redirect thing is not leaving a good first impression at fucking all.
Edit: it's not Firefox. It's if you're coming from Hacker News. Copying and pasting the link into a new tab instead of clicking on the link gets through. Interesting.
All I had to do was right-click and select "view source"? It's not like he minifies his HTML to hell and back like people do on commercial sites, but maybe his build step is mangling his <script> element. That might be why it's only working on Firefox.
Incidentally, he has reasons for not liking people who comment on this site. He explains them in <https://starbreaker.org/blog/they-came-from-hacker-news/>. You need not agree with him, but he isn't being arbitrary or childish.
> Incidentally, he has reasons for not liking people who comment on this site. He explains them in <https://starbreaker.org/blog/they-came-from-hacker-news/>. You need not agree with him, but he isn't being arbitrary or childish.
I read that and, you know what, fair enough, honestly. Heck, look at my profile description: I actually agree with his assessment of Hacker News as a whole (the cryptofascist pseudointellectual groupthink here is incredible; I've almost quit this site entirely several times in frustration over it). Although it is frustrating to be lumped in with the rest of you when I'm only here because this is the only relatively interesting tech headline aggregator.
This is honestly pretty huge. I'm hoping this will seriously improve the Nvidia situation on the Linux desktop in the long run! I'll probably always need the proprietary driver for CUDA, though.
This is a good point, actually: despite my reservations about social media, if you want to get a big message out quickly, there really is nothing better than something like Twitter.
I tend to agree with this, sort of. In my opinion, stuff that's more real-time, ephemeral, one-to-one, and focused on closed groups below a certain size, like IRC or Discord, or stuff that is one-to-many like modern social media but much less visible and networked, like the classic blogosphere, tends to be much healthier and, in the long run, more rewarding than microblogging social media like Twitter or Facebook or Instagram or whatever.
I don't think it's necessarily the anonymity, though. Or even the algorithms: the fediverse has no algorithms, and yet in my experience (having been a minor player in some big drama there before I left) it's getting just as toxic and judgy as Twitter, maybe even more so.

I think it's more that in microblogging social media, posts and interactions are automatically broadcast to a huge audience that doesn't necessarily share any values or social norms; they're immediately discoverable and visible to everyone beyond the people who initially saw them; and they stay around in the zeitgeist more permanently than an instant message, instead of being ephemeral, in the moment, and directed at a few people within a closed community. So every post you make and every interaction takes on a grandstanding, performing-for-the-crowd, dare I say it, virtue-signaling (I say this as a leftist, lol, so you know I'm serious) tenor. Everything becomes automatically more adversarial and fake and just weird and distorted.

Then add on top of that the fact that posts and responses are highly asynchronous, so it's difficult to feel like you're really having a dialogue with a person instead of just combatting disembodied words on a screen, and difficult to engage compassionately, quickly correct misunderstandings, and respond to feelings in the moment. That means all the grandstanding and performing for the crowd and virtue signaling end up that much more dysfunctional and detached from actual human social interaction.
Chronological feed + boosting is an algorithm, and it's about as toxic an algorithm as you could get without building a data set of toxic vs. non-toxic posts.
Yeah, this is precisely what I said as well: being able to comment on anything and have that seen even without a follower count is important for making initial connections on a social network.
Pull not being a good model for two-way communication is going to be the major blocker here, in my opinion. It means people will only see comments and reactions on their posts from people they already subscribe to, because their RSS reader has no way of knowing whether anyone outside that list commented: you can't get notified by content sources you don't already know about, only poll the ones you do.

That's already bad enough. (One of the big complaints from people with large followings on the fediverse is that they often can't see what people are saying in the replies to their posts when those people's servers are blocked by their own server, which means hate, harassment, and one-sided conversations can fester, and often commenters can't even see each other's comments, leading to people exhaustingly saying the same things over and over.)

It also means that people who don't have any followers will essentially be muted by default: no one will see their comments or interactions, because no one polls their feed yet, which makes it basically pointless for them to interact at all. That sounds dispiriting and would probably lead to no one wanting to use this type of social media. Moreover, it creates a Catch-22, because a major way to get followers in the first place is to interact directly with other people and with bigger blog posts, making people aware of you so that some of them get interested in hearing more of what you have to say. Yet in this model you can't really interact until you already have a following, so your main means of getting a following is gated behind needing a following to work!
In the olden days when bloggers walked the earth, emitting lengthy posts over RSS, they solved this problem in two ways:
Firstly, by appending forms to the end of the post where someone could type out a reply that was more likely to be a few sentences or a paragraph, rather than a full-blown essay.
Secondly, by inventing "TrackBack", a standardized way for someone else's blog software to say "hey I wrote some stuff on my blog in response to this post of yours".
Both of these would get appended to the end of the blog post's page as "comments".
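From memory of the old Six Apart spec (so treat the field names as approximate), a TrackBack ping was just a form-encoded POST to the other post's trackback URL. A sketch using Rust's reqwest crate (with its blocking feature enabled; the example.com URLs are placeholders):

    use reqwest::blocking::Client;

    // Sends a TrackBack-style ping: "hey, I wrote a response to your post".
    fn send_trackback(client: &Client, ping_url: &str) -> reqwest::Result<String> {
        let params = [
            ("title", "My response post"),
            ("url", "https://example.com/my-response"),
            ("excerpt", "Here's the gist of what I wrote..."),
            ("blog_name", "Example Blog"),
        ];
        client.post(ping_url).form(&params).send()?.text()
    }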
This very quickly enabled the new problem of trackback/comment spam; the enduring solution in the world of blogs has been WordPress's Akismet plugin, which is a very centralized piece of the otherwise mostly-distributed infrastructure of RSS-based blogs. I think it's something like $15 a year on top of the $60 or so I pay for my WordPress site on cheap hosting.
"Announced" may be too strong; the link is to an internal email leak discussing this possibility, but "Automattic may be experimenting with selling data to midjourney/openai" is still pretty close to "Automattic's selling user data". Hell, I can see the positive spin blog post announcing this: "They're also giving Automattic a bunch of free cycles on improving Askimet's spam/ham filters, in exchange for a look at every other comment anyone is sending through Askimet ever. We're aware this is a thorny ethical issue; click HERE for the archives of our mailing list dedicated to this project."
As for Palantir, my inner paranoid says that if the FBI/NSA/etc. wants this data, they have some way of getting it whether or not a deal with a public front like Palantir is involved.
I think it's safe to assume at this point that all public content and most private content on the internet is being fed to both AI and American intelligence services.
Note that RSS is an ill-defined polling protocol. The server emits an RSS file containing the top N pieces of content.

All you can do is poll it at a greater or lesser frequency and hope you don't underpoll or overpoll. (I can easily fetch the RSS feed for an independent blog 1000 times for every time I fetch an HTML page, but should I? What if I wanted to follow 1000 independent blogs?)
With ActivityPub on the other hand you can ask for all updates since the last time you checked so there is a well-defined strategy to keep synced.
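One partial mitigation on the RSS side is conditional GETs, which make overpolling cheaper even though they still don't tell you how often to poll or what you missed. A sketch, assuming Rust's reqwest crate with its blocking feature; the feed URL is a placeholder:

    use reqwest::blocking::Client;
    use reqwest::StatusCode;

    // Polls a feed politely: if we have a Last-Modified value from a
    // previous fetch, send it back so the server can answer 304.
    fn poll_feed(client: &Client, last_modified: Option<&str>) -> reqwest::Result<Option<String>> {
        let mut req = client.get("https://example.com/feed.xml");
        if let Some(lm) = last_modified {
            req = req.header("If-Modified-Since", lm);
        }
        let resp = req.send()?;
        if resp.status() == StatusCode::NOT_MODIFIED {
            return Ok(None); // nothing new since the last poll
        }
        Ok(Some(resp.text()?))
    }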
There is RFC 5005 (Feed Paging and Archiving), but sadly the world of RSS tools has never been very specification-forward, mostly because the publishers of RSS feeds are even less interested.
ActivityStreams could be seen as a viable extension of RSS (aside from ActivityPub already being based on it), and it does support some simple pagination via its "Collection" vocabulary. Since ActivityStreams is ultimately based on JSON-LD, one could also add seamless querying support to an ActivityStreams endpoint based on SPARQL, for more advanced uses.
Thanks for the advice and compliment! :D I usually write them out and then read them and use the edit function to insert paragraph breaks after the fact, but I forgot to do that this time lol
Yes, it is a big hurdle. However, I think content discovery is generally a big part of any content platform, way broader than discovering "who has reacted to my content". If you solve the problem of content discovery in that broader sense, then you have already fixed this particular shortcoming of the pull model as well. If a service can inform you about new posts with a particular hashtag, it can most probably also tell you about reactions to a particular post.
And yes, I do realise that such services will tend not to be really decentralised (similar to the relationship between websites and search engines). But that means the downside is not that you don't get such discovery, but that you'll be reliant on more centralised services for it, whereas in the fediverse you would be less reliant on such services for finding out who has commented on your post (though, as you've mentioned, it will still not be enough).
> Yes, it is a big hurdle. However, I think content discovery is generally a big part of any content platform, way broader than discovering "who has reacted to my content". If you solve the problem of content discovery in that broader sense, then you have already fixed this particular shortcoming of the pull model as well.
Right, but I don't think that finding all RSS feeds on the internet satisfying some criterion, like publishing a hashtag or responding to a particular post, is a problem that can actually be solved in a principled way, because a fundamental limitation of the pull methodology is that you have to know the list of places you're checking beforehand; you can't get content from somewhere you didn't already know about. The only way to solve this would be some kind of crawling and indexing system that regularly crawls the entire internet looking for these expanded RSS feeds and categorizes them by various criteria in order to poll them. That is both a very large technical investment and has plenty of limitations of its own. So in the end you haven't really distributed the work of a social media system more equally after all; you've just inverted who does the work, going from a federated set of servers doing all the work of pushing content everywhere to a federated set of servers doing all the work of pulling content from everywhere.
I do recognise that such "aggregators" would be hugely centralised (if not outright monopolised, like the search engine space). However, maybe I'm wrong, but I don't see the federated model succeeding without such services either, so I honestly think of "the need for centralised content discovery" as an independent problem.
I see your negatives as mostly positive. Engagement and virality are inevitably cancerous to any social network, and comment velocity needs to be suppressed and controlled to reduce entropy and limit the degree to which that network can be used by people primarily interested in "getting a following." Any feature (or anti-feature) that makes a platform unattractive to capitalists and influencers and shitposting trolls is a good thing. Discouraging people from posting and commenting is a good thing. Making it difficult to network is a good thing. None of these things need to be impossible, but I do believe there needs to be enough friction to make low hanging fruit and opportunism not worth the effort.
Otherwise everything gets taken over by AI and bots and psychopaths and propagandists and turns to shit.