Yeah, definitions are tricky. If you saw a house consumed by fire, you might look at the circumstances and conclude that it was likely the offspring of the fire that consumed the house across the street, but there wouldn't be anything about the fire's phenotype that would help you come to that conclusion.
If the flames carried the characteristic shape of their parent fire, and they could be distinguished from the offspring of some other fire by their features alone, then I'd be arguing that fire is alive.
I feel like I'm at risk of classifying certain periodic crystals as alive here, but they wouldn't meet the thermodynamic requirements that I have in mind (which fire does meet).
Most definitions of life are very arbitrary. When it comes to astrobiology, we mostly look for things that look like us because if we didn't, the search space would be incomprehensibly large and frankly there's not a lot we could say.
What we would call such a thing depends on how you define and understand life. If one defines it very widely, as a dynamic dissipative process in an open thermodynamic system which organizes matter and energy, reacts to its surroundings, can reproduce itself, may have some symbolic representation of its surroundings, and possibly even cares for its offspring, then intelligent life on a star is entirely possible, even if it would be very, very different from us.
If, on the other hand, you define it as a set of dissipative structures which are based on organic carbon chemistry, are able to exist somewhere between -30 and 40 degrees centigrade, replicate themselves, use deoxyribonucleic acid (DNA) to store generational information, nourish their young with mammary glands, and walk on two legs, then we are probably pretty alone in the universe.
Another thing to think about: all life forms on our planet are relatives, in the sense that we have common ancestors and shared DNA, even such "alien" creatures as tarantulas or centipedes. Given that, I am not entirely sure that it would be pleasant for us to meet technologically superior aliens. We would have to pray that they have much more empathy for other living things than we primates can usually muster ourselves.
"If you on the other hand, define it as a set of dissipative structures which are based on organic carbon chemistry and which is able to exist somewhere between -30 and 40 degrees centigrade, replicates itself, use Deoxyribonucleic acid to store generational information, nourish their young by mammary glands, and walk on two legs, we are probably pretty alone in the universe."
I have heard arguments that all life must be carbon-based, and the temperature range is really a proxy argument for life requiring liquid H2O. Everything after DNA is clearly a joke.
> The main reason I use Linux is because it's the only platform that actually respects me as user.
Exactly this. And one thing proprietary software nowadays sucks at, big time, is being economical with my attention and not wasting it. Imagine this: one day I arrived at work, logged in to the Windows box, and opened the web browser to read new mail messages about the thing we were furiously and deeply working on, and what I got instead was some in-browser news advertisement from MSN about sexually abused teenagers. You can imagine how hard I cursed. Try placing some tabloid front page over your boss's keyboard while he or she is having their morning coffee, and see what happens.
> While some Linux purists dislike containerized application installation programs such as Flatpak, Snap, and AppImage, developers love them. Why? They make it simple to write applications for Linux that don't need to be tuned just right for all the numerous Linux distributions.
The good thing is that for end users, Guix and Nix (as package managers) cover exactly the same set of features - but both are much friendlier to developers than containerized apps. And of course they are truly FLOSS and "open source" in that everything is built from source and the sources are readily available to the users. This matters, because it makes the software friendlier: it is user-friendly because it is written by the users, in contrast to software from a party which has other things as its top priority.
> Genuinely interested: what do people think of their tons of third-party dependencies in Rust?
Here are three things I think; in fact, they have nothing to do with Rust specifically:
1. The easier it is to add dependencies, the more dependencies will be added on average - unless you work purposefully against that.
2. The effect of a rising average number of dependencies is that each library's set of transitive dependencies grows as well - its dependencies' dependencies, and so on - up to dependency graphs several hundred nodes in size. That is essentially exponential growth in the depth of the graph (see the sketch after this list). An example would be the dependency graph of jquery.
3. I observe that this "exponential" growth can have chain-reaction-like effects, like a mass of U-235 approaching critical mass. Below the critical value, some neutrons flying around might trigger a few fissions, but these die out. Above that value, neutrons lead to fissions which lead to more neutrons, and so on. The same can happen with complexity in multi-component software: at some point, complexity goes through the roof.
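To put rough numbers on point 2, here is a minimal sketch (in Rust, only because the thread happens to be about Rust). The branching factor and depth are made-up illustration values, not measurements of any real ecosystem, and real graphs deduplicate shared nodes, so treat the result as an upper bound:

```rust
// Back-of-the-envelope numbers for point 2. If every package pulls in
// `b` direct dependencies on average and the graph is `d` levels deep,
// a full dependency tree has 1 + b + b^2 + ... + b^d nodes.
fn transitive_nodes(b: u64, d: u32) -> u64 {
    (0..=d).map(|level| b.pow(level)).sum()
}

fn main() {
    // Made-up illustration values, not measurements of any real ecosystem.
    for &(b, d) in &[(2, 4), (3, 5), (4, 5)] {
        println!("avg {b} deps, depth {d}: up to {} nodes", transitive_nodes(b, d));
    }
}
```

With an average of three direct dependencies and a depth of five levels, you already get up to 364 nodes - exactly the "several hundred" range.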
The chain-reaction effect from point 3 is especially pronounced if backwards compatibility is not strictly observed, since backwards-incompatible changes tend to be infectious: they often make their client components (parents in the dependency graph) backwards-incompatible as well. In other words, there is breakage that propagates up the dependency graph. That breakage might die out and be containable by local fixes, or it might keep propagating. And once your dependency graph becomes large enough, it is almost guaranteed that you have breakage somewhere.
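To make the propagation concrete, here is a toy sketch with made-up package names. Edges point from a package to its clients (reverse dependency edges), and in the worst case assumed here - no client can contain the break locally - a breaking change reaches everything above it:

```rust
use std::collections::{HashMap, HashSet, VecDeque};

// Walk the reverse-dependency edges from a broken package and collect
// every package that the breakage can reach.
fn affected_by_break(reverse_deps: &HashMap<&str, Vec<&str>>, broken: &str) -> HashSet<String> {
    let mut seen = HashSet::new();
    let mut queue = VecDeque::from([broken]);
    while let Some(pkg) = queue.pop_front() {
        for &client in reverse_deps.get(pkg).into_iter().flatten() {
            // Worst-case assumption: no client absorbs the break locally.
            if seen.insert(client.to_string()) {
                queue.push_back(client);
            }
        }
    }
    seen
}

fn main() {
    // Hypothetical graph: libfoo is used by parser and net, and so on.
    let reverse_deps = HashMap::from([
        ("libfoo", vec!["parser", "net"]),
        ("parser", vec!["app"]),
        ("net", vec!["app", "cli"]),
    ]);
    // A backwards-incompatible change in libfoo reaches everything above it.
    println!("{:?}", affected_by_break(&reverse_deps, "libfoo"));
}
```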
All these things together are why I believe that systems like NixOS or Guix are the future (though of course there might be other developments in that space).
> All these things together are why I believe that systems like NixOS or Guix are the future (though of course there might be other developments in that space).
That, or actually keeping control over the dependencies?
I think it is always wise to constrain unneeded dependencies. And this matters even more in embedded systems.
But on the other hand, programming artefacts, languages, and their library ecosystems compete in terms of features, which usually leads to an ever-growing number of dependencies. At least this is what we observe. Even if not all of it is really necessary, it would probably be hard to reverse in general.
> Which tbh is a bad thing. Just because change a doesn't textually touch change b doesn't mean they don't interact.
A good example of this is code which grabs several locks: different functions have to acquire them in the same order, or a deadlock can result. That is a lot of interaction, even though the changes might happen on completely different lines.
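A minimal Rust sketch of that situation: two threads, two mutexes, and the only thing preventing a deadlock is a lock-ordering convention that the compiler cannot check:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Both threads acquire the locks in the same global order (a, then b).
// If one thread were changed to lock b before a - a change on completely
// different lines than the other thread - the two could deadlock.
fn main() {
    let a = Arc::new(Mutex::new(0_i32));
    let b = Arc::new(Mutex::new(0_i32));

    let (a1, b1) = (Arc::clone(&a), Arc::clone(&b));
    let t1 = thread::spawn(move || {
        let mut x = a1.lock().unwrap(); // lock a first...
        let mut y = b1.lock().unwrap(); // ...then b
        *x += 1;
        *y += 1;
    });

    let (a2, b2) = (Arc::clone(&a), Arc::clone(&b));
    let t2 = thread::spawn(move || {
        // Same order: a, then b. Swapping these two lines compiles fine -
        // the compiler cannot see the cross-thread invariant - but risks
        // a deadlock at runtime.
        let mut x = a2.lock().unwrap();
        let mut y = b2.lock().unwrap();
        *x += 1;
        *y += 1;
    });

    t1.join().unwrap();
    t2.join().unwrap();
    println!("a = {}, b = {}", a.lock().unwrap(), b.lock().unwrap());
}
```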
And I think that's generally true for complex software. Of course it is great if the compiler can prove that there are no data races, but there will always be abstract invariants which have to be met by the changed code. In very complex code, it is essential to be able to bisect, and I think that works only if you have a defined linear order of changes in your artefact. Looking at the graph of changes can only help you understand why some breakage happened; it cannot prevent it.
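To illustrate why the linear order matters: bisecting is just binary search over the sequence of changes. In this hypothetical sketch, `is_bad` stands in for "build and test one revision" (the step that `git bisect` automates over a commit history):

```rust
// Binary search for the first revision where the test fails, assuming the
// predicate is monotone (good revisions, then bad revisions). Without a
// single linear sequence of changes, there is no such ordering to search.
fn first_bad(n_commits: usize, is_bad: impl Fn(usize) -> bool) -> usize {
    // The answer lies in lo..=hi; hi is known bad (or equals n_commits).
    let (mut lo, mut hi) = (0, n_commits);
    while lo < hi {
        let mid = lo + (hi - lo) / 2;
        if is_bad(mid) {
            hi = mid; // first bad revision is mid or earlier
        } else {
            lo = mid + 1; // first bad revision is after mid
        }
    }
    lo
}

fn main() {
    // Hypothetical history of 100 revisions where revision 42 broke things.
    println!("first bad revision: {}", first_bad(100, |rev| rev >= 42));
}
```

That finds the culprit in about seven test runs instead of a hundred - but only because the changes form one sequence.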
Is there a deeper reason for that?