My take is to use cultivated insects (Black Soldier Flies), duckweed, and algae as protein feedstock for chickens and fish. Combined with more humane husbandry, that should be an acceptable path to protein for people.
The real question at the core of any production system: what's the minimum performance cost we can pay for abstractions that substantially boost development efficiency and maintainability? Just as in other engineering fields, a product tuned to yield the absolute maximum in one attribute makes crippling sacrifices along other axes.
There are two distinct constructs that are referred to using the name variable in computer science:
1) A ‘variable’ is an identifier which is bound to a fixed value by a definition;
2) a ‘variable’ is a memory location, or a higher-level approximation abstracting over memory locations, which is set to a value, and may later be changed, by assignment.
Both of the above are acceptable uses of the word. I am of the mindset that the conflation of these two meanings, in both languages and in discourse, is a large and fundamental problem.
I take the position, inspired by mathematics, that a variable should mean #1, making variables immutably bound to a fixed value. Meaning #2 should have some other name and require explicit use.
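To make the two meanings concrete, here is a minimal sketch in Rust (chosen purely for illustration), where #1 is the default and #2 has to be requested with "mut":

    fn main() {
        // Meaning #1: an identifier bound to a fixed value by a definition.
        let x = 42;     // reassigning x later would be a compile error

        // Meaning #2: a memory location whose contents may be changed by
        // assignment; here that has to be asked for explicitly.
        let mut y = x;
        y += 1;
        println!("{x} {y}");
    }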
Coming from a PLT and maths background, a mutable variable is somewhat oxymoronic. So I agree, let's not copy JavaScript, but let's also not be dismissive of terminology that has long-standing meanings (even when the varied meanings of a single term are nearly opposite).
“Immutable” and “variable” generally refer to two different aspects of a variable’s lifetime, and they’re compatible with each other.
In a function f(x), x is a variable because each time f is invoked, a different value can be provided for x. But that variable can be immutable within the body of the function. That’s what’s usually being referred to by “immutable variable”.
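A minimal sketch of that in Rust (just for illustration):

    // x varies across calls to f, but within any single call it is an
    // immutable binding.
    fn f(x: i32) -> i32 {
        // x += 1; // would not compile: x is not declared mutable
        x + 1
    }

    fn main() {
        println!("{}", f(1)); // x is 1 for this call
        println!("{}", f(5)); // and 5 for this one
    }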
This terminology is used across many different languages, and has nothing to do with JavaScript specifically. For example, it’s common to describe pure functional languages by saying something like “all variables are immutable” (https://wiki.haskell.org/A_brief_introduction_to_Haskell).
Probably "variable" originally arose as an ellipsis for something like "(possibly) variable value stored in some dedicated memory location". "Holder", "keeper", or "warden" would be more accurate terms in ordinary parlance. Or, to be very on point while dropping the ordinariness, there is mneme[1] or mnemon[2].
Good luck propagating ideas, as sound as they might be, to a general audience once something is established in the jargon.
> Probably "variable" originally arose as an ellipsis for something like "(possibly) variable value stored in some dedicated memory location".
No, the term came directly from mathematics, where it had been firmly established by 1700 by people like Fermat, Newton, and Leibniz.
The confusion was introduced when programming languages decided to allow a variable's value to vary not just when a function was called, but during the evaluation of a function. This then creates the need to distinguish between a variable whose value doesn't change during any single evaluation of a function, and one that does change.
As I mentioned, the terms apply to two different aspects of the variable lifecycle, and that's implicitly understood. Saying it's an "oxymoron" is a version of the etymological fallacy that's ignoring the defined meanings of terms.
I think you are confused by the terminology here, not by the behavior: "immutable variable" is normal terminology in all languages and could be said to be distinct from a constant.
In Rust, if you define with "let x = 1;" it's an immutable variable, and the same with Kotlin's "val x = 1;".
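For instance, a rough Rust sketch (illustration only) of why an immutable variable is still not a constant:

    // An immutable variable is still a variable: its value can differ each
    // time the scope runs. A constant is fixed for the whole program.
    const LIMIT: i32 = 10;

    fn show(n: i32) {
        let x = n + LIMIT; // immutable variable: a new value each call, never reassigned
        println!("{x}");
    }

    fn main() {
        show(1);
        show(2);
    }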
Lore and custom have made "immutable variable" frequent idiomatic parlance, but it’s still an oxymoron in the generally accepted, isolated meanings of the two words.
Neither "let" nor "val[ue]" implies constancy or vacillation in themselves without further context.
Words only have the meaning we give them, and "variable" already has this meaning from mathematics, in the sense of x+1=2, where x is a variable.
Euler used this terminology; it's not some newfangled corruption. I'm not sure it makes much sense to argue that new languages should use different terminology than this based on a colloquial/non-technical interpretation of the word.
I get your point about how word meanings evolve.
Also, it’s fine for anyone to name things however they come to mind, as long as the other side gets what is meant, I guess.
On the other hand, it doesn’t hurt anyone much to call an oxymoron an oxymoron, or to exchange a few idle words about terminology and its evolution.
On the specific example you give, I’m not an expert, but it seems dubious to me. In x+1=2, terms like x are called unknowns. Prove me wrong, but I would bet that Euler used unknown (quantitas incognita) unless he was specifically discussing variable quantities (quantitas variabilis) to describe, well, quantities that change. He probably also used the French and German equivalents, but if Euler spoke any English, that’s not reflected in his publications.
"Damit wird insbesondere zu der interessanten Aufgabe, eine quadratische Gleichung beliebig vieler Variabeln mit algebraischen Zahlencoeffizienten in solchen ganzen oder gebrochenen Zahlen zu lösen, die in dem durch die Coefficienten bestimmten algebraischen Rationalitätsbereiche gelegen sind." - Hilbert, 1900
The use of "variable" to denote an "unknown" is a very old practice that predates computers and programming languages.
I've used JSON as an additional options input to a native-compiled CLI program's various commands because 1) the schema of each option is radically different, and 2) the options need to be passed most of the way down the call stack for each stage of our calculation and report generation.
It works fantastically well, and don't let anyone tell you that you MUST bloat the CLI interface of your program with every possible dial or lever it contains. We should all be cognizant of the fact that, in this very young and rapidly evolving profession, textbook and real-world best practice often do not overlap, and are converging and diverging all the time.
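A hedged sketch of the general shape of that pattern, in Rust (not the parent's actual program; it assumes the serde and serde_json crates, and the struct and field names are invented):

    // Requires serde (with the "derive" feature) and serde_json in Cargo.toml.
    use serde::Deserialize;

    // Hypothetical options for one report-generation command.
    #[derive(Deserialize)]
    struct ReportOptions {
        include_totals: bool,
        decimal_places: u8,
    }

    // Deep in the call stack, each stage just borrows the same options struct.
    fn generate_report(opts: &ReportOptions) {
        println!("totals: {}, places: {}", opts.include_totals, opts.decimal_places);
    }

    fn main() {
        // In the real program this string would come from an options flag or a file path.
        let raw = r#"{ "include_totals": true, "decimal_places": 2 }"#;
        let opts: ReportOptions = serde_json::from_str(raw).expect("invalid options JSON");
        generate_report(&opts);
    }

The CLI surface stays small (one option that takes JSON), while the schema of each command's options can vary as much as it needs to.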
This is neat and I wish C3 well. But using Nim has shown me the light on maybe the most important innovation I've seen in a native-compiled systems language: Everything, even heap-allocated data, having value semantics by default.
In Nim, strings and seqs live on the heap but are managed by simple value-semantic wrappers on the stack, where the pointer's lifetime is easy to statically analyze. Moves and destroys can be automatic by default. All string ops return string and all seq ops return seq; there are no special derivative types. Do you pay the price of the occasional copy? Yes. But there are opt-in trapdoors to allocate RC- or manually-managed strings and seqs. Otherwise, the default mode of interacting with heap data is an absolute breeze.
For the life of me, I don't know why other languages haven't leaned harder into such a transformative feature.
NOTE: I'm a fan of value semantics, mostly devil's advocate here.
Those implicit copies have downsides that make them a bad fit in various situations.
Swift doesn't enforce value semantics, but most types in the standard library do follow them (even dictionaries and such), and those types go out of their way to use copy-on-write to avoid unnecessary copying as much as possible. Even with that optimization there are too many implicit copies! (It could be argued that copy-on-write makes it worse, since it becomes harder to predict when the copies happen.)
Implicit copies of very large datastructures are almost always unwanted, effectively a bug, and having the compiler check this (as in Rust or a C++ type without a copy constructor) can help detect said bugs. It's not all that dissimilar to NULL checking. NULL checking requires lots of extra annoying machinery but it avoids so many bugs it is worthwhile doing.
So you have to have a plan on how to avoid unnecessary copying. "Move-only" types is one way, but then the question is which types do you make move-only? Copying a small vector is usually fine, but a huge one probably not. You have to make the decision for each heap-allocated type if you want it move-only or implicitly copyable (with the caveats above) which is not trivial. You can also add "view" types like slices, but now you need to worry about tracking lifetimes.
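A rough Rust sketch of that compiler-checked approach (type names invented): small plain-data types can be implicitly copyable, while big heap-owning types move by default and only copy when you spell it out:

    #[derive(Clone)]            // copyable, but only via an explicit .clone()
    struct Big { data: Vec<u8> }

    #[derive(Clone, Copy)]      // small plain-data type: implicit copies are fine
    struct Small { x: u32 }

    fn consume(b: Big) { let _ = b.data.len(); }

    fn main() {
        let big = Big { data: vec![0; 1_000_000] };
        let small = Small { x: 1 };

        consume(big.clone()); // the expensive copy is visible at the call site
        consume(big);         // otherwise the value is moved, not copied
        // consume(big);      // would not compile: big was moved on the line above

        let a = small;        // implicit copy: cheap, allowed because Small is Copy
        let b = small;
        let _ = (a.x, b.x);
    }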
For these new C-alternative languages, implicit heap copies are a big no-no. They have very few implicit calls: there are no destructors, and allocators are explicit. Implicit copies could be supported with a default temp allocator that follows a stack discipline, but now you are imposing a specific structure on the temp allocator.
It's not something that can just be added to any language.
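For what a stack-discipline temp allocator means in practice, here is a minimal, hypothetical sketch in Rust (names invented, no capacity checks, for illustration only):

    // A bump allocator: allocations only grow a cursor, and everything
    // allocated after a mark is freed together by resetting to that mark.
    struct TempArena {
        buf: Vec<u8>,
        top: usize,
    }

    impl TempArena {
        fn new(cap: usize) -> Self {
            TempArena { buf: vec![0; cap], top: 0 }
        }
        fn mark(&self) -> usize {
            self.top
        }
        // Hand out n bytes from the cursor (no capacity check, for brevity).
        fn alloc(&mut self, n: usize) -> &mut [u8] {
            let start = self.top;
            self.top += n;
            &mut self.buf[start..self.top]
        }
        // Stack discipline: all temporaries since the mark die at once.
        fn reset_to(&mut self, mark: usize) {
            self.top = mark;
        }
    }

    fn main() {
        let mut arena = TempArena::new(1024);
        let mark = arena.mark();
        let scratch = arena.alloc(256); // a temporary copy could live here
        scratch[0] = 42;
        println!("{}", scratch[0]);
        arena.reset_to(mark); // LIFO-style bulk free
    }

The point in the parent comment is that if the language's implicit copies depend on an allocator like this, the allocator's structure is no longer the user's choice.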
And so the size of your data structures matters. I'm processing lots of data frames, but each represents a few dozen kilobytes, and in the worst case a large composite of data might add up to a couple dozen megabytes. It's running on a server with tons of processing power and memory to spare. I could force my worst-case copying scenario in parallel on every core, and our bottleneck would still be the database hits before it all starts.
It's a tradeoff I am more than willing to take, if it means the processing semantics are basically straight out of the textbook with no extra memory-semantic noise. That textbook clarity is very important to my company's business, more than saving the server a couple hundred milliseconds on a 1-second process that does not have the request volume to justify the savings.
It's not just the size of the data but also the number of copies. Consider a huge tree structure: even if each node is small, doing individual "malloc-style" allocations for millions of nodes would cause a huge performance hit.
Obviously for your use case it's not a problem but other use cases are a different story. Games in particular are very sensitive to performance spikes. Even a naive tracing GC would do better than hitting such an implicit copy every few frames.
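To make the tree example concrete, here is a hypothetical Rust sketch (not the parent's code) of the kind of structure where a deep copy means a heap allocation per node:

    // Each node is its own heap allocation, so deep-copying a large tree means
    // millions of allocator calls on top of the traversal itself.
    #[derive(Clone)]
    struct Node {
        value: i32,
        children: Vec<Box<Node>>, // one allocation per child node
    }

    fn main() {
        // Build a wide tree: 1_000_000 individually boxed leaves.
        let root = Node {
            value: 0,
            children: (0..1_000_000)
                .map(|i| Box::new(Node { value: i, children: Vec::new() }))
                .collect(),
        };

        // With implicit copy semantics this line would be easy to hit by accident;
        // here it at least has to be written out.
        let copy = root.clone();
        println!("{} {}", root.children.len(), copy.children.len());
    }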
Great! We can start trading with each other in stock notes. I can't wait to buy my next round of groceries with a 0.1 SPY note! The ones I don't use will pay dividends!
If only there were a digital asset that had a fixed supply to prevent inflation, where nobody could control who could spend what (to avoid unjust debanking), which was highly divisible so that you could spend large or small amounts, and which, being digital, could be spent very rapidly across long distances using the magic of the Internet. And if only its governance model weren't subject to the corruption seen in governments and private banks alike.
I bet that thing would be a pretty useful monetary tool, even if it were attacked, as one might expect, by all of the governments and banks around the world trying to cling to the power they hold by virtue of having captured the ability to print money and use it when it is most valuable, fresh off the press.
Positive downstream effect: the way software is built will need to be rethought and improved to wring efficiency out of stagnating hardware. Think of how staggering the step from the start of a console generation to the end used to be. Native-compiled languages have made bounding leaps that might be worth pursuing again.
Alternatively, we'll see a drop in deployment diversity, with more and more functionality shifted to centralised providers that have economies of scale and the resources to optimise.
E.g. IDEs could continue to demand lots of CPU/RAM, and cloud providers are able to deliver that cheaper than a mostly idle desktop.
If that happens, more and more of its functionality will come to rely on having low datacenter latencies, making use on desktops less viable.
Who will realistically be optimising build times for use cases that don't have sub-ms access to build caches? And when those build caches are available, what will stop the median program from having an even larger dependency graph?
I’d feel better about the RAM price spikes if they were caused by a natural disaster and not by Sam Altman buying up 40% of the raw wafer supply, other Big Tech companies buying up RAM, and the RAM oligopoly situation restricting supply.
This will only serve to increase the power of big players who can afford higher component prices (and who, thanks to their oligopoly status, can effectively set the market price for everyone else), while individuals and smaller institutions are forced to either spend more or work with less computing resources.
The optimistic take is that this will force software vendors into shipping more efficient software, but I also agree with this pessimistic take, that companies that can afford inflated prices will take advantage of the situation to pull ahead of competitors who can’t afford tech at inflated prices.
I don’t know what we can do as normal people other than making do with the hardware we have and boycotting Big Tech, though I don’t know how effective the latter is.
> companies that can afford inflated prices will take advantage of the situation to pull ahead of competitors who can't afford tech at inflated prices
These big companies are competing with each other, and they're willing and able to spend much more for compute/RAM than we are.
> I don’t know what we can do as normal people other than making do with the hardware we have and boycotting Big Tech, though I don’t know how effective the latter is.
A few ideas:
* Use/develop/optimise local tooling
* Pool resources with friends/communities towards shared compute.
I hope prices drop sooner than projected, before dev tools all move to the cloud.
It's not all bad news: as tooling/builds move to the cloud, they'll become available to those who have thus far been unable or unwilling to afford a fast computer that sits mostly idle.
This is a loss of autonomy for those who were able to afford such machines though.