Because arrays simply do not deal with fragmentation. Yes, you could probably get decent performance on a modern system with a memory-overcommit strategy, where you could allocate sparse address ranges and would probably never run out of pointers unless you actually wrote to your variable array.
But it's just kind of mediocre, and you're better off dealing with the stack if you can live with certain fixed sizes.
array-like storage with dynamic size has existed since forever - it's vector. over- or undercommitting is a solved problem
VLA is the way to bring that into the type system, so that it can be its own variable or struct member, with the compiler auto-magic-ing the size reads to access members after it
> auto-magic-ing size reading to access members after it
From the article
>we now have everything we need to calculate the size, offset and alignment of every field, regardless of their positioning in the struct.
>init to allocate the memory
>get to get a pointer to a field
>resize to resize the arrays
>deinit to free the memory
You're now suggesting to do exactly what the article is about without being aware of it.
The other devices don't meet the criteria. Be happy that Pixels are supported, for Google seems to be closing down Pixel OS too, making this whole effort rather difficult.
the CONSUMER criterion is "we want better independent security ON DEVICES WE ALREADY OWN"
complaints like the ones in this thread are symptoms of unfulfilled demand - and they can't be solved by saying "oh gosh, what a stupid demand that doesn't agree with our supply"
we: "that's cool, I always wanted to decrease petrol use. But... can you provide an option for some petrol use? It's called hybrid, iirc?"
vendor: "no, our requirements only support 100% electric car. Hybrid cars use petrol and we can't allow that"
we: "suuure, I get that. But the price of electricity here still hasn't come down, everyone already has personal petrol reserves, and your cars only come with batteries from Congolese child-labor mines. Can we pleeease have the half-way option, so that I can use less petrol for e.g. short-distance travel, but still use petrol for country-sized movement?"
vendor: "no, we only support 100% electric car. Everything smaller is outside our requirements"
Real economics would've provided competition to fulfill the demand - but currently Graphene is the only well-known vendor, so the complaints will keep coming
C and C++ are about types before names - and modifying that is simply a change for the sake of a change, needless and useless. There's a lot of educational inertia behind it - and effort should be spent on fighting what matters (like smart pointers), not hip/non-hip declaration style
decltype(a*b) is the only good "escape hatch"-type excuse for it, but idk why everyone makes such a big deal out of it - when was the last time a return type was both unpredictable AND needed to be specified? by that point you're already too deep
Note that negative traits are not for "this trait is not implemented" (i.e. a missing `impl Trait for Type`) but instead for "this trait is guaranteed to not be implemented" (i.e. `impl !Trait for Type`)
using BSD/MIT licences is like betting against black swan event
sure, "contributing is cheaper than maintaining a fork" is true most of the time - but the moment new Microsoft comes in with "embrace, extend, extinguish" (or just copy and change), you're doomed
and heck, we had that exact thing happen last autumn, iirc - making big news on this website
But your habitual workflow isn't "doomed". You can always fork and keep using the same open version of the project that you've always used. If the project is popular enough, there's usually a community that keeps maintaining that fork.
That's the deal that you get. Free software was never about "free upgrades forever". It's about the freedom to fork.
Are you seriously trying to imply that the GPL isn't largely about granting you the freedom to fork? Sure, it's also about forcing the copyleft responsibility on you. But come on... That's not even relevant if you don't fork or otherwise depend on the project in the first place
That's true. But at the same time, the risk is kinda overblown. You can still use the last open version of Redis. There's even an open, community-maintained fork that you don't have to maintain yourself.
Even GPL can't force a company to maintain and keep developing an open version when the company doesn't want to. Even if Redis was GPL (no CLA), they could still abandon it and write a compatible clone from scratch. AI makes it even easier to do
Redis has an open fork. Seems "free" enough to me. Companies are not obligated to keep developing the open version forever, anyway. If Redis were GPL, they could've just abandoned it and written a compatible clone from scratch. Nowadays, with AI, that's even easier to do
I give a concrete example in the GP post but the reason is that the high-speed people can take advantage of you in certain circumstances if you don’t have extremely accurate timing of things like order placement.
As another example, imagine you are placing an options order on one exchange and a cash hedge on another exchange (eg for a delta hedge). If someone sees one half of your order and has faster execution than you, they can trade ahead of you on the other leg of your trade, which increases your execution cost. This is even more important if you’re doing something like an index options trade on one side and the cash basket (all the stocks in the index) on the hedge side.
The fix for this is to use hi-res exchange timestamps (which the exchange gives you on executed trades) to tune a delay on one leg of your execution so both halves hit at precisely the same time. This ensures that HFTs can’t derive an information advantage from seeing one half of your trade before you place the other half of the order.
The mosasaurs also developed a caudal fin (vertical, like sharks and ichthyosaurs, not horizontal like whales and dolphins) and eventually they became quite shark-like, though not as fish-like as ichthyosaurs.
Between the ichthyosaurs (which evolved in the Triassic) and the mosasaurs (which evolved in the Cretaceous) there was also a group of marine crocodiles (Metriorhynchidae, which evolved in the Jurassic) that likewise had caudal fins and were quite shark-like.
So there have been at least 4 groups of amniotes that looked like sharks, cetaceans among mammals and at least 3 groups of diapsids.
However, all these marine predators resembled sharks only from the point of view of locomotion; none of them had the kind of teeth specialized for cutting that are characteristic of sharks. Teeth resembling those of sharks are found only in some other fish, e.g. piranhas, whose bodies do not resemble sharks at all.
I guess we’ll see if they have the endurance. Sharks have been around for a while after all. But, Orca have surpassed sharks in terms of badassery, right? And it isn’t that close. They aren’t evolving into sharks, they are the upgrade!
Beyond ichthyosaurs and cetaceans, carcharhinification (shark-like convergent evolution) also occurred in plesiosaurs, mosasaurs, certain teleost fish like barracuda, and even the extinct thalattosuchian crocodylomorphs.
The Jurassic-Cretaceous marine crocodiles (like the ichthyosaurs and mosasaurs) looked like sharks, i.e. they depended mostly on their caudal fin for swimming.
On the other hand, there was no resemblance between sharks and plesiosaurs or pliosaurs. The latter swam using their lateral fins, somewhat like marine turtles and penguins, not like sharks. The dentition of plesiosaurs and pliosaurs also had no similarity to that of sharks, so apart from being big predators there was no resemblance between them.
why not make it a heap-only type? it seems such a useful addition to the type system, why ignore it because of one use case?