Grammar changes, in particular things used everywhere like function invocations, have to be worth paying the price for changing/adding new rules. The benefits of fewer characters and more explicit intention weren't enough to outweigh the costs.
There were other considerations: Do linters prefer one syntax to another? Does the name refer to the parameter or the argument in tooling? Should users feel pressure to name local variables the same as the function's parameters? What about more shorthand for common cases like func(x=self.x, y=self.y)?
I personally did not like the func(x=, y=) syntax. I think their example of Ruby's func(x:, y:) would actually make more sense, since that syntax reads less like "x equals nothing" and more like "this is special syntax for passing arguments".
Guess I will chime in for the dissent: there is already so much Python language. The more special gotchas that are added just for these minor wins is not worth it. In fact, if I ever saw a function that was foo(bar=) I would correct that to be explicit immediately. ‘import this’ and all that.
Said as a curmudgeon that has never used a walrus.
I'm reminded of the SMBC comic https://www.smbc-comics.com/?id=2722 which resonated with me. If it takes ~7 years to master something, you should dedicate yourself to becoming good at it. Or at least you don't have to tie your identity to what you do right now; you can reinvent yourself and experience more of life, but you have to give yourself the time to do so.
It's been almost 14 years since that was published, so maybe some self-reflection is due.
As others have said, Rust's ownership model prevents data races, as it can prove that references to mutable data can only be created if there are no other references to that data. Safe Rust also prevents use-after-free, double-free, and use-uninitialized errors.
Does not prevent memory leaks or deadlocks.
It's easy to write code that deadlocks; make two shared pointers to the same mutex, then lock them both. This compiles without warnings:
use std::sync::{Arc, Mutex};

// Two Arc handles pointing at the *same* mutex.
let mutex_a = Arc::new(Mutex::new(0_u32));
let mutex_b = Arc::clone(&mutex_a);
let a = mutex_a.lock().unwrap();
// Second lock on the same mutex while the first guard is alive:
// the docs leave this unspecified, but it will deadlock or panic.
let b = mutex_b.lock().unwrap();
println!("{}", *a + *b);
In terms of the math: the table legs are assumed to be equal length, and the wobble is caused by variations in the surface; specifically, the feet of the table lie in a common plane. So you could rotate your mathematical table until all feet sit securely on the surface, then cut the legs to make the top level again (the legs will no longer be the same length, but the top and the plane of the feet remain planes).
As for @Cerium's real-life usage, you have the possibility of uneven legs and an uneven floor (and discontinuities, like a raised floorboard), so it's obviously not guaranteed, but if the floor is warped yet smooth enough, you can try.
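(For the curious, the standard existence argument - my hand-wavy sketch, assuming a continuous floor and a square table - runs through the intermediate value theorem: keep three feet on the floor and let g(θ) be the gap under the fourth foot as the table rotates by θ about its center. The setup gives g(0) > 0, while the quarter-turn position, where the feet have exchanged places, forces g(90°) ≤ 0. Since g varies continuously, some θ between 0 and 90° gives g(θ) = 0, i.e. all four feet touch.)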
Looks good. One note about the video: you mention you recently watched a good video on 1-bit sound and suggest it's worth a watch, but it doesn't appear to be linked in the description.
What makes you think it's not compressed? (or that the data is stored as XML?)
There are very sophisticated compression systems throughout each experiment's data acquisition pipelines. For example, this paper[1] describes the ALICE experiment's system for Run 3, involving FPGAs and GPUs to handle 3.5 TB/s from all the detectors. This one[2] outlines how HL-LHC & CMS use neural networks to fine-tune compression algorithms on a per-detector basis.
Not to mention your standard data files are ROOT TFiles with TTrees which store arrays of compressed objects.
The 'uncompressed' stream has already been winnowed down substantially: there's a lot of processing that happens on the detectors themselves to decide what data is worth even sending off the board. The math for the raw detectors is 100 million channels of data (not sure how many per detector, but there are a lot of them stacked around the collision point) sampling at 40 MHz (which is how often the bunches of accelerator particles cross). Even at just 2 bits per sample, that's 1 PB/sec. But most of that is obviously uninteresting and so doesn't even get transmitted.
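Spelling out that arithmetic: 10^8 channels × 4×10^7 samples/s = 4×10^15 samples/s, and at 2 bits per sample that's 8×10^15 bits/s = 10^15 bytes/s = 1 PB/s.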
I was attempting to solve this very problem in the Rust BigDecimal crate this weekend. Is it better to just let it crash with an out-of-memory error, or to have a compile-time constant limit (I was thinking ~8 billion digits) and panic with a more specific error message if any operation would exceed it (though does that mean it's no longer arbitrary-precision?)? Or keep some kind of overflow-state/NaN, but then the complexity is shifted into checking for NaNs, which I've been trying to avoid.
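To make the hard-cap option concrete, here's a minimal sketch of what I have in mind; all the names here are made up for illustration, not the crate's actual API:

// Hypothetical names, not the real BigDecimal API.
const MAX_DIGITS: u64 = 8_000_000_000;

#[derive(Debug)]
struct PrecisionOverflow { required_digits: u64 }

// Called before an operation allocates its result.
fn check_result_size(required_digits: u64) -> Result<(), PrecisionOverflow> {
    if required_digits > MAX_DIGITS {
        Err(PrecisionOverflow { required_digits })
    } else {
        Ok(())
    }
}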
Sounds like Haskell made the right call: put warnings in the docs and steer the user in the right direction. Keeps implementation simple and users in control.
To the point of the article, serde_json support is improving in the next version of BigDecimal, so you'll be able to decorate your BigDecimal fields and it'll parse numeric fields directly from the JSON source, rather than going json -> f64 -> BigDecimal.
Serde has an interface that allows failing. That one should fail. There is also another that panics, and AFAIK it will automatically panic on any parser that fails.
Do not try to handle huge values, do not pretend your parser is total, and do not pretend it's a correct value.
If you want to create a specialized parser that handles huge numbers, that's great. But any general one must fail on them.
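A minimal sketch of the fallible interface with serde_json (the target type here is just for illustration):

// serde_json's fallible entry point: parsing into a type that can't
// represent the value yields an Err instead of a panic.
match serde_json::from_str::<u32>("1e99") {
    Ok(n) => println!("parsed {n}"),
    Err(e) => println!("rejected: {e}"),
}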
This isn't about parsing so much as letting users do "dangerous" math operations. The obvious one is dividing by zero, but when the library offers arbitrary precision, addition becomes dangerous too, since it has to allocate all the digits between a small and a large value.
It's tough to know where to draw the lines between "safety", "speed", and "functionality" for the user.
[EDIT]: Oh I see, fix the parser to disallow such large numbers from entering the system in the first place, then you don't have to worry about adding them together. Yeah that could be a good first step towards safety. Though, I don't know how to parametrize the serde call.
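Concretely, the hazard with exact addition (harmless at these sizes, catastrophic once the exponents are in the billions):

use bigdecimal::BigDecimal;
use std::str::FromStr;

// The exact sum of 1e18 and 1e-18 needs every digit in between (37 total);
// scale the exponents up and the allocation grows with the exponent span.
let big = BigDecimal::from_str("1e18").unwrap();
let small = BigDecimal::from_str("1e-18").unwrap();
println!("{}", big + small); // 1000000000000000000.000000000000000001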
If you are using a library with this kind of number representation, computing any rational number with a repeating decimal representation will use up all your memory. 1/3=0.33333… It will keep allocating memory to store infinite copies of the digit 3. (In practice it stores it using binary representation but you get the idea.)
For the Rust crate, there is already an arbitrary limit (defaults to 100 digits) for "unbounded operations" like square_root, inversion, and division. That's a compile-time constant. And there's a Context object for runtime configuration you can set with a precision (stop after `prec` digits).
But for addition, the idea is to give the complete number if you do `a + b`; otherwise you could use the context to keep the numbers bounded with `ctx.add(a, b)`. But after the discussions here, maybe this is too unsafe... and it should use the default precision (or a slightly larger one) in the name of safety? With a compile-time flag to disable it? Hmm...
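So division is already bounded today; a quick sketch of that existing behavior, as far as I understand the current API:

use bigdecimal::BigDecimal;

// Division stops at the default precision (100 digits) rather than
// allocating forever; with_prec re-rounds to a chosen precision.
let third = BigDecimal::from(1) / BigDecimal::from(3);
println!("{}", third.with_prec(10)); // 0.3333333333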
I'd strongly recommend against this default - it's a major blocker for using the Haskell library with web APIs, as it turns JSON RPC endpoints into readily available denial-of-service attacks.
8 billion digits is far more than should ever be allowed; a cap on the order of ~100 bits would already cover most uses.
Would it be possible to use const generics to expose a `BigDecimal<N>` or `BigDecimal<MinExp, MaxExp, Precision>` type with bounded precision for serde, and disallow the unbounded `BigDecimal` entirely?
If not, I expect BigDecimal will be flagged in a CVE in the near future for enabling denial of service.
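Something like this sketch, perhaps (`BoundedDecimal` and its method are made-up names, just to show the shape of the idea):

// Hypothetical wrapper enforcing a digit cap at parse time; the payload
// is kept as a string here purely to keep the sketch self-contained.
struct BoundedDecimal<const MAX_DIGITS: usize>(String);

impl<const MAX_DIGITS: usize> BoundedDecimal<MAX_DIGITS> {
    fn parse(s: &str) -> Result<Self, String> {
        let digits = s.chars().filter(|c| c.is_ascii_digit()).count();
        if digits > MAX_DIGITS {
            Err(format!("{} digits exceeds the cap of {}", digits, MAX_DIGITS))
        } else {
            Ok(Self(s.to_string()))
        }
    }
}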
I think that's the use-case for the rust_decimal crate, which is a 96-bit decimal (~28 significant digits); it's safer and faster than the bigdecimal crate (which at its heart is a Vec<u64>, unbounded, and geared more toward things like calculating sqrt(2) to 10000 places). Still, people are using bigdecimal for serialization, and I try to oblige.
Having user-set generic limits would be cool, and something I considered when const generics came out, but there's a lot more work to do on the basics, and I'm worried about making the interface too complicated. (And I don't want to reimplement everything.)
I also would like a customizable parser struct, with things like localization, allowing grouping delimiters and such (1_000_000 or 1'000'000 or 10,00,000). That could also return some kind of OutOfRange parsing error to disallow "suspicious" out-of-range values. I'm not sure how to make that generic with the serde parser, but I may add some safe limits to the auto-serialization code.
Especially with JSON, I'd expect there's only two kinds of numbers: normal "human" numbers, and exploit attempts.
I think Haskell's warning-in-the-docs approach is not strong enough. I'd be in favor of distinguishing small and huge values in the type system: have a Rust enum that contains either a small-ish number (absolute value 10^100 or less, though the threshold should be configurable, preferably as a type parameter) or a huge number. Then the user is required to handle it. Most of the time the user does not want huge numbers, so they will fail the parse explicitly when they match and find one.
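A rough sketch of what I mean (the threshold and names are illustrative):

// A parse result that forces the caller to acknowledge huge values;
// Huge keeps the raw text so nothing big is built until the caller opts in.
enum ParsedNumber {
    Small(f64),
    Huge(String),
}

fn classify(s: &str) -> ParsedNumber {
    match s.parse::<f64>() {
        // Rust's f64 parsing saturates to infinity on overflow,
        // so the finiteness check matters here.
        Ok(x) if x.is_finite() && x.abs() <= 1e100 => ParsedNumber::Small(x),
        _ => ParsedNumber::Huge(s.to_string()),
    }
}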
I don't think there is any "sensible limit" which is big enough for everyone's needs but low enough that you won't blow out memory.
An 8 billion digit number is about 3.3 GB (8×10^9 digits × log2(10) ≈ 2.7×10^10 bits). All I need to do is shove 1,000 of those into a JSON array, and I'll cause an out-of-memory anyway.
On the other hand, any limit low enough that I can't blow up memory with an array of 100K or so is going to be too low for some people (including me; I often make numbers with digit counts in the low millions).
Providing some method of setting a limit seems sensible, but maybe just make a LimitedBigDecimal type, so that through the whole program there is a limit on how much memory BigDecimals can take up? (I haven't looked at the library in detail, sorry.)
If I understand the situation correctly, in Haskell an unbounded number is the default that you get if you do something similar to JSON.parse(mystr). That means you can have issues basically anywhere. Whereas in Rust with Serde you would only get an unbounded number if you explicitly ask for one. That's a pretty major difference. Only a small number of places will explicitly ask for BigDecimal, and in those cases they probably want an actual unbounded number. And they should be prepared to deal with the consequences of that.
Nope, you didn't understand the situation correctly. First, almost nobody parses directly from a string to a JSON AST: people almost always parse into a custom type using either Template Haskell or generics. Second, parsing isn't the issue; doing arithmetic on the number is the issue.
I'd bet they have very similar performance, but the yield syntax is more extensible (i.e. you're not limited to one expression) and more debuggable (you can put breakpoints within the function).
Also the generator's name is nicer (for some definition of nice):
>>> def square_vals(x: list):
...     return (v * v for v in x)
...
>>> square_vals([1, 2, 3])
<generator object square_vals.<locals>.<genexpr> at 0x786b8511f5e0>
>>> def square_vals_yields(x: list):
...     for v in x:
...         yield v * v
...
>>> square_vals_yields([1, 2, 3])
<generator object square_vals_yields at 0x786b851f5ff0>
I think it's more idiomatic to pass generator expressions into functions than to return them from functions.