It's not just Google; Mozilla also isn't interested in supporting it. I also often see these articles that tout jpeg-xl's technical advantages, but in my subjective testing with image sizes you would typically see on the web, avif wins every single time. It not only produces fewer artifacts on medium-to-heavily compressed images, but they're also less annoying: minor detail loss and smoothing, compared to jpeg-xl's blocking and ringing (in addition to detail loss; basically the same types of artifacts as with the old jpeg).
Maybe there's a reason they're not bothering with supporting xl besides misplaced priorities or laziness.
Mozilla is more than willing to adopt it. They just won't adopt the C++ implementation. They've already put into writing that they're considering adopting it when the rust implementation is production ready.
The jxl-oxide dev is a jxl-rs dev. jxl-oxide is decode only while jxl-rs is a full encode/decode library.
zune also uses jxl-oxide for decode. zune has an encoder, and they are doing great work, but their encoder is not thread-safe, so it's not viable for Mozilla's needs.
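To make the thread-safety point concrete, here's a minimal Rust sketch of what a browser's worker threads demand of shared codec state. The `EncoderState` type is made up purely for illustration; it is not zune's or jxl-rs's actual API.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical encoder state, purely for illustration -- not zune's or
// jxl-rs's actual API.
struct EncoderState {
    quality: f32,
}

fn main() {
    // A browser encoding/decoding on a thread pool effectively needs the
    // codec state to be Send (movable across threads) and, when shared, Sync.
    let shared = Arc::new(Mutex::new(EncoderState { quality: 0.9 }));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let state = Arc::clone(&shared);
            thread::spawn(move || {
                // Each worker would encode its own tile here.
                state.lock().unwrap().quality
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    // If the encoder's internals relied on non-thread-safe primitives
    // (Rc, RefCell, unsynchronized globals), the thread::spawn call above
    // would not even compile -- that is what "not thread-safe" costs here.
}
```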
And there's work already being done to properly integrate JXL implementations into Firefox, but frankly these things take time.
If you are seriously passionate about seeing JPEG-XL in Firefox, there's a really easy solution: contribute. More engineering hours put towards a FOSS project tend to see it come to fruition faster.
Seems like the normal usage to me. The post above lists other criteria that have to be satisfied, beyond just being a Rust implementation. That would be the consideration.
Mozilla indicates that they are willing to consider it given various prerequisites. GP translates that to being “more than willing to adopt it”. That is very much not a normal interpretation.
> To address this concern, the team at Google has agreed to apply their subject matter expertise to build a safe, performant, compact, and compatible JPEG-XL decoder in Rust, and integrate this decoder into Firefox. If they successfully contribute an implementation that satisfies these properties and meets our normal production requirements, we would ship it.
So you think it's silly to not want to introduce new potentially remotely-exploitable CVEs in one of the most important pieces of software (the web browser) on one's computer? Or are you implying those 100k lines of multithreaded C++ code are bug-free and won't introduce any new CVEs?
> and don’t think that the programmer more than the languages contribute to those problems
This sounds a lot like how I used to think about unit testing and type checking when I was younger and more naive. It also echoes the sentiments of countless craftspeople talking about safety protocols and features before they lost a body part.
Safety features can’t protect you from a bad programmer. But they can go a long way to protect you from the inevitable fallibility of a good programmer.
It's crazy how anti-Rust people think that eliminating 70% of your security bugs[1] by construction just by using a memory-safe language (not even necessarily Rust) is somehow a bad thing or not worth doing.
It's not about being completely bug-free. Safe Rust is going to be reasonably hardened against exploitable decoder bugs which can be converted into RCEs. A bug in safe Rust is going to be a hell of a lot harder to turn into an exploit than a bug in bog-standard C++.
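To make that concrete, here's a minimal, purely illustrative sketch (not taken from any real decoder) of how safe Rust handles an index derived from untrusted input:

```rust
// Minimal sketch, not from any real decoder: in safe Rust, an index derived
// from untrusted input is bounds-checked, so malformed data yields a None
// (or a deterministic panic with `buffer[i]`) instead of a silent
// out-of-bounds read that an attacker can shape into something worse.
fn read_sample(buffer: &[u8], untrusted_index: usize) -> Option<u8> {
    buffer.get(untrusted_index).copied()
}

fn main() {
    let buffer = vec![0u8; 16];

    // Well-formed input: the index is in range.
    assert_eq!(read_sample(&buffer, 3), Some(0));

    // Malformed input: in C++ this could be a heap over-read; here it is
    // just a None that the caller has to handle.
    assert_eq!(read_sample(&buffer, 9_999), None);
}
```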
> It’s crazy how people think using Rust will magically make your code bug and vulnerability free
It won't for all code, and it won't make it bug-free, but it absolutely does make it possible to write code that parses untrusted input all but vulnerability-free. It's not 100% foolproof, but the track record of Rust parsing libraries is night-and-day better than that of C/C++ libraries in this domain. And they're often faster too.
Multiple severe attacks on browsers over the years have targeted image decoders. Requiring an implementation in a memory safe language seems very reasonable to me, and makes me feel better about using FF.
I did some reading recently, for a benchmark I was setting up, to try and understand what the situation is. It seems things have started changing in the last year or so.
No, the situation with image compression has not changed. The grandparent poster you were replying to was writing about typical web usage, that is, "medium-to-heavily compressed images", while your benchmark is about lossless compression.
BTW, I don't see how Mozilla's interest in a jpegxl _decoder_ (your first link) has anything to do with the performance of jpegxl encoders compared to avif's encoders. In case you're really interested in the former, Firefox now has more than intentions, but it's still not at production level: https://bugzilla.mozilla.org/show_bug.cgi?id=1986393
No. demetris’ benchmark of lossless image compression is not a sign that the situation may be changing. :-D
That was just the context for some reading I did to understand where we are now.
> BTW, I don't see how Mozilla's interest in a jpegxl _decoder_ (your first link) has anything to do with the performance of jpegxl encoders compared to avif's encoders. In case you're really interested in the former, Firefox now has more than intentions, but it's still not at production level: https://bugzilla.mozilla.org/show_bug.cgi?id=1986393
That is one of the links I shared in my comment (along with the bug title in parentheses). :-)
Same in my experience testing and deploying a few sites that support both. In general the only time AVIF outperformed in file size for me was with laughably low quality settings beyond what any typical user or platform would choose.
And for larger files especially, the benefits of actually having progressive decoding pushed me even further in favour of jpeg-xl. Doubly so when you can provide variations in image size just by halting the bit flow arbitrarily.
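As a rough sketch of the serving-side idea only (the `image.jxl` path and 64 KiB budget are made-up values, and decoding the prefix is left to whatever JXL decoder is in use): assuming the image was encoded progressively, a prefix of the file already decodes to a lower-detail version of the whole picture, so you can cap the bytes you send instead of storing several pre-scaled copies.

```rust
use std::fs;
use std::io;

// Illustrative only: serve a byte-capped prefix of a progressively encoded
// JPEG XL file. "image.jxl" and the 64 KiB budget are made-up values.
fn truncated_variant(path: &str, byte_budget: usize) -> io::Result<Vec<u8>> {
    let full = fs::read(path)?;
    let cut = full.len().min(byte_budget);
    Ok(full[..cut].to_vec())
}

fn main() -> io::Result<()> {
    let preview = truncated_variant("image.jxl", 64 * 1024)?;
    println!("serving {} bytes of the progressive stream", preview.len());
    Ok(())
}
```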
JPEG XL seems optimally suited for media and archival purposes, and of course this is something you'd want to mostly do through webapps nowadays. Even relatively basic use cases like Wiki Commons are basically stuck on JPEG for these purposes.
For the same reason it would be good if a future revision of PDF/A included JPEG XL, since it doesn't really have any decent codecs for low-loss (but not lossless) compression (e.g. JPEG sucks at color schematics/drawings, and lossless is impractically big for them). It did get JP2, but support for that is quite uncommon.
>but in my subjective testing with image sizes you would typically see on the web, avif wins every single time.
What is that in terms of bpp (bits per pixel)? Because according to Google Chrome data, 80-85% of images delivered on the web have a bpp of 1.0 or above. I don't think most people realise that.
And in most if not all circumstances, JPEG XL performs better than AVIF at bpp 1.0 and above, as tested by professionals.
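For anyone unfamiliar with the unit: bpp is just compressed size in bits divided by pixel count. A quick sketch with made-up numbers:

```rust
// bits per pixel = compressed size in bits / number of pixels.
// The figures below are made up purely for illustration.
fn bits_per_pixel(file_size_bytes: u64, width: u64, height: u64) -> f64 {
    (file_size_bytes * 8) as f64 / (width * height) as f64
}

fn main() {
    // A 1200x800 image stored in 150 KiB works out to ~1.28 bpp,
    // i.e. already in the range the comment above is talking about.
    let bpp = bits_per_pixel(150 * 1024, 1200, 800);
    println!("{bpp:.2} bpp");
}
```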