Separate compilation is one solution to the problem of slow compilation.
Binary patching is another one. It feels a bit messy and I am sceptical that it can be maintained assuming it works at all.
I think a much better approach would be to make the compilers faster.
Why does compiling 1M LOC take more than 1s in unoptimized mode for any language?
My guess is that part of the blame lies with bloated backends and metaprogramming (including compile-time evaluation, templates, etc.).
Ha, I did not see your post before making mine. You are correct in your assessment of the blame.
Moreover, I view relying on compiler optimization as an anti-pattern in general, especially for a low-level language. It is better to write the optimal solution directly and not depend on the compiler. If there is a real hotspot that you have identified through profiling and you do not know how to optimize it, you can run that hotspot through an optimizing compiler and copy what it does.
> Unfortunately you can't really statically link a GUI app.
But is there any fundamental reason why not?
> Also, if you happened to have linked that image to a.out it wouldn't work if
> you're using a kernel from this year, but that's probably not the case ;)
I assume you are referring to the retirement of COFF support (in favor of ELF).
I would argue that how long this obsolete format was supported is actually quite impressive.
Pros:
* uses Python and recursive descent parsing (see the sketch after this list)
* separates front and backend via an IR
* generates ELF binaries (either x86 or ARM)
* meant for real world use
Cons:
* more complex
* not written in a tutorial style
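For anyone who has not seen the technique: below is a minimal toy sketch of recursive descent plus a tiny stack-based IR. This is my own illustration, not code from the project; a real backend would lower the IR to x86/ARM machine code instead of just printing it.

    # Toy sketch: recursive descent parser emitting a tiny stack-based IR.
    import re

    TOKEN = re.compile(r"\s*(\d+|[()+*])")

    def tokenize(src):
        pos, toks = 0, []
        while pos < len(src):
            m = TOKEN.match(src, pos)
            if not m:
                raise SyntaxError(f"bad input at position {pos}")
            toks.append(m.group(1))
            pos = m.end()
        return toks

    class Parser:
        def __init__(self, toks):
            self.toks, self.i, self.ir = toks, 0, []

        def peek(self):
            return self.toks[self.i] if self.i < len(self.toks) else None

        def eat(self, tok):
            assert self.peek() == tok, f"expected {tok}"
            self.i += 1

        def expr(self):   # expr := term ('+' term)*
            self.term()
            while self.peek() == "+":
                self.eat("+")
                self.term()
                self.ir.append(("add",))

        def term(self):   # term := atom ('*' atom)*
            self.atom()
            while self.peek() == "*":
                self.eat("*")
                self.atom()
                self.ir.append(("mul",))

        def atom(self):   # atom := NUM | '(' expr ')'
            if self.peek() == "(":
                self.eat("(")
                self.expr()
                self.eat(")")
            else:
                self.ir.append(("push", int(self.toks[self.i])))
                self.i += 1

    p = Parser(tokenize("1 + 2 * (3 + 4)"))
    p.expr()
    print(p.ir)  # [('push', 1), ('push', 2), ('push', 3), ('push', 4), ('add',), ('mul',), ('add',)]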
In our ZFS JBOD setup with 90 HDDs we scrub regularly and never find checksum errors. Instead, we might get a few recoverable read errors, but more often there are no SMART warnings at all, just a sudden drive failure where the drive disappears from the bus.
I am in a similar situation and my solution was to switch to PWAs.
I translated the apps from Java to Dart and rolled my own UI with straightforward HTML.
My apps do not use notifications, which seem to be an issue with PWAs.
A real downside for me is the lack of a simple i18n story, and I will likely roll my own.
On the plus side:
* PWAs can be easily packaged into an APK using https://www.pwabuilder.com/
* my apps can now be used on iOS and in regular web browsers
Each new decoder represents an increase in attack surface, which is why not all video/audio formats supported by ffmpeg are enabled in Chrome.
This is despite Chrome having ffmpeg as a dependency.
[caveat: this was the status quo when I last checked a couple of years ago]
The key to unlocking a 10x improvement in compilation speeds will likely be multithreading. I vaguely remember that LLVM struggled with this and I am not sure where it stands today. On the frontend side, language (not compiler) design will affect how well things can be parallelized, e.g. forward declarations probably help, mandatory interprocedural analyses probably hurt.
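To make the per-module angle concrete, here is a hypothetical Python sketch; compile_module is a stand-in I made up, not how LLVM or any real compiler structures this:

    # Hypothetical sketch: if modules are independent, compilation parallelizes
    # trivially; mandatory whole-program analyses would force synchronization.
    from concurrent.futures import ProcessPoolExecutor

    def compile_module(module):
        # Placeholder for parse / type-check / codegen of one module.
        name, source = module
        return name, len(source)  # pretend this is the object code size

    def compile_all(modules):
        # One worker per module, no shared state, so speedup scales with cores.
        with ProcessPoolExecutor() as pool:
            return list(pool.map(compile_module, modules))

    if __name__ == "__main__":
        mods = [("a.mod", "func main() {}"), ("b.mod", "func helper() {}")]
        print(compile_all(mods))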
Having said that, we are in bad shape when golang compiling 40kLOC in 2s is a celebrated achievement.
Assuming this is single-threaded on a 2GHz machine, we get:
2s * 2GHz / 40kLOC = 100k cycles / LOC
That seems like a lot of compute and I do not see why this could not be improved substantially.
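For reference, here are the back-of-envelope numbers written out (my own arithmetic, under the single-threaded 2GHz assumption above):

    # Back-of-envelope check of the numbers above, not a benchmark.
    loc = 40_000        # lines compiled
    seconds = 2.0       # observed compile time
    clock_hz = 2e9      # assumed single-threaded 2 GHz machine

    cycles_per_loc = seconds * clock_hz / loc
    print(f"{cycles_per_loc:,.0f} cycles per LOC")  # 100,000

    # At that rate, the 1M LOC mentioned upthread would take:
    print(f"{1_000_000 * cycles_per_loc / clock_hz:.0f} s for 1M LOC")  # 50 s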
Shameless plug: the Cwerg language (http://cwerg.org) is very focused on compilation speed.
I am puzzled that they have not already moved to the web.
Also speaking off the cuff: what are the main reasons for using Word documents in government?
If it is mostly communication with other parts of the government or the public, shouldn't this be email, which requires very little functionality compared to Word?
I can see niche cases, like laws where you want change tracking, or very long reports, but that does not seem to apply to most government employees.
Somehow I feel I am missing something big; maybe there is a lot of automation built around Word documents?