This is a necessity as long as latencies between the client and server are large enough to be perceptible to a human (i.e. almost always in a non-LAN environment).
[edit]
I also just noticed:
> ...these applications will be unusable & slow for those on older hardware or in locations with slow and unreliable internet connections.
The part about "slow and unreliable internet connections" is not specific to SPAs. If anything, a thick client provides opportunities to improve the experience for locations with slow and unreliable internet connections.
[edit2]
> If you wish to use something other than JavaScript or TypeScript, you must traverse the treacherous road of transpilation.
This is silly; I almost exclusively use compiled languages, so compilation is happening no matter what; targeting JS (or WASM) isn't that different from targeting a byte-code interpreter or hardware...
--
I like the idea of HTMX, but the first half of the article is a silly argument against SPAs. Was the author "cheating" in the second half by transpiling Clojure to the JVM? Have they tested their TODO example on old hardware with an unreliable internet connection?
> This is silly; I almost exclusively use compiled languages, so compilation is happening no matter what; targeting JS (or WASM) isn't that different from targeting a byte-code interpreter or hardware...
I agree with everything else you said, but having followed the development of Kotlin/JS and WASM closely I have to disagree with this statement.
JavaScript is a very bad compilation target for any language that wasn't designed with JavaScript's semantics in mind. It can be made to work, but the result is enormous bundle sizes (even by JS standards), difficult sourcemaps, and terrible performance.
WASM has the potential to be great, but to get useful results it's not just a matter of changing the compilation target, there's a lot of work that has to be done to make the experience worthwhile. Rust's wasm_bindgen is a good example: a ton of work has gone into smooth JS interop and DOM manipulation, and all of that has to be done for each language you want to port.
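To make the interop point concrete, here is a rough sketch of what hand-written JS/WASM glue looks like without a tool like wasm_bindgen (the module's alloc/greet exports are made up for illustration). Even passing a single string across the boundary means encoding it and copying it into the module's linear memory yourself; multiply that by every DOM access and you can see why so much work went into the generated bindings.

    // Hand-rolled JS <-> WASM interop in TypeScript. Assumes a hypothetical
    // module exporting alloc(len) -> ptr and greet(ptr, len); wasm_bindgen
    // exists to generate exactly this kind of glue (and much more) for you.
    async function callGreet(name: string): Promise<void> {
      const { instance } = await WebAssembly.instantiateStreaming(
        fetch("/app.wasm"),
        { env: {} } // imports the module expects, if any
      );
      const exports = instance.exports as {
        memory: WebAssembly.Memory;
        alloc: (len: number) => number;
        greet: (ptr: number, len: number) => void;
      };

      // Strings can't cross the boundary directly: encode to UTF-8,
      // ask the module for a buffer, and copy the bytes into linear memory.
      const bytes = new TextEncoder().encode(name);
      const ptr = exports.alloc(bytes.length);
      new Uint8Array(exports.memory.buffer, ptr, bytes.length).set(bytes);

      // Only now can the WASM side "see" the string, as (ptr, len).
      exports.greet(ptr, bytes.length);
    }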
Also, GC'd languages still have a pretty hard time with WASM.
> a thick client provides opportunities to improve the experience for locations with slow and unreliable internet connections.
The word "slow" here is unclear. Thick clients work poorly on low-bandwidth connections, as the first load takes too long to download the JS bundle. JS bundles can be crazy big and may get updated regularly; a user may give up waiting. Thin clients may load faster on low-bandwidth connections because they can use less JavaScript (including zero JavaScript for sites that support progressive enhancement, my favorite as a NoScript user). Both thin and thick clients can use fairly minimal data transfer for follow-up actions. An HTMX patch can be pretty small, although I agree the equivalent JSON would be smaller.
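As a rough illustration (the markup and fields are invented, not taken from the article), here's what the two kinds of follow-up payload might look like for marking one todo item done:

    // Illustrative only: the wire payload for "mark todo 42 as done".
    // The HTMX-style response is an HTML fragment the browser swaps in
    // directly; the JSON response is smaller but needs client code to render it.
    const htmxPatch = `<li id="todo-42" class="done">Write the HN comment</li>`;
    const jsonPatch = JSON.stringify({ id: 42, done: true });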
If "slow" means high latency, then you're right, a thick client can let the user interact with local state and the latency is only a concern when state is being synchronized (possibly with a spinner, or in the background while the user does other things).
The effect of unreliable internet is less clear-cut to me. If the download of the JS bundle fails, the thick client never loads, and a long download time may increase the likelihood of that happening. Once both are loaded, the thick client wins, as the user can keep working with local state. Both need to sync state sometimes: the thin client probably needs the user to initiate a retry (a poor experience), while the thick client could retry in the background (although many don't).
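A minimal sketch of that background-retry idea (the endpoint, shapes, and names like pendingOps are placeholders, not anything from the article): apply the write to local state immediately, queue it, and keep retrying with backoff until the server confirms.

    // Hypothetical thick-client pattern: optimistic local update + background sync.
    type Todo = { id: string; title: string; done: boolean };

    const todos: Todo[] = [];       // local state the UI renders from
    const pendingOps: Todo[] = [];  // writes not yet confirmed by the server
    let flushing = false;

    function addTodo(title: string): void {
      const todo = { id: crypto.randomUUID(), title, done: false };
      todos.push(todo);             // the user sees the change immediately
      pendingOps.push(todo);        // syncing happens later, off the hot path
      void flushPending();
    }

    async function flushPending(): Promise<void> {
      if (flushing) return;         // one background loop at a time
      flushing = true;
      let attempt = 0;
      while (pendingOps.length > 0) {
        try {
          await fetch("/api/todos", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(pendingOps[0]),
          });
          pendingOps.shift();       // confirmed; drop it from the queue
          attempt = 0;
        } catch {
          // Offline or flaky network: back off and retry, no user action needed.
          await new Promise((r) => setTimeout(r, Math.min(30_000, 1000 * 2 ** attempt)));
          attempt++;
        }
      }
      flushing = false;
    }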
Fully agree with this comment. Also, client and server state are different: on the client you only need session state relevant to the user journey; on the server you keep only persistent state and use REST level 3 for the rest.
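"REST level 3" here is the hypermedia step of the Richardson Maturity Model; HAL-style links are one common encoding. A toy sketch (the resource shape and URLs are invented for illustration):

    // Illustrative only: a hypermedia ("level 3") response. The server tells
    // the client which actions are currently available via links, so the
    // client doesn't have to hard-code the application flow.
    interface TodoResource {
      id: string;
      title: string;
      done: boolean;
      _links: {
        self: { href: string };
        complete?: { href: string; method: "POST" };  // only offered while !done
        delete: { href: string; method: "DELETE" };
      };
    }

    const example: TodoResource = {
      id: "42",
      title: "Write the HN comment",
      done: false,
      _links: {
        self: { href: "/todos/42" },
        complete: { href: "/todos/42/complete", method: "POST" },
        delete: { href: "/todos/42", method: "DELETE" },
      },
    };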