>This is a book from 2006 accurately using the salient features of LLMs popularised in the 2020s
I'm a huge fan of Peter Watts' work, but to be fair that is, at the very least, a close cousin of the Chinese Room thought experiment, which has been around for a while.
Yeah, that’s the thing: LLMs, like all software, are just the latest codification of existing theories and memes; that’s exactly what they’re trained on, and it’s exactly the path we envisioned for creating them decades ago.
The concept wasn’t new in 2006 either.
Not really seeing the “now that’s sci-fi!” reaction here.
Most of our historical compute problems were coupled to network reliability and performance. Those bottlenecks are now largely solved, and hence the explosion in AI progress.
This isn’t new; it’s more like the Higgs: we just needed to wait for the hardware.
Yes and no: good science fiction takes a trend and extrapolates. To be good, the extrapolation should be grounded in logic (though you can also just aim for entertainment and go wild).
The details might differ, but it's worth thinking about whether the implications make sense. Watts certainly asks a lot of good questions that other authors have also asked - e.g. Accelerando[1] is very interested in the question of "what happens to mankind when economic systems can no longer be participated in by recognizably human minds?", taken out to the limits of known physics.