IMO how well this kind of content lands depends on where you're starting out; otherwise you end up with too much or too little explanation of the supporting components.
Maybe, but it would have surfaced regardless, either directly or through related work. While the transformer may evolve into the next thing, it's equally likely the next evolution will be unrelated to transformers.
Moreover, while transformers and current LLMs are a leap, the monoculture around them is not necessarily a good thing, pulling many good researchers away from otherwise promising tech.
Finally, cross-pollination of ideas is where the magic happens.
On top of that, they face massive lawsuits for violating existing IP, etc. Indeed, it's far from a proven business model (heck, OpenAI isn't even profitable yet), and even further from proven as a technology, because it is so inherently unreliable no matter the model size.
World impact is not how Nobels are won. If that were the case, Elon Musk, Jeff Bezos, Zuckerberg and others would all have multiple ones.
I honestly do not see this paper as being of the same magnitude of brilliance as a typical Nobel would be. Not to mention that it barely counts as science (actually, it probably does not). Don't get me wrong: it is a huge achievement for both the machine learning research field and for humanity as a whole, but putting it alongside the achievements of Nobel physicists and such feels wrong.
Very much disagree. Google clearly saw the potential of this as well, did a ton of work, and created a lot of leading models based on it.
The big difference between Google and OpenAI is that Google "had a ton more to lose" so to speak and went forward much more cautiously. See all the hullaballoo they had to deal with e.g. with their "Ethical AI" group and the Timnit Gebru fiasco, as well as cases like where that dim bulb Google employee claimed that LaMDA was sentient. OpenAI, on the other hand, was "full speed ahead" from the get-go.
As a result, many of the top AI researchers left Google. After all, wouldn't you rather work at a place where you could see your work productized as fast as you could build it, rather than at a place where other sizable teams in your company were actively working in an adversarial role to put up roadblocks and vetoes wherever they could?
I still think that the big cos. (Google, MS, Oracle, ...) will win the AI race in the medium/long term as they have a huge momentum behind them, so they didn't really "miss" anything.
ChatGPT is much better than Google Search, though!