> One place where AI is being disruptive is research: where researchers are applying models in novel ways and making legitimate advances in math, medicine and other fields.
Great point. A perfect example (from Wikipedia):
> In 2024, Hassabis and John M. Jumper were jointly awarded the Nobel Prize in Chemistry for their AI research contributions for protein structure prediction.
AFAIK they are talking about DeepMind's AlphaFold.
Related (also from Wikipedia):
> Isomorphic Labs Limited is a London-based company which uses artificial intelligence for drug discovery. Isomorphic Labs was founded by Demis Hassabis, who is the CEO.
I think AlphaFold is where current AI terminology starts breaking down. Because in some real sense, AlphaFold is primarily a statistical model - yes, it's interesting that it was developed using ML techniques, but from the user's standpoint it's little different from the perturbation-based black boxes that were in use for the preceding 20 years.
Yes, it's an example of ML used in science (other examples include NN-based force fields for molecular dynamics simulations and meteorological models) - but a biologist or meteorologist usually cares little about how the software package they are using works (beyond knowing the different limitations of numerical vs. statistical models).
The whole "but look, AI in science" framing seems to me like a motte-and-bailey argument meant to imply the use of AGI-like MLLM agents that perform independent research - currently a much less successful approach.
I specifically didn't call LLMs statistical models - while they technically are, it's obvious they are something more. While intelligence is a hard concept to pin down, current-gen LLMs can already do most knowledge-work tasks better than most people (they write better than most people, program better than most people, are better at math than most people, have better medical knowledge than most people...). If the human is the mark of intelligence - it has been achieved.
AlphaFold is something else though. I work with something similar (specifically FNOs for biophysical simulations), and the insight that data-only models can outperform physics-based models is novel - I think the Nobel Prize was deservedly awarded. However, the thing is still closer to a curve fit than to LLMs as far as intelligence goes - or in other words, it's about as "intelligent" as the perturbation-based black boxes were.
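To make the "curve fit" point concrete: the core of an FNO is a spectral-convolution layer - transform the input function to Fourier space, multiply a handful of low-frequency modes by learned weights, and transform back. Below is a minimal NumPy sketch of that one step on a 1-D grid; the function name, toy weights, and grid size are my own illustration, not any particular library's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def fno_spectral_layer(u, weights, modes):
    """One spectral-convolution step of a Fourier Neural Operator (sketch).

    u       : real-valued function sampled on a 1-D grid, shape (n,)
    weights : learned complex filter for the lowest `modes` Fourier modes
    modes   : number of low-frequency modes to keep
    """
    u_hat = np.fft.rfft(u)                       # to Fourier space
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = u_hat[:modes] * weights    # learned filter on low modes only
    return np.fft.irfft(out_hat, n=u.shape[0])   # back to physical space

n, modes = 64, 8
u = np.sin(np.linspace(0, 2 * np.pi, n, endpoint=False))     # toy input field
w = rng.normal(size=modes) + 1j * rng.normal(size=modes)     # stand-in "trained" weights
v = fno_spectral_layer(u, w, modes)                          # same grid, filtered field
```

In a real FNO these weights are fit by gradient descent on simulation data, with a pointwise linear term and nonlinearity added around each layer - which is exactly why "learned filter fit to data" is a fair description, and why it sits closer to curve fitting than to whatever LLMs are doing.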