
I've been excited about Lean for years, not because of correctness guarantees, but because it opens the door to doing maths using software development methods.

Libraries of theorems and mathematical objects, with well defined abstractions that are ergonomic to apply in target use cases. Accompanied by good documentation, focused less on how the theorems are proven (how the functions are implemented), and more on what to use them for and how. With proper version control and package management.
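For concreteness, here is a minimal sketch of what "applying a library theorem" looks like (assuming Lean 4 with Mathlib as a dependency; the lemma and tactic are real, the example itself is only illustrative):

    -- Library results are imported and applied like ordinary library functions.
    import Mathlib.Tactic

    -- Use an existing lemma without caring how it is proved internally:
    example (a b : ℕ) : a + b = b + a := Nat.add_comm a b

    -- Or let a library decision procedure discharge the goal:
    example (a b : ℕ) : (a + b) ^ 2 = a ^ 2 + 2 * a * b + b ^ 2 := by ring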

I believe that all these practices could vastly improve collaboration and research velocity in maths, as much as or more than AI, although the two are highly complementary. If maths is coding, AI will be much better at it, and more applicable to it.





As a hobbyist mathematician / type theorist, ChatGPT et al. are great at 'looking up' theorems that you want to exist but that you may not have read about yet. They're also good at connecting disparate areas of math. I don't think Lean subsumes AI. Rather, Lean allows you to check the AI proof. ChatGPT genuinely does have a knack for certain lines of thought.
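A rough sketch of that check-the-AI's-work loop (Lean 4 with Mathlib; the theorem name and the suggested tactic here are just illustrations):

    import Mathlib

    -- You state the goal; a model proposes a proof; Lean's kernel accepts or rejects it.
    theorem sum_sq_nonneg (x y : ℝ) : 0 ≤ x ^ 2 + y ^ 2 := by
      positivity  -- a suggested tactic only gets in if it actually closes the goal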

LLMs and Lean are orthogonal; neither subsumes the other.

They both can be useful or harmful, due to their respective strengths and trade-offs.

PAC/statistical learning is good at needle-in-the-haystack problems, assuming the tail losses, simplicity bias, and corpus-representation issues are acceptable, you understand that it is fundamentally existential quantification, and you control for automation bias etc…

Lean is a wonderful collection of concepts and heuristics, but due to Rice and Gödel etc… it will not solve all problems with software development.

How Gödel’s second incompleteness theorem shows that you can prove anything, without that proof being meaningful is a lens into that.

It is horses for courses, and remember that even in sub-TC total functional programming, proving properties of an arbitrary function is very hard, while constructing one is far more tractable.

Even those proofs don’t demonstrate semantic correctness. History is riddled with examples of people using powerful tools that elegantly explain flawed beliefs.

The 2009 crash and the Gaussian copula are an example.

Get all the value you can out of these tools, but use caution, especially in math, where superficially similar notions often have conflicting conventions, constraints, and assumptions.

Obviously if your problem is ergodic with the Markov property, both will help, but automated theorem proving and PAC learning will never be a meta-theory of the other, IMHO.


> Gödel’s second incompleteness theorem shows that you can prove anything, without that proof being meaningful is a lens into that.

What has Gödel incompleteness to do with that? We can just take any sentence φ as an axiom, and we’ve a trivial proof thereof.
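In Lean terms (plain Lean 4, no libraries), the point is literally this:

    -- Postulate any proposition as an axiom and it is "proved" on the spot.
    -- The proof is trivial precisely because it carries no information.
    axiom phi : Prop
    axiom phi_holds : phi

    theorem phi_is_proved : phi := phi_holds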


> How Gödel’s second incompleteness theorem shows that you can prove anything,

That is not at all what it says.

> They both can be useful or harmful,

If a proof is admitted into Lean, there is no doubt as to its truth. There is no way in which Lean can be construed as harmful.

> The 2009 crash and the Gaussian copula are an example.

There is nothing mathematical about the economics behind the 2009 crash. Such things are statistical measurements, which admit the possibility of failure, not mathematical conclusions that are demonstrably true.


> That is not at all what it says.

Gödel's incompleteness theorems demonstrate that any sufficiently powerful, computably axiomatizable system cannot be both consistent and syntactically complete.

Gödel's second theorem proved that a formula Con_κ expressing the consistency of a theory κ is unprovable in κ if κ is consistent.

If it is not consistent, ex falso quodlibet (the principle of explosion) applies, and finding that contradiction allows any proposition, or its negation, to be proven.
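You can see that much directly in plain Lean 4, where the principle of explosion is just False.elim:

    -- From a proof of False, any proposition follows.
    example (P : Prop) (h : False) : P := h.elim

    -- Equivalently: a contradiction between P and ¬P proves anything at all.
    example (P Q : Prop) (hp : P) (hnp : ¬P) : Q := absurd hp hnp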

> They both can be useful or harmful

It is not Lean that is harmful; the harm is in mistaking finding a proof for finding truth. A proof that verifies a theorem does not have to explain why it holds, and treating assumptions that were really statistical as mathematical ones is exactly why that failed.

Probability theory is just as much of a mathematical branch as λ-calculus. But we probably do differ in opinion on how "demonstrably true" much of mathematics is.

But here is a fairly accessible document related to the crash.

https://samueldwatts.com/wp-content/uploads/2016/08/Watts-Ga...


Out of curiosity, does anyone know the mathematicians actively leaning into AI + Lean?

Terence Tao is well known for being enthusiastic about Lean and AI and he regularly posts about his experiments.

He is also a serious research mathematician at the top of his game, considered by many to be one of the best mathematicians alive. This assessment might be biased by the fact that, as such a good communicator, he is more visible than other similarly strong mathematicians, but he is a Fields medallist all the same.


Kevin Buzzard has been the main mathematician involved with Lean.

This is a recent talk where he discusses putting it together with LLMs (he's somewhat sceptical it'll be revolutionary for producing new mathematics any time soon)

https://www.youtube.com/watch?v=K5w7VS2sxD0


I'm leaning a lot into AI + Lean. It's a fantastic tool for finding new proofs. The extremely rigid nature of Lean means you can really check programs for correctness. So that part of AI is solved. The only thing that remains is generating proofs, and that is where there's nothing in the AI space right now. As soon as we do get something, our mathematical knowledge is going to explode.
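To make the checking half concrete (a plain Lean 4 sketch): the kernel either accepts a candidate proof or it doesn't, no matter where the candidate came from.

    -- Accepted: n + 0 reduces to n by the definition of addition.
    example (n : ℕ) : n + 0 = n := rfl

    -- Rejected: uncommenting this fails to typecheck, whoever (or whatever) wrote it.
    -- example (n : ℕ) : n + 1 = n := rfl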

What kind of math do you do, and what would “generating proofs” look like do you think?

I don't know why this was down-voted... I'm genuinely interested in the answers. Feel free to DM me.

Terence Tao posts on Mathstodon fairly regularly about Lean, AI, and math. I'm not going to interpret his posts.

> I've been excited about Lean for years, not because of correctness guarantees, but because it opens the door to doing maths using software development methods.

> Libraries of theorems and mathematical objects, with well defined abstractions that are ergonomic to apply in target use cases. Accompanied by good documentation, focused less on how the theorems are proven (how the functions are implemented), and more on what to use them for and how.

How is any of that different from what we had in math before Lean?


It is more software-ish. You don't just have a citation to earlier results; you can import the library. And you don't have to trust collaborators as much, because the proof engine validates the work. And you can use GitHub to coordinate large projects with incremental but public progress.
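As a hedged sketch of the "import the library" part, a hypothetical lakefile.lean (the project name and pinned tag are only examples):

    import Lake
    open Lake DSL

    package «my_math_project»

    -- Mathlib pulled in as a versioned dependency, like any software library.
    require mathlib from git
      "https://github.com/leanprover-community/mathlib4" @ "v4.9.0"

    @[default_target]
    lean_lib «MyMathProject»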


