How tf else did you honestly expect black-boxes to get built, by self-mangling machine code spit out by a sentient AI god?

Karpathy is bullish on everything bleeding-edge, and unfortunately it kinda shows when you know the material better than he does (source: I've been lecturing on all of it for a few years now). I'm not saying this is bad. It's great to see people who are engaged and bullish; it's better than most futurists waving their hands and going "something, something, warp drive".

But when you take a step back and really ask what is going on behind the scenes, all we have is massive statistical tools performing neat tricks, predicting patterns probabilistically. There's no greater understanding or ability to learn or mimic. YET. The transformer, for instance, can't easily learn complex mathematical operations: there's a Google paper on "learning" multiplication, and I know people working on building networks to "learn" sin/cos from scratch. Given these basic limitations, and pretty much every, single, paper out of Apple "intelligence" crapping on the buzz, we've pretty much hit a limit; what's left is the race to be the first company to offer multi-trillion-token parsing (or even basic, limited token-parsing memory) so companies can capture and retrieve their information.
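To make the multiplication point concrete, here's a toy sketch (my own illustration, not from the Google paper): a small MLP fits x*y nicely inside the range it was trained on, then degrades as soon as you extrapolate - the signature of pattern-fitting rather than learning the operation.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Tiny MLP that tries to "learn" multiplication from examples.
    model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                          nn.Linear(64, 64), nn.ReLU(),
                          nn.Linear(64, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Train only on pairs drawn from [-1, 1].
    x_train = torch.rand(4096, 2) * 2 - 1
    y_train = (x_train[:, 0] * x_train[:, 1]).unsqueeze(1)

    for _ in range(2000):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x_train), y_train)
        loss.backward()
        opt.step()

    # In-range inputs interpolate fine; out-of-range inputs typically
    # land far from the true value, because the net fit the pattern,
    # not the operation.
    test_in  = torch.tensor([[0.5, 0.5]])   # true product: 0.25
    test_out = torch.tensor([[3.0, 3.0]])   # true product: 9.0
    print(model(test_in).item(), model(test_out).item())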





> How tf else did you honestly expect black-boxes to get built, by self-mangling machine code spit out by a sentient AI god?

I'm not quite sure why everyone seems to want the AIs to be writing TypeScript - that's a language designed for human capabilities, with all the associated downsides.

Why not Prolog? APL? Something with richer primitives and tighter guardrails that is intrinsically hard for humans to wrangle with.


I was wondering about Prolog myself, and it turns out 1) Prolog isn’t that amazing in practice (cutting is a skill I never mastered properly) and 2) unification is what type systems do, so in essence TypeScript et al. have kinda-Prolog embedded anyway - IOW our wish has always been fulfilled, we just need to squint a bit.
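For what it's worth, the unification in question is small enough to sketch. A minimal Robinson-style unifier in Python (my own toy, occurs check omitted) - roughly the engine inside both Prolog resolution and Hindley-Milner type inference:

    # Minimal Robinson-style unification over tuples-as-terms.
    # Variables are strings starting with an uppercase letter;
    # compound terms are tuples like ("pair", "X", "int").

    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def walk(t, subst):
        # Follow variable bindings until a non-variable or a free var.
        while is_var(t) and t in subst:
            t = subst[t]
        return t

    def unify(a, b, subst=None):
        subst = {} if subst is None else subst
        a, b = walk(a, subst), walk(b, subst)
        if a == b:
            return subst
        if is_var(a):
            return {**subst, a: b}  # no occurs check, for brevity
        if is_var(b):
            return {**subst, b: a}
        if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
            for x, y in zip(a, b):
                subst = unify(x, y, subst)
                if subst is None:
                    return None
            return subst
        return None  # clash: atoms or arities differ

    # ("pair", X, "int") ~ ("pair", "bool", Y)  =>  {'X': 'bool', 'Y': 'int'}
    print(unify(("pair", "X", "int"), ("pair", "bool", "Y")))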

The computers serve us, not the other way around. They have to write in a language that humans can understand.

I get that makes people more comfortable, but if we're truly looking for a black-box implementation of a spec, they could just as well directly emit something like JVM bytecode and not worry about silly human needs like linters/formatters/etc.
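A rough illustration of how machine-oriented such a target already is, using CPython's bytecode via the stdlib dis module as a stand-in for JVM bytecode (just an analogy, not a proposal):

    import dis

    def double(x):
        return x * 2

    # The kind of thing a black box could emit directly: stack
    # operations with no formatting, naming, or linting concerns.
    dis.dis(double)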

> unfortunately it kinda shows when you know the material better than he does (source: I've been lecturing on all of it for a few years now)

That source is bearing a lot of weight.


Four years of going through the algebra of back-propagation with maths and physics undergrads; it's not that difficult :). The main challenge is combining it with stats and an almost infinite number of degrees of freedom, which makes implementation extremely painful. Hats off to the folks behind PyTorch and TF for making it possible without having to rely on Minuit or the promises of Minuit2.
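For anyone curious, the algebra in question is just the chain rule applied layer by layer. A minimal scalar sketch (my own, checked against PyTorch's autograd):

    import math
    import torch

    # Forward pass for a one-unit "network": y = w2 * tanh(w1 * x)
    x, w1, w2 = 0.5, 0.8, -1.3
    h = math.tanh(w1 * x)
    y = w2 * h

    # Backward pass by hand (chain rule):
    #   dy/dw2 = h
    #   dy/dw1 = w2 * (1 - tanh(w1*x)^2) * x
    dy_dw2 = h
    dy_dw1 = w2 * (1 - h * h) * x

    # Same thing via autograd, to confirm the algebra.
    tw1 = torch.tensor(w1, requires_grad=True)
    tw2 = torch.tensor(w2, requires_grad=True)
    ty = tw2 * torch.tanh(tw1 * x)
    ty.backward()
    print(dy_dw1, tw1.grad.item())  # should match
    print(dy_dw2, tw2.grad.item())  # should match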

Do you really think he knows that little? I mean, fair enough, you've been lecturing on it, but he was lecturing a decade ago, at Stanford. Then he took a little break to, you know, run AI at Tesla...

> Then he took a little break to, you know, run AI at Tesla...

This makes Karpathy look worse, not better.


I didn't say he knows a little; clearly he knows a _lot_.

I just think he puts on very rose-tinted glasses when looking to the future rather than seeing the problems hitting ML model design/implementation now. We had a great leap forward with Attention; it woke an entire industry up by giving them something solid to lean on. But it also highlights that we should see a _lot_ more cross-pollination of ideas between maths, the sciences, stats and comp-sci rather than re-inventing the wheel in every discipline.
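For reference, the leap in question boils down to a few lines - scaled dot-product attention from "Attention Is All You Need" (a bare-bones single-head sketch, no masking or learned projections):

    import math
    import torch

    def attention(q, k, v):
        # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
        d = q.size(-1)
        scores = q @ k.transpose(-2, -1) / math.sqrt(d)
        weights = torch.softmax(scores, dim=-1)
        return weights @ v

    # One head, sequence of 4 tokens, embedding dim 8.
    q = torch.randn(4, 8)
    k = torch.randn(4, 8)
    v = torch.randn(4, 8)
    print(attention(q, k, v).shape)  # torch.Size([4, 8])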



