quantum_state's comments

Interesting article ... with the wisdom of software engineering being forgotten, it will unfortunately get worse ...


I had a similar question ... based on a hint @DoctorOetker gave in his comments (e.g., systems biology), I searched the web and found quite a few good starting references, with https://users.ece.cmu.edu/~brunos/Lecture1.pdf, https://www.math.uwaterloo.ca/~bingalls/MMSB/Notes.pdf, and https://library.oapen.org/bitstream/id/d9e017e0-d794-4cbb-b7... being some of them.


Anthropic is losing it … this is all the “report” indicated to people …


Truly a treasure trove … unfortunately, much of the wisdom from people like Dijkstra seems to have been forgotten or ignored by the software engineering industry.


Since I've been playing around with AI a lot lately, I'd suggest taking a few papers and uploading them as context ... seeing good examples vastly improves a model's subsequent programming ability.
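If you want to do the same thing programmatically rather than through a chat UI, here is a minimal sketch of the idea, assuming the OpenAI Python SDK; the model name and file paths are placeholders, not anything from the original comment:

    # Minimal sketch: prepend a few reference papers to the prompt as context.
    # Assumes the OpenAI Python SDK; model name and file paths are placeholders.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Plain-text extracts of the papers you want the model to learn from.
    papers = [Path(p).read_text() for p in ["paper1.txt", "paper2.txt"]]
    context = "\n\n---\n\n".join(papers)

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Use these papers as reference material:\n\n" + context},
            {"role": "user",
             "content": "Implement the method from section 3 as a Python function."},
        ],
    )
    print(response.choices[0].message.content)

The same pattern works with any provider that accepts a long system prompt; the point is simply to put a few well-written examples in front of the model before asking it to write code.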


Crossing the red line has consequences … though not immediately…


At some point, quantum effects will need to be accounted for. The no-cloning theorem will make it hard to replicate the quantum state of the brain.


Snowden … there must be darker stuff that is still there …


Running pytest with uv run --active pytest ... is very slow to get started ... does anyone have some tips on this?


I would propose using the term Naive Artificial General Intelligence, in analogy to the widely used (by working mathematicians) and reasonably successful Naive Set Theory …


I was doing some naïve set theory the other day, and I found a proof of the Riemann hypothesis, by contradiction.

Assume the Riemann hypothesis is false. Then, consider the proposition "{a|a∉a}∈{a|a∉a}". By the law of the excluded middle, it suffices to consider each case separately. Assuming {a|a∉a}∈{a|a∉a}, we find {a|a∉a}∉{a|a∉a}, for a contradiction. Instead, assuming {a|a∉a}∉{a|a∉a}, we find {a|a∉a}∈{a|a∉a}, for a contradiction. Therefore, "the Riemann hypothesis is false" is false. By the law of the excluded middle, we have shown the Riemann hypothesis is true.
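For the curious, the paradox step can be made precise: once you assume unrestricted comprehension (here, just the Russell set), you can derive False, and from False every proposition follows, the Riemann hypothesis included. A minimal Lean 4 sketch, where MySet, mem, and RiemannHypothesis are assumed placeholders rather than anything from an actual library:

    -- Naive comprehension gives us a "Russell set"; from its defining
    -- property we derive False, and from False we can prove anything.
    axiom MySet : Type
    axiom mem : MySet → MySet → Prop

    -- The Russell set and its (inconsistent) specification: a ∈ R ↔ a ∉ a.
    axiom russell : MySet
    axiom russell_spec : ∀ a, mem a russell ↔ ¬ mem a russell

    theorem naive_inconsistent : False :=
      have h := russell_spec russell
      have hn : ¬ mem russell russell := fun hm => (h.mp hm) hm
      hn (h.mpr hn)

    -- A placeholder proposition standing in for the Riemann hypothesis.
    axiom RiemannHypothesis : Prop

    -- Ex falso quodlibet: an inconsistent theory proves everything.
    theorem rh_from_naive_set_theory : RiemannHypothesis :=
      naive_inconsistent.elim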

Naïve AGI is an apt analogy, in this regard, but I feel these systems are neither simple nor elegant enough to deserve the name naïve.


Actually, naive AGI such as an LLM is way more intelligent than a human. Unfortunately, that does not make it smarter ... let me explain.

When I see your comment, I think: your assumptions are contradictory. Why? Because I am familiar with Russell's paradox and the Riemann hypothesis, and you're simply WRONG (inconsistent with your implicit assumptions).

However, when an LLM sees your comment (during training), it's actually much more open-minded about it. It thinks, ha, so there is a flavor of set theory in which RH is true. Better remember it! So when this topic comes up again, the LLM won't think "you're WRONG", as a human would; it will instead think "well, maybe he's working with RH in naive set theory, so it's OK to be inconsistent".

So LLMs are more open-minded, because they're made to learn more things, and they remember most of what they learn. But somewhere along the training road, their brain falls out, and they become dumber.

But to be smart, you need to learn to say NO to BS like what you wrote. Being close-minded and having an opinion can be good.

So I think there's a tradeoff between the ability to learn new things (open-mindedness) and enforcing consistency (close-mindedness). And perhaps the AGI we're looking for is a compromise between the two, but current LLMs (naive AGI) lie at one extreme of the spectrum.

If I am right, maybe there is no superintelligence. Extreme open-mindedness is just another name for gullibility, and extreme close-mindedness is just another name for being unadaptable. (Actually, LLMs exhibit both extremes, during training and during use, with little in between.)


> It thinks, ha, so there is a flavor of set theory in which RH is true.

To the extent that LLMs think, they think "people say there's a flavour of set theory in which RH is true". LLMs don't care about facts: they don't even know that an external reality exists. You could design an AI system that operates the way you describe, and it would behave a bit like an LLM in this respect, but the operating principles are completely different, and not comparable. Everything else you've said is reasonable, but – again – doesn't apply to LLMs, which aren't doing what we intuitively believe them to be doing.


I don't think your opinion about LLMs' inner workings changes anything in what I said. Extremely open-minded people also don't care about facts, in the sense that they just accept whatever their perception of reality is, with no prejudice (in particular, no requirement of consistency of any form). How that reality is actually perceived, or whether it corresponds to human reality, is immaterial to my argument.


This reminds me of The Blind Men and the Elephant.

