That's just Scott encoding; Scott-Mogensen refers to a meta-encoding of LC in LC. Scott's encoding is fine, but as you said, it requires fixpoint recursion for many operations.
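For anyone who hasn't seen it spelled out, here's a minimal Haskell sketch of Scott-encoded naturals (names are mine): pattern matching, and hence predecessor, is constant time, but anything that traverses the number, like `add`, needs explicit recursion.

    {-# LANGUAGE RankNTypes #-}

    -- Scott-encoded naturals: a number is its own pattern match.
    newtype Nat = Nat { matchNat :: forall r. r -> (Nat -> r) -> r }

    zero :: Nat
    zero = Nat (\z _ -> z)

    suc :: Nat -> Nat
    suc n = Nat (\_ s -> s n)

    -- Predecessor is constant time: just select the "successor" branch.
    pred' :: Nat -> Nat
    pred' n = matchNat n zero id

    -- Addition needs explicit (fixpoint) recursion, because a Scott
    -- numeral carries no built-in fold over itself.
    add :: Nat -> Nat -> Nat
    add m n = matchNat m n (\m' -> suc (add m' n))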
Interestingly though, Mogensen's ternary encoding [1] does not require fixpoint recursion and is, in terms of compactness, the most efficient encoding in LC known right now.
> Just use [..], seriously
Do you have any further arguments for Scott's encoding? There are many number encodings with constant-time predecessor, and with every number requiring O(n) space and `add` being this complex, it becomes quite hard to like.
First, if we're talking about practicality, and not just theoretical considerations, then the numbers should not be stored as Peano integers a.k.a. base 1 in the first place. Use lists of 8-tuples of bools or native machine integers or whatever, otherwise, you will suffer from O(n) complexities somewhere in your arithmetic, that's just how base 1 works. Fretting over which particular basic operation has to house this linear-over-logarithmic overhead is IMO unproductive: use better data structures instead of counting sticks.
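A rough Haskell sketch of the idea, using a little-endian list of bits instead of 8-tuples (names and representation are mine): addition touches O(log n) digits instead of O(n) successors.

    -- Little-endian list of bits, standing in for "lists of 8-tuples of bools".
    type Bin = [Bool]

    -- Ripple-carry addition: O(number of digits) = O(log n) in the value,
    -- versus the O(n) successor chains of Peano / base-1 numbers.
    addBin :: Bin -> Bin -> Bin
    addBin = go False
      where
        go c []     []     = [c | c]              -- leftover carry, if any
        go c xs     []     = go c xs [False]      -- pad the shorter operand
        go c []     ys     = go c [False] ys
        go c (x:xs) (y:ys) = s : go c' xs ys
          where
            s  = x /= (y /= c)                    -- sum bit: x XOR y XOR carry
            c' = (x && y) || (c && (x /= y))      -- carry out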
Second, if we're talking about other data structures, especially recursive ones, e.g. lists, then having easily available (and performant) structural recursion is just more useful than having right folds out of the box: those can be recreated easily, but going in the other direction is much more convoluted.
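Roughly, in Haskell terms (a sketch, not anyone's canonical definitions): with structural recursion available, a right fold is a two-line definition, while recovering even `tail` from nothing but `foldr` already needs the usual pairing trick.

    -- Given structural recursion (pattern matching), a right fold
    -- is one short definition away:
    foldrFromMatch :: (a -> b -> b) -> b -> [a] -> b
    foldrFromMatch f z xs = case xs of
      []   -> z
      y:ys -> f y (foldrFromMatch f z ys)

    -- Going the other way is more convoluted: recovering even 'tail'
    -- from nothing but foldr needs a pair to carry the previous tail along.
    tailViaFoldr :: [a] -> [a]
    tailViaFoldr = snd . foldr step ([], [])
      where
        step x (rest, _) = (x : rest, rest)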
> then the numbers should not be stored as Peano integers a.k.a. base 1 in the first place
That's my point though. The linked n-ary encoding by Mogensen, for example, does not suffer from such complexities. Depending on the reducer's implementation, the (supported) operations on the de Bruijn numerals I presented are also sublinear. I doubt 8-tuples of Church booleans would be efficient though - except when letting machine instructions leak into LC.
Though I agree that the focus of functional data structures should lie on embedded folds. Compared to nested Church pairs, folded Church tuples (\cons nil.cons a (cons b nil)) or Church n-tuples (\s.s a b c) should be preferred in many cases.
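Transcribed into Haskell for concreteness (a sketch; names are mine), the two shapes mentioned above look like this:

    {-# LANGUAGE RankNTypes #-}

    -- Church/fold-encoded list: the list *is* its own right fold,
    -- i.e. \cons nil. cons a (cons b nil).
    type ChurchList a = forall r. (a -> r -> r) -> r -> r

    pairList :: ChurchList Int
    pairList cons nil = cons 1 (cons 2 nil)

    -- Church n-tuple: the tuple *is* its own eliminator,
    -- i.e. \s. s a b c.
    type Triple a b c = forall r. (a -> b -> c -> r) -> r

    triple :: Triple Int Bool Char
    triple s = s 1 True 'x'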
However, I wouldn't define bruijn as being caramelized just yet. Personally, I view as syntactic sugar only syntax that's expanded to the target language by the parser/compiler. In bruijn's case, there is barely any such syntactic sugar aside from number/string/char encodings. Everything else is part of the infix/prefix/mixfix standard library definitions, which get substituted as part of the translation to LC.
I don't know of any quadtree/low-level encoding of LC that could be memoized like that. Though you could, for example, cache the reduction of any term by hash and substitute matching hashes with their normal forms. This doesn't really work for lazy reducers or when you do not reduce strongly. And, of course (same as hashlife), this would use a lot of memory. With a lot more possibilities than GoL per "entity" (NxN grid vs variables/applications/abstractions), there will also be a lot more hash misses.
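Something like that cache-by-hash idea, sketched in Haskell under the assumption of plain de Bruijn terms and a strong reducer passed in from outside (all names are mine):

    import qualified Data.Map.Strict as M

    -- Plain de Bruijn terms.
    data Term = Var Int | App Term Term | Lam Term
      deriving (Eq, Ord, Show)

    -- Cache of already-computed normal forms, keyed by the term itself;
    -- a real implementation would key by hash / use hash consing instead.
    type Cache = M.Map Term Term

    -- 'nf' is assumed to reduce a term strongly to its normal form;
    -- as noted above, this breaks down for lazy / non-strong reducers.
    normalizeMemo :: (Term -> Term) -> Cache -> Term -> (Term, Cache)
    normalizeMemo nf cache t =
      case M.lookup t cache of
        Just r  -> (r, cache)                      -- hit: reuse the stored nf
        Nothing -> let r = nf t                    -- miss: reduce and remember
                   in (r, M.insert t r cache)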
There are also graph encodings like interaction nets, which have entirely local reduction behavior. Compared to de Bruijn indices, bindings are represented by edges, which makes them more amenable to hash consing. I once spent some time trying to add this kind of memoization, but there are some further challenges involved, unfortunately.
Or just (flip .), which also allows ((flip .) .) etc. for further flips.
In Smullyan's "To Mock a Mockingbird", these combinators are described as the "cardinal combinator once/twice/etc. removed", where the cardinal combinator itself is flip.
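For concreteness, the Haskell types work out as follows (a sketch):

    -- flip (Smullyan's cardinal C) swaps the first two arguments:
    --   flip f x y = f y x
    -- (flip .), the cardinal "once removed", leaves the first argument
    -- alone and swaps the next two:
    flip1 :: (a -> b -> c -> d) -> a -> c -> b -> d
    flip1 = (flip .)

    -- ((flip .) .), "twice removed", pushes the swap one argument further:
    flip2 :: (a -> b -> c -> d -> e) -> a -> b -> d -> c -> e
    flip2 = ((flip .) .)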
As a reference on the volume aspect: I have a tiny server where I host some of my git repos. After the fans of my server spun increasingly faster/louder every week, I decided to log the requests [1]. In a single week, ClaudeBot made 2.25M (!) requests (7.55GiB), whereas GoogleBot made only 24 requests (8.37MiB). After installing Anubis, the traffic went back down to where it was before the AI hype started.
    create x = 10;
    time point;
    print x; // prints 10 in first timeline, and 20 in the next
    create traveler = 20;
    traveler warps point {
        x = traveler;
        traveler kills traveler;
    };
My unjustified and unscientific opinion is that AI makes you stupid.
That's based solely on my own personal vibes after regularly using LLMs for a while. I became less willing and less able to think critically and carefully.
It also scares me how good they are at appealing to you and at social engineering. They have made me feel good about poor judgment and bad decisions at least twice (which I noticed later on, still in time). Give them a new, strict system prompt and they offer the opposite opinion and recommend against their previous suggestion. They are so good at arguing that they can justify almost anything and make you believe it's what you should do, unless you are among the top 1% of experts in the topic.
> They are so good at arguing that they can justify almost anything
This honestly just sounds like distilled intelligence. Because a huge pitfall for very intelligent people is that they're really good at convincing themselves of really bad ideas.
That but commoditized en masse to all of humanity will undoubtedly produce tragic results. What an exciting future...
> They are so good at arguing that they can justify almost anything
To sharpen the point a bit, I don't think it's genius "arguing" or logical jujitsu, but some simpler factors:
1. The experience has reached a threshold where we start to anthropomorphize the other end as a person interacting with us.
2. If there were a person, they'd be totally invested in serving you, with nearly unlimited amounts of personal time, attention, and focus given to your questions and requests.
3. The (illusory) entity is intrinsically shameless and appears ever-confident.
Taken together, we start judging the fictional character like a human, and what kind of human would burn hours of their life tirelessly responding and consoling me for no personal gain, never tiring, breaking character, or expressing any cognitive dissonance? *gasp* They're my friend now and I should trust them. Keeping my guard up is so tiring anyway, so I'm sure anything wrong is either an honest mistake or some kind of misunderstanding on my part, right?
TLDR: It's not mentat-intelligence or even eloquence, but rather stuff that overlaps with culty indoctrination tricks and con[fidence]-man tactics.
AI being used to completely offload thinking is a total misuse of the technology.
But at the same time, the fact that this technology can seemingly be misused and cause real psychological harm feels like a new thing. Right? There are reports of AI psychosis; I don't know how real it is, but if it is real, I don't know of any other tool that has produced that kind of side effect.
We can talk a lot about how a tool should be used and how best to use it correctly - and those discussions can be valuable. But we also need to step back and consider how the tool is actually being used, and the real effects we observe.
At a certain point you might need to ask what the toolmakers can do differently, rather than only blaming the users.
I mean, if your whole business is producing an endless stream of incorrect output and calling it good enough, why would you care about accuracy here? The whole ethos of the LLM evangelist, essentially, is "bad stuff is good, actually".
I pasted the image of the chart into ChatGPT-5 and prompted it with
>there seems to be a mistake in this chart ... can you find what it is?
Here is what it told me:
> Yes — the likely mistake is in the first set of bars (“Coding deception”).
> The pink bar for GPT-5 (with thinking) is labeled 50.0%, while the white bar for OpenAI o3 is labeled 47.4% — but visually, the white bar is drawn shorter than the pink bar, even though its percentage is slightly lower.
So they definitely should have had ChatGPT review their own slides.
Funny, isn't it - it makes me feel like it's kind of over-fitted to trying to be logical now, so when it needs to express a contradiction it actually can't.
That would still be a basic fail: you don't label a chart, you enter the data, and the pre-AGI computer program does the rest - it draws the bars and shows labels that match the data.
This half makes sense to me - 'deception' is an undesirable quality in an LLM, so less of it is 'better/more' from their audience's perspective.
However, I can't think of a sensible way to actually translate that to a bar chart where you're comparing it to other things that don't have the same 'less is more' quality (the general fuckery with graphs not starting at 0 aside - how do you even decide '0' when the number goes up as it approaches it), and what they've done seems like total nonsense.
Clearly the error is in the number: most likely the actual value is 5.0 instead of 50.0, which matches the bar height and also the other single-digit GPT-5 results on the same chart.