
Not sure if this is a joke, but actually that is guaranteed to be true. It is proven that for all n: BB(n+1) >= BB(n) + 3. But it is not proven that BB(n+1) >= BB(n) + 4, haha.


50/18 reduces to 25/9 right?


Conway's Fractran traditionally compares the accumulator against reduced fractions, but computationally speaking, reducing a fraction to lowest terms does little more than throw away otherwise valuable information used during comparison to match against a restricted set of fractions. Support for catalysts, symbols that appear on both sides of a rewrite rule, makes for a simpler and faster implementation.

  15/6 red [green] > blue [green]
  5/2 red > blue

  red green
These two fractions are not equal, and reducing the first into the second eliminates the ability to match against it only when the catalyst green is present in the accumulator.
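
To make that concrete, here is a rough multiset reading of those two rules (my own toy Python, not the parent's code, assuming red=2, green=3, blue=5, so 15/6 is red,green -> blue,green and 5/2 is red -> blue):

  from collections import Counter

  def step(state, rules):
      # Apply the first rule whose left-hand side is contained in the state.
      for lhs, rhs in rules:
          if all(state[s] >= n for s, n in lhs.items()):
              return state - lhs + rhs
      return None  # nothing matches: halt

  unreduced = [(Counter(red=1, green=1), Counter(blue=1, green=1)),  # 15/6
               (Counter(red=1),          Counter(blue=1))]           # 5/2

  print(step(Counter(red=1, green=1), unreduced))  # green catalyst present: rule 1 fires
  print(step(Counter(red=1),          unreduced))  # no green: falls through to rule 2

Reduce 15/6 to 5/2 and the first rule collapses into the second, so the green-only behavior disappears.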


I see, so you are using a different model of computation that does not use rational numbers but instead pairs of integers. From a computational point of view that makes a lot of sense; disallowing catalysts is quite annoying. But I would not call this Fractran; I would call it something like a prioritized chemical reaction network. The Wikipedia article explicitly states:

> The same variable cannot be both decremented and incremented in a single instruction (otherwise the fraction representing that instruction would not be in its lowest terms). Therefore each FRACTRAN instruction consumes variables as it tests them.


I've seen this variant used a lot in the code golfing challenges I participate in. I feel like if I called it something other than Fractran, it wouldn't be long before someone pointed out that it is quite like Fractran and I ought to call it that.


And 297/275 to 27/25?


Can you clarify what you mean by BBλ being "provably optimal"? IIUC BB functions for any Turing-complete computation model should be equivalent up to a constant. Maybe something like: there exist N, c such that for all n >= N: BBλ(n) < BB(n + c), and vice versa. I am not sure what the exact statement would be off-hand; maybe in this case we would need to replace BB() with a version based on a binary encoding of TMs.


As the OEIS entry says:

> for any other busy beaver BB based on self-delimiting programs, there is a constant c such that a(c+n) >= BB(n)

This requires an information theoretic measure of program size, such as bits or bytes, whereas the TM-based BB measures in states. I don't believe there are additively tight bounds on how many states are needed to encode an arbitrary string of n bits.


Yes, it seems that BB(3, 4) >>> BB(5, 2) (where BB(5) = BB(5, 2)). This is not too surprising since BB(3, 4) has 12 transitions in its table (3*4), while BB(5, 2) only has 10. But it also seems that BB(3, 4) >> BB(6, 2) (both have the same number of transitions), so it appears that having more symbols is valuable to these little TMs.


I think I can explain that. If you only have two symbols, you encode the same information as one four-symbol cell using two cells, so performing the same operation on them requires an extra state to read the first of the two cells and then move to one of two other states to handle the second. Fewer symbols require more states.

This is related to the linear speed-up theorem, which roughly states that you can speed up any TM by expanding its alphabet. And speed-up is not what BB is about.

So actually, it would make sense to limit the busy beavers to BB(n, 2) only.


Counting the number of distinct TMs is not a simple task. The way you count it is the most broad (the count of all tables where each cell can have any (symbol, direction, state) combination). But this is a significant overcount of the bits needed to describe an arbitrary TM. For the BB(3, 4) case, there are only about 600B distinct TMs using Tree Normal Form, aka Brady's algorithm (https://nickdrozd.github.io/2022/01/14/bradys-algorithm.html), which comes out to <40 bits.
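
Back-of-the-envelope version of that count (my own arithmetic, assuming the naive table treats each cell as a free choice of symbol, direction, and next state or halt):

  from math import log2

  def naive_count(n, k):
      # n*k table cells, each picking (symbol, direction, next state or halt)
      return (k * 2 * (n + 1)) ** (n * k)

  print(log2(naive_count(3, 4)))  # 60.0 -- 60 bits of raw table for (3, 4)
  print(log2(600e9))              # ~39.1 -- ~600B TNF machines fit in <40 bits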


Similarly, 2^n would be a huge overcount of the number of n-bit closed terms, which is given by OEIS sequence A114852 [1]. There are 36140122614 ~ 2^35 closed terms of size 49 bits (length of a straightforward self-delimiting encoding). So in that sense it's still a smaller lambda term computing an incomprehensibly larger output.

[1] https://oeis.org/A114852
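
If anyone wants to play with that count, here is a sketch of the recurrence, assuming the usual binary lambda calculus sizes (abstraction and application cost 2 bits plus their subterms, a variable with de Bruijn index i costs i+1 bits):

  from functools import lru_cache

  @lru_cache(maxsize=None)
  def terms(n, free):
      # lambda terms of exactly n bits with at most `free` usable binders
      total = 1 if 2 <= n <= free + 1 else 0           # variable, index 1..free
      if n >= 2:
          total += terms(n - 2, free + 1)              # abstraction
          total += sum(terms(i, free) * terms(n - 2 - i, free)
                       for i in range(n - 1))          # application
      return total

  print(terms(49, 0))  # should reproduce the 36140122614 quoted above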


Do you know of any (hand-wavy is ok) intuitive explanation for why this machine will halt, beyond the inductive proofs given in your article? Just watching the machine run, I would guess at some kind of infinite behavior. It is remarkable that this is not the case.


> Do you know of any (hand-wavy is ok) intuitive explanation for why this machine will halt, beyond the inductive proof's given in your article?

If the Halting problem could be solved by intuition, it wouldn't be much of a problem.


Yet in this case there is a proof that it halts, and that proof contains an argument. I was asking for an explanation of the essence of that argument, if simplifying it is possible.


The Busy Beaver problem sits somewhere on the range from "intellectual curiosity" to "lens that allows us to view the edges of uncomputability". I would guess that the majority of people doing work here are hobbyists (including myself). In fact, when Tibor Rado first introduced BB, he introduced it as the "Busy Beaver Game", so it has had a playful energy since the beginning :)


His survey 3 years ago kicked off quite a flurry of Busy Beaver activity, of which my entire blog and https://bbchallenge.org/ are a couple of examples.


Thank you! I'm so glad to hear!


Yeah, I've oversimplified a bit with this title. The more accurate statement is in the first paragraph of the article: "Solving the BB(3, 3) problem is at least as hard as solving this Collatz-like problem."

I also agree somewhat on the one trajectory vs. multiple trajectories point. However, note that (assuming we live in the world where this TM never halts) proving a single trajectory in this system is "harder" than a single trajectory in the classic Collatz conjecture. Specifically, (assuming the Collatz conjecture is true) proving any single trajectory is "simply" a finite computation. However, proving a single trajectory from the article requires showing that it never halts which will require some more fancy math!
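
(For contrast, checking a single classic Collatz trajectory really is just a loop; a toy sketch, not from the article:)

  def collatz_steps(n):
      # finite computation, assuming the trajectory really does reach 1
      steps = 0
      while n != 1:
          n = n // 2 if n % 2 == 0 else 3 * n + 1
          steps += 1
      return steps

  print(collatz_steps(27))  # 111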

Anywho, I don't want to oversell it. This does not prove that BB(3, 3) requires proving the Collatz conjecture or any existing well-studied open problem in math. But I think it's sort of a "second best" result: a hard problem akin to a well-studied one.

How hard is this Collatz-like problem? Well, let's see if anyone can solve it :)


> "Solving the BB(3, 3) problem is at least as hard as solving this Collatz-like problem."

> How hard is this Collatz-like problem? Well, let's see if anyone can solve it :)

I thought John Conway already proved that all instances of the halting problem can be converted to a Collatz-like problem [1]. So one could say this about all BB values, not just BB(3, 3). Some will be easy, some will be hard, but all are reducible to Collatz-like problems IIUC.

[1] https://julienmalka.me/collatz.pdf
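
For anyone who hasn't seen it, the sense in which a Fractran program is already a Collatz-like map (toy fractions below, not Conway's actual program):

  from fractions import Fraction

  def fractran_step(n, program):
      # multiply by the first fraction that keeps you in the integers; halt if none does
      for f in program:
          if (n * f).denominator == 1:
              return (n * f).numerator
      return None

  program = [Fraction(5, 2), Fraction(1, 5)]  # made-up example
  n = 8
  while n is not None:
      print(n)
      n = fractran_step(n, program)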


You are right that every TM can be converted into a Collatz-like problem using Conway's Fractran compilation. So technically the statement "Solving the BB(n, k) problem is at least as hard as solving a Collatz-like problem." is true in general.

However, the Collatz-like problems you get from this compilation will be gigantic; they will not have distilled the problem into a simpler description of its behavior, but instead created a more complicated way of observing that behavior. The Collatz-like problem I present here is a simplification of the behavior of this TM. If you observe the machine running, you will see that it is effectively performing these transitions.

In other words, I am not arbitrarily choosing to convert this to a Collatz-like problem simply because it is possible. I am looking at the behavior of this machine and that behavior turns out to be Collatz-like naturally.

Of course none of this proves that my Collatz-like problem really is hard ... but as someone else here mentioned, being hard is not a mathematical thing; it is a belief we have about certain problems we cannot solve after considerable effort.


I think the point here is that it was not known whether all 3,3 halting problems are reducible to trivial Collatz-like problems. Unless someone is able to show that sligocki's is actually trivial, this provides a counterexample.


> I think the point here is that it was not known whether all 3,3 halting problems are reducible to trivial Collatz-like problems.

True, but it still isn't known either way. Nothing changed in that regard.


'trivial' isn't a mathematical concept. If no one has shown why this problem is trivial, it isn't trivial.


This problem isn't trivial, but neither are the ones we could already reduce BB(3, 3) to (via Conway's construction).


Some reductions via Conway's method give non-trivial Collatz problems even though other simple techniques show the halting question for those machines to be trivial; in other words, the Conway reduction sometimes turns trivial Turing machines into non-trivial Collatz problems.


The issue is the "run the turing machine for BB(748) steps" part. We don't know what BB(748) is. If the god of busy beavers came to us and told us that value, then we could (in theory) run the TM that long and just like you say, that would prove whether ZFC is consistent. But in order for us mere mortals to compute BB(748) we would effectively need to figure out if this specific 748-state TM ever halted (along with all the other 748-state TMs).
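
In pseudocode, the hypothetical procedure would look something like this (everything here is a stand-in: we can't actually write down the bound or run the machine that long):

  def zfc_is_consistent(bound_on_BB748, zfc_machine_step, initial_config):
      # bound_on_BB748, zfc_machine_step and initial_config are hypothetical
      config = initial_config
      for _ in range(bound_on_BB748):
          config = zfc_machine_step(config)
          if config is None:       # the 748-state machine halted:
              return False         # it found a contradiction in ZFC
      return True                  # outlasted BB(748) steps: it never halts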


To put it another way, an oracle telling us an upper bound on BB(748) would be strictly more powerful than an oracle telling us ZFC is consistent.


We don't need to know BB(748), just an upper bound on BB(748).

... which means we can't prove any upper bound on BB(748) within ZFC.


A lot of math draws conclusions from something not possible in practice (e.g. "let's take all the natural numbers and ...").

All the parent is saying is: if it were possible, even theoretically, to learn the answer to the BB problem, then we would have proven something that cannot be proven, as shown by Gödel incompleteness.

