I guess this PRNG has a recurrence period much less than 52! (or wasn't seeded appropriately), so it only sampled a fraction of the distribution
"... the system ties its number generation to the number of seconds that have passed since midnight, resetting once each day, which further limits the possible random values. Only about 86 million arrangements could be generated this way,"
If you take N samples of a real signal you get N/2+1 bins of information from the DFT, covering 0 Hz up to half the sampling rate (the Nyquist frequency).
The bins don't measure a single exact frequency; each one is more like an average of the power in a band around that frequency.
As you take more and more samples, the bin spacing gets finer and the band of frequencies going into each average gets tighter (roughly speaking). By collecting enough samples (at an appropriate rate), and by employing other signal-processing tricks, you can get as precise a measurement as you need around particular frequencies.
If you graph the magnitude of the DFT, signals that concentrate their power at just a few frequencies show just a few peaks, around the corresponding bins. E.g. a major chord would show 3 fundamental peaks corresponding to the 3 tones (plus a bunch of harmonics).
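A quick NumPy sketch of that last point (three pure tones rather than a real instrument, so no harmonics show up here): sample one second of a C-major triad and look at where the DFT magnitude peaks.

    # Sample a C-major triad and find the three strongest DFT bins.
    # With N samples at rate fs, bin k sits at k * fs / N Hz, so the peaks
    # land in the bins closest to 261.63, 329.63 and 392.00 Hz.
    import numpy as np

    fs = 8000          # sample rate, Hz
    N = 8000           # one second of samples -> 1 Hz bin spacing
    t = np.arange(N) / fs
    freqs = [261.63, 329.63, 392.00]            # C4, E4, G4
    x = sum(np.sin(2 * np.pi * f * t) for f in freqs)

    spectrum = np.abs(np.fft.rfft(x))           # N/2 + 1 bins, 0 Hz .. fs/2
    bin_hz = np.fft.rfftfreq(N, d=1/fs)
    top3 = bin_hz[np.argsort(spectrum)[-3:]]    # bins with the largest magnitude
    print(sorted(top3))                         # ~ [262.0, 330.0, 392.0]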
It's depressing how common this accusation has become here. Before LLM idiocy ruined everything, you know what? People wrote things you wouldn't like, in a way you wouldn't like. Especially on their blogs. But HN is so smart it can immediately see that a tenured Yale professor has no life and is trying to win the message board game with AI slop!
Nobody in this thread accused an LLM of writing the OP. Instead, they are saying that it is dumb and easy in the way a lot of LLM writing is, and that an LLM wouldn't have any problem writing it. The author is being disliked in the traditional way, but with an LLM-assisted proof that LLMs really can write this crap, and write it well.
The real proposal should be that the authors of slate dot com type "Is Food Really Good For You?" or "Hands Are A Completely Unnecessary Part Of The Arm" articles should be replaced by an LLM.
I like the proliferation of LLM slop, because it involuntarily reveals the emptiness of an enormous proportion of actual human writing. You can't help but see it, even if you don't want to. You end up forced to talk about the author's resume in defense.
Convolution with a Dirac delta gives you an exact sample of f(0), and in principle a whole signal could be reconstructed as a combination of delayed deltas - but we can't realize an exact delta signal in most spaces, only approximations.
As a result we get finite resolution and truncation of the spectrum. So "Fourier analysis with a pre-applied lowpass filter" would be the analysis of sampled signals, with the filter determined by the sampling kernel (the delta approximator) and the properties of the DFT.
But so long as the sampling kernel is good (that is the actual terminology), we can recover f exactly as the limit of these fuzzy interpolations.
The term "resolution of the identity" is associated with the fact that delta doesn't exist in most function spaces and instead has to be approximated. A good sampling kernel "resolves" the missing (convolutional) identity. I like thinking of the term also in the sense that these operators behave like the identity if it were only good up to some resolution.
748 is not tight. As given in the article, BB(643) is already independent of ZFC, and the author speculates that something as small as BB(9) could be as well.
The 748/745/643 numbers are just examples of actual machines people have written, with that many states, that halt iff they find a proof of "false" (a contradiction in ZFC).
At any rate, given the precise k, I believe your intuition is correct. I've heard this called "proof by simulation": if you know a bound on BB(N), you can run any N-state machine for that many steps, and if it hasn't halted you know it will run forever. But this property is exactly the intuition for why the function grows so fast, and why we will likely never definitively know anything beyond BB(5). BB(6) seems to be at least as hard as a Collatz-like problem, for example.
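A toy sketch of "proof by simulation", using the known 2-state, 2-symbol busy beaver champion (BB(2) = 6 steps) since anything bigger is hopeless to show here; the table encoding and names are mine, but the idea is just: given an upper bound on BB(n), deciding halting is nothing more than running the machine that long.

    # If step_bound >= BB(n) for this size class (started on a blank tape),
    # then "didn't halt within step_bound steps" means "never halts".
    from collections import defaultdict

    # (state, symbol) -> (write, move, next_state); 'H' = halt.
    # This is the 2-state, 2-symbol busy beaver champion: halts after 6 steps.
    BB2 = {
        ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
        ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H'),
    }

    def halts_within(table, step_bound):
        tape, pos, state = defaultdict(int), 0, 'A'
        for step in range(1, step_bound + 1):
            write, move, state = table[(state, tape[pos])]
            tape[pos] = write
            pos += move
            if state == 'H':
                return True, step
        return False, step_bound   # with step_bound >= BB(n): provably never halts

    print(halts_within(BB2, 6))    # (True, 6)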
When it was essential to perception. It's necessary to have a model of a circle (or ellipse, ...) in order to correctly parse visual perception, at least - because space is inherently geometrical.
You need some additional assumptions. Only near equilibrium / in the thermodynamic limit is the system linear in entropy. What governs physical processes like the ones you mention is conservation and dynamics pushing toward equipartition of energy - but outside that regime these are no longer "theorems".
That alone would be revolutionary - but it's still aspirational for now. The other day Gemini mixed up left and right on me in response to a basic textbook problem.
"... the system ties its number generation to the number of seconds that have passed since midnight, resetting once each day, which further limits the possible random values. Only about 86 million arrangements could be generated this way,"