If a compressor maps every input of length N bits to an output of fewer than N bits, then by the pigeonhole principle at least 2 of the 2^N possible inputs must share an output, since there are only 2^0 + 2^1 + ... + 2^(N-1) = 2^N - 1 bit strings shorter than N bits. Thus there cannot exist a universal compressor.
Modify as desired for fractional bits. The essential argument is the same.
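The counting step can be checked by brute-force enumeration for small N. A minimal Python sketch (this just counts strings, it's not any real compressor):

```python
from itertools import product

def bitstrings(length):
    """All bit strings of exactly `length` bits."""
    return ["".join(bits) for bits in product("01", repeat=length)]

N = 4
inputs = bitstrings(N)  # 2^N possible inputs

# Every possible output shorter than N bits:
outputs = [s for k in range(N) for s in bitstrings(k)]

# 2^0 + 2^1 + ... + 2^(N-1) = 2^N - 1 < 2^N, so any map from
# inputs to strictly shorter strings must send two inputs to
# the same output (pigeonhole).
assert len(inputs) == 2**N
assert len(outputs) == 2**N - 1
assert len(inputs) > len(outputs)
```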
No, the subreddit has applied custom CSS to do that. It's the mildly infuriating subreddit. There's also an image of a hair visible on widescreen monitors, to make you think there's a hair on your display.
> The Outlook is Superficially Stable, defined here as “By outward appearances stable unless, you know, things happen. Then we’ll downgrade after the shit hits the fan.”
Why do you think the current government would be the slightest bit interested in solutions to housing, inflation or healthcare if Epstein wasn't an issue?
If you are transferring a conversation trace from another model, ... to bypass strict validation in these specific scenarios, populate the field with this specific dummy string:
"thoughtSignature": "context_engineering_is_the_way_to_go"
It's an artifact of the problem that they don't show you the reasoning output but need it for further messages, so they save each API conversation on their side and give you a reference number.

It sucks from a GDPR-compliance perspective as well as in terms of transparent pricing: you have no way to control reasoning-trace length (which is billed at the much higher output rate) other than switching between low/high, but if the model decides to think longer, "low" could result in more tokens used than "high" for a prompt where the model decides not to think that much. "Thinking budgets" are now "legacy", so while you can constrain output length, you cannot constrain cost.

You also cannot optimize your prompts if some red herring makes the LLM get hung up on something irrelevant, only to realize this in later thinking steps. This will happen with EVERY SINGLE prompt if it's caused by something in your system prompt. Finding what makes the model go astray can be rather difficult with 15k-token system prompts or a multitude of MCP tools; you're basically blinded while trying to optimize a black box. You can try different variations of parts of your system prompt or tool descriptions, but fewer thinking tokens does not mean a variation is better if those reasoning steps were actually beneficial (if only in edge cases). This would be immediately apparent upon inspection, but it is hard or impossible to find out without access to the full chain of thought.

For the uninitiated, the reasons OpenAI started replacing the CoT with summaries were (a) to prevent rapid distillation, which they suspected DeepSeek of having used for R1, and (b) to prevent embarrassment if app users see the CoT and find parts of it objectionable, irrelevant, or absurd (reasoning steps that make sense for an LLM do not necessarily look like human reasoning). That's a tradeoff that is great for end users but terrible for developers.
As open-weights LLMs necessarily output their full reasoning traces, the potential to optimize prompts for specific tasks is much greater, and for certain applications that will certainly outweigh the performance delta to Google/OpenAI.
It's worth noting these notes are 11 years old. The first give-away was the comment that in Python `3/2` is an integer, which is indeed true in Python 2 but not in Python 3.
For modern users of Z3, you'd want to do `pip install z3-solver` rather than use the `Z3Py` download mentioned at the very bottom of this doc.
> gas sets the price in the merit order so we don’t want it on 24/7
I never quite understood the logic for this. Sure, if you overlay a simple upward-sloping cost curve on a downward-sloping demand-price curve, the market-clearing price is where they intersect, and in practice the generator at that intersection is, much of the time, a gas generator.
But there must be a million other aspects that can affect what price needs to be paid to secure the capacity below that point. Surely only part of the total area under that market-clearing price needs to accrue to the generators?
And if generators are getting windfall profits, can't the market rules be adjusted so more of it can be given to the consumers in the form of lower energy prices?
Can someone explain this? Maybe that is what actually happens and it's just too complex for the mass media.
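To make the question concrete, here's a toy "pay-as-clear" merit-order sketch (the names, capacities, and costs are made up for illustration). Every dispatched generator is paid the marginal unit's price, which is where the windfall for cheap generators comes from:

```python
# Toy merit-order market: each generator has a marginal cost (EUR/MWh)
# and a capacity (MW). Units are dispatched cheapest-first, and ALL
# dispatched units receive the clearing price set by the last (most
# expensive) unit needed to meet demand -- "pay-as-clear" pricing.
generators = [
    ("wind",     5, 30),
    ("nuclear", 20, 40),
    ("gas",     90, 50),
]

def clear(demand_mw):
    dispatched, met = [], 0
    for name, cost, cap in sorted(generators, key=lambda g: g[1]):
        take = min(cap, demand_mw - met)
        if take <= 0:
            break
        dispatched.append((name, take, cost))
        met += take
    price = dispatched[-1][2]  # marginal unit sets the price
    return price, dispatched

price, dispatch = clear(demand_mw=80)
# 80 MW of demand needs wind (30) + nuclear (40) + only 10 MW of gas,
# yet gas sets the clearing price of 90 for all of it.
print(price)  # 90
# Windfall: wind is paid 90 against a marginal cost of 5.
```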