That's the consequence of four freeways (I-580, I-80, I-880, SH-24) all dumping their traffic onto one bridge, and of using metering lights to try to keep traffic on the bridge itself flowing.
Starting at 9:46, it goes from wow to WOW. The last two minutes in particular are incredible, including the bizarre artifacts in the final 15 seconds before the stream dies.
Having a front door physically allows anyone on the street to come up and knock on it. Having a "no soliciting" sign is an instruction clarifying that not everybody is welcome. A web site should operate in a similar fashion: robots.txt is the equivalent of such a sign.
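For what it's worth, the "sign" itself is only a couple of lines. A minimal robots.txt that asks every crawler to stay off the whole site looks something like this (illustrative only):

    # Ask all well-behaved crawlers to stay out
    User-agent: *
    Disallow: /

Like the physical sign, it only works if the visitor chooses to respect it.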
And, despite what ideas you may get from the media, mere trespass without imminent threat to life is not a justification for deadly force.
There are some states where the requirements for self-defense do not include a duty to retreat if possible, either in general ("stand your ground" laws) or specifically in the home ("castle doctrine"). But all the other requirements for self-defense (imminent threat of certain kinds of serious harm, proportional force) remain part of the law in those states, and trespassing in disregard of a "no soliciting" sign would not, by itself, satisfy those requirements.
> No one is calling for the criminalization of door-to-door sales
Ok, I am, right now.
It seems like there are two sides here talking past one another: "people will do X, and if you could prevent it but don't, you're accepting it" and "X is bad behavior that should be stopped, and the burden of stopping it shouldn't fall on individuals". As someone who leans toward the latter, the former just sounds like a restatement of the problem being complained about.
Yes, because most of the things that people talk about (ChatGPT, Google SERP AI summaries, etc.) currently use tools in their answers. We're a couple years past the "it just generates output from sampling given a prompt and training" era.
It depends: some queries will invoke tools such as search, some won't. A research agent will use search, but it then summarizes and reasons over the results to synthesize an answer, so at that point you are back to LLM generation.
The net result is that some responses are going to be more reliable (or at least coherently derived from a single search source) than others. But to the casual user, and maybe to most users, it's never quite clear what the "AI" is doing, and it's right enough, often enough, that they tend to trust it, even though that trust is only justified some of the time.
That's not the problem. The problem is that you're adding a data dependency on the CPU loading the first byte. The branch-based one just "predicts" the number of bytes in the codepoint and can keep executing code past that. In data that's ASCII, relying on the branch predictor to just guess "0" repeatedly turns out to be much faster as you can effectively be processing multiple characters simultaneously.
I am pretty sure CPUs can speculatively load as well. In the CPU pipeline, when it sees a repeated load instruction, it should be able to dispatch and perform all of them in the pipeline. The nice thing is that there is no hazard here, because all of the speculatively loaded data is usable, unlike the ~1% of cases where the branch prediction fails and the whole pipeline has to be flushed.
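To make the trade-off concrete, here is a minimal sketch in C of the two styles being discussed (simplified and non-validating; the function names are invented for illustration). In the table version the next load address depends on the data just loaded, while in the branchy version the predictor can guess the common ASCII case and run ahead:

    #include <stdint.h>
    #include <stddef.h>

    /* Branch-free style: the length of each codepoint is computed from its
       first byte, so the address of the *next* codepoint depends on the data
       just loaded (a serial load-to-load dependency chain). */
    size_t cp_len_branchless(uint8_t b) {
        /* leading-byte pattern (top 5 bits) -> sequence length */
        static const uint8_t len[32] = {
            1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1, /* 0xxxxxxx: ASCII                   */
            1,1,1,1,1,1,1,1,                 /* 10xxxxxx: stray continuation byte */
            2,2,2,2,                         /* 110xxxxx                          */
            3,3,                             /* 1110xxxx                          */
            4,1                              /* 11110xxx / invalid                */
        };
        return len[b >> 3];
    }

    /* Branchy style: the common ASCII case is a highly predictable branch,
       so the CPU can speculate past it and start on the next byte before
       this one has even arrived from memory. */
    size_t cp_len_branchy(uint8_t b) {
        if (b < 0x80) return 1;            /* almost always taken on ASCII text */
        if ((b & 0xE0) == 0xC0) return 2;
        if ((b & 0xF0) == 0xE0) return 3;
        return 4;
    }

    /* Count codepoints; swap in either helper to compare the two styles. */
    size_t count_codepoints(const uint8_t *s, size_t n) {
        size_t i = 0, count = 0;
        while (i < n) {
            i += cp_len_branchy(s[i]);
            count++;
        }
        return count;
    }

Whether speculation actually hides the load dependency chain in the first version depends on the microarchitecture, which is essentially the disagreement here.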
No, that's not the stance for electrical utilities (at least in most developed countries, including the US): the vast majority of weather events cause localized outages. The grid as a whole has redundancies built in (distribution to residential and some industrial customers does not): it expects failures of some power plants, transmission lines, etc., and can adapt with reserve power or, in very rare cases, with partial degradation (i.e. rolling blackouts). It doesn't go down fully.
You're measuring a cached compile in the subsequent runs. The deps.compile probably did some native compilation directly in the deps folder rather than in _build.
No, their results are correct. It roughly halved the compilation time on a newly generated Phoenix project. I'm assuming the savings would be even larger on projects with multiple native dependencies that have lengthy compile times.
rm -rf _build/ deps/ && mix deps.get && time MIX_OS_DEPS_COMPILE_PARTITION_COUNT=1 mix deps.compile
________________________________________________________
Executed in   37.75 secs      fish           external
   usr time  103.65 secs     32.00 micros  103.65 secs
   sys time   20.14 secs    999.00 micros   20.14 secs
rm -rf _build/ deps/ && mix deps.get && time MIX_OS_DEPS_COMPILE_PARTITION_COUNT=5 mix deps.compile
________________________________________________________
Executed in   16.71 secs      fish           external
   usr time    2.39 secs      0.05 millis    2.39 secs
   sys time    0.87 secs      1.01 millis    0.87 secs
rm -rf _build/ deps/ && mix deps.get && time MIX_OS_DEPS_COMPILE_PARTITION_COUNT=10 mix deps.compile
________________________________________________________
Executed in   17.19 secs      fish           external
   usr time    2.41 secs      1.09 millis    2.40 secs
   sys time    0.89 secs      0.04 millis    0.89 secs
Oh, interesting. I guess `time` is only reporting the usr/sys time of the main process rather than that of the child workers when PARTITION_COUNT is higher than 1?
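For what it's worth, the wall-clock numbers suggest the CPU time is just going unaccounted for rather than the work shrinking by 40x. Assuming the total compilation work is roughly the same in each run:

    103.65 s user CPU / 37.75 s wall ≈ 2.7 cores busy   (PARTITION_COUNT=1)
    103.65 s user CPU / 16.71 s wall ≈ 6.2 cores busy   (same work spread across 5 partitions)

so the 2.39 s figure is consistent with `time` only seeing the coordinating process, not the partition workers.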