yuliyp's comments | Hacker News

This is an energy storage ("battery") system, not a generation system.

ahh that makes total sense lol

That's the consequence of four freeways (I-580, I-80, I-880, and SR-24) all dumping their traffic onto a bridge, and of using metering lights to try to keep the bridge itself working.

It goes from wow to WOW starting at 9:46. The last 2 minutes in particular are incredible, including the bizarre artifacts in the last 15 seconds before the stream dies.


I'd presume they have the ability to deploy a previous artifact vs only tip-of-master.


Having a front door physically allows anyone on the street to come knock on it. Having a "no soliciting" sign is an instruction clarifying that not everybody is welcome. A website should operate in a similar fashion, and robots.txt is the equivalent of such a sign.
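For instance, a minimal robots.txt reads exactly like such a posted sign: a published request with no enforcement mechanism behind it (the path and the choice of crawler here are just illustrative):

    # A request, not an access control: crawlers decide whether to honor it.
    User-agent: *
    Disallow: /private/

    # Ask one specific crawler (GPTBot is OpenAI's) to stay out entirely.
    User-agent: GPTBot
    Disallow: /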


"No soliciting" signs are polite requests that no one has to follow, and door-to-door salesmen regularly walk right past them.

No one is calling for the criminalization of door-to-door sales and no one is worried about how much door-to-door sales increases water consumption.


If a company were sending hundreds of salesmen to knock on a door one after another, I'm pretty sure it could successfully be sued for harassment.


Can’t Americans literally shoot each other for trespassing?


Generally, legally, no, not just for ignoring a “no soliciting” sign.


But they’re presumably trespassing.


And, despite what ideas you may get from the media, mere trespass without imminent threat to life is not a justification for deadly force.

There are some states where the considerations for self-defense do not include a duty to retreat if possible, either in general ("stand your ground" laws) or specifically in the home ("castle doctrine"), but all the other requirements for self-defense (imminent threat of certain kinds of serious harm, proportional force) remain part of the law in those states, and trespassing while disregarding a "no soliciting" sign would not, by itself, satisfy those requirements.


> door-to-door salesmen regularly walk right past them.

Oh, now I understand why Americans can't see a problem here.


>No one is calling for the criminalization of door-to-door sales

Ok, I am, right now.

It seems like there are two sides here that are talking past one another: "people will do X, and you accept it if you don't actively prevent it when you can" versus "X is bad behavior that should be stopped, and stopping it shouldn't be the burden of individuals." As someone who leans toward the latter, I hear the former as just restating the problem being complained about.


> No one is calling for the criminalization of door-to-door sales

Door-to-door sales absolutely are banned in many jurisdictions.


And a no soliciting sign is no more cosmically binding than robots.txt. It's a request, not an enforceable command.


Tell me you work in an ethically bankrupt industry without telling me you work in an ethically bankrupt industry.


Yes, because most of the things that people talk about (ChatGPT, Google SERP AI summaries, etc.) currently use tools in their answers. We're a couple years past the "it just generates output from sampling given a prompt and training" era.


It depends - some queries will invoke tools such as search, some won't. A research agent will be using search, but then summarizing and reasoning about the responses to synthesize a response, so then you are back to LLM generation.

The net result is that some responses are going to be more reliable (or at least coherently derived from a single search source) than others. But at least to the casual user, and maybe to most users, it's never quite clear what the "AI" is doing, and it's right enough, often enough, that they tend to trust it, even though that trust is only justified some of the time.


Perhaps those were different iterations of the technique over time: start with marking cards to identify face cards, then move on to the X-ray table.


That's not the problem. The problem is that you're adding a data dependency on the CPU loading the first byte. The branch-based one just "predicts" the number of bytes in the codepoint and can keep executing code past that. In data that's ASCII, relying on the branch predictor to just guess "0" (i.e. a one-byte character) repeatedly turns out to be much faster, as you can effectively be processing multiple characters simultaneously.
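A minimal sketch in C (function names and table layout are mine, just to illustrate, not taken from any particular decoder) shows where the dependency comes from:

    #include <stddef.h>
    #include <stdint.h>

    /* Branchless: the length comes from a table lookup on the byte we just
       loaded, so the address of the next character depends on that load
       completing -- a serial data-dependency chain across iterations. */
    static const uint8_t len_table[16] = {
        1, 1, 1, 1, 1, 1, 1, 1,  /* 0xxxxxxx: ASCII                      */
        1, 1, 1, 1,              /* 10xxxxxx: invalid lead byte, skip it */
        2, 2,                    /* 110xxxxx: 2-byte sequence            */
        3,                       /* 1110xxxx: 3-byte sequence            */
        4                        /* 11110xxx: 4-byte sequence            */
    };

    size_t count_branchless(const uint8_t *p, const uint8_t *end) {
        size_t n = 0;
        while (p < end) {
            p += len_table[*p >> 4]; /* next iteration waits on this load */
            n++;
        }
        return n;
    }

    /* Branchy: on ASCII-heavy input the predictor keeps guessing the first
       test correctly, so the next load address is known speculatively and
       several iterations can be in flight at once. */
    size_t count_branchy(const uint8_t *p, const uint8_t *end) {
        size_t n = 0;
        while (p < end) {
            uint8_t b = *p;
            if      (b < 0x80) p += 1; /* the hot, well-predicted path */
            else if (b < 0xE0) p += 2;
            else if (b < 0xF0) p += 3;
            else               p += 4;
            n++;
        }
        return n;
    }

The branchless version isn't doing more work; it just can't overlap iterations, because each pointer bump waits on a load.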


I am pretty sure CPUs can speculatively load as well. When the pipeline sees a repeated load instruction, it should be able to dispatch and perform all of them in flight. The nice thing is that there is no execution hazard here, because all of the speculative loads are usable, unlike the ~1% chance that the branch fails, which causes the whole pipeline to be flushed.


No, that's not the stance for electrical utilities (at least in most developed countries, including the US): the vast majority of weather events cause localized outages (the grid as a whole has redundancy built in; distribution to residential and some industrial customers does not). The grid expects failures of some power plants, transmission lines, etc., and can adapt with reserve power or, in very rare cases, with partial degradation (i.e. rolling blackouts). It doesn't go down fully.


Spain and Portugal had a massive power outage this spring, no?


Yeah, and it has a 30-page Wikipedia article with 161 sources (https://en.wikipedia.org/wiki/2025_Iberian_Peninsula_blackou...). Does that seem like a common occurrence?


You're measuring a cached compile in the subsequent runs. The deps.compile probably did some native compilation in the deps folder directly rather than in _build.


No, their results are correct. It roughly halved the compilation time on a newly generated Phoenix project. I'm assuming the savings would be greater on projects with multiple native dependencies that have lengthy compilation:

    rm -rf _build/ deps/ && mix deps.get && time MIX_OS_DEPS_COMPILE_PARTITION_COUNT=1 mix deps.compile
    ________________________________________________________
    Executed in   37.75 secs    fish           external
       usr time  103.65 secs   32.00 micros  103.65 secs
       sys time   20.14 secs  999.00 micros   20.14 secs

    rm -rf _build/ deps/ && mix deps.get && time MIX_OS_DEPS_COMPILE_PARTITION_COUNT=5 mix deps.compile
    ________________________________________________________
    Executed in   16.71 secs    fish           external
       usr time    2.39 secs    0.05 millis    2.39 secs
       sys time    0.87 secs    1.01 millis    0.87 secs
    
    rm -rf _build/ deps/ && mix deps.get && time MIX_OS_DEPS_COMPILE_PARTITION_COUNT=10 mix deps.compile
    ________________________________________________________
    Executed in   17.19 secs    fish           external
       usr time    2.41 secs    1.09 millis    2.40 secs
       sys time    0.89 secs    0.04 millis    0.89 secs


Similar result on one of my real projects, which is heavier on Elixir dependencies but has only one additional native dependency (brotli):

    mise use elixir@1.19-otp-26 erlang@26
    
    rm -rf _build/ deps/ && mix deps.get && time MIX_OS_DEPS_COMPILE_PARTITION_COUNT=1 mix deps.compile
    ________________________________________________________
    Executed in   97.93 secs    fish           external
       usr time  149.37 secs    1.45 millis  149.37 secs
       sys time   28.94 secs    1.11 millis   28.94 secs
    
    rm -rf _build/ deps/ && mix deps.get && time MIX_OS_DEPS_COMPILE_PARTITION_COUNT=5 mix deps.compile
    ________________________________________________________
    Executed in   42.19 secs    fish           external
       usr time    2.48 secs    0.77 millis    2.48 secs
       sys time    0.91 secs    1.21 millis    0.91 secs


Oh, interesting. I guess `time` is only reporting the usr/sys time of the main process rather than of the child workers when using a PARTITION_COUNT higher than 1?

