I still feel that varying the prompt text, the number of tries, and the strictness, combined with only showing the most-liked result, dilutes most of the value in these tests. It would be better if there were one prompt that 8/10 human editors understood and implemented correctly, and then every model got 5 generation attempts with that exact prompt on different seeds or something. If it were about "who can create the best image with a given model" then I'd see it more, but most of this seems aimed at preventing that sort of thing, and it ends up in an awkward middle zone.
E.g. Gemini 2.5 Flash is given extreme leeway in how much it edits the image and changes the style in "Girl with a Pearl Earring", only for OpenAI's gpt-image-1 to do a (comparatively) much better job yet still be declared failed after 8 attempts - fewer attempts than Seedream 4 got (it passed), and less than half the attempts of OmniGen2 (which still looks way farther off in comparison).
You have to REALLY be into AI to do this for generation/API cost reasons (or be willing to treat it as the hacking-project-of-the-month expense). Even ignoring electricity, a 16 GB 5060 Ti costs more than 16,000 hosted image generations. Assuming you do one every 15 seconds, that's 240,000 seconds of generating - more than 2 months of usage at an hour a day.
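A quick sketch of that break-even math (the card price and per-image API price below are my assumptions; plug in current numbers):

    // Back-of-the-envelope break-even; both prices are assumptions:
    const gpuPrice = 480;          // rough 16 GB 5060 Ti street price (assumed)
    const apiPricePerImage = 0.03; // rough hosted per-image price (assumed)

    const breakEvenImages = gpuPrice / apiPricePerImage;  // 16,000 images
    const breakEvenSeconds = breakEvenImages * 15;        // 240,000 s at 15 s each
    const daysAtAnHourADay = breakEvenSeconds / 3600;     // ~67 days, i.e. 2+ months
    console.log({ breakEvenImages, breakEvenSeconds, daysAtAnHourADay });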
If you've already got a decent GPU (or were going to get one anyway), then cost isn't really a consideration; you can simply already do it. For everyone else, you can probably get by just using things like Google's AI Studio for free.
The problem in the above was not actually caused by the AP being open, nor is it limited to APs in the path between you and whatever you're trying to connect to on the internet. Another common example is ISPs that inject content banners into unencrypted pages (sometimes for billing/usage alerts, other times for ads). Again, that's just one more example - you aren't going to whack-a-mole your way to trusting everything a user's traffic might transit on the internet; that's how we ended up with HTTPS instead.
> There are still legitimate uses for HTTP including reading static content.
There are valid reasons to do a lot of things that still don't end up making sense to support when you look at the overall picture.
> Say we all move to HTTPS but then let’s encrypt goes away, certificate authority corps merge, and then google decides they also want remote attestation for two way trust or whatever - the whole world becomes walled up into an iOS situation. Even a good idea is potentially very bad at the hands of unregulated corps (and this is not a hypothetical)
There are at least 2 other decent-sized independent ACME operators at this point, but say all of the certificate authority corps merge and we had planned ahead by keeping HTTP support: our banking/payments, sites with passwords, sites with PII, medical sites, etc. are in a stranglehold, but someone's plain-text blog post about it will be accessible without a warning message. Not exactly a great victory - we'll still need to solve the actual problem just as desperately at that point.
The biggest gripe I have with the way browsers go about this is that they only half consider the private use cases, and you get stuck with the rough edges. E.g. here they call private addresses out as not getting a warning, but my (fully in-browser, single-page) tech support dump reader can't work when opened as a file:/// because the browser built-in for calculating an HMAC (part of WebCrypto) is restricted to secure contexts, and file:/// doesn't qualify. That's stupid twice over: they aren't getting rid of JavaScript support on file:/// origins until they get rid of file:/// completely, so the restriction just means I need a shim, and file:/// is no less a secure origin than localhost.
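Concretely, the failure mode looks roughly like the sketch below - crypto.subtle is simply absent outside secure contexts, so you have to feature-detect and bundle your own fallback (the shim name here is hypothetical):

    // Hypothetical pure-JS fallback you'd have to bundle yourself:
    declare function jsHmacSha256Shim(
      key: Uint8Array, data: Uint8Array): Promise<ArrayBuffer>;

    async function hmacSha256(key: Uint8Array, data: Uint8Array): Promise<ArrayBuffer> {
      // True on https:// and http://localhost, false when opened as file:///
      if (globalThis.isSecureContext && crypto.subtle !== undefined) {
        const k = await crypto.subtle.importKey(
          "raw", key, { name: "HMAC", hash: "SHA-256" }, false, ["sign"]);
        return crypto.subtle.sign("HMAC", k, data);
      }
      return jsHmacSha256Shim(key, data); // file:///: no WebCrypto for you
    }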
I'd like every possible "unsecure" private use case to work, but I (and the majority of those who use a browser) also have a conflicting desire to connect to public websites securely. The options and impacts of these conflicting desires have to be weighed and thought through.
It depends, but for typical networking I'd say Ubiquiti is actually offering better pricing here (outside of 10G LR) - and I'm saying that as someone who has sold tens of thousands of FS modules to customers.
Note: Prices in parentheses are the costs outside of the limited-time markdown period.
Side note for the HN crowd: for ridiculous homelab 100G shenanigans, look for Intel 100G-CWDM4 modules on sites like eBay. They go for $4 and work over single-mode LC fiber on runs from 0 to 2,000 meters, making them great DAC replacements (cheaper and thinner replaceable cabling). They run great; I've had 8 going for a year. Even if all 8 failed tomorrow and I bought 8 more, that's still cheaper than a single 100G SR4 from FS. You can pair these with used 100G NICs for ~$100, making a 100G direct connection between 2 machines ~$250 after shipping+tax.
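The rough math behind that ~$250 figure (the NIC and patch cable prices are my assumptions from typical used/commodity listings):

    // Two-machine 100G point-to-point, approximate prices:
    const cwdm4Module = 4;   // used Intel 100G-CWDM4, per the eBay listings above
    const usedNic = 100;     // used 100G NIC (assumption, e.g. ConnectX-4 class)
    const smPatchCable = 15; // duplex single-mode LC patch cable (assumption)

    const total = 2 * (cwdm4Module + usedNic) + smPatchCable;
    console.log(total); // 223 - lands near ~$250 once shipping+tax are added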
For high-speed home stuff, I usually pick up some old Mellanox InfiniBand cards and cables. They're usually dirt cheap and insanely quick, but difficult to work with if you do not know what you are doing.
What's the best solution for short runs (within a rack) between Mikrotik switches and Dell servers? Will a DAC still work between different vendors, or is it always best to buy individual transceivers?
> Ubiquiti is actually offering better pricing here (outside of 10G LR)
Ubiquiti's 10G LR at $59 is for a 2-pack, not per module. So that still comes out cheaper than FS, for the sale duration at least. Not by a lot, granted, but still cheaper.
Nice prices from Ubiquiti. I think FS mostly competes against Cisco, which has much higher prices. IIRC we had something like a 95% discount off Cisco's list price for optics.
For anyone else furiously going back and forth between TFA and this comment: they mean the actual website of TFA has these errors, not the content of TFA.
RustDesk is an alternative to other remote desktop software; JetKVM is an alternative to a built-in IPMI. It could be used as a remote desktop in a pinch, but that's not really the main point.
E.g. you'd use JetKVM-like devices to re-install your OS via emulated drives, remotely control power (including hard reset, not just WoL and software shutdown), change BIOS settings, or troubleshoot a crashing box - all without relying on any specific software/capabilities/behavior of the given box. Meanwhile you'd use remote desktop software when you just want the desktop to present itself remotely.
The thing I don't like about these kinds of articles is that rather than listing potential pros and cons, they tell a wholly one-sided story everyone is supposed to agree with the whole way through and say "oh wow, yeah" at the end. In reality it breaks right at the start: you don't really know when a good time to call someone is by the sun. You know because of when they wake up, when they go to bed, what hours they work, what you're calling them about, when they like to eat meals, etc. All of that varies by many hours within a timezone based on culture or individual, which derails the build-up pitch. It's a given the author isn't particularly swayed by that, but that they don't even discuss the detail and just move on spoils the rest of the (well-put-together) list IMO.
One way or the other, I don't think we'll make a big shift in timekeeping until/if we ever have a sizable population off Earth. Of course, that introduces its own time problems, ones we mostly don't have to deal with while we're all so close together :).
I don't personally see a lot of difference between consulting a chart of "what time is it in country x" vs a chart of, say, "the time business starts in country x".
They'd be the exact same list ("offset +9 hours"); the only semantic difference would be that clocks don't change.
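A concrete sketch of that equivalence (the Japan numbers are just illustrative):

    // Today's chart: "Japan is UTC+9; business starts 09:00 local."
    // Universal-time chart: "business in Japan starts at 00:00."
    const utcOffsetHours = 9;     // today's chart entry
    const localBusinessStart = 9; // 09:00 local
    const universalStart = (localBusinessStart - utcOffsetHours + 24) % 24;
    console.log(universalStart);  // 0 - same lookup, same arithmetic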
I should mention that I spent a little bit of time in Saudi Arabia, and expecting people there to be out and about at 7pm like in Western Europe and the USA is crazy; they seem to get up later and keep going until 3am. I'd never seen rush hour at 3am until I spent time in Riyadh. The idea that everyone follows the same time pattern is a false construct we've decided on anyway.
Why do we believe the world needs to wake up at 7am? If nothing else, it's so incredibly arbitrary to begin with.
Scale: comparing DC cooling to a large building's is like comparing street noise in a suburb to a highway.
It'd be nice to have some hard data on it, though. Sometimes the hum really is "god damn, that is annoying!", and other times it's someone saying "20 miles that way they built a DC and now it makes the cell phone tower effects twice as bad in this area" when in reality it isn't even audible half a mile away.