I’ve been using Warp (for the AI features) for a while now, but less and less these days. They’re way too agile with the UI/UX; things change around too much for it to be what it’s supposed to be.
The first part of your post sounds almost like an ad for the Bambu Lab P1S. The second part sounds more like the Prusa CORE One kit (build volume is not a perfect match).
I really wouldn't bother buying anything else as a beginner. Pick between these two.
It's a weird thing in 3D printing right now that, if you don't have the open source stuff as a requirement, you get better print quality and reliability for half the price with Bambu Lab.
While I can't directly compare with Bambu Lab print results, the prints I get out of my Prusa Core One with the current firmware and slicer are stellar and surpass even the prints of my MK4S (that being the benchmark for quality in my bubble of the internet).
I really like Trigger, but I would have loved some way to trigger it from the FE. It was a small part of our use case, but it sucked having to implement a proxy for it.
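For anyone wondering what such a proxy looks like, here's a minimal sketch, not our exact implementation: it assumes an Express backend and the Trigger.dev v3 SDK's tasks.trigger call, and "requireSession" plus the "generate-report" task are hypothetical placeholders for whatever auth and task you already have. The point is that the secret key never leaves the server; the FE only ever calls the endpoint.

    // Minimal sketch (assumptions: Express, Trigger.dev v3 SDK, a hypothetical
    // requireSession middleware standing in for your existing auth).
    import express, { Request, Response, NextFunction } from "express";
    import { tasks } from "@trigger.dev/sdk/v3";

    const app = express();
    app.use(express.json());

    // Hypothetical placeholder: reject requests without a valid session.
    function requireSession(req: Request, res: Response, next: NextFunction) {
      if (!req.headers.authorization) return res.status(401).end();
      next();
    }

    app.post("/api/generate-report", requireSession, async (req: Request, res: Response) => {
      // Forward only the fields the backend explicitly allows, never the raw body.
      const handle = await tasks.trigger("generate-report", { userId: req.body.userId });
      res.json({ runId: handle.id });
    });

    app.listen(3000);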
If it were possible, how would you have prevented your Trigger account from being abused once a bad actor got its connection details by inspecting your FE app with DevTools?
I see a couple of people here bashing the practicality of this project. That's obviously not the point; it's an interesting, weird use case that's more exploratory/educational than practical. I thought it was an interesting and inspiring read!
"We're protecting the global poor by not letting them work for us" is such a backwards piece of rhetoric that I'm always surprised by how casually it's brought up in stories like this.
DDG was such an absolutely subpar experience for me (used it for 6-8 months) that it wouldn't surprise me if nobody would _actually_ say it's better in a blind test. They just don't like Google.
Kagi, on the other hand, really blew me away. I have no idea how they're doing it, but these days I'm only using Google for one thing (besides G Shopping & G Maps): when I want to use ad spend as a proxy for a company's quality/reliability (for example, when I'm looking for a local moving company in my city).
Having used DDG for more than 5 years, I'd only agree with your sentiment for the first year or so. At first I'd just use the bangs feature and do my real searching on Google via the !g bang, but for the past few years I've been searching exclusively on DDG. The only exception is searching for travel directions.
A lot of the searching people do can be done on domain-specific websites, which DDG bangs let you do with ease. I'm surprised to see that people are driven enough to pay for search before fully making the most of DDG.
Since 2015 I had made some futile attempts to switch to DDG and mostly used Google, but last year I noticed myself using DDG 90% of the time, because DDG got far better while Google turned into a page of junk, showing generic stuff instead of anything specific. Google spectacularly shot itself in the foot.
> Additionally, a lot of water is also used in cooling for the servers that run all that software. Per conversation of about 20 to 50 queries, half a litre of water evaporates – a small bottle, in other words.
What? I have no idea about server farm cooling, can anyone explain this to me?
Alternatively, you can cycle closed-loop water through a chiller to a hot server and back, or just cool the air around the server (CRACs and CRAHs) using either water or refrigerant as the heat-carrying material.
For datacentres that require air conditioning as opposed to natural ventilation (most of them), a very popular approach is to use evaporative cooling towers [1] in combination with water-to-water (W2W) chiller units [2]. The chillers cool the internal water circuit and heat the external water circuit; the excess heat is dumped to the environment by evaporating water in the cooling towers.
Of course it's possible to use air-cooled equipment and this is more common in cooler climates or smaller data centres, so it's not a rule of nature that cooling servers wastes water but it's certainly a very common outcome.
I really don't think that is how that works? Water cooling in servers works the same as in desktops, just with way bigger radiators. Maybe they are tapping into some other available way of cooling.
> Water cooling in servers works the same as in desktops, just with way bigger radiators.
Data centers have separate systems to remove heat from the entire datacenter. These are often evaporative coolers, which means the water is evaporated away.
A better analogy would be the HVAC system for your house. Your computer dumps heat into the house, the HVAC system removes the heat from the house to the environment. It's the latter part that uses evaporative cooling in many data centers.
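To put rough numbers on the article's half-litre claim, here's a back-of-envelope sketch. The ~3 Wh per query figure is an assumption on my part (not from the article), and it assumes essentially all of that heat is rejected by evaporating water; the latent heat of vaporization of water is roughly 2.26 MJ/kg.

    // Rough sanity check of the "half a litre per 20-50 queries" claim.
    const whPerQuery = 3;                       // assumed energy per query (Wh), not from the article
    const queries = 50;                         // upper end of the quoted range
    const heatJ = whPerQuery * queries * 3600;  // Wh -> J
    const latentHeatJPerKg = 2.26e6;            // J to evaporate 1 kg of water
    const litres = heatJ / latentHeatJPerKg;    // 1 kg of water ≈ 1 litre
    console.log(litres.toFixed(2));             // ≈ 0.24 L

That lands in the same ballpark as the quoted figure once you also count the water evaporated upstream in generating the electricity, so the claim isn't obviously crazy even if the exact number depends heavily on the assumptions.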
The whole point of water cooling on electronics is that the closed loop cycles the vapor away from the heat-generating part; it's then cooled by a radiator, condensing back to a liquid that flows back to the hot thing. So whoever wrote that line is severely misinformed.
Indeed, an article making the opposite point would probably make more sense.
This article even opens with the point that smaller languages have less Wikipedia content and somehow blames "Tech". A language having fewer articles is a people/popularity problem. Being able to read them anyway through machine-generated translations is a tech thing.
> How do we know that $7 trillion invested into LLMs and their infrastructure would not simply exacerbate those costs, grinding down content creators, women, and the environment, undermining democracy, destroying jobs, etc?
“Destroying jobs” is pretty much a _good_ thing. I think we’re all pretty happy we don’t need dedicated people anymore for a lot of the tedious tasks we take for granted these days (washing clothes, etc.).