phkahler's comments | Hacker News

>> The number of companies that have this much respect for the user is vanishingly small.

I think companies shifted to online apps because, first and foremost, it solved the copy-protection problem. FOSS apps are in no hurry to become centralized because they don't care about that issue.

Local apps and data are a huge benefit of FOSS and I think every app website should at least mention that.

"Local app. No ads. You own your data."


Another important reason to move to online applications is that you can change the terms of the deal at any time. This sounds more nefarious than it needs to; it just means you do not have to commit fully to your licensing terms before the first deal is made, which is tempting for just about anyone.

>> The great lakes have nearly infinite water...

No they do not. The flow there is already balanced, and lake levels are lower than usual.

New York already added another tap for electric generation about a dozen years ago, and IMHO it has had an effect.


> No they do not. The flow there is already balanced, and lake levels are lower than usual.

You aren't going to meaningfully drain the lakes to cool chip fabs when the vast majority of that water will simply go back into the lake either directly or via the water cycle. It's not going to run off the land and into a river like with flood irrigation or similarly irresponsible water uses. The entire global chip industry today uses less water than the city of Hong Kong.


Heard that before.

Keep repeating the script. Short term profit at the expense of long term stability.


What the fuck are you talking about? These facilities process the water and return it to the source.

That's not how the water system works. It's not as if all the evaporated water ends up back in the lakes. California uses a lot of water for farming, and it's not as if all of that evaporated water ends up in the Sierras. The water cycle is complex, and reducing it to "it will just end up back where it came from" is quite a reach.

Besides, it's not just the evaporation. The leftover water concentrates a lot of the impurities that already exist in the water, and not all of it ends up in proper treatment facilities, which in turn pollutes wherever it ends up. This is actually a problem in parts of Oregon. https://www.rollingstone.com/culture/culture-features/data-c...


California is very arid; when water evaporates there, it rains out over the ocean or farther north. The upper Midwest is very wet, and the evaporated water will come back down over the Great Lakes watershed, which is enormous.

> This is actually a problem in parts of Oregon

The problem in that part of Oregon was preexisting contamination in the drinking water.

"the county’s underground water supply had been tainted with nitrates — a byproduct of chemical fertilizers used by the megafarms and food processing plants where most of his constituents worked."

Discharging a little data center water back into Lake Michigan isn't going to make any difference. The entire discharge of every data center in the world wouldn't register.


They do use evaporative cooling. A few sites aren't going to have a big impact on a Great Lake though, especially when lots of that evaporated water ends up falling in the basin.

The evaporation in the great lakes region will just end up as rain near the lakes.

Yeah, I said that.

Sorry, I think I’m failing to read carefully today!

In 2020, I took photos of Lake Michigan overtopping the walls of the local harbor. Record highs.

At present Michigan-Huron is close to the 100 year average (https://www.glerl.noaa.gov/blog/2025/06/23/great-lakes-water...).

The big contributor is that we've not had particularly wet years overall since 2020.


>> I think the internet would be a lot nicer place if people were held accountable for the things they say and do.

I agree. I've often advocated for zero anonymity by default: everyone traceable by anyone. The thinking is that bad behavior (threats and such) could be reported. There was enough pushback to make me rethink that. People will still make threats when you know who they are - less often, but they will. Offline (real-world) harassment is still possible without being identified, though that's getting harder every day.

Verified identity online is not the same thing as being held accountable.


The problem with no anonymity is that not all people are rational, even if they don't have schizophrenia or something worse.

You can be a small guy doing your small thing and sharing it online. Unfortunately, you never know when or why you're going to become a supervillain in the eyes of a crazy person.


Traceability and anonymity aren't antonyms.

This fact comes up with Bitcoin a lot. Neither I nor anybody else knows who a random hash is, but all the activity involving that address is highly traceable. So all you need is an oracle (like a crypto exchange) that can convert a hash into a person in order to enforce penalties against that person.

The same could be true of the internet. You notice illegal activity from a specific IP; that source is responsible for that activity (they did it!). In general that IP will be some intermediary (like an ISP) who was relaying a packet from a different IP, so it'll be on them to name the next party who is accountable, and you follow this chain until you get to an end subscriber. Everybody is anonymous by default but can be traced back to an actual person.
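A toy sketch of that chain walk in Python (all addresses, names, and the registry itself are made up for illustration; real enforcement would be legal process, not a lookup table):

    # Toy model of chained accountability: each intermediary only knows
    # the next hop, so enforcement walks the chain until it reaches an
    # end subscriber. All addresses and names are invented.
    registry = {
        "198.51.100.7": ("relay", "203.0.113.9"),    # ISP relaying a packet
        "203.0.113.9": ("relay", "192.0.2.44"),      # another intermediary
        "192.0.2.44": ("subscriber", "Alice Example"),
    }

    def trace(ip):
        kind, nxt = registry[ip]
        while kind == "relay":
            kind, nxt = registry[nxt]
        return nxt

    print(trace("198.51.100.7"))  # -> "Alice Example"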


The problem in conflict is often that the incentives aren't symmetrical. If you and somebody exactly like you are put in a ring with a knife each, you'd both have the same things to lose. But often in real life, and much more so online, one of you has a lot less to lose.

In a conflict in the street, if he gives you a brain injury, you might lose your job, mortgage, family, etc. For him, it's just his next stay in prison; he has nothing more than his freedom to lose for the fifth time. If you give him a brain injury, you might lose your job, your mortgage, your family, etc. He'll spend some time in the hospital and then be back on the street doing the same thing in a year.

Online, it's worse, because now you can be matched with the bum with the least to lose within a 50-mile radius.


> I agree. I've often advocated for zero anonymity by default: everyone traceable by anyone. The thinking is that bad behavior (threats and such) could be reported. There was enough pushback to make me rethink that. People will still make threats when you know who they are - less often, but they will. Offline (real-world) harassment is still possible without being identified, though that's getting harder every day.

Nowadays people can just SWAT you anonymously and cheaply. Or pressure your employer to fire you without identifying themselves to you.


Upgraded my 2400G to a 5700G with new 64 GB of RAM a while back, which is really the end of the road for my system. I got a solid 3x performance increase on multi-threaded apps. I also have enough RAM to play with some of this AI stuff - yes, even on an AMD APU. My next purchase will likely be Zen 7.

>> The idea is that you should link the front and back ends, to prevent out-of-process GPL runarounds.

Valid points, but that's also the reason people who wanted a more modular compiler created LLVM under a different license - the ultimate GPL runaround. OTOH, now we have two big and useful compilers!


Why all those heat sinks? Power electronics are getting very good these days, with low RDS(on). Have stepper drivers not kept up?

Sadly not really.

I think we're only a few years away from BLDC servo motors taking over from steppers in 3d printers.

Ideally the control algorithms for them would go into the MCU so there is proper force feedback too - i.e., the system would know there is an extruder clog from the increased extrusion force, or could even set print speeds to "the fastest you can follow this path" rather than a fixed number of mm/sec. If the bearings get a little stiff, it'll go slower rather than skipping a step.
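A hypothetical sketch of the clog-detection part (with closed-loop current control, extrusion force is roughly proportional to motor current; both thresholds below are invented):

    # Clog detection sketch: a clog shows up as a sustained rise in
    # extruder motor current above the normal filament-drag level.
    # Both numbers below are made-up thresholds.
    NOMINAL_A = 0.4     # typical extruder current draw
    CLOG_RATIO = 2.0    # flag if sustained force roughly doubles

    def clogged(current_samples):
        avg = sum(current_samples) / len(current_samples)
        return avg > NOMINAL_A * CLOG_RATIO

    print(clogged([0.38, 0.41, 0.40]))  # False: normal extrusion
    print(clogged([0.85, 0.92, 0.88]))  # True: probable clog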

There are some patents on sensorless servo control expiring which should cut the price of this stuff almost in half since the position sensor is one of the most expensive bits.

Power supplies are one of the more expensive parts of a 3D printer. With BLDC motors that can do regenerative braking, that same energy can be reused in the head and bed heaters, which should allow significantly smaller power supplies - again with significant software complexity, to make sure the bed heater heats primarily while the head is decelerating and stops heating while it is accelerating, so as not to exceed the power budget.
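A rough sketch of that power-budget scheduling idea (all numbers invented for illustration):

    # Run the bed heater hard only when the motors are idle or
    # regenerating; throttle it while they draw heavily.
    SUPPLY_W = 150.0

    def bed_heater_budget(motor_draw_w):
        # motor_draw_w is negative while braking (energy flows back)
        return max(0.0, SUPPLY_W - motor_draw_w)

    print(bed_heater_budget(120.0))  # accelerating: 30 W left for the bed
    print(bed_heater_budget(-40.0))  # regenerating: 190 W available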


Holding position with BLDC or FOC-controlled motors is IMHO fairly difficult. Maybe less so in a printer, where you can apply current to hold position. We usually do speed or torque control with them. Even with an encoder or equivalent, it's tough to run two or three at once with a single MCU. But yeah, that's why I asked about stepper drivers; my day job is FOC motor control, and I'm running a pair of 2 kW motors with a power board about the size of an RPi, with no heat sinks.
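For context, one iteration of a simplified FOC current loop looks roughly like this (gains, timestep, and signals are placeholders; a real loop also handles saturation, dead time, and SVPWM, and runs at tens of kHz per motor, which is why one MCU struggles with several):

    import math

    KP, KI, DT = 0.5, 100.0, 1e-4
    integ_d = integ_q = 0.0

    def foc_step(ia, ib, theta, id_ref, iq_ref):
        global integ_d, integ_q
        # Clarke transform: three phase currents -> two-axis stator frame
        i_alpha = ia
        i_beta = (ia + 2.0 * ib) / math.sqrt(3.0)
        # Park transform: rotate into the rotor reference frame
        i_d = i_alpha * math.cos(theta) + i_beta * math.sin(theta)
        i_q = -i_alpha * math.sin(theta) + i_beta * math.cos(theta)
        # PI controllers on the d- and q-axis currents
        err_d, err_q = id_ref - i_d, iq_ref - i_q
        integ_d += err_d * DT
        integ_q += err_q * DT
        v_d = KP * err_d + KI * integ_d
        v_q = KP * err_q + KI * integ_q
        return v_d, v_q  # would feed inverse Park + PWM

    print(foc_step(1.0, -0.5, 0.0, 0.0, 1.0))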

I completely disagree. BLDC control is not far from AC servo control, and they are insanely cheap nowadays.

https://www.omc-stepperonline.com/ac-servo-motor

These are EtherCAT AC servos for a couple hundred bucks. Any small CNC project that uses steppers or small BLDCs is a joke IMO.


Why is a security researcher using a free VPN? The standard wisdom is "if it's free, you're the product." So you're going to proxy all your sensitive traffic through a free thing? It's not great to trust paid services with your data, never mind free stuff.

Sometimes knowing tech makes us think we're somehow better and can bypass high-level wisdom.


They are not. They found it by searching for extensions that had the capability to exfiltrate data.

> We asked Wings, our agentic-AI risk engine, to scan for browser extensions with the capability to read and exfiltrate conversations from AI chat platforms.


Why is it said that it takes a supernova to make elements heavier than iron? You're not going to get iron-iron fusion, but what about proton-iron fusion or similar? Also, we can build reactors here on Earth that convert thorium into uranium, and we can make plutonium in a proper reactor. We mustn't confuse reactions useful for power production with reactions for element production, right? Why can't a regular star produce some heavy elements?

“Elements heavier than iron, up to bismuth, are primarily produced via the s-process (slow neutron capture) in low to medium-mass stars during their later evolutionary stages.

The remaining and heaviest elements (beyond iron and bismuth) are formed through explosive events: core-collapse supernovae generate elements between neon and nickel, while the r-process (rapid neutron capture) in supernovae and, predominantly, neutron star mergers creates elements like uranium and thorium, dispersing them into the interstellar medium for planetary formation.”

From https://www.astronomy.com/science/the-universes-guide-to-cre...


I think you're right that heavier elements can be made; it's just energy-negative to do so. But without a nova they would never leave the inside of the star to find their way into a new planet.

But they do leave. Stars not large enough to go supernova still form planetary nebulae as they more gradually lose their outer layers to space. Only the core is left behind to form a white dwarf. This will be the Sun's eventual fate.

Wouldn't the heavier elements generally sink to the core and the outer layers be composed of the lighter ones?

No, gravitational segregation like that is a very slow process and would be overwhelmed by any convection. In Earth's atmosphere, for example, it doesn't occur until very high altitude (80 km or so) where diffusion is fast enough to overcome mixing.
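A back-of-envelope illustration of that atmospheric point: each species has its own scale height H = kT/(mg), but below the turbopause turbulent mixing enforces one shared profile, so the per-species heights only matter where diffusion wins. The values below are rough textbook numbers for illustration:

    k = 1.380649e-23          # Boltzmann constant, J/K
    amu = 1.66053907e-27      # kg per atomic mass unit
    g = 9.5                   # m/s^2 near 80-100 km altitude
    T = 200.0                 # K, rough temperature up there

    for name, mass_amu in [("N2", 28.0), ("O", 16.0)]:
        H = k * T / (mass_amu * amu * g)
        print(f"{name}: scale height ~ {H / 1000:.1f} km")  # ~6 vs ~11 km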

See also "dredge-up".

https://en.wikipedia.org/wiki/Dredge-up

"By definition, during a dredge-up, a convection zone extends all the way from the star's surface down to the layers of material that have undergone fusion."


That seems to cover elements up to carbon. Not sure heavier elements would be convected?

"The third dredge-up brings helium, carbon, and the s-process products to the surface," (emphasis added)

In the early universe, stars had so little in the way of "seeds" for the s-process to act on that the few seeds that were there absorbed large numbers of neutrons, eventually producing weird stars highly enriched in lead (the end point of the s-process). These stars have been detected from lead (and bismuth) in their spectra.

https://en.wikipedia.org/wiki/Lead_star


I mean, it's not hard to do spectroscopy on such a nebula, and I don't think anywhere near enough heavier matter is detected there.

s-process elements (including radioactive ones like technetium) are detected in the spectra of the stars where the process occurs, which means they are right out at the "surface".

>> But without a nova they would never leave the inside of the star to find their way into a new planet.

Sure, dispersion takes a supernova, but production is a different word ;-)



In a star, a huge number of reactions take place simultaneously due to collisions between nuclei. Some collisions result in the fusion of lighter nuclei into a heavier nucleus; other collisions result in the fission of a heavy nucleus into lighter nuclei.

At iron-56 there is a peak in binding energy; both lighter and heavier nuclei have lower binding energy.

It is possible for nuclei with lower binding energy to form after a collision, but the probability of this happening becomes lower and lower with decreasing binding energy.

Thus, if one computes the probabilities of the reactions that happen during collisions, one can compute the abundances of the chemical elements that are reached when there is an equilibrium between the rates at which a given element is created and destroyed.

At this equilibrium, there is a maximum abundance for iron 56 and the heavier nuclei have abundances that decrease very quickly with the atomic number. For example, zinc may be 600 to 700 times less abundant than iron and germanium may be 7000 to 8000 times less abundant than iron.

Therefore, in an old star, which reaches equilibrium concentrations of elements, there are elements heavier than iron, but in extremely small concentrations, which become negligible for the elements much heavier than germanium.
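A toy version of that equilibrium argument, with rates invented to mimic the zinc/iron ratio quoted above:

    # When creation and destruction rates of a species balance,
    # dN/dt = P - D*N = 0, so the equilibrium abundance is N_eq = P/D.
    def n_eq(production, destruction):
        return production / destruction

    n_fe = n_eq(1.0, 1.0)      # normalize iron-56 to 1
    n_zn = n_eq(1.0, 650.0)    # zinc destroyed far faster than made
    print(n_fe / n_zn)         # -> 650.0, i.e. ~650x less zinc than iron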

Significant quantities of heavy elements cannot be produced by collisions between nuclei in a star, because they are destroyed in later collisions faster than they are produced.

So most of the elements heavier than germanium are produced by a different mechanism, i.e. by neutron capture, followed by beta decay. A small number of the heavy nuclei produced by neutron capture also capture protons after their formation, producing thus also some isotopes that are richer in protons.

In normal stars, the number of free neutrons is negligible, so neutron capture reactions do not happen often. On the other hand, some catastrophic events, like a supernova explosion or the collision between two neutron stars, can produce huge amounts of neutrons. In that case a lot of neutron capture reactions happen, exactly like on Earth during the explosion of a nuclear fission or fusion bomb.

These neutron capture reactions can produce all the chemical elements until fermium (Z=100), i.e. well beyond uranium. Heavier elements than that are not produced, because they fission spontaneously too quickly, before being able to capture other neutrons.
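A deliberately crude sketch of the bookkeeping in one rapid-capture episode (the capture and decay counts are made up; real reaction networks use measured cross sections and half-lives):

    # The nucleus (Z, N) soaks up neutrons while the flux lasts, then
    # beta-decays (n -> p) back toward the valley of stability.
    def r_process(z, n, captures, decays):
        n += captures              # rapid neutron capture phase
        for _ in range(decays):    # beta decays after the flux ends
            z, n = z + 1, n - 1
        return z, n

    z, n = r_process(26, 30, captures=100, decays=40)  # seed: iron-56
    print(f"Z={z}, A={z + n}")     # a far heavier nucleus than the seed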

Of the trans-uranium elements, most decay very quickly, but plutonium-244 has a half-life long enough to reach other stellar systems, together with uranium, thorium, bismuth, and all elements lighter than bismuth except technetium and promethium. (Those two elements decay quickly, but technetium can survive for a few tens of millions of years, so small quantities of it may reach a nearby star, though they will disappear very soon after that. The elements between bismuth and thorium, and also protactinium, decay quickly; those that exist on Earth were created recently, through the decay of Th and U.) The other primordial elements can survive many billions of years, but the amount of primordial plutonium becomes negligible after a few billion years.


Put OS calls on the bus. IMHO we need to add permissions to most user-space APIs so apps don't need to be sandboxed in a VM for security. Is this what SELinux is? But I sometimes want permission to be granted by the user. For example, I don't want programs to be able to access files unless the user specifies the file - this might be through direct interaction with a GUI or, if we're really smart/tricky, enabled when a filename comes on the command line typed by the user. I hope this makes sense: a way to gate system access via normal user input. Is this a reasonable possibility, or am I dreaming too big?
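A minimal sketch of that "user action grants access" idea, loosely modeled on how xdg-desktop-portal's file chooser works over D-Bus; the broker function here is a hypothetical stand-in for a trusted dialog running outside the sandbox:

    import os

    def broker_pick_file():
        # Stand-in for a trusted file dialog outside the sandbox;
        # the user's explicit choice is what grants access.
        path = input("Grant the app access to which file? ")
        return os.open(path, os.O_RDONLY)

    # The app never browses the filesystem; it only receives this fd.
    fd = broker_pick_file()
    print(os.read(fd, 64))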

I think the way to do it should be capability-based security. However, that is suitable for a new operating system design (and computer design too, for some reasons).

For Linux, we can do something else, although something similar may be possible. However, it seems that seccomp does not allow the confined process to send and receive file descriptors, nor to wait on any file descriptor in a set (like the "select" function), etc., so it is rather limited and will require another process to proxy all of these functions. (Wikipedia says seccomp also disables RDTSC; my own system design would not even have such a thing, because I would want to restrict all I/O, including high-precision timing; but I would also want to restrict CPUID and stuff like that too.) Capsicum might be better, at least for BSD (although I don't know whether it disables RDTSC or CPUID).

I had thought of making a sandbox library that should not require many changes to the program (although some changes will be needed); it could be used to specify the permissions needed involving files, popen, command-line arguments, network functions, timing, etc., with functions to request input in various character sets and to request other things as well, such as file names, and the host name and port number when connecting to the internet.
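As a concrete illustration of how restrictive seccomp's original strict mode is (Linux-only sketch via ctypes; it kills the process on the first disallowed syscall, which is exactly why fd passing, select, etc. need a proxy process as described above):

    import ctypes, os

    PR_SET_SECCOMP = 22       # from <linux/prctl.h>
    SECCOMP_MODE_STRICT = 1   # only read/write/_exit/sigreturn allowed

    libc = ctypes.CDLL(None, use_errno=True)
    if libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_SET_SECCOMP) failed")

    os.write(1, b"write still works\n")    # allowed
    os.open("/etc/passwd", os.O_RDONLY)    # any other syscall: SIGKILL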


This is more or less how a lot of it works on macOS via the “Transparency, Consent, and Control” subsystem. Even non-sandboxed apps cannot just go rooting around my Desktop without the OS throwing a popup up asking me if it’s ok.

Isn't the majority of SaaS in ERP systems?
