This is in no way intended to be disparaging: there are processes that work within the scale of small European nations that simply won't work at larger scales.
> there are processes that work within the scale of small European nations that simply won't work at larger scales
Coming from Ireland (tiny population, low pop density) I've heard this argument countless times (we're an obvious target for this critique), but I still to this day don't see the logic of it. At all.
Constituencies are sized per capita and count centres are staffed per capita; if you have higher pop density you'll either have more observers at count centres, or the same number at more count centres. This is a distributed system - it's the definition of scalable.
Fwiw the last count I tallied at (Dublin MEP) had an electorate of 890k. It was the smallest constituency in Ireland in that election, but still bigger than the largest congressional district electorate in the US. We counted in one large open warehouse. There were 23 candidates & 19 separate repeating counts.
That could work in favour or against your argument - I don't really know - I don't really think it matters either direction though.
This doesn't make sense. In the same way that police, firefighters, ambulance, farmers, etc, can scale to any country population, so can ballot counting.
I see a lot of comments here expressing disapproval about assisted suicide.
I'd like to quote from the HN guidelines:
> Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative.
With that said, I urge those who disapprove to ask whether you are being "rigidly negative" about this.
1. Is this disapproval perhaps coming from your religious context? If so, please pause and consider why that may not apply to the rest of us. And also whether you really think that your religious beliefs must be forced on the rest of us.
2. Is this disapproval coming from a sense of deep unease that this post causes? If so, know that this unease is shared by most of us. But try and muster the fortitude to go past that unease and consider the decision from a place of compassion.
My mum died earlier this year. In hospital, she was approved for assisted dying. There is a mandatory waiting period as part of the process.
Many/most of the nursing staff are Filipino and strongly Roman Catholic.
As she lay dying and unable to speak, one of the nurses undertook to convert her at the last minute to their religion. At night, alone, after all visitors had left, she would come into mum's room and press mum, a very committed atheist, to pray for her salvation.
It's hard to describe how vulnerable someone is who is stuck in their bed and dependent on the nursing team for everything, even sips of water.
I will say this was not representative of her care, but it opened my eyes to the lengths religious believers will go to to push their views on others.
On the contrary, I urge you to consider whether it is your statement that is overly dismissive. Is there perhaps some existing conditioning, maybe in the form of religious upbringing, that is driving your reaction to this? Many of us in fact find OP's comment very thoughtful rather than a "silly statement".
> By your logic we should kill everyone at their peak.
No, they suggested that the old and ailing whose quality of life has deteriorated to the point where there is no hope or no more joy in living, ought to be given the choice.
Let me end by quoting my favourite lines from the HN guidelines:
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
There is a red button that orders your euthanasia. Pressing it instantly teleports you to a euthanasia facility and leads to your death unless you say no within 30 seconds. The button reads your fingerprint and can only be pressed by you. (Assume science fiction level technology to make this true)
1. The button is located 5000 km away from you in an unknown location.
2. The location is known.
3. You can order the delivery of the button to you for $50
4. The button is in your basement
5. The button is next to your bed
6. The button is on your keyboard and mouse
7. The button is on your keychain
Now consider there is a blue button with the same rules as above, which makes you feel compelled to press the first button for a day and it can be pressed by anyone.
You'd want the red button as far away from you as possible and the blue button secured in a location that is as inaccessible to others as possible.
In today's society there are too many people obsessed with pressing blue buttons. Also, pressing blue buttons is not a crime, because red buttons happen to be pretty far away from most people.
But now there are people obsessed with pressing red buttons. They want to ship the red button to your house on your behalf, while thinking they are doing you a favor.
This would be okay if the blue-button pressers were a minority and there were a punishment for pressing blue buttons. But it turns out both positions are popular, and when averaged together the buttons end up placed next to each other, thereby turning the blue button into a second red button.
I see nobody obsessing about pushing red buttons. I see people that would like for option #3 to exist. And when death approaches, option #5.
A simple test of how people feel: consider the twin towers. We saw quite a few people choose jumping over fire. We do not question people making such a choice. It is the same choice, just on a much more compressed time scale.
(And we have the bonkers case out of WWII: the guy survived, apparently uninjured. Someone who made the choice and was still around to be asked why. We don't know exactly what happened; no analysis was made at the time, but a later attempt to reconstruct the situation concluded he probably hit the outer part of a pine tree and then rolled down a snowbank. He had on heavy clothing and had blacked out during the fall - not exactly surprising, as he jumped from 18,000'.)
They are suggesting a man who is making life hard on others should die for society, which I think is wrong. No one is saying that those who choose to die shouldn't have that choice; rather, it's not society that should be making the choice.
PM (at the next sprint meeting): "So, Matt, for your next story here's this 48000 line code base that I vibe coded for the new vendor interop feature you said would be difficult to implement correctly.
Of course, this is a standalone page written in some language that I forget. I think Cursor mentioned some animal name... anyway. Can you put this into our product, please?"
And for those who can, Apps Script gives your spreadsheet superpowers.
For those who don't know, you are not stuck writing JS in the Apps Script integrated web IDE that comes with Google Sheets (though honestly it's not too bad itself).
Using clasp, you can develop your code locally in an IDE of your choice, write TypeScript and have a build step compile it to JS, and have clasp push it to the spreadsheet.
Once you have the tool chain set up the DX is quite nice.
I spent some time with Apps Script a few weeks ago. It has some strange design decisions:
1) Everything runs on the server, including triggers and even custom functions! This means every script call requires a roundtrip, every cell using a custom function requires a roundtrip on each change, and it feels much slower than the rest of the UI.
2) You can't put a change trigger on a cell or subset of cells, only on the whole sheet. So you have to manually check which cell the trigger happened on.
3) Reading and writing cell values is so slow (can be a second or more per read or write) that the semi-official guidance is to do all reads in a bunch, then all writes in a bunch. And it's still slow then.
4) A lot of functionality, like adding custom menus, silently doesn't work on mobile. If your client wants to use Sheets on mobile, get ready to use silly workarounds, like using checkboxes as buttons to trigger scripts and hoping the user doesn't delete them.
Overall I got the feeling that Google never tried to "self host" any functionality of core Sheets using Apps Script. If they tried, it'd be much faster and more complete.
> 2) You can't put a change trigger on a cell or subset of cells, only on the whole sheet. So you have to manually check which cell the trigger happened on.
This is true of MS Excel's scripting language (VBA) as well. Worksheets are objects with events; cells are objects without (VBA-accessible) events.
But Google Sheets remote procedure calls are vastly slower than local OLE/COM dispatching. (And VBA/Excel presumably uses the optimized tighter COM interface binding instead of the slower high-level COM IDispatch. Sure, there's some overhead, but it's nothing compared to Google Sheets' network overhead.)
Not only is scripting Google Sheets indeterminately and syrupy slow, it also imposes an arbitrary limit on how long your code can run, making a lot of applications not just inefficient but impossible. Running your code in Google's cloud doesn't make spreadsheet API calls any faster, it just limits how long you can run, then BAM!
To get anything non-trivial done, you have to use getSheetValues and ranges to read and write big batches of values as 2d arrays.
It's easier to just download the entire spreadsheet csv or layers and bang on that from whatever language you want, instead of trying to use google hosted spreadsheet scripts.
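The batched pattern these comments describe can be sketched like this (the sheet name and range are illustrative, and the pure helper is split out so it can be exercised outside Apps Script):

```javascript
// Pure transform, testable outside Apps Script: double every value
// in a 2D array of cell values (the shape getValues() returns).
function doubleValues(values) {
  return values.map(function (row) {
    return row.map(function (v) { return v * 2; });
  });
}

// Apps Script wrapper: one getValues() call reads the whole block
// and one setValues() call writes it back, instead of paying a
// server round trip per cell.
function doubleColumnA() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Sheet1');
  var range = sheet.getRange(1, 1, sheet.getLastRow(), 1);
  range.setValues(doubleValues(range.getValues()));
}
```

The same idea applies to Sheet.getSheetValues: read once, compute in plain arrays, write once.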
> Everything runs on the server, including triggers
I think that’s a consequence of the fact that multiple users can simultaneously edit a sheet. Yes, Google could special-case the “you are the single user of this sheet” case, but that’s extra work, and, I think, would be fairly complicated when handling edge cases where users frequently start and stop editing a sheet.
> I think that’s a consequence of the fact that multiple users can simultaneously edit a sheet.
No, it's not. Built-in functions like SUM recalculate instantly, and custom formatting rules (e.g. "color green if above zero") get applied instantly, even when there are multiple users editing a sheet. Running custom functions and triggers on the server is just a decision they made.
This reason doesn't make much sense to me. Let's say I write a non-idempotent custom function. It makes the spreadsheet behave weirdly: recalculating a cell twice leads to a different effect than recalculating it once. Does it matter whether the function runs on the server or the client? No, the spreadsheet will behave weirdly in either case, even with just one user.
Can we make a programming language that will save developers from that? Maybe, but that would be very hard and that's not what Apps Script is trying to do. It already allows non-idempotence, trusting developers to write idempotent code when they need to. So it could run on the client just fine.
Or use the API to program in anything you want. We use Google Sheets for our accounting system, loading data via bank APIs and a cron-driven python script. We used to use Xero, but it couldn't handle the different tax regimes we operate in.
That looks like a great use case. Would you be able to write about the architecture? A lot of us would love to be able to do things like this in Sheets; I'm personally trying to integrate a forecast estimate into Sheets.
Haven't written C in a while, but I think this program has an integer overflow error when you input two really large integers such that the sum is more than a 32-bit signed integer.
Also I believe entering null values will lead to undefined behaviour.
I'm not sure how showing that gp can't even write a dozen lines of memory safe C proves that doing so for the exponentially harder 100+k LoC projects is feasible.
The program contains potential use of uninitialized memory UB, because scanf error return is not checked and num1 and num2 are not default initialized. And a + b can invoke signed integer overflow UB. A program with more than zero UB cannot be considered memory safe.
For example if the program runs in a context where stdin can't be read scanf will return error codes and leave the memory uninitialized.
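A defensive version of such a toy add-two-numbers program might look like the following sketch (the original snippet isn't shown in this thread, so names like `read_and_add` are mine); it checks the scanf-family return value and rejects the addition before signed overflow can occur:

```c
#include <limits.h>
#include <stdio.h>

/* Adds a and b into *out; returns 0 on success, -1 on overflow.
   The range check happens before the addition, so the signed
   overflow (which is UB in C) never actually executes. */
static int checked_add(int a, int b, int *out) {
    if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
        return -1;
    *out = a + b;
    return 0;
}

/* Reads two ints from `in` and stores their sum in *sum; returns 0
   on success. Checking fscanf's return value means num1/num2 are
   never read uninitialized when the stream can't be parsed. */
static int read_and_add(FILE *in, int *sum) {
    int num1, num2;
    if (fscanf(in, "%d %d", &num1, &num2) != 2)
        return -1;  /* bad, missing, or unreadable input */
    return checked_add(num1, num2, sum);
}
```

Both failure modes mentioned in the thread (unchecked input and overflowing `a + b`) are handled by returning an error instead of invoking UB.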
num1 and num2 are declared on the stack and not the heap. The lifetimes of the variables are scoped to the function and so they are initialized. Their actual values are implementation-specific ("undefined behavior") but there is no uninitialized memory.
> And a + b can invoke signed integer overflow UB. A program with more than zero UB cannot be considered memory safe.
No, memory safety is not undefined behavior. In fact Rust also silently allows signed integer overflow.
Remember, the reason memory safety is important is that memory-unsafe bugs can allow untrusted code execution. Importantly here, even if you ignore scanf errors and integer overflow, this program accesses no memory that is not stack local. Now if one of these variables were cast into a pointer and used to index into a non-bounds-checked array, then yes, that would be memory unsafety. But the bigger code smell there is casting an index into a pointer without doing any bounds checking.
That's sort of what storing indexes separately from references in a lot of Rust structures is doing inadvertently. It's validating accesses into a structure.
Regarding initialization: if one wants portable code that works for more than one machine+compiler version, it's advisable to program against the abstract machine specified in the standard. This abstract machine does not contain a stack or heap.
Generally your comment strikes me as assuming that UB is some kind of error. In practice UB is more a promise the programmer made to never do certain things, allowing the compiler to assume that these things never happen.
How UB manifests is undefined. A program that has more than zero UB cannot be assumed to be memory safe, because we can't make any general assumptions about its behavior: UB is not specified to be localized; it can manifest in any way, rendering all assumptions about the program moot. In practice, when focusing on specific compilers and machines, we can make reasonable localized assumptions, but these are always subject to change with every new compiler version.
Memory safety is certainly critical when it comes to exploits, but even in a setting without adversaries it's absolutely crucial for reliability and portability.
> In fact Rust also silently allows signed integer overflow.
Silently in release builds, and a panic in debug builds. The behavior is implementation defined, not undefined; in practice this is a subtle but crucial difference.
Take this example https://cpp.godbolt.org/z/58hnsM3Ge - the only kind of UB AFAICT is signed integer overflow, and yet we get an out-of-bounds access. If instead the behavior were implementation defined, the check for overflow would not have been elided.
We keep mocking and laughing at the "internet Thomas Jefferson"s of the world but they seem to be getting increasingly prescient about the dystopian world where we are giving bad actors disproportionate control over our lives on the pretext of keeping us or children safer.
I will agree with your point, and will also say a lot of the "bad actors" are actually in the house here. So don't take anything at face value. Hacker News has some straight computer criminals, adware types, cryptobros, dubious startup types, whoever is vibe-coding these crawlers, etc. So of course they all believe in "maximum freedom" (to scam people).
> Let's do the math. If each step in an agent workflow has 95% reliability, which is optimistic for current LLMs, then:
> 5 steps = 77% success rate
> 10 steps = 59% success rate
> 20 steps = 36% success rate
> Production systems need 99.9%+ reliability.
Isn't this just wrong?
Isn't the author conflating the accuracy of LLM output at each step with the accuracy of the final artifact, which is a reproducible, deterministic piece of code?
And they're completely missing that a person in the middle is going to intervene at some point to test it and at that point the output artifact's accuracy either goes to 100% or the person running the agent would backtrack.
Either I'm missing something, or this does not seem well thought through.
How is it that the final result is a reproducible deterministic piece of code, when the prompts become the "source code" itself, and the underlying model used is constantly changing (being updated), which is equivalent to your programming language changing its semantics every other day and refusing to tell you exactly what has changed (because they can't). Not to mention the nondeterminism that a lot of times is present due to nondeterministic order of evaluation when parallelizing?
He's not wrong. The numbers are too pessimistic; however, when building software, the numbers don't need to be that high for a complete disaster to happen. Even if just 1% of the code is bad, it is still very difficult to make this work.
And you mention testing, which certainly can be done. But when you have a large product and the code generator is unreliable (which LLMs always are), then you have to spend most of your time testing.
Did you even finish the article? The end is all about the trade-off of when "a person in the middle is going to intervene".
In fact, the point of the whole article isn't that AI doesn't work; to the contrary, it's that long chains of (20+) actions with no human intervention (which many agentic companies promise) don't work.