iinnPP's comments | Hacker News

Not that person, but yes. You have entirely missed the ability to simply view and understand what's inside your own body.

Your interpretation, on the other hand, means someone else needs to follow your whim for their own problem, despite the legalese stating otherwise.

I think that is an absurd position, and I am sorry to have to be blunt about it.


I recently had to deal with a ministry in Canada, where a worker who had been there for 20 years failed even a basic test of reading comprehension. Then there were multiple issues with the OPC (Office of the Privacy Commissioner) failing entirely on a basic issue.

Another example is Ontario's tenant laws, which are constantly criticized as enabling bad tenant behavior; but reading the statute, full of many-month delays for landlords and 2-day notices for tenants, paints a more realistic picture.

In fact, one such landlord lied, admitted to lying, and then had their lie influence the decision in their favor, despite it being known, by their own word, to be false. The appeal cited the discretion of the adjudicator.

Not sure how long that can go on before a collapse, but I can't imagine it's very long.


Incompetence is a taboo. It shouldn't be.

I think it should be perfectly OK to make value judgements of other people, and if they are backed by evidence, make them publicly and make them have consequences for that person's position.


A recent review of one of Canada's federal institutions showed that correct advice was given 17% of the time [0], an 83% failure rate. Not a soul has been fired, unless something has changed recently.

I do agree with your assessment, however, because any (additional) accountability would improve matters.

[0] https://globalnews.ca/news/11487484/cra-tax-service-calls-au...


This is a definition of spam, not the only definition of spam.

In Canada, which is relevant here, the legal definition of spam requires no bulk.

Any company sending an unsolicited email to a person (where permission doesn't exist) is spamming that person, though the law expands the definition further than this as well.


I think the point being made is that the graphs don't show progress toward real-world applications. Being 99.9999999% or 0.000001% of the way to a useful application could be argued as no progress given the stated metric. Is there a guarantee that these things can and will work given enough time?

> Is there a guarantee that these things can and will work given enough time?

Quantum theory predicts that they will work given enough time. If they don't work, there is something about physics that we are missing.


Quantum theory says that quantum computers are mathematically plausible. It doesn't say anything about whether it's possible to construct a quantum computer of a given configuration in the real world. It's entirely possible that there's a physical limit that makes useful quantum computers impossible to construct.

Quantum theory says that quantum computers are physically plausible. Quantum theory lies in the realm of physics, not mathematics. As a physical theory, it makes predictions about what is plausible in the real world. One of those predictions is that it's possible to build a large-scale fault tolerant quantum computer.

The way to test this theory is to run an experiment and see if this is so. If the experiment fails, we'll have to figure out why the theory predicted it but the experiment didn't deliver.


> One of those predictions is that it's possible to build a large-scale fault tolerant quantum computer.

Quantum theory doesn't predict that it's possible to build a large scale quantum computer. It merely says that a large scale quantum computer is consistent with theory.

Dyson spheres and space elevators are also consistent with quantum theory, but that doesn't mean that it's possible to build one.

Physical theories are subtractive: something that is consistent with the lowest levels of theory can still be ruled out by higher levels.


Good point. I didn't sufficiently delineate what counts as a scientific problem and what counts as an engineering problem in QC.

Quantum theory, like all physical theories, makes predictions. In this case, quantum theory predicts that if the physical error rate of qubits is below a threshold, then error correction can be used to increase the quality of a logical qubit to arbitrarily high levels. This prediction could be false. We currently don't know all of the potential noise sources that might prevent us from building a quantum logic gate of similar quality to a classical logic gate.

Building thousands of these logical qubits is an engineering problem, similar to Dyson spheres and space elevators. You're right that being able to build one really good logical qubit doesn't mean that we can build thousands of them.

In our case, even the lower levels haven't been validated. This is what I meant when I implied that the project of building a large-scale QC might teach us something new about physics.


> The way to test this theory is to run an experiment and see if this is so. If the experiment fails, we'll have to figure out why the theory predicted it but the experiment didn't deliver.

If "this experiment" is trying to build a machine, then failure doesn't give much evidence against the theory. Most machine-building failures are caused by insufficient hardware/engineering.


Quantum theory predicts this: https://en.wikipedia.org/wiki/Threshold_theorem. An experiment can show that this prediction is false. This is a scientific problem, not an engineering one. Physical theories have to be verified with experiments. If the results of the experiment don't match what the theory predicts, then you have to do things like re-examine the data, revise the theory, etc.
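
To make that prediction concrete, here is the usual textbook form of the scaling for concatenated codes - a sketch in the standard notation (physical error rate, threshold, concatenation level), not figures from any particular experiment:

    % Threshold theorem, concatenated-code form: \epsilon is the physical
    % error rate per gate, \epsilon_{th} the threshold, k the number of
    % levels of concatenation, and \epsilon_L the resulting logical error rate.
    \[
      \epsilon_L \;\approx\; \epsilon_{th} \left( \frac{\epsilon}{\epsilon_{th}} \right)^{2^{k}},
      \qquad \epsilon < \epsilon_{th}
    \]

As long as \epsilon stays below \epsilon_{th}, each extra level of concatenation squares the suppression, so the logical error rate falls doubly exponentially in k while the qubit and gate overhead grows only polylogarithmically. If experiments can't reproduce that scaling, it's the theorem's assumptions about the noise that have to be re-examined.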

But that theorem being true doesn't mean "they will work given enough time". That's my objection. If a setup is physically possible but sufficiently thorny to actually build, there's a good chance it won't be built ever.

In the specific spot I commented, I guess you were just talking about the physics part? But the GP was talking about both physics and physical realization, so I thought you were talking about the combination too.

Yes we can probably test the quantum theory. But verifying the physics isn't what this comment chain is really about. It's about working machines. With enough reliable qubits to do useful work.


You're right. I didn't sufficiently separate experimental physics QC from engineering QC.

On the engineering end, the answer to the question of whether a large-scale quantum computer can be built is leaning toward "yes" so far. DARPA QBI https://www.darpa.mil/research/programs/quantum-benchmarking... was created to answer this question, and 11 teams have made it to Stage B. Of course, only people who believe DARPA will trust this evidence, but that's all I have to go on.

On the application front, the jury is still out for applications that are not related to simulation or cryptography: https://arxiv.org/abs/2511.09124


Sounds like a pursuit where we win either way

Publishing findings that amount to an admission that you and others spent a fortune studying a dead end is career suicide and guarantees your excommunication from the realm of study and polite society. If a popular theory is wrong, some unlucky martyr must first introduce incontrovertible proof and then humanity must wait for the entire generation of practitioners whose careers are built on it to die.

Quantum theory is so unlikely to be wrong that if large-scale fault-tolerant quantum computers cannot be built, the effort to build them will not be a dead end but rather a revolution in physics.

Unless the overall cost is too high, but yes it's definitely worth pursuing as far as we currently know.

I prefer reading the LLM output for accessibility reasons.

More importantly though, the sheer amount of this complaint on HN has become a great reason not to show up.


> I prefer reading the LLM output for accessibility reasons.

And that's completely fine! If you prefer to read CVEs that way, nobody is going to stop you from piping all CVE descriptions you're interested in through an LLM.

However, having it processed by an LLM is essentially a one-way operation. If some people prefer the original and some others prefer the LLM output, the obvious move is to share the original with the world and have LLM-preferring readers do the processing on their end. That way everyone is happy with the format they get to read. Sounds like a win-win, no?
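
For what it's worth, the reader-side processing is only a few lines. A minimal sketch, assuming the openai Python package, an OPENAI_API_KEY in the environment, and a hypothetical model name and filename (swap in whatever you actually use):

    # Reader-side rewrite of an original CVE description: the canonical text
    # stays untouched, and only this reader sees the LLM rendering.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def simplify_cve(original_text: str) -> str:
        """Ask the model for a plain-language rendering of the original."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: use whichever model you prefer
            messages=[
                {"role": "system",
                 "content": "Rewrite this CVE description in plain language "
                            "without dropping any technical detail."},
                {"role": "user", "content": original_text},
            ],
        )
        return response.choices[0].message.content

    # Hypothetical filename: the advisory text exactly as published upstream.
    print(simplify_cve(open("CVE-2024-XXXX.txt").read()))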


Yes, framed as you stated it is indeed a win-win.

However, there will be cases where lacking the LLM output, there isn't any output at all.

Creating a stigma around technology that is easily observed to be, in some form, accessible is expected in the world we live in. As it is on HN.

Not to say you are being any type of anything; I just don't believe anyone has given it all that much thought. I read the complaints and can't distinguish them from someone complaining that they need to make some space for a blind person using their accessibility tools.


> However, there will be cases where lacking the LLM output, there isn't any output at all.

Why would there be? You're using something to prompt the LLM, aren't you - what's stopping you from sharing the input?

The same logic can be applied to an even larger extent to foreign-language content. I'd 1000x rather have a "My english not good, this describe big LangChain bug, click <link> if want Google Translate" followed by a decent article written in someone's native Chinese, than a poorly-done machine translation output. At least that way I have the option of putting the source text in different translation engines, or perhaps asking a bilingual friend to clarify certain sections. If all you have is the English machine translation output, then you're stuck with that. Something was mistranslated? Good luck reverse engineering the wrong translation back to its original Chinese and then into its proper English equivalent! Anyone who has had the joy of dealing with "English" datasheets for Chinese-made chips knows how well this works in practice.

You are definitely bringing up a good point concerning accessibility - but I fear using LLMs for this provides fake accessibility. Just because it results in well-formed sentences doesn't mean you are actually getting something comprehensible out of it! LLMs simply aren't good enough yet to rely on them not losing critical information and not introducing additional nonsense. Until they have reached that point, their user should always verify its output for accuracy - which on the author side means they were - by definition - also able to write it on their own, modulo some irrelevant formatting fluff. If you still want to use it for accessibility, do so on the reader side and make it fully optional: that way the reader is knowingly and willingly accepting its flaws.

The stigma on LLM-generated content exists for a reason: people are getting tired of starting to invest time into reading some article, only for it to become clear halfway through that it is completely meaningless drivel. If >99% of LLM-generated content I come across is an utter waste of my time, why should I give this one the benefit of the doubt? Content written in horribly-broken English at least shows that there is an actual human writer investing time and effort into trying to communicate, instead of it being yet another instance of fully-automated LLM-generated slop trying to DDoS our eyeballs.


I completely agree; I prefer the original language, as it offers more choice in how to try to consume it. I believe search engines segment content by source language though, so you would probably never see such content in search results for English-language queries. It would be cool if you could somehow signal to search engines that you are interested in non-native-language results. I don't even tend to see results in the second language in my Accept-Language header unless the query is in that language.

I'm sorry, but I don't buy the argument that we should be accepting of AI slop because it's more accessible. That type of framing is devious because you frame dissenters as not caring about accessibility. It has nothing to do with accessibility and everything to do with simply not wanting to consume utterly worthless slop.

People generally don't actually care about accessibility, and it shows, everywhere. There are obvious and glaring accessibility gains from LLMs that are entirely lost with the stigma.

Well, no.

Because authors do two things typically when they use an LLM for editing:

- iterate multiple rounds

- approve the final edit as their message

I can't do either of those things myself. And your post implicitly assumes there's underlying content prior to the LLM process; but it's likely that iterated interactions with an LLM are what produce the content at all - i.e., there never exists a human-written rough draft or single prompt for you to read, either.

So your example is a lose-lose-lose: there never was a non-LLM text for you to read; I have no way to recreate the author’s ideas; and the author has been shamed into not publishing because it doesn’t match your aesthetics.

Your post is a classic example of demanding everyone lose out because something isn’t to your taste.


Thank you for your post, it's more elegant than my explanation and makes good arguments.

Sometimes I question my sanity these days when my (internally) valid thoughts seem to swoosh by externally.


Unfortunately, the sheer amount of ChatGPT-processed texts being linked has for me become a reason not to want to read them, which is quite depressing.

This happens at Walmart Canada all the time. The policy there is to knock $10 off the shelf price (or give the item free if it's $10 or less).

Since COVID, Walmart has stopped fixing the problem immediately.

Since 2020, I have accumulated about $1200 in free merchandise using the above. Almost always food.


Publix in the southeast US will give you anything that rings up wrong for free. I shopped there for 20+ years and only remember getting a handful of things free.


I'm in Toronto and I've never had anything ring up incorrectly at Wal-Mart. I can't recall ever having anything ring up incorrectly anywhere else, either. There have maybe been a couple of times where a sale price didn't apply to all the SKUs I thought it did.


The disastrous Target Canada (iirc) was similar; obviously nobody cared at all.


It's possible to live on social assistance and build 3D models and offer them for free. So willpower seems more relevant than wealth.


Let's not forget mystery boxes for real toys and things like mini brands.

Though I am not outspoken about it, I think individuals need to come to terms with telling themselves no.

Otherwise we need to outlaw everything bad and open to abuse by specific individuals: things such as cake, donuts, coffee, etc.


I think we can ban companies selling packages without disclosing exactly what is in those packages. I think we can regulate companies in that way without finding ourselves hopelessly slipping down some silly slope.


I can totally see the EU making unwanted "dark" products returnable for a full refund. I understand that already applies to anything that tries to force contract terms on you after the purchase: you can choose not to agree and get a full refund.


> Though I am not outspoken about it, I think individuals need to come to terms with telling themselves no.

This really resonates with me. I feel like self-control has gone out of fashion, but it has a lot of merit.


I think it's difficult to just call things "self control" when there have been entire college majors / studies / casinos dedicated to tricking us into making the choices they want.

Look at the Apple price ladder on iPads. Look at any tactic used by a casino - go to Reno and see many retirees at the beginning of the month drop their whole Social Security check in the casino. Look at why they label things $9.99 instead of $10.00. Look at why they put all the overpriced candy at the cash register in a supermarket. Look at how they engineer junk food to be "perfect" and addictive (source: https://archive.globalpolicy.org/world-hunger/trade-and-food...). I have a lot of friends who stopped playing gacha games because they would come home drunk - the game would incentivize you to log in - and then blow more money than they truly wanted to.

At some level it's unfair to say we should just "have self control" when you have entire academic institutions and entire industries figuring out how to get you to "crack" and make a bad decision that favors their pocketbook.

So yeah - I agree - we need more self control - but it's being purposefully assaulted every second of our day by EVERYTHING.


Yeah, existing in the modern world you're surrounded by mind-hackers. Everywhere you go there are hacking attempts against your mind, trying to get you to buy stuff you shouldn't or want stuff you don't. It's really absurd.


Well then, regulation should help. And people should stop doing outright stupid things - you have no reason to be in a casino, in the same way you have no reason to light that cigarette or do another round of binge drinking (or those gacha games; I had to google WTF that is - same mind cancer as the rest, no thank you). Neither you nor I are stronger than those addictions. Billions of miserable poor fuckers before us are proof enough; learn from their mistakes.

Attack from both sides, heck, all sides: from the top with regulation, from the bottom by being mentally more resilient. There are endless ways to get there - e.g., do rock climbing (yes, not joking, it will change you for the better for good if you stick with it long enough), or other sports and activities that challenge you, your fears, your laziness, and push you physically. Do it 10 times and something clicks in the mind, and it goes almost on its own afterwards.

Another angle: shame those working in such businesses. That goes for a fuck ton of FAANGs and many others. I know it's blurry and whatever other excuses will fly around; I don't care. Have clearly moral work, or accept shame, or change for the better.

It's a terrible situation, but by far the biggest mistake is throwing your hands in the air and giving up immediately just because some greedy sociopathic billionaire wants a bigger yacht or rocket to compensate even more for their fucked-up childhood, and thus pushes a lot of psychology PhDs against you. You don't have to even start to play that game, not even for a second. We are stronger, much stronger than that, and a real good life (TM) is not about anything digital in any way.


That's because it mostly doesn't work long term.

Depending on how your brain got wired, self-control condemns you to a life of misery, while not being exposed allows you to live a normal life. Of course you cannot ask for the societal experience to be tailored just for you, but there seems to be a consensus on protecting the most vulnerable people from the most destructive habits. Where to draw the line is for everyone to find agreement upon, and if that's not good enough for you, you need to find a safe haven.

Self-control is like a tourniquet on a severed leg: it can buy you time, but you need a hospital at some point.


Huh?

Most people have perfectly well avoided blowing all their money on baseball card packs or whatever other random "box of randomized items" without enduring a life of misery...

It's not that hard.


> Depending on how your brain got wired

Most people are lucky that their brain is wired somewhat sanely.


If self control were reliable we wouldn't need seatbelts, antilock brakes, bumpers, and other safety mechanisms. We would all just drive safely all the time. But that would be silly. Self control is not as simple and reliable as we want it to be.

Sometimes systematic solutions are better.


I agree that humans are fallible, but the analogy is still off: catchy, yet flawed. Seatbelts are passive mechanical systems; self-control is a complex, context-dependent cognitive function. Conflating the two oversimplifies how human behavior actually works.


Things that cease to be surprising can also cease to be important, which is made clear by reading the remainder of the post.

It's my take as well, frankly.


This service problem is fixed like most media-related service problems. Sailing.

MPC has the ability to normalize volume in a video automatically.

