
There are two kinds of computing - precision computing and probabilistic computing. For example, cryptography falls into precision computing: there is no room for being incorrect even by a single bit. Machine learning, by contrast, is about getting a range of answers, with tolerance for error.
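To make the "single bit" point concrete, here's a toy illustration using only the Python standard library (the message is made up): flipping one input bit to a cryptographic hash scrambles the entire output, so there's no notion of being "almost right".

    import hashlib

    msg = b"transfer $100 to account 42"
    tweaked = bytes([msg[0] ^ 0x01]) + msg[1:]  # flip a single bit of the input

    print(hashlib.sha256(msg).hexdigest())
    print(hashlib.sha256(tweaked).hexdigest())  # a completely different digest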

I like to visualize them as cuts and spans in a continuum, such as a number line. They make up the full picture. One exists only because of the other. One can't do the job of the other and one is defined only in terms of the other.

Banks wouldn't use AI to compute the account balance after a transaction or for authenticating a customer. Network software wouldn't use AI for encryption and decryption of the TLS traffic. On the other hand, banks wouldn't mind an x% error in the computation of a credit rating, fraud detection, or industry-trend analysis.

Writing code is a probabilistic task with many possible variations, while the work done by the code at runtime is, in most cases, a precision task.





> For example, cryptography falls into precision computing: there is no room for being incorrect even by a single bit. Machine learning, by contrast, is about getting a range of answers, with tolerance for error.

Don't both of them rely on randomness in real use cases? It's only once you fix the seeds that cryptography becomes deterministic, and then you can make the same claim for most of ML: when the seeds are fixed, you get fixed replies.

It happens that most people use LLM clients that aren't deterministic, since they apply temperature sampling with a fresh random seed for each inference, but that doesn't mean someone couldn't do it differently.
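A minimal sketch of that idea, assuming PyTorch and a made-up three-token vocabulary: temperature sampling is still "random", but with a fixed seed the draw is reproducible.

    import torch  # assumption: a PyTorch-style sampler, purely illustrative

    logits = torch.tensor([2.0, 1.0, 0.5])  # pretend next-token logits
    for _ in range(2):
        torch.manual_seed(42)                       # fix the seed before sampling
        probs = torch.softmax(logits / 0.8, dim=0)  # temperature 0.8
        print(torch.multinomial(probs, num_samples=1).item())  # same token each run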


Fixing the seed wouldn't necessarily make LLMs deterministic. LLMs do lots of computation in parallel, and the order in which these computations are performed is often nondeterministic and can lead to different final results.
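One classic source of this: floating-point addition isn't associative, so the order in which parallel partial sums get combined changes the last bits of the result. In plain Python:

    a, b, c = 0.1, 0.2, 0.3

    # Two reduction orders, two answers - a parallel sum can land on either.
    print((a + b) + c)   # 0.6000000000000001
    print(a + (b + c))   # 0.6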

Yep. And to answer the question about randomness: it's absolutely vital to have a good source of noise to obscure the underlying pattern and prevent the secret information from leaking, but the mathematical part that manipulates that noise into the encrypted output has to be precise. That's the distinction being drawn here.
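A minimal sketch of that split, assuming the `cryptography` package is installed: the nonce is the random part, but the cipher math itself must round-trip bit-exactly.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)  # the probabilistic part: fresh noise, never reused per key
    ciphertext = AESGCM(key).encrypt(nonce, b"account balance: 1000", None)

    # The precise part: decryption must be bit-exact or it fails outright.
    assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"account balance: 1000"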

Disclaimer: Not a crypto expert, I just like reading about it. Check actual sources for better insight. Very interesting technology, and the much smarter people working in this field deserve a lot of praise.


> Banks wouldn't use AI to compute the account balance after a transaction or for authenticating a customer. Network software wouldn't use AI for encryption and decryption of the TLS traffic.

Not directly, no. But they might use AI to write the code that computes account balance, or authenticates a user, or encrypts/decrypts TLS.


I would argue that there are already quite a few slow-moving corporate procedures in place precisely to ensure correctness.

Especially when financials are on the line, it's not like they lack the money to apply excruciatingly painful amounts of scrutiny here.

I did note that you said "might". So I would hope not, but I've seen things, so maybe you're upsettingly right haha


I was talking about "runtime" work. At runtime, the tasks I mentioned above don't use AI to get work done. Coding, of course, falls under probabilistic tasks, as I mentioned.


