
Regressions are not implied by that statement. A bug doesn’t exist in the human realm until a human observes it.


Oh, the Schrödinger's Bug.


Schrödinger's Hubris, actually.

The claim that a codebase of 420k lines contains "only one error" is of course absurd, and the members of this forum would laugh anyone out of the room who made such a claim about any other project, pointing out that they cannot possibly know, pointing out the actual logical contradictions in the claims as described by GP, or just plain ridiculing it without further elaboration.

But since the code in question cannot meaningfully be tested by the public, and people have been indoctrinated to believe the myth that aerospace engineers are an entirely different species that doesn't make mistakes in the sense that the rest of the workforce does, the hubris is accepted until disproven, which it probably won't be for various practical reasons.

Never mind that the Space Shuttle was a death trap that killed two crews, and that the associated investigations (especially for Challenger) revealed numerous serious issues in quality and safety management. People will continue to just nod their heads when they hear nonsense like this, because that's what they have seen others do, and so Schrödinger's Hubris can live on.


Maybe just labeling it journalistic license would be simpler and more accurate.

I doubt any of the people who actually wrote the code would stand by while that claim was made and not clarify it to 'we only found one bug'.


heisenbugs


“Program testing can be used to show the presence of bugs, but never to show their absence.” ---Edsger W. Dijkstra


That would be the purpose of formal proofs, wouldn’t it?

Formal proofs may not be silver bullets, and we’re never safe from a faulty implementation of the proven algorithms, but this Quanta article on a DARPA project showed impressive results [0].

There’s also AWS’ use of TLA+ [1].

[0]: https://www.quantamagazine.org/formal-verification-creates-h...

[1]: https://news.ycombinator.com/item?id=22082869
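
To make "prove" a bit more concrete, here is a toy sketch using the z3-solver Python bindings (not the tools from the links above; the property and variable names are made up). Unlike a test, it covers every value of x at once, but only for the property as stated.

    from z3 import Int, And, Implies, prove

    x = Int("x")
    # Claimed property: for every x in [0, 100], doubling it stays in [0, 200].
    # prove() looks for a counterexample; finding none, it prints "proved".
    prove(Implies(And(x >= 0, x <= 100), And(2 * x >= 0, 2 * x <= 200)))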


Formal proofs can only prove that the system matches a specification.

Many (most?) non-trivial bugs are actually flaws in the specification, misunderstandings about what exactly you wanted and what real-world consequences arise from what you specified.
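
A minimal sketch of that failure mode (the names are made up): the specification below only demands a sorted output, when the intent was a sorted permutation of the input, so a proof against it would happily bless a useless implementation.

    def spec_holds(out: list[int]) -> bool:
        # The flawed specification: output must be in non-decreasing order.
        return all(a <= b for a, b in zip(out, out[1:]))

    def my_sort(xs: list[int]) -> list[int]:
        # Satisfies the spec for every input -- and is exactly the kind of
        # "provably correct" program nobody actually wanted.
        return []

    assert spec_holds(my_sort([3, 1, 2]))  # meets the spec, still wrong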


"Beware of bugs in the above code; I have only proved it correct, not tried it." ---Donald Knuth


I will remember that one.


> Formal proofs may not be silver bullets, and we’re never safe from a faulty implementation of the proven algorithms

You also aren't safe from misunderstanding what it is that you've proven about the program.

Which is actually the same problem as other software bugs; you have a specification of something, but you don't know what it is that you've specified, and you wish it were something other than what it is.


Too true. When we translate complex concepts to code & math, either may inadvertently not be exactly what we wanted.

It's interesting to think about a formal code verification system that maintained a connection between all four primary artifacts: the natural-language description of the problem as understood, the intended solution, the code, and the properties actually verified.


That is wrong in general. With enough tests you absolutely can show the absence of bugs for certain programs. It is, for example, easy to test "hello world" exhaustively.
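
For what it's worth, a small sketch of what "certain programs" means here (the leap-year example is mine, not from the thread): the entire input domain is enumerated, so the tests really do show the absence of bugs over that domain.

    import calendar

    def is_leap_year(y: int) -> bool:
        return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

    # calendar.isleap serves as an independent oracle; checking every supported
    # year leaves no untested input behind.
    for year in range(1, 10000):
        assert is_leap_year(year) == calendar.isleap(year), year
    print("exhaustively verified for years 1-9999")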


I agree, but the word “certain” really needs to not be overlooked in this statement.


Spoken like someone who has never had to deal with undefined behavior corner cases.


I have written safety critical software.


I'm now scared.


Let's say you've tested this thing 1 million times. Each time the output was automatically checked by five different and independently-developed test suites. You're ready to swear there's no possible way for it to fail.

And then someone tries it with a ulimit of 16kB.

Does it run?

Do you _know_?

Do you even know what is correct behavior in this situation?
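
The only way to know is to put the limit into the harness. A rough sketch on Unix, assuming a hypothetical ./program_under_test binary:

    import resource
    import subprocess

    LIMIT = 16 * 1024  # the 16 kB ulimit from above

    def limit_address_space():
        # Applied in the child process just before exec (Unix only).
        resource.setrlimit(resource.RLIMIT_AS, (LIMIT, LIMIT))

    proc = subprocess.run(
        ["./program_under_test"],          # hypothetical binary
        preexec_fn=limit_address_space,
        capture_output=True,
    )
    print(proc.returncode, proc.stderr[:200])

And even with that in place, you still have to decide what "correct" means when the limit is hit.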


The system it runs on is part of the specification. A program is correct if it fulfills all specified requirements. You're saying a car is defective because it breaks when you put sugar in the tank.


This is a no true Scotsman fallacy. Any time it runs incorrectly, it was invoked with the wrong operating conditions — which seem to be defined as any conditions that cause it to run incorrectly.

Sugar in the tank is an agreeable example because of how obvious it is, but what about something more subtle? An odd condition that leads to the wrong resonant frequency. An unknown software bug that makes the brakes lock up on Tuesdays at 12:00 AM on one specific backroad in a remote part of Virginia. The combinatorial possibilities of operating conditions are too numerous to exhaustively test.

I guess you could say that every function has every quality that it happens to have, so that functions need only exist in order to be “correct.”


You typically write down the specification before claiming that your code is done, so you can't use it to claim that any undesired behavior is not a bug. I naturally agree that in general tests are insufficient to show correctness because the state space is of impractical size, but, as I said above, for certain programs they totally can be.
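
A back-of-the-envelope illustration of that impractical size (the numbers are mine, just to show scale): a pure function of two 64-bit integers already has 2^128 possible inputs.

    inputs = 2 ** 128                      # all pairs of two 64-bit arguments
    tests_per_second = 10 ** 9             # an optimistic automated rig
    seconds_per_year = 60 * 60 * 24 * 365
    print(inputs / tests_per_second / seconds_per_year)  # roughly 1e22 years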



