> This sounds less convenient, harder to implement, and no more secure than OpenID
In what way is it less convenient? A standard user has their phone with them... 24/7? At least in the SMS realm it's more convenient than trying to come up with a password that has a capital letter, a number, a special character, and a lower case letter. It's also way more secure: a user gets sent a one-time code looking like 037.820.374.839, and in the time it would take to guess that, the code would have timed out, leaving the hacker no closer to getting in compared to a static password.
Not an idiot :) You make a good point that people don't have their phones on them & alive all the time, which is where TOTP can come in with dongle TOTPs (like http://www.securemetric.com/secureotp-time.php). Agreed, it costs you a fair amount, but if you want security when you don't have your phone... it's worth it. And then maybe, like Google, have a few longer random passwords to use when you don't have your phone or a TOTP/OTP generator.
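(For the curious: the codes those dongles and phone apps spit out are typically RFC 6238 TOTP, which is small enough to sketch in a few lines of Python. The function name and secret below are made up for illustration.)

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32, step=30, digits=6):
        """Derive the current RFC 6238 time-based one-time code."""
        key = base64.b32decode(secret_b32)
        # Counter = number of 30-second steps since the Unix epoch
        counter = struct.pack(">Q", int(time.time()) // step)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        # Dynamic truncation per RFC 4226
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Phone, dongle, and server all derive the same code from the
    # shared secret and the clock
    print(totp("JBSWY3DPEHPK3PXP"))

Both sides derive the code from a shared secret plus the clock, and it goes stale every 30 seconds, which is why a guessed or intercepted code is worth so little.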
> On both OSX and many Linux distros, Py 2 comes pre-installed but Py 3 does not.
Because Python 3 was not the most stable at the time of distribution of that operating system. Why would I, the developer of said operating system, release anything but the most stable version of the language? Doing otherwise would at times make my operating system less stable.
I totally agree. However, until Py3 is packaged standard it's always going to be easier to run Py2 code. For example, I have a friend who does data analysis on OSX and occasionally she gets data in a format that is ugly. I take a look at it and send her a python script that will massage it into something nice.
She's ok with the occasional "sudo pip install ..." to get a library, but if my instructions started with "ok, first install Xcode and then install homebrew" the answer would probably be "it's ok I'll just do it by hand"
Simple scripts like that are unlikely to be much different between Python 2 and 3 unless you are using specific libraries. Just turn the print statements into print() calls and "except Exception, e" into "except Exception as e", and I can't see what the differences would be.
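Here's a tiny sketch of what that usually amounts to; the parse_line() helper is hypothetical, but the two changed spots are the real Py2-to-Py3 differences:

    # A hypothetical data-massaging helper; parse_line() stands in
    # for the real logic.
    def parse_line(line):
        return float(line)

    # Python 2 spellings (shown as comments):
    #     print "value:", parse_line(text)
    #     except ValueError, e:

    # Python 3 spellings, usually all a small script needs changed:
    for text in ["1.5", "oops"]:
        try:
            print("value:", parse_line(text))  # print is now a function
        except ValueError as e:                # "Exception, e" -> "Exception as e"
            print("bad input:", e)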
Just because this alternative language would have avoided this bug does not mean a much worse bug would not have been created with a higher level language or even ATS.
Well, people can write bad/unsafe code in any language. But ATS can remove entire classes of bugs from a program, while C is notorious for its lack of safety. Though obviously, it has no more built-in protection from side-channel attacks than C.
Basically what it all comes down to is the laziness of how it is developed and how many people are actually looking over the entire code. ATS lets developers feel they can be even more lackadaisical about coding, as ATS will remove bugs for them...
"There are collisions because people don't pay enough attention. Imagine if we installed a collision detection system, people would pay even less attention!"
It's a shame to see that comment has been downvoted. That's a quote directly from the bottom-right corner of the Rust website itself!
Rust is promising, without a doubt. But it's not yet truly usable in the same sense that C, C++, Java, Python, Haskell, Go and so many other languages are.
Maybe it'll start to get to that point once 1.0 is released, once we see at least some language and library stability, and then perhaps some adoption. But that just hasn't happened yet.
I downvoted it because "LOL" is not the kind of comment I'd like to see here. The point could have been made in a more substantial way. Like you just did.
Wow, sorry I can't laugh at something, jeez. So, you have never in your life just felt like re-posting a quote off something and adding a little something to it to show the spirit in which it was meant? Now you are just being nitpicky and, to be honest, rude in a sense. I have just joined this community, I am trying to fit in, and you just come along and see the comment and you "don't like it" because it's short, sweet and to the point. I am laughing at the comment the programmer of Rust put on his site, and now you have just totally bashed me because you felt it necessary to not like my simplistic comment. Wow.
> you have never in your life just felt like re-posting a quote off something ...
I do, but I do that on Twitter, because, as you've found out, HN will downvote you into oblivion.
> I have just joined this community I am trying to fit in
Ah ha! Sorry, I didn't see that: usually, new users are in green. (Also, your account is 163 days old?) If you haven't already, you should check out the community guidelines: http://ycombinator.com/newsguidelines.html
For what it's worth, I am not trying to 'bash [you]'... but I don't think this was a great comment. Try to keep them more substantial here. Different forums are appropriate for different kinds of discourse, and short little comments are generally not taken very well here.
The same happens to "+1", "thanks", and "interesting!" comments. If you can't write more than two sentences, you probably shouldn't post.
What someone (I think kibwen) has brought up is that early adopters can help while the language design is still in flux: they can uncover weaknesses in the design before it gets to the stage where backwards compatibility has to be considered.
So although early adopters might not get any useful software out of learning Rust at this stage, they might indirectly improve their future Rust code by having a small influence on the direction of the language.
If you are speaking about execution speed, you got the idea wrong. From a quote in the article:
> If you use the high level typing stuff coding is a lot more work and requires more thinking, [...] (but) you can even hope for better performance than C by elision of run time checks otherwise considered mandatory, due to proof of correctness from the type system. Expect over 50% of your code to be such proofs in critical software and probably 90% of your brain power to go into constructing them rather than just implementing the algorithm. It's a paradigm shift.
The idea is to formally prove that the code is not doing unexpected things. The process is relatively simple to understand:
First you define the assumptions you make about the program, its execution environment, and the acceptable/expected results of your program. This is known as the "formal specification" of the program. It is a critical part: if your specification is wrong, the whole approach breaks down. However, this part should be much smaller than your whole codebase, and hence you can be extra careful with it.
Next, using this specification, you write proofs showing that the code cannot do anything unintended (such as accessing a buffer out of its valid range). The compiler goes through these proofs and checks that everything is provably correct (according to the specification). Then it can generate code without the runtime checks you would otherwise probably implement, because it is sure that certain things cannot happen. As a result, the code may end up being actually faster.
Although a bit involved, the idea should be pretty intuitive. It is exactly what you are doing in your mind when programming. The main differences are:
1. We humans are pretty comfortable working with inexact and/or incomplete specifications. Then some undefined behavior happens, and our programs bug out. For instance, it is very easy for us to think about the division operator as something that always yields a value, ignoring the "division by zero" edge case. Computers are not so forgiving, and force you to specify what exactly should happen when you encounter such edge cases (see the sketch after this list).
2. We are also pretty bad at exhaustively checking every possibility, whereas computers excel at it. With the help of human-written proofs, of course (otherwise verifying a program would involve checking every possible input, which is obviously intractable).
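To make the division example concrete, here's a minimal sketch in Lean (safeDiv is a made-up name; ATS spells this differently, but the shape of the idea is the same):

    -- Division that demands evidence the divisor is nonzero.
    -- The proof argument exists only at compile time; it costs
    -- nothing at runtime, which is how checks get elided.
    def safeDiv (a b : Nat) (_h : b ≠ 0) : Nat := a / b

    -- The caller must discharge the obligation; `by decide` proves 2 ≠ 0.
    #eval safeDiv 10 2 (by decide)

    -- safeDiv 10 0 (by decide) would fail to compile: 0 ≠ 0 is unprovable,
    -- so "division by zero" is ruled out before the program ever runs.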
TL;DR: The tradeoff here is between development and compilation speed versus correctness, which implies improved security and execution speed.
The type safety shown in the article doesn't come at a speed cost. The types are erased during code generation. The generated C code is much like hand-crafted C, but with the safety confirmed via the types.
I read the article; it does not mention speed or performance once, so it had nothing to do with what I stated. I was simply stating that higher level languages will make the library slower and also less easy to use from other high level languages like Python.
I do not understand why people keep pushing unsafe code when computers keep getting faster and we have more and more headroom (cpu, memory, bandwidth). There is no excuse to keep running unprovable crypto.
Maybe because most of the code currently out there being used by the biggest companies in the world is still in these "unsafe" languages, & tons of the job market is still in these "unsafe" languages.
I think you should read the article again. The language in question isn't really higher level; it just has compile-time type checking, which has no overhead.
That's still not my point. At the time OpenSSL was started, I don't believe ATS was around. In any case, my point is that back then C was the best choice for performance, and it is still revered as the "fastest", since nearly all other languages are built on top of it either directly or indirectly. In any case, I would love to see someone tell the whole OpenSSL community to just drop C and switch to a different language.
C being loved in the UNIX community is one of the primary reasons that these libraries are in C.
Ada has been around since the beginning of the eighties, has and had performance near that of C, does not use a garbage collector, provides C linkage, and is far safer than C.
If you do allow garbage collection, there were many performant and safe alternatives in the '90s, such as ML.
'Higher level' doesn't necessarily say anything. Rust and ATS are both higher level than C, but they can both do everything that C does.
Is Ada less performant than C? I know it has bounds checking, but that can be turned off for "shipped" software. Does it have some features that incur a runtime cost and that can't be disabled?
Ada with all the runtime features left on is slower than comparable quality C. It's faster than most languages though.
With all the runtime checks turned off, GNAT can/should produce code within a percent or two as fast as GCC's (they share the same backend).
And Ada has a thing called SPARK which is a set of compiler checks to formally verify your code so you can provably turn off those runtime features safely. https://en.wikipedia.org/wiki/RavenSPARK
So Rust & ATS & Ada & higher level languages can modify memory space? As far as I am aware, most higher level languages stray from being able to modify memory space on purpose, as it's dangerous, but someone has to do it for the operating system, is all I am saying about low level now that we are completely off topic here.
Don't know about Ada, but Rust and ATS can. The Rust code would need to be written in an unsafe block in order to modify memory as freely as C, but regular Rust can still do a lot, safely, without requiring automatic memory management.
As you can see from the article (not that you seem to have read any of it), ATS can express C, and optionally prove low-level stuff about it.
This isn't a fundamental dilemma like the consistency/scalability dilemma of databases. This is (or was) just a limitation of languages and compilers. The arguments for using C are many, but in this case the most common involve the need for low-level access (for performance and timing).
C is certainly very much suited for some parts of an SSL implementation, e.g. when you need absolutely deterministic performance to avoid timing attacks etc. (Although performance should certainly be good enough with modern compilers for most languages, and avoiding side-channel attacks by having deterministic execution time is also possible without resorting to C.)
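Even high-level languages ship primitives for the comparison side of this; Python's standard library, for instance, has hmac.compare_digest, whose running time doesn't depend on where the inputs differ (the MAC values below are invented for illustration):

    import hmac

    expected_mac = b"\x8f" * 32
    provided_mac = b"\x8f" * 31 + b"\x00"

    # A naive == short-circuits at the first differing byte, leaking
    # timing; compare_digest takes the same time regardless of where
    # (or whether) the inputs differ.
    if hmac.compare_digest(expected_mac, provided_mac):
        print("MAC ok")
    else:
        print("MAC mismatch")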
Using the execution speed as an argument for writing the whole thing in C is just wrong. I haven't heard any good arguments as to why a library such as OpenSSL shouldn't be written in Haskell (or say 98% Haskell and 2% C).
Did someone at some point say
"There is 2% of the code that is performance critical and/or needs low-level code for cryptographic reasons so I'll write everything including the network code, command line argument parser, world, dog and kitchen sink in C" ?
> This isn't a fundamental dilemma like the consistency/scalability dilemma of databases. This is (or was) just a limitation of languages and compilers.
You're right. Null pointers are a nuisance in some languages, but other languages have shown that you can remove them and still have just as expressive a language (and the compiler can still translate pointers that might be "null" to actual null-enabled pointers, so no performance cost). Rust might show that a stronger type system can remove certain raw pointer flaws from the language while still retaining both execution speed and programmer productivity.
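You can get a cheap taste of that today even in Python: with type hints, a checker like mypy makes the "might be None" case explicit in the type and forces you to handle it, at no runtime cost. A small sketch (find_port is a made-up name):

    from typing import Optional

    def find_port(config: dict) -> Optional[int]:
        # The return type says "int or None" explicitly; there is no
        # hidden null lurking behind a non-optional type.
        return config.get("port")

    port = find_port({"host": "example.com", "port": 8080})
    if port is not None:      # a checker like mypy requires this narrowing...
        print(port + 1)       # ...before port may be used as a plain int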
Dependent types might mature to the point that you can use them and gain execution speed, productivity, and safety all at once. Time will tell.
> Using the execution speed as an argument for writing the whole thing in C is just wrong. I haven't heard any good arguments as to why a library such as OpenSSL shouldn't be written in Haskell (or say 98% Haskell and 2% C).
> Did someone at some point say
> "There is 2% of the code that is performance critical and/or needs low-level code for cryptographic reasons so I'll write everything including the network code, command line argument parser, world, dog and kitchen sink in C" ?
...
The article makes the argument that, assuming that the whole program needs to be incredibly performant, you can write say 2% of it in verified ATS code, the rest in C-ish ATS code (ie. without proofs).
I guess you can also choose to write 2% verified low-level code, and the rest in a more high level ATS - ATS is a functional language with garbage collection and I presume other high level goodies that functional programmers are used to.
I think that C++ is so useful in practice because it gives you most of each of those, rather than just some of two (or even just one) of them.
Programs written in C++ aren't necessarily the fastest out there, but they're usually pretty close. They're at least almost always better than what you'd get when using most other languages.
And the same goes for safety. It may not allow you to write bulletproof code, but using modern C++ techniques can go a very long way toward avoiding many common problems with relative ease.
C++ may not be the most productive language for some developers, but it still does quite a good job of offering a wide variety of functionality, reasonably high-level constructs, good library support, and decent tooling.
C++ has just as much capacity to be unsafe as C, precisely because it accepts nearly all C code, not to mention that it has bare pointers, null references, and any number of other unsafe objects as first-class citizens. Of course, you're unlikely to use most of those if you're following best practices, but the same can be said of C.

You might argue that C++ makes it easier to be safe because of its plethora of features and classes, but the massive size of C++ makes it a very hard language to master, and nearly impossible to guarantee that everyone will be conforming to best practices. C++ is quite possibly the largest language out there in terms of features, and has about the most gotchas (things that don't work the way you'd intuitively expect) that it's possible to put in a language. It's a step above C in terms of safety, but it is hardly a safe language, and it's only slightly more productive, because the benefits of its class system and standard library are so much at odds with its huge mental overhead.
Well, it would be nice if Google could check the URL it was visiting and, if there is any SQLi in it, not send the request (though this could potentially slow their crawling...)
How is Google supposed to check for what is/isn't "sqli"? The proposal reminds me of Yahoo! Mail's old "medireview" problem, where it filtered emails containing the string "eval":
Why? Do you want anybody but Google to be able to hack your site? Why would Google spend resources on unnecessary detection of SQL injection (which probably would not be perfect anyway and might break legit requests) when anybody can hack your website?
I just can't see why you expect Google to spend resources on not running bogus HTTP GET requests when anybody can run those. What is different about being hacked by the Googlebot versus being hacked by an unsuspecting user who clicks on a bogus link placed on the same page where Google found the link to your server? It just doesn't make sense.
Not only that, but it seems to me that it'd be a more efficient use of resources to spend the time hardening your own site rather than lobbying Google to implement something that only mitigates one potential attack vector. Even then, it just seems stupid because I'm sure there are valid GET query strings that might have select, insert, update, delete, or some permutation thereof in them.
It seems to me that it's just a punt on poor programming habits...
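For completeness, the boring site-side fix is parameterized queries: user input gets bound as data instead of spliced into the SQL string. A minimal sqlite3 sketch (table and input invented for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")

    evil = "alice' OR '1'='1"  # say, a crawler-followable GET param

    # Parameterized: the input is bound as data, never parsed as SQL,
    # so there is no crawler-triggered injection in the first place.
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (evil,)).fetchall()
    print(rows)  # [] -- the "injection" matched nothing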
But the only point in this is to take down a site; you won't be able to get any useful information back from this request, as the request's response will head back to the spoofed address... (though if you are using it for DDoS, it is pointless to get the data back anyway...)
You can't make an HTTP request with a spoofed IP without being in position on the network to do a MiTM. The TCP handshake won't complete, so you can't send the HTTP request.