"we model a scenario where the original code is memory-safe; the ported code is memory-safe; and we consider memory safety and undefined behavior that may arise across the FFI layer between the two pieces of code."
I may be stating the obvious, but that's a bit of a strawman. Yes, writing good FFI code is hard; yes, it could result in security/soundness issues; yes, we could use better tools in this space.
But nobody rewrites C code in Rust if they believe the existing codebase is free of memory safety hazards; they rewrite it because they think the result will contain fewer hazards, even accounting for the potential problems at the FFI boundary.
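To make the boundary problem concrete, here's a minimal sketch of the kind of bug the paper is worried about. It calls libc's real strlen; the helper names are mine, and the point is that both sides look correct in isolation:

    use std::ffi::CString;
    use std::os::raw::c_char;

    extern "C" {
        // libc's strlen: its contract (unwritten, as far as the type
        // system knows) is that `s` points to a NUL-terminated buffer.
        fn strlen(s: *const c_char) -> usize;
    }

    fn broken_len(s: &str) -> usize {
        // UB: a Rust &str is not NUL-terminated, so strlen may read past
        // the end of the buffer. Neither side is "wrong" in isolation;
        // the bug lives entirely in the boundary.
        unsafe { strlen(s.as_ptr() as *const c_char) }
    }

    fn correct_len(s: &str) -> usize {
        // The adapter's job is to establish the C-side invariant first.
        let c = CString::new(s).expect("no interior NUL bytes");
        unsafe { strlen(c.as_ptr()) }
    }

broken_len may return the right answer by luck, return garbage, or fault, which is exactly why this layer deserves concentrated attention.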
If I could remove tens of thousands of lines of hard-to-analyze C code, and replace it with tens of thousands of lines of safe Rust, paired with a few hundred lines of hard-to-analyze FFI adapters, that sounds like a pretty good tradeoff to me. I now know exactly where to focus my attention, and I can have confidence that the situation will only improve with time: better tooling may allow me to improve the dangerous FFI layer, and in the meantime I can recklessly improve the safe Rust module without fear of introducing new memory unsafety bugs, unsound behavior, or data races.
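For a sense of what those few hundred lines of adapters look like: a thin extern "C" shim that concentrates every pointer assumption in one small, auditable place (names here are hypothetical), while everything it calls is ordinary safe Rust:

    use std::slice;

    /// The safe Rust core: refactor, fuzz, and extend it without fear.
    fn checksum(data: &[u8]) -> u32 {
        data.iter().fold(0u32, |acc, &b| acc.wrapping_add(b as u32))
    }

    /// The hard-to-analyze boundary, kept deliberately tiny. The C caller
    /// promises that `ptr` is valid for `len` bytes; we can document that
    /// contract, but not enforce it.
    #[no_mangle]
    pub extern "C" fn mylib_checksum(ptr: *const u8, len: usize) -> u32 {
        if ptr.is_null() {
            return 0; // defensive default; a real adapter might report an error
        }
        let data = unsafe { slice::from_raw_parts(ptr, len) };
        checksum(data)
    }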
Exactly. No one is saying "rewrite it in Rust because it already works". They typically say it because the thing in question is a bug farm, or it's really difficult to maintain.
I looked for the author's (whoever they are) proposed solution and it's this:
"This is because many of the FFI bugs surveyed are fundamentally cross-language issues. Instead, we propose that both C and Rust must conform to a shared,
formally-based domain-specific language, which provides a safe-by-construction abstraction over the FFI boundary."
Such a thing is a "new thing", and isn't going to retroactively apply to the legacy C code written before this new thing... so how does that help?
(Full disclosure: I'm a professional programmer who has written in C, C++, C# for many years, and now I choose to write new things in Rust.)
> Exactly. No one is saying "rewrite it in Rust because it already works". They typically say it because the thing in question is a bug farm, or it's really difficult to maintain.
Well some people are. Is there a word for attacking the weakest real version of an opposing argument? Strawman usually implies you are attacking a fake version of the argument, but on the internet you can usually find someone who actually holds an easily refuted point of view, just because of the law of large numbers...
> nobody rewrites C code in Rust if they believe the existing codebase is free of memory safety hazards
I'd offer that even if the existing codebase is free of memory safety hazards, low confidence that future changes can keep it that way in a cost-efficient manner is itself a motivation to migrate.
The idea that a program can approach some optimal bug-free state, never to be modified or refactored again, doesn't resemble any project I've ever encountered.
> But nobody rewrites C code in Rust if they believe the existing codebase is free of memory safety hazards; they rewrite it because they think the result will contain fewer hazards
What if someone makes a strawman out of memory problems so they can rewrite it in Rust?
> But nobody rewrites C code in Rust if they believe the existing codebase is free of memory safety hazards; they rewrite it because they think the result will contain fewer hazards, even accounting for the potential problems at the FFI boundary.
That's pretty generous. They rewrite it in Rust because a "Show HN: Thing-X, in Rust" gets upvotes.
Agree; I just wanted to add that this paper might be right for projects that are no longer actively developed, something like bash or coreutils, etc., since that code is fairly well tested and there aren't many added features that could introduce issues.
For anything that is actively developed it's a whole other story: even if you are confident that the current codebase is safe, each added feature carries the risk that it breaks some unwritten contract somewhere and introduces security issues (see the sketch below).
E.g., look at the recent vulnerability in sudo: at first and second sight it was safe and secure, and triggering it required an unobvious corner case.
How many similar issues could be sitting dormant in your codebase for years?
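A contrived sketch of that failure mode (names are hypothetical; an unchecked access stands in for plain C indexing):

    /// Original code: every existing caller validated `idx` upstream, so
    /// the author "knew" this was fine. The invariant was never written down.
    fn field_at(fields: &[String], idx: usize) -> &str {
        unsafe { fields.get_unchecked(idx) } // no bounds check, as in C
    }

    /// Years later, a new feature lets the user pick a field by number.
    /// Its author never saw the unwritten rule; out-of-range input is now
    /// undefined behavior, and it can stay dormant until someone probes it.
    fn new_feature(fields: &[String], user_input: &str) -> String {
        let idx: usize = user_input.trim().parse().unwrap_or(0);
        field_at(fields, idx).to_owned()
    }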
> Bash looks like it hasn't had any commits this year yet
Does anyone know why this is? Is it because doing anything would cause POSIX divergence? Nobody wants to (because it's an "ugh experience")? It's considered effectively complete?
Genuinely curious.
I don't actually use Bash as my personal shell, though I do write bash scripts semi-regularly.