Hacker News | purplesyringa's comments

I was wondering why, in my Firefox, the image appears saturated when embedded on the website, but opening it in a new tab by a direct URL shows an unsaturated version. The `img` tag on the website seems to be styled with `mix-blend-mode: multiply`, which makes the image darker because the background is #f0f0f0.


You can: the equation x^2 = x holds for 1, but not for -1, so you can separate them. There is no way to write an equation without mentioning i (excluding cheating with Im, which again can't be defined without knowing i) that holds for i, but not -i.
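Spelled out, a quick sketch of the standard argument (nothing beyond what the comment above already relies on): complex conjugation is a field automorphism of C that fixes every real number, so

    % conjugation preserves the field operations and fixes the reals:
    \overline{z + w} = \overline{z} + \overline{w}, \qquad
    \overline{z w} = \overline{z}\,\overline{w}, \qquad
    \overline{r} = r \quad (r \in \mathbb{R})
    % hence, for any polynomial p with real coefficients:
    p(i) = 0 \;\iff\; \overline{p(i)} = 0 \;\iff\; p(\overline{i}) = p(-i) = 0

so any equation built from real numbers and the field operations that holds for i also holds for -i.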


You can try to write it in Rust, but that doesn't mean you'll succeed. Rust targets the abstract machine, i.e. the wonderful land of optimizing compilers, which can copy your data anywhere they want and optimize out any attempts to scramble the bytes. What we'd need for this in Rust would be an integration with LLVM, and likely a number of modifications to LLVM passes, so that temporarily moved data can be tracked and erased. The only reason Go can even begin to do this is that they have their own compiler suite.
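For illustration, a minimal sketch of the kind of workaround people reach for today (the helper name is made up; note it only protects the buffer you explicitly wipe, not any copies the compiler already made, which is exactly the problem described above):

    use std::ptr;
    use std::sync::atomic::{compiler_fence, Ordering};

    // Hypothetical helper: wipe a buffer in a way the optimizer can't elide.
    fn wipe(buf: &mut [u8]) {
        for byte in buf.iter_mut() {
            // A plain `*byte = 0` is a dead store if the buffer is never read
            // again, so the abstract machine allows removing it entirely.
            // Volatile writes are kept, but they say nothing about copies
            // already spilled to other stack slots or registers.
            unsafe { ptr::write_volatile(byte, 0) };
        }
        // Keep later code from being reordered before the wipe.
        compiler_fence(Ordering::SeqCst);
    }

    fn main() {
        let mut key = [0xAAu8; 32];
        // ... use `key` for something secret ...
        wipe(&mut key);
    }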


It's not clear to me how true your comment is. I think that if things were as unpredictable as you are saying, there would be insane memory leaks all over the place in Rust (let alone C++) that would be the fault of compilers as opposed to programs, which does not align with my understanding of the world.


"Memory leaks" would be a mischaracterisation. "Memory leak" typically refers to not freeing heap-allocated data, while I'm talking about data being copied to temporary locations, most commonly on the stack or in registers.

In a nutshell, if you have a function like

    fn secret_func() -> LargeType {
        /* do some secret calculations */
        LargeType::init_with_safe_data()
    }
...then even if you sanitize heap allocations and whatnot, there is still a possibility that those "secret calculations" will use the space allocated for the return value as a temporary location, and then you'll have secret data leaked in that type's padding.

More realistically, I'm assuming you're aware that optimizing compilers often simplify `memset(p, 0, size); free(p);` to `free(p);`. A compiler frontend can use things like `memset_s` to force the writes to actually happen, but this will only affect the locals created by the frontend. It's entirely possible that the LLVM backend notices that the IR wants to erase some variable, and then decides to just copy the data to another location on the stack and work with that, say to utilize instruction-level parallelism.

I'm partially talking out of my ass here; I don't actually know if LLVM does this. I'm sure it does for small types, but maybe not for aggregates? Either way, this is something that can break very easily as optimizing compilers improve, similarly to how cryptography library authors have found that their "constant-time" hacks are now optimized into conditional jumps.

Of course, this ignores the overall issue that Rust does not have a runtime. If you enter the secret mode, the stack frames of all nested invoked functions need to be erased, but no information about the size of that stack is accessible. For all you know, memcpy might save some dangerous data to the stack (say, spill the vector registers or something), but since it's implemented in libc and linked dynamically, there is simply no information available on the size of the stack allocation.

This is a long yap, but personally, I've found that trying to harden general-purpose languages simply doesn't work well enough. Hopefully everyone realizes by now that a borrow checker is a must if you want to prevent memory unsoundness issues in a low-level language; similarly, I believe an entirely novel concept is needed for cryptographic applications. I don't buy that you can just bolt it onto an existing language.


I'm pretty sure you could do it with inline assembly, which targets the actual machine.

You could definitely zero registers that way, and an allocator that zeros on drop should be easy. The only tricky thing would be zeroing the stack - how do you know how deep to go? I wonder what Go's solution to that is...


I meeeeean... plenty of functions allocate internally and don't let the user pass in an allocator. So it's not clear to me how to do this at least somewhat universally. You could try to integrate it into the global allocator, I suppose, but then how do you know which allocations to wipe? Should anything allocated in the secret mode be zeroed on free? Or should anything be zeroed if the deallocation happens while in secret mode? Or are both of these necessary conditions? It seems tricky to define rigidly.

And the stack's the main problem, yeah. It's kind of the main reason why zeroing registers is not enough. That and inter-procedural optimizations.


So you’re correct that covering the broadest general case is problematic. You have to block code from doing IO of any form to be safe.

In general, though, getting to a fairly predictable place is possible, and the typical case of key material shouldn't involve highly arbitrary stacks; if it does, you're losing (see the IO comment above).

https://docs.rs/zeroize/1.8.1/zeroize/ has been effective for some users: it's helped black-box tests searching for key material no longer find it. There are also some docs there on how to avoid common pitfalls, plus links to ongoing language-level discussions on the remaining, more complex register-level issues.
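For anyone unfamiliar with the crate, a minimal usage sketch based on its documented `Zeroize`/`Zeroizing` API (assumes `zeroize = "1"` in Cargo.toml):

    use zeroize::{Zeroize, Zeroizing};

    fn main() {
        // Explicit wipe: the crate is designed so these stores aren't
        // optimized out.
        let mut key = vec![0x42u8; 32];
        key.zeroize();

        // Or wrap the secret so it is wiped automatically when dropped.
        let password = Zeroizing::new(String::from("hunter2"));
        drop(password);
    }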


Yeah I meant a global allocator. You would wipe anything that was allocated while executing a "secure" function.
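A minimal sketch of that idea, ignoring the hard part (tracking which allocations happened inside the "secure" function); this version just wipes everything on free:

    use std::alloc::{GlobalAlloc, Layout, System};

    struct ZeroingAlloc;

    unsafe impl GlobalAlloc for ZeroingAlloc {
        unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
            unsafe { System.alloc(layout) }
        }

        unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
            unsafe {
                // Volatile so the wipe isn't treated as a dead store.
                for i in 0..layout.size() {
                    ptr.add(i).write_volatile(0);
                }
                System.dealloc(ptr, layout);
            }
        }
    }

    #[global_allocator]
    static ALLOC: ZeroingAlloc = ZeroingAlloc;

    fn main() {
        // Every heap allocation in the program is now zeroed on free.
        let secret = vec![1u8, 2, 3];
        drop(secret);
    }

Restricting the wipe to "allocated during a secure call" would need extra bookkeeping (e.g. a thread-local flag plus a set of tracked pointers), which is where the questions above come back in.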


This feels like a misrepresentation of the features that actually matter for memory safety. Automatically freeing locals and bounds checking are unquestionably good, but that's only the very beginning.

The real problems start when you need to manage memory lifetimes across the whole program, not locally. Can you return `UniquePtr` from a function? Can you store a copy of `SharedPtr` somewhere without accidentally forgetting to increment the refcount? Who is responsible for managing the lifetimes of elements in intrusive linked lists? How do you know whether a method consumes a pointer argument or stores a copy to it somewhere?

I appreciate trying to write safer software, but we've always told people `#define xfree(p) do { free(p); p = NULL; } while (0)` is a bad pattern, and this post really feels like more of the same thing.


> Can you return `UniquePtr` from a function?

Yes: you can return structures by value in C (and also pass them by value).

> Can you store a copy of `SharedPtr` somewhere without accidentally forgetting to increment the refcount?

No, this you can't do.


> we've always told people `#define xfree(p) do { free(p); p = NULL; } while (0)` is a bad pattern

Have we? Why?


Thank you :)


I meant polyfilled coroutines used by other JVM languages, like Kotlin. When you compile a coroutine to a state machine, yielding has to return from the state-machine method, so any monitor entered inside it would have to stay held across that return; but the JVM does not support unbalanced monitorenter/monitorexit, although it obviously does support unbalanced locking operations with normal mutexes (e.g. ReentrantLock).


No it shouldn't. The function you're talking about is typically called T(N), for "time". The problem is that you can't write T(N) = N^(1/3) because it's not exactly N^(1/3) -- for one thing, it's approximate up to a constant factor, and for another thing, it's only an upper bound. Big-O solves both of these issues: T(N) = O(N^(1/3)) means that the function T(N) grows at most as fast as N^(1/3) (i.e.: forms a relationship between the two functions T(N) and N^(1/3)). The "T(N) =" is often silent, since it's clear when we're talking about time, so at the end you just get O(N^(1/3)).
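Spelled out, the standard definition being used here (for the upper-bound case):

    T(N) = O\!\left(N^{1/3}\right)
    \;\iff\;
    \exists\, C > 0,\ \exists\, N_0 \ \text{such that}\ \forall N \ge N_0:\quad
    T(N) \le C \cdot N^{1/3}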


Signals.


That's why the old advice was not to use signals and threads together, if you can avoid it.


It's not about static contracts at all; it's about keeping the performance of high-level APIs high. It's all just about templates and generics, as far as I'm aware -- the same problem that plagues C++, except that it's worse in Rust because it's more ergonomic to expose generics in public library APIs in Rust than in C++. Well, and also the trait solver might be quite slow, but again, that has nothing to do with memory safety.

