I find myself increasingly doing this in Rust:

    let lines = {
        let mut ret = vec![];
        // Open file
        // Read into ret
        ret
    };
    let processed_data = {
        let mut ret = vec![];
        // Open another file
        // Use lines
        // Construct ret
        ret
    };
    // ...

You get the best of both worlds this way: scoped, meaningful names, as with sub-functions, plus the continuity of a single long function.

"Everything is an expression" is an underrated feature of Rust.


In Java (and probably other languages) you can also use braces to limit variable scope. E.g.:

  List<String> lines;
  {
    int whatever;
    lines = new ArrayList<>();
    // modify lines
  }

  List<Object> processed_data;
  {
    int whatever;
    processed_data = new ArrayList<>();
    // modify processed_data, using lines
  }


In Go something similar to this is actually more common because of `defer`, e.g.:

    if err := func() error {
      f, err := os.Open(filename)
      if err != nil {
        return err
      }
      defer f.Close()
      // work with f
      return nil
    }(); err != nil {
      return err
    }


I do this a lot in unit tests where a pattern is repeated.


No. Software evolves, and there's value in knowing your assumptions still hold after a change or refactor.


A unit test is the worst way to capture that, because you've removed all the parts that evolve.


To echo the original comment, I already know half my assumptions don't hold anymore, and I don't care about them. Unit tests are just more code to maintain. It's like vastly expanding the requirements, unnecessarily.


I don't understand that but thank you for your opinion.


I've been using a program I wrote for 18 years now. The code today is not the same as the code 18 years back. I have slowly modified the code as ideas for new features come up, old features are removed because they aren't used anymore, and foundational code is rewritten as I find better implementations (or the previous implementation was a mistake [1][2]). Making a sweeping change (like re-implementing some core code) is dangerous because of the threat of new bugs, which is why tests are important (although I tend to prefer integration tests over unit tests).

[1] I used to have an error logging mechanism in place that would record where in the code the error happened (in addition to other information) and how it propagated up through the code.

While it wasn't that hard to use, per se, it was bothersome when I had to add an error (each error had a unique ID and because of language issues and tooling, that was a manual process). And it really never paid back its implementation cost in terms of reporting, so I finally ripped it out.

[2] I had my own version of C's FILE* [3] that ended up being a horrible abstraction---it was so confusing that I could never remember how to use it, and I wrote the code. I got fed up with it, and ripped that out, replacing it with native IO calls.

[3] For stupid reasons now that I think back on it.


The simple way to say it is that once your code gets too complicated to keep all the execution paths in your head (sooner than you think), tests are what enable you to make sure you didn't forget an assumption you made about how it should work.

Do you like going back to manually verify things still work after you make a foundational change? Or do you just trust yourself to know you've not impacted anything negatively? Do you honestly believe that's a good use of your brain power even if true? Do you enjoy being married to that code? Because that's exactly what happens, as nobody but you can work on it. That strategy works out well for employees to become key men and get big retention bonuses, so there is some merit to your approach :-)

Not to mention the team impact of cowboy coding. It's a selfish approach to software development, often undertaken by bully developers who say things like "use the source" as a way to make you feel dumb for not wanting to spend all day trying to untangle their likely insane code. All because they were too lazy to use a little empathy and be a good teammate.

To recap: Tests document and improve your design, verify functionality, provide leverage to accelerate development, and reduce "bus factor" risk. I don't care what a small sample data set says to the contrary; there's really little debate that, when wielded properly, these tools and techniques improve software on multiple dimensions that go beyond the software itself and heavily impact the business.

20+ years of product development experience across lots of teams and products informs this opinion. Let's see the data on how your business grinds to a halt when these tools aren't used, the code base gets increasingly large and complex, and people leave. Tell me how you don't need those automated tests as you approach technical-debt bankruptcy and all your key people have left. "Use the source" is a terrible response to that problem.

Rewrites can be fun though so maybe that's the answer. It sure worked out well for Netscape. ;-)


Unit tests are a poor way to validate code behavior. 30+ years of development has taught me that validation tests within the code itself are far better.

1) They run whenever the product runs. Good logging code tells you when a check fails, even on customers' devices.

2) By being part of the production code, they are far easier to keep up to date as the code changes even through refactoring.

3) They don’t force you to change anything about how you code.
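Roughly the kind of check I mean, sketched in Swift (the function, values, and log message here are made up for illustration):

    import Foundation

    // The validation lives in the production code and runs on every call,
    // logging failures even in the field.
    func applyDiscount(_ percent: Double, to price: Double) -> Double {
        guard (0...100).contains(percent), price >= 0 else {
            NSLog("applyDiscount: invalid input percent=%f price=%f", percent, price)
            preconditionFailure("applyDiscount called with invalid arguments")
        }
        return price * (1 - percent / 100)
    }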


Agreed. I like to validate a function's input and throw an exception if the check fails. It catches most regressions with a nice error message.


Not sure I understand. Can you give an example?


Why not just say that unit tests are a management methodology for mixed-skill teams? There's no need to invoke 'bully devs' just because inexperienced or un-conscientious developers exist.


Being proud of your lack of understanding is quite sad.


"C calls into you" means you already have a C code base and now need to call your C# functions.


Right, C# is OK when C code calls into it.

https://stackoverflow.com/a/5235549/126995


One neat trick with .NET on Windows is that you can actually export static methods in assemblies as unmanaged entry points. In other words, things can LoadLibrary/GetProcAddress them and invoke them as native.

C# doesn't support this out of the box, but it can be easily done by post-processing the generated assembly. There's a NuGet package for that.

https://www.nuget.org/packages/UnmanagedExports

I'm not sure if any of that works on other platforms, or with .NET Core. Probably not.


I once tried to use that trick, for NVIDIA Optimus integration.

Didn't work, because that recompilation step broke the debugger and invalidated the .PDB debug symbols.


To be clear, I’m not saying it’s not possible. I’m saying that one less runtime is an advantage.


Could you please add an example of using cargo instead of calling rustc directly? How do I configure cargo.toml?


Here's hello world: https://github.com/steveklabnik/semver.crates.io/commit/dc3b...

(I wanted to try to port semver parsing to the web today, but I hit an LLVM assertion, so it'll have to wait until we can fix that issue. This stuff is still very raw!)


What happened to impl Trait? I was quite excited to see it under 1.21 in the milestone predictions thread, but it has since been moved to "horizon".

https://internals.rust-lang.org/t/rust-release-milestone-pre...


To explain more besides Steve's commit: there has been another RFC that significantly expands the scope of impl Trait. It was accepted, but it also had the effect of punting the stabilisation of the feature a bit until things settle down. I don't know if the "conservative" version of impl Trait is going to land first anyway, but I find it reassuring that people aren't rushing things.

(For reference: the original impl Trait: https://github.com/rust-lang/rfcs/pull/1522; the refinement, readying it for stabilisation: https://github.com/rust-lang/rfcs/pull/1951; the latest one, which gives it more expressive power: https://github.com/rust-lang/rfcs/pull/2071)


The milestone prediction thread is just that: a prediction. It's still very much coming, just not yet. It won't be in 1.22 either, as that went into beta today.

https://github.com/rust-lang/rust/issues/44721 is the issue you want to track. There's been a lot of chatter in recent days.


They implemented what looks like the Rust ownership model: SE-0176 Enforce Exclusive Access to Memory (https://github.com/apple/swift-evolution/blob/master/proposa...). But I'm having a hard time understanding the proposal; can anyone shed some light on it?


You can think of inout parameters in Swift as something analogous to a mutable borrow in Rust. Until Swift 4 we allowed overlapping inout access, for example:

    var counter = 0
    func foo(x: inout Int) {
      x += 1
      print(counter)
    }
    foo(x: &counter)

Note how 'counter' is read by 'foo(x:)' during an inout access of the same value. This is now prohibited by Swift 4, using a combination of static and dynamic checks.

This fixes some undefined behavior and will also enable more aggressive compiler optimizations to be added in the future.
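For instance, here's the sort of overlap the checks catch, and why swapAt(_:_:) was added to the standard library (a sketch):

    func swapValues(_ a: inout Int, _ b: inout Int) {
        let tmp = a
        a = b
        b = tmp
    }

    var numbers = [1, 2, 3]
    // swapValues(&numbers[0], &numbers[0])  // two overlapping inout accesses
    //                                       // to the same element: exclusivity violation
    numbers.swapAt(0, 0)                     // fine: the overlap is handled internally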


Isn't the mutable borrow ended after the x += 1 line, leaving you free to read the contents of the pointer?

I don't know Swift at all, but from their document on swapAt() it looks like they are trying to prevent calls like fn(&p, &p) where func fn(a: inout Type, b: inout Type).


> Isn't the mutable borrow ended after the end of the x += 1 line?

No, see Mike's comment here.


> Note how 'counter' is read by 'foo(x:)' during an inout access of the same value

It's not clear to me in your example why reading the value of counter after mutating it is bad; why is this now prohibited?


Swift specifies inout parameters as copying the value that's passed in, giving the copy to the callee as a mutable value, and then writing back the value to the original storage after the function returns. & is not an "address of" operator.

Of course, it would be inefficient to do this all the time, so Swift will optimize copy-then-writeback to just passing a pointer to the original storage whenever it can. But this is an optimization operating under the "as if" rule: as long as it works as if it does a copy-then-writeback, the compiler can make it actually do whatever it wants.

If the example code were legal, then it would have to print `0`, because the writeback to `counter` doesn't happen until the end of the function. That means the compiler couldn't just pass in a pointer to `counter`, but would have to actually go through the copy-then-writeback procedure it's supposed to do, so you'd lose out on optimizations.

Instead, Swift makes it illegal. You can't access a value while this call is happening. That allows the language semantics to coexist with optimizations.
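To make the model concrete, here's the conceptual expansion of the call above (a sketch of the semantics, not what the compiler literally emits):

    var counter = 0

    // foo(x: &counter) behaves as if it were:
    var temp = counter   // copy the value in
    temp += 1            // the function body mutates the copy
    // any read of `counter` here would still see 0
    counter = temp       // write the copy back when the call returns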


What does & do in Swift then? And what's the purpose of even having inout and & if you can't count on the code actually taking an address?


& is just a sigil saying "I acknowledge that I am passing this as an inout parameter, and therefore the value may be modified by the function I am calling." Note that & is only legal on a function parameter. You cannot write, for example, `let b = &a`.

The purpose is to allow for out-parameters. A classic example would be the `+=` operator. (Swift operators are just normal functions with special call syntax.) It takes its first parameter as `inout` so that it can mutate the value.
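Roughly, the shape of it (a sketch with a made-up type, not the standard library's actual declaration):

    struct Counter { var value = 0 }

    // An operator is declared like any other function; the first
    // parameter is inout so the operator can mutate it.
    func += (lhs: inout Counter, rhs: Int) {
        lhs.value = lhs.value + rhs
    }

    var c = Counter()
    c += 3   // sugar for calling the function above with c passed inout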

Note that inout parameters work with expressions where it would be impossible to take the address. For example, you can use & on a computed property that has a setter. In that case it has to read the initial value, pass that to the function, then write back the new value, because it has no idea where the computed property actually stores the value, if anywhere.
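For example (a made-up type; note that the getter and setter run around the call):

    struct Thermostat {
        var celsius = 20.0
        var fahrenheit: Double {          // computed: no stored address
            get { return celsius * 9 / 5 + 32 }
            set { celsius = (newValue - 32) * 5 / 9 }
        }
    }

    func bump(_ x: inout Double) { x += 1 }

    var t = Thermostat()
    bump(&t.fahrenheit)   // getter runs, the copy is mutated, setter writes back
    print(t.fahrenheit)   // 69.0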

Edit: because I'm obsessive and weird, I made a quick example of this computed property stuff:

http://swift.sandbox.bluemix.net/#/repl/59c284376cbea87f72c4...

Click the play triangle at the bottom to see the output.


> (Swift operators are just normal functions with special call syntax.)

Coming from Haskell and Rust it's nice to see this trend catching on.

Is Swift planning to introduce a distinction between borrows and mutable borrows to the user? From what you describe it seems like right now syntax-wise a borrow and a mutable borrow look the same, and the runtime makes some decision about it.

edit: Or I guess it could be the opposite. Since Swift passes by value always unless the runtime can optimize (right?), you could just not write & and inout and cross your fingers it gets optimized to a borrow rather than a copy?

Stuff like this makes me prefer the explicitness of Rust. It seems like here on the surface it's abstracted from you, but really you need to know the rules anyway or you could get into trouble.


> Coming from Haskell and Rust it's nice to see this trend catching on.

It was already like that in older languages like Lisp, Smalltalk, and CLU; C++ just did it in a different way.


What's the use of a non-mutable borrow? Is it just for speed, to avoid copying a large structure? Large structures are rare in Swift and probably not important to optimize. (Value types like String and Array are actually just one reference under the hood, and that's all that gets "copied" when you pass those by value.)


It also lets one avoid reference-counting traffic, which can be significant for values that contain multiple ARC/COW things (such as strings and arrays), and is semantically critical with move-only (or "unique ownership", per the ownership manifesto[1]) types.

[1]: https://github.com/apple/swift/blob/master/docs/OwnershipMan...


Unique ownership would be really nifty, and it makes sense that you'd definitely need pass by reference for it. Thanks for explaining!


I don't know Swift, but non-mutable borrows are useful in a large number of languages. Are you sure a String is represented as a single reference? In Rust a String has a reference to the actual contents on the heap, a length, and a capacity. So that's a bit heavier than just a single reference, same thing with Vec. Rust's semantics are similar in that if you pass a Vec it will 'move' those 3 things, still, passing a string or vector slice (an immutable borrow) is faster, and moves less data, because it only copies a single reference.

I find it hard to believe that 'large structures are rare in Swift'. People don't make structs with multiple fields? You never want other structs to hold one or many of those? These are cases where non-mutable borrows are useful. I understand that in Swift this is probably abstracted away from you and done by the runtime (or compiler) if it can, but that doesn't mean non-mutable borrows aren't useful; you just probably don't see them.

Of course, that's just a guess, I don't know Swift.


You're right, String has three fields. It's Array that's just a single reference. Is the overhead of passing three fields as a parameter so high that passing a reference is faster? Pointer chasing isn't free either, after all. I can certainly imagine scenarios where that sort of microoptimization pays off, but it seems like it would be rare.

As far as large structures go, I'm thinking "large" like hundreds of fields. Any time I've seen people concerned about large structures, they misunderstand the value-typedness of things like String and Array and are worried about the contents of those things, which isn't really part of the size of the struct itself. But that's just what I've seen.
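For what it's worth, you can check the sizes directly (a sketch; the numbers assume a 64-bit platform):

    print(MemoryLayout<[Int]>.size)    // 8: an Array value is a single reference
    print(MemoryLayout<String>.size)   // larger: String carries its fields inline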


From the documentation, Array in Swift has the same behaviour as Rust's Vec (it's growable; it allocates double the capacity after reaching max length). I'd find it pretty odd if it didn't share the same three-word layout as String. Also, how would it quickly know its length if it didn't also store the length in the same structure as the pointer to its heap location? You'd have to chase two pointers just to get the length.

I'd guess that Swift also has something analogous to an array slice, which would be a (possibly) immutable borrow to a chunk of array data on the heap. This also happens to be a good use case for borrows.

From the other comment here it seems like Swift is pursuing an ownership model similar to Rust's, in which case immutable borrows will become more important when you think about struct contents. You can only have a single owner, but you can specify many borrowers. This kind of thing is important when you have an array or vector of values; often you don't want those values to have a single owner, but you do want them to be populated or referenced from somewhere else.

Anyway, don't dismiss the concept out of hand. Immutable borrows definitely have their uses, whether it's made explicit to you or not in Swift is another thing entirely.


The array capacity and length are stored inline before the contents. So you chase one pointer to get the capacity, length, or something stored in the array.

Swift does have ArraySlice, but I don't get how borrows factor into that. Seems vaguely similar in concept, except ArraySlice exists to represent a subset of the original array, not just so you can pass arrays by reference. Since arrays are already passed by reference under the hood, that wouldn't really be useful.

I'm not dismissing the concept out of hand, so I'm not sure why you're warning me about that....


Still new to Swift, but I believe & is explicit syntax to make clear in the code that the function being called mutates the argument, and thus the variable passed into that function could be mutated. It must be used wherever an argument is inout. It makes mutation explicit both in the function declaration and in the function call. Which is nice!

It's good for code readability but also prevents accidentally passing a variable to a function that could mutate it when you weren't expecting that, and vice-versa.


Values are not guaranteed to even have an address, IIRC.


Thanks Mike, very clear.


Note: I like to read about Rust but don't work with it seriously, and don't follow Swift at all. Corrections welcome. That said, these seemed like the key passages:

"Swift has always considered read/write and write/write races on the same variable to be undefined behavior. It is the programmer's responsibility to avoid such races in their code by using appropriate thread-safe programming techniques."

"The assumptions we want to make about value types depend on having unique access to the variable holding the value; there's no way to make a similar assumption about reference types without knowing that we have a unique reference to the object, which would radically change the programming model of classes and make them unacceptable for the concurrent patterns described above."

Sounds like a system in the vein of Rust but more limited, with more runtime checks and no lifetime parameters, falling back to "programmer's responsibility" when things get hard. The last paragraph makes it sound like one of the motivations is enabling specific categories of optimizations, as opposed to eliminating races at the language level.

One of my biggest questions as a reader is how a language like C handles these cases that Swift can't handle without these guarantees. Is this a move to get faster-than-C performance? Does C do these optimizations unsafely? Is there some other characteristic of Swift that makes this harder than C? Closures get a lot of focus in the article...


The other responses to your comment are correct: C generally can't do those optimizations unless you manually write `restrict`, and having to assume aliasing can hinder optimization. But to complete the picture:

> Is there some other characteristic of Swift that makes this harder than C?

Yes:

1. Swift doesn't have pointers.

Instead, you have a lot of copying of value types, and the compiler has to do its best to elide those copies where it can. For instance, at one point the document mentions:

> For example, the Array type has an optimization in its subscript operator which allows callers to directly access the storage of array elements.

In C, C++, or Rust, you can "directly access the storage" without relying on any optimizations: just write &array[i] and you get a pointer to it. The downsides are (a) more complicated semantics and (b) the problem of what happens if array is deallocated/resized while you have a pointer to it. In C and C++, this results in memory unsafety; in Rust, the borrow checker statically rules it out at the cost of somewhat cumbersome restrictions on code.

2. Swift guarantees memory safety; C and C++ don't.

This goes beyond pointers. For instance, some of the examples in the document talk about potentially unsafe behavior if a collection is mutated while it's being iterated over. In Swift, the implementation has to watch out for this case and behave correctly in spite of it. In C++, if you, say, append to a std::vector while holding an iterator to it, further use of the iterator is specified as undefined behavior; the implementation can just assume you won't do that, and woe to you if you do. (In Rust, see above about the borrow checker. Iterator invalidation is in fact one of the most common examples Rust evangelists use to demonstrate that C++ is unsafe, even when using 'modern C++' style.)
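For contrast, the equivalent mutation-during-iteration is well-defined in Swift, thanks to value semantics (a sketch):

    var items = [1, 2, 3]
    for x in items {            // the iterator holds its own copy of the array value
        items.append(x * 10)    // copy-on-write makes this safe, not UB
    }
    print(items)                // [1, 2, 3, 10, 20, 30]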


> 1. Swift doesn't have pointers.

Sure it does:

    func modify(_ x: UnsafeMutablePointer<Int>) {
        x.pointee = 12
    }

    func main() {
        var x = 23
        print("Before \(x)\n")
        modify(&x)
        print("After \(x)\n")
    }



> Swift guarantees memory safety

But memory leaks are quite easy to create with reference counting, which is why Swift has extra syntax (weak and unowned references) to avoid strong reference cycles. It can take skill to understand when to use those techniques, and the compiler doesn't always find these problems, so there really isn't the guarantee you mentioned.
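The classic case is a strong reference cycle, sketched here (made-up classes):

    class Parent {
        var child: Child?
        deinit { print("Parent deallocated") }
    }

    class Child {
        var parent: Parent?          // strong back-reference: creates a cycle
        // weak var parent: Parent?  // marking it weak breaks the cycle
        deinit { print("Child deallocated") }
    }

    var p: Parent? = Parent()
    p!.child = Child()
    p!.child!.parent = p
    p = nil   // with the strong back-reference, neither deinit ever runs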


Memory leaks are not the same as memory safety problems.


Those Rust evangelists are only partially correct. It's the STL that's unsafe, not the language itself.

Iterators could be implemented in C++ in a safer way with some performance loss, but it doesn't seem to be a priority for anyone except the safercpp guy who posts here every now and then. STL implementations can enable iterator validation in a special debug mode.


That is incorrect.

The language includes memory unsafe constructs without marking them in any way, since it must be compatible with C.


We're discussing iterators here, and the STL iterators implemented as class templates can't even be compatible with C.

To clarify: safe containers, iterators and algorithms can be designed, but they don't seem to be a priority of the C++ community. Personally I'm quite scared of accidentally passing the wrong iterator to some function, but OTOH I can't recall it ever happening. I don't use the debug STL either, haven't needed it.

The examples that pcwalton keeps bringing up seem artificial to me. It's true that you can't have perfect safety in C++, but with some effort and custom libraries, many errors can be caught at compile or run-time. The advantage of Rust is that it's safe by default, not necessarily that there's a major safety difference between quality C++ and quality Rust.


You’ve probably heard this before, but the security angle is important. “I haven’t had these problems in my code” really means “I haven’t triggered these problems in my code”… that is, unless you’ve had a security code audit done. Testing isn’t enough: even well-tested codebases can and do have vulnerabilities. In practice, they’re usually triggered by input that’s so nonsensical or insane from a semantic perspective, not only would it never happen in practice in ‘legitimate’ use, the code author doesn’t even think to test it. For a simple example, if some binary data has a count field that’s usually 1 or 2 or 10, what happens if someone passes 0x40000000 or -1?

As a security researcher myself, I think it’s actually easier to audit code with less knowledge of how the design is supposed to work, up to a point, because it leaves my mind more open. Rather than making assumptions about how different pieces are supposed to fit together, I have to look it up, and as part of looking it up I might find that the author’s assumptions were subtly wrong… For this reason, it’s really hard to audit your own code, at least in my experience.

I mean, you can definitely keep reviewing it, building more and more assurance that it’s correct, but if your codebase is large enough, there may well be ‘that one thing’ you just never thought of.

I’m not actually sure how frequent iterator invalidation is as a source of vulnerabilities; I don’t think I’ve ever found one of that type myself. However, use-after-frees in general (of which iterator invalidation is a special case) are very common, usually with raw pointers. In theory you can prevent many use-after-frees by eschewing raw pointers altogether in favor of shared_ptr, but nobody actually does that – that’s important, because there’s a big difference between something being theoretically possible in a language and it being done in practice. (After all, modern C++ recommendations generally prefer unique_ptr or nothing, not shared_ptr!). And even if you do that, you can’t make the `this` pointer anything but raw, and same for the implicit raw pointer behind accesses to captured-by-reference variables in lambdas.

You can definitely greatly reduce the prevalence of vulnerabilities with both best practices for memory handling and just general code quality (that helps a lot). But if you can actually do that well enough - at scale - to get to no “major safety difference”, well, I haven’t seen the evidence for it, in the form of large frequently-targeted codebases with ‘zero memory safety bugs’ records. Maybe it’s just that C++’s backwards compatibility encourages people to build on old codebases rather than start new ones. Maybe. It’s certainly part of the story. But for now, I’m pretty sure it’s not the whole story.


C and C++ don't have a culture of safety; they have one of performance.

C++ code could be written significantly more safely with a performance loss, e.g. index checking at run-time, iterator validity checking, exclusive smart-pointer usage with null checking, etc. That, together with code reviews and static and dynamic analysis, should IMO lead to comparable safety. That's what I'd do.

However, there doesn't seem to be a rush in that direction. My guess is that there won't be a rush to switch to Rust either.

Is the security angle so important that it should be handled through education and better tooling? Or only important enough to do some code audits and pen testing?


> STLs can enable iterator validation in a special debug mode.

And they do actually, in the MSVC debug mode and with libstdc++'s -D_GLIBCXX_DEBUG. But nobody ever wants to use them.


C compilers must assume that a call through a function pointer (that is, a function passed as an argument, or stored as a property of an object) may write to any global variable.

C compilers must also assume that any two pointers to the same type may alias (refer to the same object). The programmer can assert to the compiler that a pointer does not alias any others used in the same scope by declaring it with the `restrict` keyword.

For most functions this won't have much effect on the generated code. Writing equivalent functions to the ones in the swift-evolution doc in C, both with and without `restrict` everywhere possible, it looks like `restrict` only has an effect on the generated code for `increaseByGlobal`: https://godbolt.org/g/W8s3BA


> One of my biggest questions as a reader is how a language like C handles these cases that Swift can't handle without these guarantees.

C doesn't address these issues at all as far as I know.


This is the first step toward adopting more Rust-like memory safety, but not at the expense of productivity.

Basically Swift will keep using reference counting as its GC algorithm, but for high-performance situations it will be possible to have a bit more fine-grained control over ownership.

However, they want to avoid any design that might result in a "fighting with the borrow checker" feeling.

Some info from WWDC 2017,

https://developer.apple.com/videos/play/wwdc2017/402/

There is also a transcript.



Here's the 'Ownership Manifesto' that tries to clarify some of the differences between the ownership models of Rust and Swift [1]. The main point raised in that document is how 'shared values' are being implemented in Swift in a less strict way compared to Rust.

[1] https://github.com/apple/swift/blob/master/docs/OwnershipMan...


Your email is not visible in your profile


john . titus @ gmail


I dislike the rigidity of the original Pomodoro technique, so I made a flexible one without the fixed time slots. You only have a minimum working duration; beyond that, work for however long you like. I found that this way it doesn't break my flow.

https://www.niftytools.online/flexpomodoro/


How many emails did you send to get your first paying customer? What was the response rate of your emails?


I didn't keep track, but the response rate wasn't great.


Because you can then catch errors at compile time instead of run time.

