They are related but fundamentally different. It is a vital semantic difference (one that shapes the programming model itself), since destructors (C++ style) are synchronous and deterministic, while finalizers (Java style) are asynchronous and non-deterministic.
I grasp the entirety of why people differentiate "finalizers" from "destructors", but in my opinion, the practical differences in their application are not the result of the concept itself being fundamentally different, it's a result of object lifetimes being different between GC'd and non-GC'd languages. In my opinion, the concept itself is pretty close to identical: you want to clean up resources at the end of the lifetime of an object. And yes, it's practically a mess, because the object lifetime ends at a non-deterministic point in the future and usually not even necessarily on the same thread. Being a big fan of Go and having had to occasionally make use of finalizers for lack of a better option in some limited scenarios, I really, genuinely do grasp this, but I dispute that it has anything to do with whether or not a language has try...finally, any more than it has anything to do with a language having any other convenient structured control flow measure, like pattern matching or else blocks on for loops.
(I do also realize that finalizer behavior in some languages is weird, for performance reasons and sometimes just legacy reasons. Go is one such language.)
But I think we've both hit a level of digression that wouldn't be helpful even if we were disagreeing about the facts (which I don't really think we are. I think this is entirely about frames of reference rather than a material dispute over the facts.) Forgetting whether finalizers are truly a form of destructor or not, the point I was trying to make really was that I don't view RAII/scoped destructors as being equivalent or alternatives to things like `finally` blocks or `defer` statements. In C++ you basically use scope guards for everything because they are the only option, but I think C++ would still ultimately benefit from at least having `finally`. You can kind of emulate it, but not 100%: `finally` blocks are outside of the scope of the exception and can throw a new exception, unlike a destructor in an exception frame. Having more options in structured control flow can sometimes add complexity for little gain, but `finally` can genuinely be useful sometimes. (Though I ultimately still prefer errors being passed around as value types, like with std::expected, rather than exception handling blocks.)
I believe the reason we don't have languages (that I can think of) demonstrating this exact combination is that try/catch exception blocks fell out of favor at the same time that new compiled/"low-level" programming languages started picking up steam. A lot of new programming language designs that do use explicit lifetimes (Zig, Rust, etc.) simply don't have try...catch style exception blocks in the first place, if they even have anything that resembles exceptions. Even a lot of new garbage-collected languages don't use try...catch exceptions, like, of course, Go.
Now, I could've made a better attempt at conveying my position earlier in this thread, but I'm gonna be honest: once I realized I struck a nerve with some people, I became pretty unmotivated to bother. Sometimes I'm just not in the mood to try to win over the crowd and would rather just let them bury me, at least until the thread dies down a bit.
> not the result of the concept itself being fundamentally different, it's a result of object lifetimes being different between GC'd and non-GC'd languages.
This is the fundamental misunderstanding. The RAII ctor/dtor pattern is a very general mechanism not limited to just managing object (in the OO sense) lifetimes. That is why you don't need finally/defer etc. in C++. You can get all of these policies using just this one mechanism.
The correct way to think about it is as scoped entry and exit function calls, i.e. a scope guard. For example, every C++ programmer writes a LogTrace class to log function (or other scope) entry and exit messages. This is purely exploiting the feature to make function calls, with nothing whatsoever to do with objects (in the sense of managing state) at all. Raymond gives a good example when he points to how wil::scope_exit takes a user-defined lambda function to be run by a dummy object's dtor when it goes out of scope.
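To make the LogTrace idiom concrete, here is a minimal sketch (class and function names are just illustrative):

```cpp
#include <cstdio>

// The LogTrace idiom: the ctor/dtor pair is exploited purely to run code
// at scope entry and exit; the object itself manages no state.
class LogTrace {
    const char* name_;
public:
    explicit LogTrace(const char* name) : name_(name) {
        std::fprintf(stderr, "enter %s\n", name_);
    }
    ~LogTrace() { std::fprintf(stderr, "exit %s\n", name_); }
};

void parse() {
    LogTrace trace("parse");  // logs "enter parse" here, "exit parse" on any exit path
    // ... actual work ...
}
```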
> I don't view RAII/scoped destructors as being equivalent or alternatives to things like `finally` blocks or `defer` statements. In C++ you basically use scope guards for everything because they are the only option, but I think C++ would still ultimately benefit from at least having `finally`.
Scope guards using the ctor/dtor mechanism are enough to implement all the policies like finally/defer etc. That was the point of the article.
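For example, a stripped-down sketch in the spirit of wil::scope_exit (not the real implementation):

```cpp
#include <cstdio>
#include <utility>

// A dummy object whose only job is to run a user-supplied callable in its
// dtor when it goes out of scope. This one mechanism yields finally/defer-like
// policies without any dedicated language construct.
template <typename F>
class ScopeExit {
    F fn_;
public:
    explicit ScopeExit(F fn) : fn_(std::move(fn)) {}
    ~ScopeExit() { fn_(); }
    ScopeExit(const ScopeExit&) = delete;
    ScopeExit& operator=(const ScopeExit&) = delete;
};

void copy_file() {
    std::FILE* f = std::fopen("in.txt", "rb");
    if (!f) return;
    ScopeExit close{[f] { std::fclose(f); }};  // runs on return or exception
    // ... read from f ...
}
```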
> You can kind of emulate it, but not 100%: `finally` blocks are outside of the scope of the exception and can throw a new exception, unlike a destructor in an exception frame. Having more options in structured control flow can sometimes add complexity for little gain, but `finally` can genuinely be useful sometimes.
Exception handling is always tricky to implement/use in any language since there are multiple models (i.e. Termination vs. Resumption) and a language designer is often constrained in his choice. Wikipedia has a very nice explanation - https://en.wikipedia.org/wiki/Exception_handling_(programmin... In particular, see the Eiffel contract approach mentioned in it and then the detailed rationale in Bertrand Meyer's OOSC2 book - https://bertrandmeyer.com/OOSC2/
> This is the fundamental misunderstanding. The RAII ctor/dtor pattern is a very general mechanism not limited to just managing object (in the OO sense) lifetimes. That is why you don't need finally/defer etc. in C++. You can get all of these policies using just this one mechanism.
> The correct way to think about it is as scoped entry and exit function calls, i.e. a scope guard. For example, every C++ programmer writes a LogTrace class to log function (or other scope) entry and exit messages. This is purely exploiting the feature to make function calls, with nothing whatsoever to do with objects (in the sense of managing state) at all. Raymond gives a good example when he points to how wil::scope_exit takes a user-defined lambda function to be run by a dummy object's dtor when it goes out of scope.
Hahaha. It is certainly not a fundamental misunderstanding.
All scope guards are built off of stack-allocated object lifetimes, specifically the scope guard itself. That is not "my opinion" or "my perspective", it is the reality. Try constructing a scope guard that isn't based off of the lifetime of an object on the stack. You can't do this, because the fact that it is tied to an object's lifespan is the point. One of the few points in C++'s favor is the fact that this relatively elegant mechanism can do so much.
> Scope guards using ctor/dtor mechanism is enough to implement all the policies like finally/defer etc. That was the point of the article.
You can kind of implement Go-style defer statements. Since Go-style defer statements run at the end of the enclosing function rather than the enclosing scope, you'd probably want a scope guard that you instantiate at the beginning of a function with a LIFO queue of std::functions that you can push to throughout the function. It works, though it's not particularly elegant to use.
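Roughly what I have in mind, as a sketch (the `Deferred` name and its interface are made up):

```cpp
#include <cstdio>
#include <functional>
#include <vector>

// Go-style defer emulation: one guard per function, callbacks run LIFO at
// function exit (normal return or exception), not at the end of each inner scope.
class Deferred {
    std::vector<std::function<void()>> fns_;
public:
    void push(std::function<void()> fn) { fns_.push_back(std::move(fn)); }
    ~Deferred() {
        for (auto it = fns_.rbegin(); it != fns_.rend(); ++it) (*it)();
    }
};

void process() {
    Deferred defer;  // instantiated once at the top of the function
    std::FILE* f = std::fopen("data.txt", "rb");
    if (!f) return;
    defer.push([f] { std::fclose(f); });  // registered mid-function, runs at function exit
    // ... more work, more defer.push(...) calls ...
}
```

But can you emulate `finally`? Again, no. FTA: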
> In Java, Python, JavaScript, and C# an exception thrown from a finally block overwrites the original exception, and the original exception is lost. Update: Adam Rosenfield points out that Python 3.2 now saves the original exception as the context of the new exception, but it is still the new exception that is thrown.
> In C++, an exception thrown from a destructor triggers automatic program termination if the destructor is running due to an exception.
C++'s behavior here is actually one of the reasons why I don't like C++ exceptions very much and have spent a lot of my time on -fno-exceptions (among many other reasons).
> The article already points out the main issues (in both non-GC/GC languages) here but it is actually much more nuanced. While it is advised not to throw exceptions from a dtor C++ does give you std::uncaught_exceptions() which one can use for those special times when you must handle/throw exceptions in a dtor. More details at ...
Again, you can't really 100% emulate `finally` behavior using C++ destructors, because you can't throw a new exception from a destructor while another exception is unwinding the stack. `std::uncaught_exceptions()` really has nothing to do with this at all. Choosing not to throw in the destructor is not the same as being able to throw a new exception in the destructor and have it unwind from there. C++ just can't do the latter. You can typically do that in `finally`.
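To illustrate, a minimal sketch; as far as I understand, this terminates rather than unwinding with the new exception:

```cpp
#include <stdexcept>

struct Cleanup {
    // Dtors are implicitly noexcept since C++11; opting out doesn't help here.
    ~Cleanup() noexcept(false) {
        // With "original" already unwinding the stack, this throw doesn't
        // replace it the way `finally` would; std::terminate() is called.
        throw std::runtime_error("from dtor");
    }
};

int main() {
    try {
        Cleanup c;
        throw std::runtime_error("original");  // starts unwinding; ~Cleanup runs next
    } catch (...) {
        // never reached: the program has already terminated
    }
}
```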
When Java introduced `finally` (I do not know if Java was the first language to have it, though it certainly must have been early) it was intended for just resource cleanup, and indeed, I imagine most uses of finally ever were just for closing files, one of the types of resources that you would want to be scoped like that.
However, in my experience the utility of `finally` has actually increased over time. Nowadays there are all kinds of random things you might want to do regardless of whether an exception is thrown. It's usually in the weeds a bit, like adjusting internal state to maintain consistency, but other times it's just handy to drop a log statement or something like that somewhere. Rather than break out a scope guard for these things, most C++ code I see that hits this need just duplicates the logic at the end of both the `try` and the `catch` blocks. I bet if I searched long enough, I could find it in the wild on GitHub search.
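Something like this shape (a made-up but representative example):

```cpp
#include <cstdio>

void do_work() { /* ... may throw ... */ }
void log_done() { std::puts("done"); }
void reset_internal_state() { /* restore invariants */ }

void update() {
    try {
        do_work();
        log_done();               // cleanup written once here...
        reset_internal_state();
    } catch (...) {
        log_done();               // ...and duplicated here, where a single
        reset_internal_state();   // `finally` block would have sufficed
        throw;
    }
}
```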
> All scope guards are built off of stack-allocated object lifetimes, specifically the scope guard itself. That is not "my opinion" or "my perspective", it is the reality. Try constructing a scope guard that isn't based off of the lifetime of an object on the stack. You can't do this, because the fact that it is tied to an object's lifespan is the point. One of the few points in C++'s favor is the fact that this relatively elegant mechanism can do so much.
You are still looking at it backwards. C++ chose to tie user-defined object lifetimes to lexical scopes (for automatic-storage objects defined in that scope) via stack-based creation/deletion because it was built on C's abstract machine model. This necessitated the implicit function calls to the ctor/dtor, which turned out to be a far more general mechanism usable for scope-based control via function calls.
But the lifetime of a user-defined object allocated on the heap is not limited to a lexical scope, and hence the connection between lexical scope and object lifetime does not exist there. However, the ctor/dtor calls are now synchronous with the calls to new/delete.
So you have two things, viz. lexical scope and object lifetime, and they can be connected or not. This is why I insist on disambiguating the two in one's mental model.
Java chose the heap-based object lifetime model for all user-defined types, and thus there is no connection between lexical scope and object lifetimes. It is because of this that Java had to provide the finally block to give some sort of lexical scope control even though it is GC-based. The Java object model is also the reason that finalize in Java is fundamentally different to a dtor in C++, which I had pointed out earlier.
> You can kind of implement Go-style defer statements. Since Go-style defer statements run at the end of the enclosing function rather than the enclosing scope, you'd probably want a scope guard that you instantiate at the beginning of a function with a LIFO queue of std::functions that you can push to throughout the function. It works, though it's not particularly elegant to use.
We started this discussion with your claim that dtors and finalize are essentially the same, which I have refuted comprehensively.
Now you want to discuss finally and its behaviour w.r.t. exception handling. In the absence of exceptions, RAII gives you all of the finally-like behaviour.
In the presence of exceptions:
> C++'s behavior here is actually one of the reasons why I don't like C++ exceptions very much, ... Again, you can't really 100% emulate `finally` behavior using C++ destructors, because you can't throw a new exception from a destructor while another exception is unwinding the stack. `std::uncaught_exceptions()` really has nothing to do with this at all. Choosing not to throw in the destructor is not the same as being able to throw a new exception in the destructor and have it unwind from there. C++ just can't do the latter.
This is again a misunderstanding. I had already pointed you to the Termination vs. Resumption exception handling models, with a particular emphasis on Meyer's contract-based approach to their usage. Now read Andrei Alexandrescu's classic old article Change the Way You Write Exception-Safe Code — Forever - https://erdani.org/publications/cuj-12-2000.php.html
Both C++ and Java use the Termination model, but because the object models of C++ and Java are so very different (C++ has two kinds of object lifetime, viz. lexical scope for automatic objects and program scope for heap-based objects, with no GC, while Java only has program scope for heap-based objects reclaimed by GC) their implementations are necessarily different.
C++ does provide std::nested_exception and its related API (https://en.cppreference.com/w/cpp/error/nested_exception.htm...) to handle the chaining/handling of exceptions in any function. However, the ctor/dtor are special functions because of the behaviour of the object model detailed above. Thus the decision was made not to allow a dtor to throw while an uncaught exception is in flight. Note that this does not mean a dtor cannot throw (though it has been made implicitly noexcept since C++11), only that the programmer needs to take care over when to throw or not. An uncaught exception means there has been a violation of contract, the system is in an undefined state, and hence there is no point in proceeding further.
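A small example of that chaining API in use (the function names are mine):

```cpp
#include <cstdio>
#include <exception>
#include <stdexcept>

void load() { throw std::runtime_error("disk error"); }

void parse_config() {
    try {
        load();
    } catch (...) {
        // Chain the in-flight exception onto a new one instead of losing it.
        std::throw_with_nested(std::runtime_error("could not parse config"));
    }
}

// Walk and print the exception chain, outermost first.
void print_chain(const std::exception& e) {
    std::fprintf(stderr, "%s\n", e.what());
    try { std::rethrow_if_nested(e); }
    catch (const std::exception& inner) { print_chain(inner); }
}

int main() {
    try { parse_config(); }
    catch (const std::exception& e) { print_chain(e); }
}
```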
This is where std::uncaught_exceptions comes in, on which the Stack Overflow article I linked to earlier quotes Herb Sutter:
> A type that wants to know whether its destructor is being run to unwind this object can query uncaught_exceptions in its constructor and store the result, then query uncaught_exceptions again in its destructor; if the result is different, then this destructor is being invoked as part of stack unwinding due to a new exception that was thrown later than the object's construction.
Now the dtor can detect the uncaught exception and do proper logging/processing before exiting cleanly, instead of throwing.
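In code, the pattern Sutter describes looks roughly like this sketch (C++17's std::uncaught_exceptions; the class name is illustrative):

```cpp
#include <cstdio>
#include <exception>

class Transaction {
    int entry_count_ = std::uncaught_exceptions();  // snapshot at construction
public:
    ~Transaction() {
        if (std::uncaught_exceptions() > entry_count_) {
            // Unwinding due to an exception newer than this object:
            // must not throw; roll back / log and return cleanly.
            std::fputs("rolled back\n", stderr);
        } else {
            // Normal scope exit: safe to commit (and even to throw,
            // were the dtor declared noexcept(false)).
            std::fputs("committed\n", stderr);
        }
    }
};
```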
Finally, note also that Java itself has introduced newer constructs like try-with-resources, which should be used instead of try-finally for resources etc.
It is because of all these problems that the finalize method was deprecated in Java 9 and marked "deprecated for removal" (JEP 421) in Java 18. More details at https://stackoverflow.com/questions/56139760/why-is-the-fina... and https://inside.java/2022/01/12/podcast-021/
PS: JEP 421: Deprecate Finalization for Removal - https://openjdk.org/jeps/421 This also details alternative features/techniques to use.