
Interesting concept! Any examples?

True, but most probably you wouldn't be legally allowed to redistribute your driver or software.

Since the article is about open-sourcing the interface and not the software itself, why only when the product goes end of life?

Also, "end of life" is hard to define. Does it mean not being produced, ordered or sold? After how many days, months, etc.?


I guess it depends?

- Software updates no longer available
- Customer support no longer exists
- End of legal (or voluntary) obligations to stock replacement parts and offer repair services for the hardware

> So in my case, I shouldn't dump Windows (and thousand of $$ I've spent on audio software).

If you mean that there's no Linux replacement for your Digital Audio Workstation, then I agree: switching is not for you. But if what worries you is the $$ you have spent, you are just another victim of the sunk cost fallacy. The earlier you realize your mistake, the earlier you can evaluate the options without bias.


I wish to gently but firmly disagree.

The thousands of $$ on audio software are spent because they actually provide the exact output needed. Think of them as a precision-tooled machine part critical to the production flow in a factory. Sure, you can replace it with a cheaper alternative but the cheaper part will likely have rough edges and the errors caused by those rough edges will cascade into your final production output...


The most interesting part is to guess how many of those complaining here about the poor Linux audio ecosystem actually work professionally in the field.

Surge-XT, Vitalium, and/or PlugData or Cardinal can get you so far in the synthesis world that maybe not even a full dedicated lifetime could explore everything you can do with them... Ardour isn't a shining example of modern MIDI-editor features, but if you actually know music theory, it works pretty solidly for writing. The in-line editor makes a lot of sense, just like sheet music can hold an orchestra's information on a single page.


I am not familiar with Digital Audio Workstations, but this seems like a good use case for WinApps.

I'm sure undefined behavior counts as an exception in the meaning intended in TFA. Example:

  int inv(int x) {
    return 1 / x;  /* undefined behavior when x == 0 */
  }


No, it certainly doesn't.


I can confirm that wincompose works like a charm with Windows 11. I map it to the Insert key, whose original purpose still evades me.

I also want to say that once you try the Compose way, you begin laughing at the disproportionate efforts so many people have dedicated to the chimera of finding the perfect layout.


Yep, I use the compose key. I originally used Italian keyboards, but that was very annoying for coding; now I use a US keyboard layout plus the compose key.

My only annoyance is that Symless Synergy for some reason doesn't trigger the Compose key on Windows when using it as the secondary computer.


How do you cope with parentheses being just one key off between US and most European layouts (above 9 and 0 vs. 8 and 9)? I mean, I can retrain my muscle memory to a completely new position, but just one key to the side is unbearable!


It was terrible; that's why I switched to US keyboards. But the biggest problem is shortcuts: all applications assume you have a US keyboard layout, and some shortcuts are impossible on the Italian layout.

And even with the Italian layout, I still have no idea how to type an uppercase accented E (È).

With compose key, that's trivial!
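
For instance, with the stock WinCompose/XCompose sequence tables (custom tables may differ):

  Compose ` E  →  È
  Compose ' e  →  é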


> One might feel that normal keyboards don't have a compose key.

On the other hand, normal keyboards have an Insert key which serves no purpose and can thus be remapped to Compose.


I feel the same way about Caps Lock!


In fact, zero-based indexing has shown some undeniable advantages over one-based. I couldn't explain it better than Dijkstra's famous essay: http://www.cs.utexas.edu/~EWD/ewd08xx/EWD831.PDF
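
To make Dijkstra's point concrete, a minimal Python sketch of the zero-based, half-open convention (the variable names are mine):

  xs = [10, 20, 30, 40, 50]

  # The length of a half-open range is simply upper - lower.
  assert len(xs[1:4]) == 4 - 1

  # Adjacent ranges meet without gap or overlap.
  assert xs[:2] + xs[2:] == xs

  # The empty range needs no awkward bounds.
  assert xs[3:3] == []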


It's fine, I can see the advantages. I just think it's a weird level of blindness to act like 1 indexing is some sort of aberration. It's really not. It's actually quite friendly for new or casual programmers, for one.


I think the objection is not so much blindness as the idea that professional tools should not generally be tailored to the needs of new or casual users at the expense of experienced users.


Is there any actual evidence that new programmers really find this hard? Python is renowned for being beginner friendly and I've never heard of anyone suggesting it was remotely a problem.

There are only a few languages that are purely for beginners (LOGO and BASIC?) so it's a high cost to annoy experienced programmers for something that probably isn't a big deal anyway.


I think the claim might hark back to the days when programming was a new thing and mathematicians, physicists, etc. were the ones most often getting started at it; if they had, by training, gotten used to 1-based indexing in mathematics, it was probably a bit of a pain to adapt (which is why R, MATLAB, etc. use 1-based indexing).

Thus, 1 probably wasn't "easier"; it just adhered to an existing orthodoxy that the "beginners" of the time came from.


> Lua has a crucial feature that Javascript lacks: tail call optimization.

I'm not familiar with Lua, but I expect TCO to be a feature of the compiler, not of the language. Am I wrong?


You’re wrong in the way in which many people are wrong when they hear about a thing called “tail-call optimization”, which is why some people have been trying to get away from the term in favour of “proper tail calls” or something similar, at least as far as R5RS[1]:

> A Scheme implementation is properly tail-recursive if it supports an unbounded number of active tail calls.

The issue here is that, in every language that has a detailed enough specification, there is some provision saying that a program that makes an unbounded number of nested calls at runtime is not legal. Support for proper tail calls means that tail calls (a well-defined subgrammar of the language) do not ever count as nested, which expands the set of legal programs. That’s a language feature, not (merely) a compiler feature.
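
To see why this is a language-level guarantee rather than a mere optimization, here's a minimal sketch in Python, which does not have proper tail calls: the recursive call below is syntactically in tail position, yet CPython still counts it as nested, so the program is illegal beyond the recursion limit. A properly tail-recursive implementation is required to run the equivalent program in constant stack space.

  def countdown(n):
      if n == 0:
          return "done"
      return countdown(n - 1)  # syntactically a tail call

  countdown(10**6)  # RecursionError in CPython: the tail call still counts as nested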

[1] https://standards.scheme.org/corrected-r5rs/r5rs-Z-H-6.html#...


Thank you for the precise answer.

I still think that the language property (or requirement, or behavior as seen from within the language itself) that we're talking about in this case is "unbounded nested calls", and that the language spec doesn't (shouldn't) assume that such a property will be satisfied in a specific way, e.g. by switching the call to a branch, as TCO usually means.


Unbounded nested calls as long as those calls are in tail position, which is a thing that needs to be defined—trivially, as `return EXPR(EXPR...)`, in Lua; while Scheme, being based around expressions, needs a more careful definition, see link above.

Otherwise yes. For instance, Scheme implementations that translate the Scheme program into portable C code (not just into bytecode interpreted by C code) cannot assume that the C compiler will translate C-level tail calls into jumps and thus take special measures to make them work correctly, from trampolines to the very confusingly named “Cheney on the M.T.A.”[1], and people will, colloquially, say those implementations do TCO too. Whether that’s correct usage... I don’t think really matters here, other than to demonstrate why the term “TCO” as encountered in the wild is a confusing one.
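
For the curious, a minimal trampoline sketch in Python (illustrative only; the names are mine, and a real compiler would do this at the level of the generated code): the function returns a thunk instead of calling itself, and a driver loop bounces on the thunks so the stack never grows.

  def trampoline(f, *args):
      r = f(*args)
      while callable(r):  # keep bouncing until a plain value comes back
          r = r()
      return r

  def countdown(n):
      if n == 0:
          return "done"
      return lambda: countdown(n - 1)  # hand back a thunk instead of recursing

  print(trampoline(countdown, 10**6))  # runs in constant stack depth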

[1] https://www.plover.com/misc/hbaker-archive/CheneyMTA.html


Cheney on the MTA is a great paper/algorithm, and I'd like to add (for the benefit of the lucky ten thousand just learning about this) that it's a pun on a great old song: Charlie on the MTA ( https://www.youtube.com/watch?v=MbtkL5_f6-4 ). The joke is that in both cases it will never return, either because the subway fare is too high or because you don't want to keep the call stack around.


Why do you think that?


Because that's a description of the intended behavior, and I reason about a language as an abstraction that allows one to express an expected behavior while ignoring the implementation details.

I know it's not universal: some languages in their infancy lack a formalization and are defined by their reference implementation. But a more theoretical approach has allowed languages like C to thrive for years.


I sort of see what you are getting at but I am still a bit confused:

Suppose I have a program that, depending on its input, runs some number of recursive calls of a function, and two compilers for the language, where compiler A has PTC and compiler B does not. Can I compile the program using both of them, no matter what the actual program is? As in, is the only difference that you won't get a runtime error if you exceed the max stack size?


That is correct, the difference is only visible at runtime. So is the difference between garbage collection (whether tracing or reference counting) and lack thereof: you can write a long-lived C program that calls malloc() throughout its lifetime but never free(), but you’re not going to have a good time executing it. Unless you compile it with Fil-C, in which case it will work (modulo the usual caveats regarding syntactic vs semantic garbage).


I think features of the language can make it much easier (read: possible) for the compiler to recognize when a function is tail call optimizable. Not every recursion will be, so it matters greatly what the actual program is.


It is a feature of the language (with proper tail calls) that a certain class of calls defined in the spec must be TCOd, if you want to put things that way. It’s not just that it’s easier for the compiler to recognize them, it’s that it has to.

(The usual caveats about TCO randomly not working are due to constraints imposed by preexisting ABIs or VMs; if you don’t need to care about those, then the whole thing is quite straightforward.)


I don't think you're wrong per se. This is a "correct" way of thinking of the situation, but it's not the only correct way and it's arguably not the most useful.

A more useful way to understand the situation is that a language's major implementations are more important than the language itself. If the spec of the language says something, but nobody implements it, you can't write code against the spec. And on the flip side, if the major implementations of a language implement a feature that's not in the spec, you can write code that uses that feature.

A minor historical example of this was Python dictionaries. Maybe a decade ago, the Python spec didn't specify that dictionary keys would be retrieved in insertion order, so in theory, implementations of the Python language could do something like:

  >>> abc = {}
  >>> abc['a'] = 1
  >>> abc['b'] = 2
  >>> abc['c'] = 3
  >>> abc.keys()
  dict_keys(['c', 'a', 'b'])
But the CPython implementation did return all the keys in insertion order, and very few people were using anything other than the CPython implementation, so some codebases started depending on the keys being returned in insertion order without even knowing that they were depending on it. You could say that they weren't writing Python, but that seems a bit pedantic to me.

In any case, Python later standardized that as a feature, so now the ambiguity is solved.

It's all very tricky though, because for example, I wrote some code a decade ago that used GCC's compare-and-swap extensions, and at least at that time, it didn't compile on Clang. I think you'd have a stronger argument there that I wasn't writing C--not because what I wrote wasn't standard C, but because the code I wrote didn't compile on the most commonly used C compiler.

The better approach to communication in this case, I think, is to simply use phrases that communicate what you're doing: instead of saying "C", say "ANSI C", "GCC C", "Portable C", etc.--phrases that communicate what implementations of the language you're supporting. Saying you're writing "C" isn't wrong, it's just not communicating a very important detail: which implementations of the compiler can compile your code. I'm much more interested in effectively communicating what compilers can compile a piece of code than in pedantically gatekeeping what's C and what's not.


Python's dicts for many years did not return keys in insertion order (from when Tim Peters improved the hash in, IIRC, 1.5, until Raymond Hettinger improved it further in, IIRC, 3.6).

After the 3.6 change, they were returned in order. And people started relying on that, so at a later stage this became part of the spec.


There actually was a time when Python dictionary keys weren't guaranteed to be in insertion order, and in CPython's implementation the order was indeed not preserved.

You could not reliably depend on that implementation detail until much later, when optimizations were implemented in CPython that just so happened to preserve dictionary key insertion order. Once that was realized, it was PEP'd and made part of the spec.


Are you saying that Lua's TCO is an accidental feature due to the first implementation having it? How accurate is that?


What? No, I'm definitely not saying that.

I'm saying it isn't very useful to argue about whether a feature is a feature of the language or a feature of the implementation, because the language is pretty useless independent of its implementation(s).


If the language spec requires TCO, I think you can reasonably call it part of the language.


It wouldn't be the first time the specs have gone too far, beyond their perimeter.

C's "register" variables used to have the same issue, and even "inline" has been downgraded to a mere hint for the compiler (which can ignore it and still be a C compiler).


inline and register still have semantic requirements that are not just hints. Taking the address of a register variable is illegal, and inline allows a function to be defined in multiple .c files without errors.


"inline" was always just a hint


IIRC, ES6+ includes TCO, but no actual implementation/engine has implemented it.


Safari has


The real question is, why does Python even have parentheses? If semantic indent is superior to braces, it ought to beat parentheses, too. The following should yield 14:

  a = 2 *
    3 + 4


Also, don't forget that Python has:

  list = [
    1,2,3,
    [ 4, 5 ],
    6
  ]
Without this, Python would basically have to be a Yaml-ish Lisp:

  =
    a
    *
      2
      +
        3
        4
Let's drop the leading Yaml dashes needed to mark ordered list elements. So we have an = node (assignment) which takes a as the destination and a * expression as the source operand. *'s operands are 2 and a + node, whose operands are 3 and 4.

