Great talk, thanks for sharing! I am looking for a way to finance building and maintaining the thing I want to build, and this talk was a nice reminder that donations actually CAN work. VCs, on the other hand, are interested in an "exit" event, but what if you as the founder are not? It doesn't sound like you are interested in exiting Zig any time soon, either.
That's exactly right, and I have already eliminated that option for myself, since ZSF is a non-profit that is legally operated by a board that protects the mission statement of the organization.
Forgive me if this is written up somewhere, but how did you find board members? And once you have a board I can see how you would govern who gets on the board, but how do you solve the bootstrapping problem going from zero to board? Again, I'm sure these are basic questions but I don't even know the terms of art to know what to search for.
Also zig is cool and thank you for the technical and gubernatorial guidance you've given to other language authors all over.
Imagine how amazing our languages and tooling could be if there was a happy path for talented people to work on this.
For example, the error messages in Elm are just insanely better than in any other language I know. Like, it's not even in the same ballpark. Why doesn't every language have this?
But that's just an example. In general, I think we under-invest in tooling.
Oh, absolutely. This came up in a conversation I had with a friend last night about antitrust in tech. Not only do we underinvest in tooling but only the largest companies have the spare time and resources to do so.
I think this has led to many modern open innovations only realistically being useful for solving the problems of very large companies.
On a cynical day I might even accuse certain tools of being designed that way for job security. Or at least it looks that way because the majority of people who are using the tool could only dream of getting approval from management to make a better alternative.
The priorities of many of the tool maintainers, no matter how honest, just don't match those of their users like they used to, and this is in an ecosystem awash with telemetry.
Can you give an example? I find Rust's error messages to be also very good in most cases, but I've been in C/C++ land so long I don't know what the state of the art actually looks like. Also, do you think Elm's error messages come from a simpler language design? For example, some of Rust's ability to provide better error messages comes from improvements in the language itself relative to C/C++, I think.
I'm fairly sure I read somewhere that Rust error messages were inspired by the great error messages popularized by Elm.
And in most cases, it's just a matter of sitting down and really thinking about what the best error message you could provide in that specific scenario is, what context is needed, and so on, instead of just writing down the first thing that comes to mind and never iterating on it again.
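To make that concrete, here's a minimal sketch (hypothetical shapes, not taken from Elm or any real compiler) of the difference between emitting the first message that comes to mind and emitting one that keeps the context a user actually needs:

    // Hypothetical diagnostic shapes, for illustration only.
    interface BareError {
      message: string;                        // "type mismatch"
    }

    interface FriendlyError {
      summary: string;                        // what went wrong, in plain words
      expected: string;                       // what was wanted, and *why*
      found: string;                          // what was actually seen
      hint?: string;                          // a concrete next step
      span: { line: number; column: number }; // where to look
    }

    // Most of the work is keeping the "why" around long enough to say this,
    // and then iterating on the wording:
    const err: FriendlyError = {
      summary: "This argument is a String, but `length` needs a List.",
      expected: "List a (because of the annotation on line 3)",
      found: "String",
      hint: "Did you mean String.length instead?",
      span: { line: 12, column: 9 },
    };

The shape is trivial; the expensive part is the plumbing that keeps that "because of..." reachable at the point where the error is reported.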
> For example, the error messages in Elm are just insanely better than in any other language I know. Like, it's not even in the same ballpark. Why doesn't every language have this?
It's a combination of economics (who pays for the work? It could be people scratching an itch, but those people also have to have the privilege of being able to pay their bills somehow), difficulty (it's not a trivial problem space: you need to turn your compiler into one that understands not only its own language but also every broken approximation of it that people actually type, and the errors might crop up in unexpected places where the context you need to provide a good message is long gone, which pushes the compiler toward a bowl-of-spaghetti design just so you can, for example, reach the type checker during parsing) and inertia (developers are used to building tools for themselves, so they don't spend time on diagnostics because they are already familiar with the inner workings of the tool; then other developers use those tools, and because every other tool is like that they don't expect good errors either, so no one considers them important enough, and the cycle goes on and on).
But let me be clear on one thing: although there are many things in your language design and compiler implementation that you can do to make things easier on yourself (don't throw away anything related to the causal chain explaining why a certain constraint is present!), the work is mostly elbow grease, and spit and polish, little more.
I don't think it's really a matter of having more people working on this stuff. For everyone who craves better ergonomics for the "easy" stuff, there's someone who demands ever more advanced features. It's not possible to keep both camps happy, even when there's immense financial investment. This is exactly my experience doing this work for ~5 years.
I got to see this talk irl and it was definitely one of the highlights of the conference. It did make me feel a little bad for how many times I've ragged on programming languages for having poor UX/tooling, even though it's pretty unrealistic to expect a community-driven project to invest the thousands of developer-hours required. I do wonder if there are ways to lower that burden, although that still doesn't help with the cost of support and developer relations.
The Language Server Protocol is a newish thing that allowed some languages to gain more tooling for cheaper. I have a ton of respect for MS for doing this. They didn't need to do it that way. Tooling is still a ton of work and the protocol seems to have issues, but looking at the adoption it seems that a lot of projects are finding value in it, on both the editor and language sides.
LSP is a double-edged sword. Indeed, it offers a quicker path from a zero-IDE-support position, which is a boon not only to language authors but also to the hosting IDE. MS, I'm sure, recognized the leverage potential of the latter. For them, faster onboarding made all the difference, and it worked.
The downside is the nature of LSP, which is to say it is necessarily a common-denominator API. As a result, a tool built on an LSP-based API tends to be significantly inferior to similar tools built on comprehensive, proprietary APIs. For instance, the IntelliJ IDEA Open API illustrates this advantage quite well.
Thus, the economically rich tool authors maintain a clear advantage _if dev experience is the key differentiator_.
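For anyone who hasn't looked under the hood: LSP is just JSON-RPC messages between editor and server, and the standard payloads are deliberately small. Here's a rough sketch of the hover round-trip (field names from memory of the spec, so treat the details as approximate), written as TypeScript literals:

    // Editor -> server: "what's under the cursor here?"
    const hoverRequest = {
      jsonrpc: "2.0",
      id: 42,
      method: "textDocument/hover",
      params: {
        textDocument: { uri: "file:///src/main.zig" },  // example URI
        position: { line: 10, character: 4 },           // zero-based
      },
    };

    // Server -> editor: some markup and an optional range, and that's about it.
    const hoverResponse = {
      jsonrpc: "2.0",
      id: 42,
      result: {
        contents: { kind: "markdown", value: "fn main() void" },
        range: {
          start: { line: 10, character: 4 },
          end: { line: 10, character: 8 },
        },
      },
    };

Anything richer than this (project-model queries, custom refactorings, the kind of deep integration the IntelliJ API exposes) has to go through non-standard extensions, which is exactly the common-denominator ceiling I'm describing.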
As the name implies, the IntelliJ Open API is open, as in open source. It only works with IntelliJ indeed because it's the IntelliJ API, but that doesn't make it not open.
True, it is not technically “proprietary”, but it is a perfect example of a polar opposite of LSP.
Utilizing the vernacular put forth in the talk, the sophistication of the IJ API is both the means and the cost of establishing a presence on the IntelliJ-dominant Language Tooling Avenue.
True, but I guess the amount of work that goes into a proprietary bespoke IDE is on another order of magnitude. For example our team maintains an LSP server for Go with only about 3 FTE, but I guess JetBrains has a much larger team devoted to Goland (a bespoke IDE), and benefits from common infrastructure with their IDEs for Java and other languages.
Yeah I've written about what I call "Tooling for Tooling", stuff like the LSP or tree-sitter that make it easier for language authors to build out their tooling ecosystem. I wish people invested more into these sorts of projects so we could raise the standard of tooling for everyone.
I felt like a danged genius when I got a competent Rust writer at work to help me put the Elm LSP or tree sitter or whatever it was to work in Difftastic. Standing on the shoulders of a few giants at the same time.
Yeah, as someone who's actively building a language right now, I can attest that the process is rough. Rough in the sense that I need to invest heavily into putting together my own set of tooling. It's a non-trivial exercise, to say the least.
Not who you asked, but for me (I use a custom language for work) it was a matter of being able to do what I wanted, in the way I wanted. The other added benefit was that I could ensure that code I wrote for clients and for my employer met the ethical and professional obligations that I incur as an attorney. The only downside (according to others) is that my language implementation belongs to my employer, though the theoretical foundation and language definition are owned by me. Overall, I feel more productive and I feel like the value I deliver is greater than it would be in another language.
Jokes aside, that’s a really interesting use case. I’ve often wondered how laws, and changes to them over time, are recorded. It seems like the kind of thing that could be made more efficient with a formal syntax along with version control.
I now regret that I didn't have that thought myself. As to encoding laws formally, I know there are some existing attempts to achieve that, but that type of thing doesn't really apply in my line of work. I do, however, have a searchable database version of the statutes implemented that keeps the textual history.
There are projects that try to do this, but you do have to use them.
LLVM is one. That's pretty successful.
The JVM is another. That's very successful, there are lots of languages that build on the JVM to reduce their tooling costs. Java, Kotlin and Clojure are examples of that.
GraalVM is another. If you use their Truffle framework then you get JIT compilation, debugging, tracing, instrumentation, different kinds of interpreters, advanced strings, some LSP support, language interop and FFIs and more.
But what you see, generally, is that language authors tend to start by hacking together a compiler and standard library, starting from a C hello world program. Not that many survey the landscape and look for ways to reduce the costs.
Racket, PyPy and Parrot (RIP) are also designed to host multiple languages. BEAM is another interesting VM (fault-tolerance, distributed systems, live updates, etc.). GNU has a bunch of projects to help language implementers, like Lightning, Jitter, libjit, superopt, etc. Eclipse's Xtext lives in the same space as tree-sitter and (to some extent) LSP.
Yeah part of the issue is that language designers by definition have not-invented-here syndrome. So it takes a lot to convince them to not write it themselves. LLVM has solved a sufficiently hard problem that it’s gotten over this barrier, but other meta-tools will likely struggle to achieve that
IMO, LLVM is more successful than the JVM by this metric. The JVM has definitely had some powerhouses, but its inherent limitations have meant that it is only able to produce languages that have many of the same limitations as Java. Since LLVM generates machine code, the limitations of LLVM-compiled languages end up being a lot closer to the limitations of modern processors.
JVMs also produce machine code ;) In practice LLVM is designed for C/C++ like languages. You can use it for other things but not easily. GraalVM is the closest thing to a universal VM so far, but even then, it's more optimized for high level languages than stuff like Rust (even though it can run those).
It seems like the main problem, for implementers of languages significantly different to C and C-like languages, is that LLVM doesn't offer direct access to hardware registers or the stack, and it has built-in calling conventions. (Although, that's kind of the point of LLVM IR. GHC implementers wanted certain values to be always kept in registers, from what I understood of it.)
This was a problem in writing an LLVM backend for both the Glasgow Haskell Compiler and the Standard ML of New Jersey compiler. It seems like it was easier for the GHC implementers, because they were already able to compile their semantic IR (the spineless tagless G-machine) into C--/Cmm, which conceptually isn't that different from LLVM IR. The SML implementers compiled to continuation-passing style, which is harder to map to LLVM IR. However, the GHC implementers still had to extend LLVM with a custom calling convention, so it was still challenging.
At this point, between Rust, Julia, and Swift, it is in pretty good shape. The biggest one that used to be a problem was handling of undefined behavior, but at this point, LLVM's poison system works quite well.
TruffleRuby is a good existence proof that it's not just that. (Clojure is another dynamic language, but it was designed to be hosted on the JVM, so it's an easier case.)
Though LLVM seems like it best maps to the semantics of a language like C++, there's a fairly diverse lot of languages with LLVM backends, including Common Lisp, Haskell, Scala, and Swift.
Also got to see it live, agreed. Sat with my friend who gave a talk earlier about developing programming languages, and who was hoping for answers on how to get paid to do it. Funny how the talk ended.
I'm always rooting for projects like zig, odin, vale, nim, even jai. I think some killer apps are JS runtimes and defensive security, making eBPF more straightforward (and whatever tigerbeetle does... )
Interesting, I posted this a couple days ago but it now seems to have gotten into the second-chance pool.
I used to use Elm, but after all of their hostility to their own users (which you can see with a cursory search online) and the growing popularity of TypeScript, I stopped using it, even though TS is in some ways worse. But sometimes, worse is better, and sometimes, you just want to get things done.
I love this talk for a lot of reasons. Mostly because it crystallizes a lot of feelings I have been having about "Open Source" as a general concept. His conclusion that Open Source is for big companies and not small developers is a really compelling one (IMO). The fact that a lot of open source projects support massive corporate rent-seeking (directly or indirectly) is a great example.
My spiciest take for 2023 is that the development community is going to start waking up to the realities of asymmetry between the kind of open source models made possible by the largess of massive billion dollar corporations and the kind of open source made possible by individual developers working on passion projects.
It probably helps that I am aligned with what Evan is saying in some meta sense. We really need to zoom out from the viewpoint of being the benefactors of seemingly "open" projects like Chrome. I even see in these comments some pot-shots at the Elm community and leadership for being extremely protective of their project when faced with unwanted and undesirable intrusions.
It is also interesting that this comes not long after I ran into Golang debating an updated json library for its stdlib [1]. What I want to say about that is, there is almost no universe in which I would be willing to put up with that community feedback. Any human with the stamina to respond to aggressive and persistent haranguing and nitpicking has more patience than I will ever have. So if the majority of criticism of Elm is that they exclude those kinds of people from their community, then I totally understand it. Even if you paid me a 300k TC FAANG salary there is no way I'm dealing with those kinds of people, forget about it for a project of my own passion. Thank you for your interest, but please take your "help" anywhere else.
His conclusion is bleak and a bigger warning than most realize. How many people have amazing ideas on their laptops in a proof-of-concept state right now but don't know what to do? On one hand is the specter of entitled expectation from a legion of idiots who have been conditioned on the open source models made possible by huge corporations. On the other are the corporations salivating at Apache, MIT and BSD licensed free software they can host for almost no cost. And as he points out - the current alternatives for funding are totally inadequate.
I don't think it's very charitable of you to assume that the massive amount of people who have left the Elm community are all, and I quote: "a legion of idiots who have been conditioned on the open source models made possible by huge corporations".
I get the point you're trying to make here, and I agree that funding from big corporations has fundamentally distorted the open-source landscape in ways that might not be the most desirable, but we can acknowledge that without invalidating people who believe that the Elm project has been mismanaged by just assigning them to some "those kinds of people" bin.
I'm not above using inflammatory language when I want to and I don't subscribe to tone-policing in general. But in the context of the paragraph you quoted, "a legion of idiots" refers explicitly to a specter in the mind of anyone with code written who is considering releasing it as open source. It does not refer to any specific individual, situation, project, etc. It is a fear of the mob and of the dog-piling we so often see in open source communities.
But to continue on the topic you brought up - I am also not interested in calling out people who decide a project is not for them and decide to walk away. I am calling out people who jump into discussions about the core fundamentals of projects that they are not currently contributing to and turn those discussions in a hostile direction. Once they realize that their technical contributions are not wanted they turn to character and reputation attacks against the project owners and core contributors. Even if one out of one hundred users matches this description, you end up with a situation most sane individuals want to avoid at all costs.
So there is no invalidation of people who believe Elm went in the wrong direction. There isn't invalidation of people who wish to express that opinion publicly and even on project hosted forums, chats, bug-lists, etc. It is against those who go the "extra mile" and persistently push for changes even after they have been rejected and who resort to attacks on a person's character or reputation when they fail to get their way. And it is against those who defend such behavior by invoking some unspoken rules of open source development.
> There isn't invalidation of people who wish to express that opinion publicly and even on project hosted forums, chats, bug-lists, etc
This was and I believe still is disallowed on any communication venues controlled by the Elm core team. Any discussion of alternative designs or workarounds was shut down and deleted, regardless of tone.
That is fair enough. And sorry if I seemed like I was policing your tone, I was actually just disagreeing with a certain characterization of the situation.
Recently someone on Twitter asserted that trying to make a living from open source without a proper business plan is like hoping people listening to free music on the radio will actually buy the record.
Or, as I like to point out, like living as a street artist.
Sure, one does get some money out of it, yet on its own it isn't going to pay all the bills.
both of these people are making non-exclusionary public goods, so yeah, the parallel is valid. Lots of non-physical goods fit in the same category (when we haven't hamfistedly tried to make them exclusionary), our mechanisms for getting creators rewarded for their efforts suck.
We often think the critical skill is being able to create something of value, but unless it's charity (which has value, dgmw) the critical skill is being able to create something of value to others and then to protect the potential returns. Specifically, this means protecting it from thieves, absconders, seizures, penalties, etc. It's like people saying that good ideas are a dime a dozen, the important thing is execution. Well, one level beyond that is "execution, schmexecution" - the important thing is capturing a return.
To that end, it's worth asking why our "mechanisms for getting creators rewarded for their efforts suck". Is it because copyright is too weak/too strong? Is it because patent process is overly burdensome/costly/a joke? I don't know, but I empathize with your points entirely and think it's worth thinking about.
Personally, I think there's room for more nuance in the commercial/non-commercial licensing dichotomy. Specifically, instead of using that as a breakpoint for charging licensing fees, maybe having some revenue threshold after which payments need to be made. I was about to say into a general fund to support open-source developers generally, but that would be ripe for corruption, rent-seeking, and self-dealing (like union administration). Better to go straight back to sponsor the project, IMO. Some products do this - free for organizations with less than x revenue. I've not really seen it much in open source though. Some tooling to make this "10 times easier" could be the next github.
But this problem has plagued inventors/creators of all eras. It does seem like we have better tools than ever to deal with it - telemetry, micropayments, etc.
A more optimistic take would be that open source is one way that big companies indirectly and somewhat inadvertently support small ones.
This means the code might be tackling problems that small companies don't have, though. It might be more complicated than you need.
Maybe it's bleak if you care about control issues (governance) or you want to make money off open source? But as a source of free, possibly useful stuff, it's pretty good.
About Dart: it only got several people working on it, and proper attention, because of Flutter... before Flutter, it was really close to dying after the bet on replacing JS in browsers didn't work out.
Flutter seems to be a revenue maker for Google (which should explain why it has gained so much attention - just look at the StackOverflow tag for Flutter, or their YouTube channel - it's really, really active, and the pub.dev package ecosystem has grown immensely lately, to the point that there are packages to do nearly anything in Dart/Flutter), but I am not sure how exactly, given both Dart and Flutter are open source! Are the big users of Flutter giving Google big money?? Or how does that work?
I worked on the Dart team at around that time. After integrating Dart into Chrome was cancelled, the team got moved from Chrome to Ads, because Ads was migrating all their websites from GWT to Dart. This isn't something anyone outside Google cares about, but it meant that it was consistently staffed as they looked for other big customers.
Also, the ads codebase is enormous, so there are plenty of technical challenges from getting language infrastructure to scale to their needs. (Consider getting a language server for an IDE to scale up as a codebase grows, or making the language tools generate efficient JavaScript that's not unreasonably slow to download, or loading and displaying large tables in the browser.)
I have no idea whether Flutter generates revenue directly. I'd guess not, but internal usage of Flutter is likely enough to justify having a team devoted to maintaining and improving it.
I would guess easy integration with Google's cloud services like Firebase, and promotion of those services within Flutter's docs, brings more customers to those services.
I find the "state of Elm" over the past couple of years absolutely fascinating. Some folks reasonably claim the language is dead and absolutely not suitable for production due to scarcity of updates, public roadmap, or bug fixes. Others run Elm in production and reasonably claim it's perfect/stable already. There's a steady stream of blog and forum posts on this argument. I read them all.
It’s because the only people left in the Elm community are those who are content with its current state and aren’t interested in having a stake in the future direction of core components or in even knowing whether there is a direction.
Agreed. It's amazing how much history is determined by small decisions. As another commenter said, Elm completely blew their chance, which is essentially down to the actions of one guy making what seem, from the outside, like obviously bad decisions. At a meta level, there has long been demand for a modern ML, but all the contenders have fumbled the bag in various ways. E.g. OCaml was "nah, multi-core isn't important" for at least a decade.
Blew their chance at what? And what trade-offs would have been necessary to take that chance?
Those questions are at the heart of this talk, and I think make it clear that there is a much larger range of definitions for "success" than is often assumed. And that perhaps we should question the default definitions. Perhaps the decisions, if not the ones you would have taken, are not so obviously bad (or are at least understandable) with that in mind?
Elm had a massive shot at making the big time circa 2015-2016. If they hadn't blown that shot, yeah, they'd be the kind of tool a lot of people use and complain about. But what I see when it comes to Elm is a lot of people who don't use the language shaking their heads in frustration at what they see as missed potential, while a handful of people who do use the language insist that it's perfect and unchangeable, that Evan could never ever make a bad decision, and that all the people who disagree must just be short-sighted.
As the sibling comment implies, the fact that we see fewer and fewer complaints compared to a few years ago leads me to believe that Elm usage is dying.
A talk on “the economics of programming languages” could go a number of different ways. I’d like to see a talk (or maybe a whole book) on how the evolution of programming languages over the last 50 years has made it possible for multiple trillion dollar companies to form and reshaped the whole global economy. Like, these things wouldn’t be possible if we were stuck with Fortran.
I think there's an interesting discussion to be had about why all these programming tools (languages, frameworks, version control, other infrastructure) have made the job of software engineers so much easier, and many more people have entered the profession as a result, but the wages of software engineers have continued to rise.
> I think there's an interesting discussion to be had about why all these programming tools (languages, frameworks, version control, other infrastructure) have made the job of software engineers so much easier, and many more people have entered the profession as a result, but the wages of software engineers have continued to rise.
More demand. As you've stated, we probably just wouldn't have services like Google Docs if we were still writing code in Fortran. But with the advent of better languages and tooling, people replace old tasks (e.g. spreadsheets, copywriting) with automation, and create new types of software (video games, social media, ads). Software engineers have become more efficient, but there's a need for more software engineers, because there's a need for much more programming.
It’s the same reason people still work a lot despite having more and more automation: our expectations for a decent quality of life (housing, shopping choices, machines) have increased. Workers have become more efficient but there is much more work to be done.
A lot of it is inefficiency too. I believe there are a lot of "bullshit tech jobs" (see: ad creators), and software produced with real money which really just isn't necessary. Similarly, there are bullshit jobs in other fields (see: call centers); things like huge houses and shopping choices which aren't necessary,
and there’s a lot of waste (food, old stuff, products designed to fail), which all cause society as a whole to work much more than we really need to.
But it’s not just inefficiency. There are a lot of problems we simply couldn’t tackle with old technology (be it software, hardware, chemical manufacturing, healthcare) which we can tackle now. So, better tools don’t reduce jobs, they just change how the jobs are done and what they’re for.
I'm extremely keen to see what the SQL<->Elm experience might be (assuming something like that is part of the deal too). Simplifying how we interact with databases is arguably even more important/$valuable than simplifying UIs.
FWIW, I'm about to try integrating Elm and Tau Prolog (Prolog-in-JavaScript). I have a toy language that uses Elm for the interpreter and Prolog for the type inference and I want to make one of those nifty "try this language right in the browser" things.
The "relations" in Prolog and SQL ("relational model") are the same thing, and Prolog is arguably "nicer" than SQL?
I strongly endorse having some fun with Lamdera then. Your frontend and your backend are written in Elm. Your backend state is the state of the world, and it is stored in a DB. You write migrations that force you to have a safe translation between versions. They answer the question of "given a state of frontend and backend models and in flight messages, how do we construct new versions of these things?" And they can include "throw out that message" if you really want.
it's certainly not in the 0.19.1 compiler, but yeah you can compile anything into anything when you're the compiler writer (and he's a not bad compiler writer).
Lamdera, fwiw, is storing Elm states in a database right now. It's a closed source fork of the Elm compiler. It's honestly amazing, and it may be what Jeffs the official Elm on the DB. IDK.
I wasn't aware this was a current capability of Elm. I didn't see it advertised anywhere on its web pages or docs. I thought maybe I missed something, or that it's something only known to those on the Elm mailing list.
But it sounds like Elm's compiler backend is relatively easy to rewrite/reconfigure to target a different language.
Zig is a really interesting case... it seems Andrew Kelley has been able to go from a small Patreon donation model that supported his own modest salary to a Foundation that is sponsored by several big names and has enough funds to pay several people to work on Zig full time.
I've watched Elm for many years from a distance, and I think that with a little more "social skills", Evan could've probably done the same, as Elm had quite a lot of hype a few years ago... notice that even though Andrew may be a bit more charismatic (though I find Evan really nice too), he also takes a hard line regarding features; in particular, requests by sponsors are not treated specially in any way - there's a talk from an Uber employee where he mentions that after getting Uber to sponsor Zig, they asked about some issues they'd created and were told very, very clearly that sponsorships don't come with "perks" like having your issues given priority!
I think a key factor is that Evan seems to think about the (very) long term. With that in mind, relying on one person's sustained willpower and charisma probably isn't attractive.
That's wrong though. Sponsorships that do come with perks allow negotiating a mutually beneficial position. A valid take would have been to work out some kind of NRE scheme or contracting scheme.
I'll be kinda honest here, but one thing that pops into my head is to stop developing so many new languages and workflows. It's hard to build any sort of critical mass (paying or not) around hundreds or thousands of tiny communities.
I'm not saying there needs to be only one programming language out there, but I get overwhelmed at the fragmentation. I am someone who programs to get things done not to sit back in awe at the beauty of programming languages, and there have been small projects that I have wanted to do that were impossible because my dependencies were spread across different languages (some of which I was not familiar with).
You're getting downvoted but I agree with this. Fragmentation tends to become a pathological situation where the ecosystem ends up consisting of mostly promising but half-baked solutions, and thus gets dominated by big, boring, awful but corporate-financed solutions.
I know a lot of people like Java but Java is exactly that.
This. So many languages I run across would have been better off as a domain-specific language hosted on top of Common Lisp. Instead of learning a new language, we could grow our vocabulary.
We need more languages, not fewer. We need all the experimental languages, the attempts at solving issues at the language level, to get to the next Good Language. We've got plenty of Good Languages right now, but we don't get others without a LOT of little languages getting made and inspiring the next ones.
You want to benefit off the work of others, and bear zero cost. What if you just built those libraries yourself, or paid someone else to do it for you?
And on language creation and development, why should we stop exploring, discovering and innovating? To make you more productive? What a joke.
> You want to benefit off the work of others, and bear zero cost. What if you just built those libraries yourself
No one can do that to any large scale. That is all there is to say about that.
And all my projects are open source. I give back to the community and to science (see below).
> To make you more productive?
There's intent, and then there's reality.
I work in scientific programming, and scientists are absolutely struggling with software development. Not programming, but everything related to package management, version control, distribution, different languages, incompatible ecosystems, etc.
There is complexity at the boundaries of packages, languages, and ecosystems. That is unsolved, and no one seems to care about it as long as you can do these cool things with your own language.
Someone's intent could be to make python package management easier. The reality is that having only a few people adopt that can make the entire community suffer.
Fragmentation is a problem that solves itself, though. People tend to preferentially attach to larger communities and small communities tend to evaporate into a "hard core" of very dedicated people. If you look at the market as a whole, Java, JS, C# and other mainstream stuff is still most of everything. Whether that's good or not is a different issue.
If I may stand on your soapbox for a moment, changing economics is the whole point of computers and software.
We have Turing's Universal Machine, we have transistors. Science and capitalism have delivered technology and wealth. We won history.
The obvious thing to do is apply our technology and resources to provide a basic level of quality of life to everyone (i.e. knock out the lower levels of Maslow's hierarchy). Build a simple and efficient regenerative machine to provide food, shelter, clothing, some medicines, building material, etc. It would look like a neighborhood.
Then people like Evan Czaplicki could just do whatever they wanted to without having to worry about making money. Also everybody else.
We do this because: A) We can. B) Not to do so is cruel.
Well, yeah, I wrote the article nearly 20 years ago. I still think the fundamental forces at work are pretty similar though.
The economics of creating a programming language are similar to most other open source software economics, although even proprietary languages need a way to get an initial user base. In some cases that's easier, as a large company can force a language for some important environment like writing iOS apps.
I really loved this talk and feel for Evan. As someone who was a VC/angel investor in the space (I was the initial angel investor for something called LightTable/Eve), worked for a couple of years at Red Hat, and am working on my own open source language here: https://github.com/yazz/yazz (so yes, you could say I am a VC trying to build a low-code product with my own hands), I hope that I have valid opinions on this. I think that it is possible to make money in open source as a little guy, but you need to have a combination of consulting, hosting, and support services. If your product can't be packaged as something you can demo and sell to customers, then you will most likely struggle to make a living from it. So for example, just making a single component (that could be built in-house), or a new language (that could be substituted for another language), will be very hard to sell.
Another very successful way to go about building a language is Imba.
Build a successful product with the new lang (https://scrimba.com), make sure the product's very hard to Jeff, and take VC money.
Now you can work on the language as you please, and they can't Jeff you since nobody else can build something similar (not in a reasonable amount of time anyway)
P.S: taking VC money is optional. Scrimba is profitable so it wasn't necessary.
Your product is a bootcamp with a Javascript abstraction and you think Amazon can't replicate that?
How do you sell people on enrolling for a course that uses a language that isn't used by any other company, which makes it pretty useless outside of possibly teaching some high level concepts?
In the talk it’s short for getting Jeff Bezos’d, i.e., someone with deep pockets easily replicating what you’ve built on top of their own infrastructure (often cheaper for them because of that infrastructure) after you showed there was a market for it.
The talk is a nice combo of informative and entertaining.
I liked the ending of 'now that I've proven myself to be capable and likeable, someone give me money for this new thing I've been working on'. Very shrewd 'self marketing'.
For all this talk of free markets and whatever, we still live in a world of rich but bored and creative but poor.
He mentions it in the video. They couldn't rely on Elm, which was too young at the time, for their migration away from Flash. The scale of creating a language was too big for Prezi's immediate needs back then.
I went to a JS conference at the time and the CTO of Prezi was talking to me about compiling Haskell to JS; he really wanted Haskell. I told him the performance wouldn't be good. He wasn't happy with that answer, but I saw in the end that they went with Elm.
Case Study: Zig Software Foundation by Andrew Kelley
https://archive.org/details/fossy2023_Case_Study_Zig_Softwar...