
I'm surprised performance at this level even matters to most folks. Like, if you truly thought this microbenchmark was the reason to choose one runtime over another, I'd be shocked. That makes it equally surprising to see this error from the Deno crew, who should all know better.

In either case, I hope they announce a correction and move on to more important matters. If you're trying to shave another tiny bit of rps out of your boxes, then that's an incredible success problem; not the kind 99.999% of companies will ever have.



You'd be surprised. I've had countless battles with (junior-wannabe-senior) devs who wanted to use a different framework simply because it is "fast". When you point out that this project will be a huge success if it has 10 req/s, the usual answer is "well it doesn't hurt", when in truth it does - if nothing else, because it diverts discussion from important matters (like consistency of the company's tech stack) to irrelevant ones.

Deno's makers are well aware of this. An otherwise great Python framework (based on Starlette) is even named "FastAPI" in an attempt to use this to their advantage (it is great for other reasons, not because of its speed).

Unfortunately lots of devs are looking for silver bullets when it comes to speed, instead of detecting, determining, investigating and removing the bottlenecks.


I've always read the FastAPI name as meaning it's fast to get going with, rather than fast in execution; it's Python, after all.


Also what I thought … and indeed turns out to be the case in actual usage. It's quite fast to get a usable API up and running with FastAPI, starting from zero to the point of making useful requests to the API and getting back useful data. The actual speed of API access itself (the response times for the requests, etc) has never really been an issue I've wrestled with (me not being Twitter, etc. and not needing zillions of requests per second).


That's my take on it as well. FastAPI does use asyncio, which may have speed advantages in some circumstances. But the main takeaway, and the killer feature, is that you build your API just by declaring function signatures.


Is it really that big of a deal that you can do @app.post("/foo") instead of @app.route("/foo", methods=["POST"])?


That's not what anyone is talking about.

Take this example from their docs:

    from typing import Union
    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/items/{item_id}")
    async def read_item(item_id: int, q: Union[str, None] = None):
        return {"item_id": item_id, "q": q}

You are declaring the types of the parameters, and FastAPI parses and enforces them for you. This saves quite a bit of code and lets you focus on your business logic. Writing the code this way also allows FastAPI to generate a meaningful description of your API, which can be a schema (such as OpenAPI/Swagger) that tools can use to generate clients, or documentation pages for humans to read. Any good documentation will still require you to write things, but this gets you further, faster than using something like Flask.
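
To make that concrete, here is a minimal sketch of roughly the same route in Flask (illustrative only, not from either project's docs; Flask's built-in int converter covers the path parameter, but nothing else comes for free):

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # The int converter validates the path parameter (404 on mismatch), but
    # query parsing, body validation and the API schema are all manual work.
    @app.route("/items/<int:item_id>", methods=["GET"])
    def read_item(item_id):
        q = request.args.get("q")  # untyped, unvalidated, undocumented
        return jsonify({"item_id": item_id, "q": q})

Nothing here produces an OpenAPI schema or a docs page; you'd bolt that on with a third-party package.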

This example[0] takes these concepts even further.

Or just glance at the summary and see that it has nothing to do with @app.post.[1]

FastAPI has also properly supported async routes for longer than Flask, from what I understand.

(I've never personally used FastAPI for anything serious, since I have rarely used Python for anything other than machine learning for the past 5+ years, preferring to use Go, Rust, or TypeScript for most things, but I am aware of it, and seeing its claims misrepresented like that is mildly annoying. FastAPI is far more appealing to me than any other Python web framework I've ever seen, and I've only heard good things about it. Based on my experiences in other languages, their approach to writing APIs is absolutely a good one.)

[0]: https://fastapi.tiangolo.com/#example-upgrade

[1]: https://fastapi.tiangolo.com/#recap


It seems we are talking past each other.

Typing is the base for many reasons I love FastAPI, so yes it is useful. And I speak as someone who has used it in production on multiple projects, and even converted some from Flask (but not because of speed).

Far from "fast to get going", starting with FastAPI is actually slower, but it takes you further (as you pointed out). The train of thought that "typing" leads to "fast to get going", which leads to "fast" in the name... Let's say I don't buy it.

The link to "performance" is difficult to miss, I don't think that is a coincidence. And it's OK. If this is what matters to devs that much, they would be stupid not to highlight it. I'm actually happy people are using it, even if for the "wrong" reasons.


Properly supporting async seems like the main reason why it's taken off.


You think no one cares about the convenience of having the type system do a lot of the work for you? Or being able to autogenerate client libraries? I find that position confusing.

Proper async support is decently important to me in any language or framework, but in the real world, I haven't often run into other developers who care much about that.


Async in Python is a huge deal, as it is in any language, yes. For reasons ranging from the very real (cutting down on incredible amounts of confusing boilerplate) to the very lame (it's been memed into developer consciousness enough that it becomes a primary yes/no gate for development teams).


I assure you, a large part of using fastapi for my company was the integration with pydantic for easy validation.


It's not that. FastAPI comes with a way to declare Python types and get, for free:

- URL params, forms and query string parsing and validation

- (de)serialization format choice

- dependency injection for complex data retrieval (this one is very underrated, and yet it is amazing for identification, authentication, sessions and so on; see the sketch after this list)

- output consistency guarantees

- API doc generation
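
To illustrate the dependency injection point, a minimal sketch (the hard-coded token check is a stand-in for real authentication):

    from fastapi import Depends, FastAPI, Header, HTTPException

    app = FastAPI()

    # FastAPI resolves this dependency before the handler runs and injects
    # the result, so the handler never sees an unauthenticated request.
    async def get_current_user(authorization: str = Header(None)):
        if authorization != "Bearer secret-token":  # stand-in check
            raise HTTPException(status_code=401, detail="Not authenticated")
        return {"username": "alice"}

    @app.get("/me")
    async def read_me(user: dict = Depends(get_current_user)):
        return user

The same dependency can be reused across routes, and dependencies can depend on other dependencies (sessions, database handles and so on).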


If the latter requires you to write your own mini router to handle each verb separately, then yes. It greatly improves readability and reduces boilerplate to be able to have one handler per path-and-verb combination.


Maybe - in that case I'm judging them wrongly. But Flask (the king they dethroned) was faster to get going (with fewer features, granted), and their docs feature "Performance" [0] rather prominently.

Note that I don't hold this against them. They simply understand what makes devs pick them and adjusted their marketing strategy accordingly. It would be nice if they didn't have to, though.

[0] https://fastapi.tiangolo.com/#performance


Flask is faster to get going in what way? Writing types instantly saves you a ton of time and effort right out of the gate.[0]

And if your framework is faster, then of course you're going to mention it. Do you really think Flask or Django wouldn't point out that they were fast, if they were? I'm quite sure they would, since it's not shameful to educate the reader on what your framework offers compared to the competition, but they can't, because they're not.

Your link goes to the very bottom of that page, so is it really prominently featured compared to everything else they're trying to sell you on? It really doesn't seem like it. More convincing would be pointing to their list of "key features" at the top, which does mention performance first, but then quickly focuses back on "Fast to code" and "Fewer bugs".

[0]: https://news.ycombinator.com/item?id=33224324


flask took the approach of only providing basic functionality and relying on 3rd party packages for things like openapi docs and the like. fastapi has a lot more included, and it makes common tasks like documenting your api easier and more consistent


Kinda devil's advocate, but I've met several senior-should-be-junior cargo-culting devs who swear by popular tools that are objectively slower than alternatives, instead of actually taking the time to evaluate the less popular alternatives to see if the lack of surrounding ecosystem will actually affect their project. The result is death by a thousand cuts, because they auto-pilot to "what is everyone else using?"


Putting aside who-wants-to-be-what, a change in a common tech stack is a serious change with all sorts of implications. Arguments for and against should be carefully considered, and yes "going faster can't hurt" is not an argument.


Mostly agree; if you're a Java shop, maybe stick to Java instead of confusing all of your engineers just because you read on HN how much faster Golang can be. But again, YMMV.


> to see if the lack of surrounding ecosystem will actually affect their project

for projects of any significant size, that answer is a resounding "yes".


I think engineering is more nuanced than that, personally.


it's really only nuanced when you have a small team and a very tightly scoped project.

for anything that can plausibly grow in scope and team size (which, let's be honest here, is most complex projects), it almost never makes sense to go without an existing ecosystem. it becomes difficult to hire, difficult to train, difficult to pass off maintenance, slows down velocity of shipping, makes your team gradually re-invent a worse version of the framework/tooling you initially tried to avoid, etc.

i've been on both kinds of projects. when i build something solo it's a work of art in code size, API consistency, and performance...and that feels truly amazing. but unfortunately it's not something that is feasible with bigger and more diverse-skillset teams. ever-growing scope and shipping features quickly usually means giving up performance and well thought out design.


I agree. Preact, for instance, comes to mind. But it's faster! And? Is it really, in a business web app and not just a printf("hello world")? Does it matter that much? Do they have as many devs working on it? What about edge cases? More than anything, for this type of change, does your faster magic new thing even have an ecosystem?


The answer to all your questions is "it depends". Context matters. To be honest, Preact would probably be a better fit for most projects using React.


It doesn't help that many tools, frameworks, etc. advertise these kinds of numbers.

I find the reason for using a tool often isn’t what they list first in their technical documentation.

I can understand why someone might think those really are the reasons to use that tool.


> You'd be surprised. I've had countless battles with (junior-wannabe-senior) devs who wanted to use a different framework simply because it is "fast". When you point out that this project will be a huge success if it has 10 req/s, the usual answer is "well it doesn't hurt", when in truth it does - if nothing else, because it diverts discussion from important matters (like consistency of the company's tech stack) to irrelevant ones.

I think that we as an industry don't have the best hindsight.

I'll use enterprise Java as an example of a few common situations:

  - sometimes we go for Spring (or Spring Boot) as a framework, because that's what we know, but buy into a lot of complexity that actually slows us down
  - other times we might look in the direction of something like Quarkus or Vert.X in the name of performance, but have to deal with a lack of maturity
  - there's also something like Dropwizard which stitches together various idiomatic packages, yet doesn't have the popularity and tutorials we'd like
  - people still end up being limited by ORMs, which can speed up development and make it convenient, but have hard to debug issues like over-eager fetching
  - regardless of how fancy and "enterprise" your framework is, people still make data structure mistakes (e.g. iterating over a list instead of using a map)
  - if you've written a singleton app (runs just on a single instance) that's monolithic, your background processes will still slow everything down
And then people wonder why it's hard to change their enterprise codebase, and wave their hands helplessly when their app needs at least 2 GB of RAM just to run locally, takes close to 20 seconds to answer a simple REST request, and issues about 2000 database queries to return a relatively simple list with some data.

When people should think about performance, they're instead busy "getting things done"; when people should think about "getting things done", they're busy bikeshedding about which new framework would look best on their CV. And we even pick the wrong problems to solve, given that many (but not all) of the systems out there won't really have such stringent load requirements, and just writing decent code should be our priority, regardless of the framework/technology/language.

I remember load testing a Ruby API that I wrote: on a small VPS (1 CPU core, 4 GB of RAM), it could consistently (over 30 minutes) serve around 200 requests/second with database interaction for each, which is probably enough for almost any smaller project out there. Doubling those resources by scaling horizontally almost doubled that number, with the database eventually being the limiting factor (which could have been scaled vertically as well). And that is even considering that Ruby is slow, when compared to other options. But even "slow" can be enough, when your code is decently written (and the scale at which you operate doesn't force your hand).


If you cache correctly, I think 200 requests per second comes out to something like tens of thousands of active users.
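
Back-of-envelope, assuming each active user triggers a request every minute or two:

    rps = 200
    users_at_1_req_per_min  = rps * 60   # ~12,000 concurrent users
    users_at_1_req_per_2min = rps * 120  # ~24,000 concurrent users

"Tens of thousands" holds as long as the per-user request rate stays in that ballpark.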


Read requests, sure. In that particular instance, I was testing a write heavy workload for my Master's Degree with K6 at the time: https://k6.io/

The idea was to see how performance intensive COVID contact tracking would be, if GPS positions were to be sent by all of the devices using an app, which would later allow generating heat maps from this data, instead of just contact tracing. Of course, I talked about the privacy implications of this as well (the repository was called "COVID 1984"), but it was a nice exercise to demonstrate horizontal scaling, the benefits of containerization and to compare the overhead of Docker Swarm with lightweight Kubernetes (K3s).

So yes, write-heavy workloads are viable with Ruby on limited hardware, and read-heavy workloads can be even easier (depending on who can access what data).


Bet they, the junior devs, can’t even optimise their SQL queries.


Maybe they should use a faster db like nosql

/S


You would think by now that SQL databases would be pretty good at optimizing any query they receive.


The last sentence kind of contradicts the preceding ones.

I agree that it’s harmful to distract from what actually matters: the core goal and competencies of the team.

But therefore we should hope devs look for silver bullets to address performance without having to be distracted by it. “It’s out of the box pretty good so I don’t have to think about caching or CDNs or load balancing until later” is deeply valuable.


I disagree. If we're talking of microbenchmarks focused on 100k rps or whatever that are mostly IO/syscall limited, sure.

But if it's that JS execution is outright faster, it's a big deal.

People here handwave "oh, your business doesn't require more than 10rps". Sure. But rps is just half the story; latency is the other. I'll give you two examples:

1) SSR with something like Material UI is slow, especially because of the CSS-in-JS. The server rendering the page can easily take 200ms or more.

2) Modern backend stacks. On an API I maintain, I use Prisma + Apollo GraphQL. Some queries take 500ms. These same queries using REST and knex are <10ms. There are no slow SQL queries here and no N+1 issues; it's just Prisma and GraphQL executing a lot of JS.

In either case, the user experience is impacted because the website becomes slower. A faster runtime would make this JS run faster, and thus the web/API load faster.


You've given 2 of the worst (imo) regressions in frontend dev in the last decade.

If you're relying on innovation to make your existing tech stack not behave like dogshit when proven and trivial solutions exist, you're valuing the wrong things when choosing your stack.


They may be the worst, but they have now become extremely common to find in a variety of company projects. Not just FAANG or FAANG-adjacent but boring insurance or healthcare companies too.


You're almost proving my point. This benchmark is focused on one tiny layer. It's not executing complex graph queries. If you want to do a real comparison, then the benchmark should be "Apollo graph query performance on Node vs Deno vs Bun", and then decide if it's good enough to compare with the speed of feature delivery using knex. Even then, the benchmark would need to be carefully crafted.


Hot take: If every layer of your stack focused on similarly useless performance improvements, the web would be a much better place.


You may be able to slash SSR from 200ms to 195ms by using a different JS runtime. Or you may slash it to 100ms by rewriting and optimizing code (caching and such). Resources are limited. You either do one or the other.


They are all V8, so they'll all have approximately the same speed of JS execution, I imagine?


Bun uses JavaScriptCore.

There is also overhead in passing structures and other required communication, so that layer can change the numbers as well.


They are not all V8


It matters to the guy paying the AWS bill... or anyone who cares about their ecological impact. We have a duty to utilize resources as efficiently as possible, no different than anyone else. Building every new project on top of a mountain of abstraction that pushes resource utilization a few orders of magnitude beyond what is actually necessary to do the job is financially stupid at the least and socially irresponsible at worst.

If you're an auto manufacturer and you discover something like fuel injection that will dramatically improve efficiency for your customers (the people paying that bill), not adopting it makes you a terrible engineer. The 'developer velocity' argument is pure BS... there's absolutely no direct (or even really indirect) correlation there. If the engineers you have need someone else to write 9 libraries so that they can build a REST API, you need better engineers.


For apps that are not successful, the entire output is waste, and the majority of the emissions burden is carbon output of the developers.

For work of speculative value (most startups...), optimising for dev efficiency is IMHO the right thing to do.


The only problem is that then you end up married to your mountain of abstractions. Designing things to work as intended from the outset is, in my experience, always the better path. It’s like ‘buy once, cry once’ for technical debt/effort.


Exactly. Very few companies will need to reach 1000 reqs/s consistently, let alone 100k reqs/s.

StackOverflow peaks at about 6000 reqs/s and it's an extremely popular website.


Except low concurrency is often found alongside slow response time.


Overly complex and feature-filled (or extremely barebones and "fast") frameworks can also have the property of giving no responses for additional months at a time. (i.e., sometimes "it works" is good enough, and our ego in design elegance doesn't need to get in the way of our need to keep existing as a business. If we need to rebuild or refactor later when we really know what we want, we can. :) )


got a source for this stackoverflow peak?

also wondering what peak RPS is for HN. i feel like most (non consumer) startups would be like "ok if its good enough for HN its good enough for me"


>got a source for this stackoverflow peak?

It is [1] (and should be) pretty well known: 1.3 billion page views per month, 6K RPS with 9 (fairly weak) servers, sub-20ms response time with zero caching.

>also wondering what peak RPS is for HN.

Less than 100 RPS for logged-in users. Those numbers were pre-2020, but I doubt the current number is significantly higher.

[1] https://stackexchange.com/performance


> Sub 20ms response time with zero caching.

I mean... to be clear, they do tons of caching[0], which is certainly critical for their ability to have a non-cached response time of 20ms. Most of their responses should be coming from a cache, given the type of site they run, otherwise they would need a lot more servers.

[0]: https://nickcraver.com/blog/2019/08/06/stack-overflow-how-we...


Their director of engineering did a podcast a couple of months ago:

https://hanselminutes.com/847/engineering-stack-overflow-wit...


...that runs on .NET. :^P


This microbenchmark in particular isn’t a reason I’d consider Bun. But the sum of many performance and DX considerations that have been put into Bun’s development—and that they are core motivating principles for the creator—certainly are.

As for the error, I suspect it was an innocent mistake. I see no reason Deno would choose to mislead, when they’ve generally very publicly responded to performance deficits by acknowledging them and then actually improving real performance.


JavaScript is plagued by the idea that it is slow, when it is not. Many devs now have PTSD from arguing day after day that JavaScript is a good thing and not slow.

Performance is a very important thing in the JS world, for the devs' peace of mind.


> JavaScript is plagued by the idea that it is slow, when it is not.

I benchmarked a hello world in .net and node/express, and the .net version was multiple orders of magnitude faster than the node/express version. That's a starting point, and as you add more logic, that gap only grows in my experience. Javascript may be fast _enough_ for many cases, and in a tight JIT loop it may be faster again, but by any measure, js is not quick.


You’re not wrong but I do think it’s important to remember the context: people don’t tend to write math-heavy code in classic JS (there’s a side discussion about WASM now) so relatively few apps bottleneck on CPU - it’s wild when you see people going on about how they need to switch frameworks based on some microbenchmark of request decoding when 99% of their request processing time is some kind of database. I’ve seen more Node apps blow up on RAM usage than CPU because someone thought async would magically make their app faster without asking how much temporary state they were using.

Where I think there’s more of a problem is cultural: similarly to Java, there’s a subset of programmers seemingly dedicated to layering abstractions faster than the JIT developers can optimize them.


>> Where I think there’s more of a problem is cultural: [...] there’s a subset of programmers seemingly dedicated to layering abstractions faster than the JIT developers can optimize them.

This sounds extremely accurate to me


"Relatively few apps bottleneck on CPU" is a very 2010 opinion. With fast intranet (100Gbps+) and SSD, running business logic can become the bottleneck in many cases.

Unfortunately, I don't have readily available data to back up my claim either.


I'm not saying it can't happen, just that it's pretty rare. SSDs are not infinite capacity, and 100G networking isn't common even in data centers. And more to the point, what really matters is latency: the number of cycles your CPU can execute in the time a round-trip network request takes is usually orders of magnitude greater than what your business logic needs.

Again, not saying it never happens but I’ve rarely seen the kind of microbenchmark this story is about end up correlating with real application performance. I have seen developers get all fired up in some religious war and endanger their entire project trying to see benefits which never materialized, though, a common feature there was this focus on toy benchmarks rather than measuring the whole system or what they could do at the app level if they weren’t supporting some niche framework.


No realistic, decently written, js application would be "orders of magnitude" faster if rewritten in .net.


And until we have a feature-parity, moderately complex web app written in multiple languages to compare, we'll never know. In the meantime, all we have to go on is basic benchmarks, and I've not ever seen a _single_ benchmark that puts any js, framework or otherwise, in the same ballpark as java, .net or go. When I do, I'll happily change my tune, but until then I'll have to stick with what all the numbers I've ever seen say - js is significantly slower.

One example is the techempower benchmarks' Fortunes section[0]. It's a fairly basic app, but it tests a full stack web app in multiple languages, and it's pretty clear that js is firmly in the middle of the pack, far behind the compiled options. If you have any sources to the contrary, I'd love to see them.

[0] https://www.techempower.com/benchmarks/#section=data-r21&tes...


JavaScript ranks higher than C# in your benchmark.



Maybe not .net, but I've worked on 3D graphics in the browser and can say with confidence that rewriting your app in C or C++ could see orders of magnitude perf increase over JS.


Comparing a pure js implementation to code using OpenGL or WebGL, I imagine several orders of magnitude difference is likely.

But a decent js implementation of 3D-graphics-something would use one of the available tools for such applications, making the difference considerably smaller.

Or is your experience different?


I've worked with a couple of the 'decent JS implementation of 3D graphics' libraries and, although they're not all like this, the ones I used were not built by people with experience doing low-level performance work. As such, they made some poor architectural decisions that prevented users of the libraries from doing some very basic optimizations that would have increased perf significantly.

The three major blockers I remember were:

1. Render contexts are created on the main thread and the user of the library gets no control over this. This means all driver overhead and library function calls block the main thread, which matters a lot when trying to hit 8ms/frame.

2. Loading textures asynchronously (in another thread, not Javascript async/await) was straight up impossible due to poor architecture. This means app startup was 500ms instead of 5ms. Maybe not a big deal to you, but our use case necessitated quick (a few frames at worst) startup.

3. The renderer used a scene graph, which was hilariously slow to traverse for large numbers of objects. Impossible to optimize by anyone as far as I can tell. Scene graphs just don't work well in JS.


There is often a speedup, but maybe not by orders of magnitude. I've usually seen 2x-3x or so.


I’m sorry, but I simply don’t buy that. You either measured something unrelated (e.g. framework), or it was a faulty benchmark for some other reason.


I'm afraid it was a while back, so I don't have it to hand, but what I do have is the techempower benchmarks [0], which show about a 10x difference between asp.net or go and all of the js options. I'm not going to claim they're perfect, but I'd be happy if you could provide some info that supports your argument.

[0] https://www.techempower.com/benchmarks/#section=data-r21


And yet the top js entry is above all C# and Go entries. It takes the top spot overall on the composite score.

Not that it is representative of actual use cases. You can't use just-js in production, as it's hyper-optimized for this benchmark rather than being a workhorse. But it does provide a better view of what is possible if the work is put in.


Isn’t that just basically a js wrapper over a very optimized c++ lib though?


like bun or node or deno (Rust as well as C++).

in techempower, the vast vast majority of code running in the just-js entry is JavaScript. all the core libraries for networking and interacting with the OS are js wrappers around C++/v8. the http server, though incomplete and not production ready, is written in javascript, with http parsing handed off to picohttpparser. the postgres wire protocol is completely written in javascript. in fact, one of the advantages JS and other JIT languages have is you can optimize away a lot of unnecessary logic at run time when you need to. e.g. https://github.com/just-js/libs/blob/main/pg/pg.js#L241

the whole point of doing this was to prove that JS can be as fast as any other language for most real world web serving scenarios.

if i had more time to work on it, i am sure i could improve the fortunes score where it would be at or very close to the top of that ranking too.


You might call node.js the same thing. Deno has a Rust shell and Bun is Zig.

Just-js has spent a lot of resources optimizing the input and output gateways to the V8 engine, and it obviously pays off nicely. It does serve requests with JS.

Is the boundary in the same place? I'm not familiar enough with the others to say exactly. But does it really matter?


JavaScript is ranked 5th on the ranking you linked to.

4 ranks before .Net.


I was hesitant about how much to go into this, because you get into the semantics of the benchmark, but this [0] thread goes into why: that particular implementation doesn't behave the same way as the other implementations; it uses a different db driver that doesn't synchronise, which won't be allowed in the next version of the benchmarks. Techempower publish regular snapshots of their benchmarks at [1], and if you look at any of the snapshots that aren't the last published set, where the discrepancy was fixed, you'll see that all of the js implementations lag far far behind.

[0] https://github.com/TechEmpower/FrameworkBenchmarks/issues/72...

[1] https://tfb-status.techempower.com/


i'm sorry but this is not true. postgres pipelining is not allowed in the benchmarks any more, and even when it was, just-js was completely compliant with the rules and it was other C++, PHP, Rust and C# frameworks that were non compliant.

the postgres driver was rewritten in JS because i spent so long benchmarking using the pg C driver and couldn't get the performance i needed from it. if you actually read the github thread you can see i even did a lot of work to verify the various frameworks were compliant with the requirements.

in round 21, postgres pipelining was disallowed for all frameworks and just-js/JavaScript is in first place. \o/

https://www.techempower.com/benchmarks/#section=data-r21&hw=...


Are you sure about your conclusion, though? Apart from just-js missing in action in the last 3 runs, here it's just fine:

https://www.techempower.com/benchmarks/#section=test&runid=e...


they upgraded postgres recently and it uses a different default authentication mechanism which broke just-js. they seem to have stopped doing runs right now so just-js should re-appear when they start again.


Just-js is not really JavaScript: https://github.com/just-js/just. .Net in that list is much closer to what everyone assumes .Net is.




it is! people who love bashing JavaScript always come up with this line that just-js "is not JavaScript". ¯\_(ツ)_/¯


Oh, come on! I am in fact a front end developer. And when I saw the result first time a few years ago, I was surprised and wanted to use this “just JS”, but the reality was quite far from what I was expecting. It might look like JS, but if you check the source code of the app for the benchmark, you’ll realise that it looks more like C or C++.

More details in this thread: https://github.com/just-js/just/issues/5


Express is the slowest solution on the market. Fastify can handle as many as 60k (!) requests per second per core (!): https://www.fastify.io/benchmarks/

For comparison, Go can handle about the same amount (be advised that these tests are for 4 cores, so the results have to be divided): https://github.com/smallnest/go-web-framework-benchmark


I agree performance is important, but it's optimistic for any dev to assume that this particular layer is the place where things will be slow. Introduce a single file read or other 3rd-party IO-dependent call into your HTTP response and poof.


To be fair, it used to be quite slow. But that was like 10 years ago. General awareness hasn't caught up with the huge engineering efforts, it seems.


It's been 14 years since the v8 engine was released. Node.js has been out longer than 10 years.


Ikr. You can regularly read some random dev tell the world "interpreted" Java is "too slow" for them.


Javascript isn't Java. Javascript follows ECMAScript, which also isn't Java. And ECMAScript isn't a language.


I don't know if they were making a JS joke, but I have legitimately had newer programmers tell me that Java is an interpreted language because it compiles to a bytecode language which is interpreted by the JVM. Inversely, I've had people argue that JS and Python are compiled languages because their interpreters convert statements into bytecode before executing them. When someone starts trying to argue those points, I find it's best to just give them a thumbs up and leave the conversation.


Describing Javascript can be confusing. C++ compiles -> C compiles -> assembly language compiles -> 1-for-1 to machine code. But Javascript be like "Javascript is the programming language that interacts with your browser" or "Javascript conforms to the ECMAScript specification that describes how the language should act, but is implemented according to the browser vendor's interpretation of said specification, and is further compiled according to the browser." And that only covers browsers' Javascript.

And I'm not even sure if the above is 100% accurate.


But all else equal, wouldn't you want the fastest option available? Also, it's not just about raw qps. When a client connects to your app, you want them to receive data as quickly as possible so that they get the best user experience. That's true whether you have 1 qps or 100,000. Having a development philosophy that every part of the stack must run quickly is attractive.


> all else equal

All else is never equal. The level of adoption and support is what drives decisions in the end. That's why everyone still uses Node when Bun is probably better in every way.


>use bun

>segfault at runtime

Bun is far from better in every way


Also I shouldn’t have to say this but all of the current JavaScript applications have already been written. Switching a large production codebase to a new framework does not dovetail well with modern dev practices, amplifying the pain of doing so.


I don't think application developers are the main target for their product. OTOH, if AWS/GCP/ETC. adopted Bun transparently to run your cloud functions faster and using less resources (thus less $$) that would be a win/win situation for all parties involved.


> But all else equal

This is the point GP is making - they (the "junior-wannabe-senior") aren't even thinking of all else, and by focusing purely on the speed of operations are probably using the less optimal solution. Facetious example: A is faster than B, shaving a few cycles here and there. But nobody knows how to use A, its support is lacklustre, and there are many known vulnerabilities that haven't been fixed. B is the most widely used in the industry, and its support and security are good. Junior-wannabe-senior picks A because it's faster.


It's tempting to assume that just because this number is high, the rest of the dependencies required to meaningfully respond will be equally performant. That is rarely the case.

The challenge I have with these positions is that unless you have very specific latency requirements, most of the time you're better off focusing on solving a business problem and then measuring what is slow. Starting off with "well, it has to be fast, so let's use this brand new thing" is the swan song of the eventually remorseful.


Bun has a philosophy that everything needs to be fast, including things like CLI tools and process startup. Quick process startup is important, especially in a serverless environment. I understand your point, but it'd be nice to limit the discussion to Bun and Deno and not other theoretical possibilities.


I appreciate that Bun wants to be fast, but what I need right now from Bun or Deno is a better concurrency primitive than postMessage(). I'm so… angry that we waited all this time for a worker threads implementation in Node, and what we got was this hot garbage. It's a toy. There is no sane way to send multiple tasks to the same worker, because the results come back unordered and uncorrelated, unless you build your own layer on top of them. The public API should have been promise based, not event based.


Agreed. But the problem is not "fast", it's "brand new". Often the two coincide (because new things frequently advertise themselves as the fast alternative). In the rare cases where they don't, fast can be a good choice.


But if the problem is in the framework or the runtime, it is too late to think about the performance AFTER you have tied yourself all up to the slower one.


These sorts of things also build up over time. Usually when the underlying system is well thought out and performant it’s reflected in higher layers as well.


Yeah. Performance is rarely a concern. Although they are pushing it for serverless where micro benchmarks may matter if they are related to execution and startup time.

I think the benefits of Deno or Bun over Node aren't as obvious on the DX front either.

Most of the tooling and standard library can be used without adopting their CLI or switching runtimes.

Tools like tsx simplify running TypeScript code directly; it does pretty much what Deno does internally, using esbuild.

The modularity of the runtime doesn't matter to consumers, even if it's pretty cool.

FFI and security features are nice, but I think the future is running sensitive code as a WASM module directly in a separate isolated context.

The browser compatibility is an awesome boost, but most bundlers will polyfill that for you out of the box, and you will use a bundler with either Deno or Node most of the time. I know polyfilling is not perfect, but it's good enough for most.

I want to hear what strong reason people have for choosing to use either bun or deno in production.

I use Deno for writing scripts because it's so easy to run them, especially if they have any dependencies, but outside of that I haven't reached for it.


Even when considering benchmarking errors, performance can be more objectively measured than developer ergonomics, good architecture, clear API and documentation, good implementation and other aspects that usually matter more than performance. It is usually the fun and immediately gamifiable aspect that more junior developers can and do easily optimize for.


In a world where engineers just keep piling on cloud toys and oh hey microservices to solve very common problems, pretending that database and HTTP roundtrips are basically free, your choice of framework and language should not even matter. This is what's wrong with this industry.



After a while you come to expect this to be the default state of the world. Then what surprises you is how often people believe benchmarks without asking to see the code.


This is so true.

Developers should focus more on the time spent getting an application up and running (and to market) than on the time spent serving an HTTP request.


Claiming they're the fastest is a big marketing edge. It doesn't matter if they're only the fastest by 0.00001%.



