Hacker News | new | past | comments | ask | show | jobs | submit | dakom's comments

I love this, simple, fun, and easy-to-read source :)

FYI - there's a security problem with the leaderboard. I added a "Testy McTester" score of 42 to see if it would go through; feel free to delete it.


Thank you! No problem, honestly. I figured if someone really dug into my source, they deserve to show it off to others. gg

This is the culmination of a long journey — it started in TypeScript, moved through WebGL1 with Rust bindings, then WebGL2, and now finally WebGPU. Woohoo :D

AwsmRenderer is a browser-native WebGPU renderer written in Rust, using web-sys directly (not wgpu). It’s intended to be high quality and ergonomic for typical gamedev use-cases, while keeping the API surface relatively small and explicit.

Fair warning: this is fresh off the push and only lightly tested so far. It requires a WebGPU-capable browser and is currently intended for desktop use.

Longer-term, the goal is to empower AAA-like gamedev in the browser with WASM. Internally, the API centers around keys that can be converted to/from u64, which should make it easier to move data across workers or future WASM component boundaries, and to integrate with physics engines or core game code.
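As a rough sketch of what a u64-convertible key scheme like that can look like (names and layout here are illustrative, not the actual AwsmRenderer API), a generational index packs cleanly into a single u64 that can cross worker or component boundaries:

```rust
// Hypothetical sketch: a renderer key that round-trips through a plain
// u64 so it can be posted to a worker or handed to a physics engine.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct MeshKey {
    index: u32,      // slot in the renderer's internal storage
    generation: u32, // bumped on reuse to invalidate stale keys
}

impl From<MeshKey> for u64 {
    fn from(k: MeshKey) -> u64 {
        ((k.generation as u64) << 32) | k.index as u64
    }
}

impl From<u64> for MeshKey {
    fn from(raw: u64) -> MeshKey {
        MeshKey {
            index: (raw & 0xFFFF_FFFF) as u32,
            generation: (raw >> 32) as u32,
        }
    }
}

fn main() {
    let key = MeshKey { index: 7, generation: 2 };
    let raw: u64 = key.into();
    // The bare u64 crosses the boundary; the receiving side rebuilds the key.
    assert_eq!(MeshKey::from(raw), key);
    println!("{raw}");
}
```

Because the wire format is just a u64, nothing on the far side of a worker or WASM component boundary needs to know the key's internal layout.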

Source: https://github.com/dakom/awsm-renderer

Curious to hear thoughts from folks with WebGPU, Rust, or browser graphics experience.


Curious, why use web-sys directly instead of wgpu?

Mostly because I want insight and control at a lower level, which breaks down into two different use-cases:

1. Debugging. The nature of bugs in this space is a lot more of "it doesn't look right on the screen" as opposed to "it breaks compilation", so I want to easily do things like peek into my buffers, use native js logging, etc. It's just a lot easier for me to reason about when I have more manual control.

2. Leaky abstractions. wgpu is pretty low-level, but it can't avoid the pain that comes with supporting different backends: async where it shouldn't be, features that aren't available everywhere, etc.

That said, it would probably be pretty straightforward to port the renderer to wgpu; most of the data structures and concepts should map cleanly.


Fair enough.

For me, wgpu's native Rust API feels so much nicer than having to deal with the unergonomic web_sys API. Tradeoffs.


Yeah, I split the crates so `renderer-core` deals with the web-sys part, and `renderer` is pretty much plain Rust (plus WGSL with Askama templates).
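The idea behind templating the WGSL can be shown with a std-only sketch (the project itself uses Askama; this substitution function and placeholder syntax are just illustrative): shader source is specialized with concrete values before compilation instead of branching at runtime.

```rust
// Illustrative sketch: specialize a WGSL snippet by substituting a
// concrete value into a template placeholder, roughly what a template
// engine like Askama does for real.
fn specialize_wgsl(template: &str, max_lights: u32) -> String {
    template.replace("{{ max_lights }}", &max_lights.to_string())
}

fn main() {
    let template = "const MAX_LIGHTS: u32 = {{ max_lights }}u;";
    let wgsl = specialize_wgsl(template, 8);
    // The generated source is what actually gets handed to createShaderModule.
    println!("{wgsl}");
}
```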

I prefer this for 100% browser-only, but that's a niche. I do think wgpu makes more sense when you like the WebGPU headspace but want to target other backends (native, mobile, VR, etc.)


fwiw I'm happy to see this - been trying to tackle a hairy problem (rendering bugs) and both models fail, but:

1. Codex takes longer to fail and gives less helpful feedback, but tends to produce fewer compiler errors.

2. Claude fails faster and with more interesting back-and-forth, though it tends to fail a bit harder.

Neither of them are fixing the problems I want them to fix, so I prefer the faster iteration and back-and-forth so I can guide it better

So it's a bit surprising to me when so many people are picking a "clear winner" that I prefer less atm


I wouldn't dismiss personal anecdotes any more than I'd dismiss someone claiming they saw something in a reputable journal but can't remember the citation.

In other words, yeah, don't just take it at face value, but if you have good reason to trust the source, it's worth considering and checking into further.

In this case it's not just one individual but many people saying that THC and CBD are almost opposites of each other, for example in how they affect anxiety.

Definitely worth proper research imho; it could lead to medication with more of the pros and fewer of the cons.


Funny coincidence, I use this often and just opened an issue earlier today: https://github.com/go-task/task/issues/2303 :)


I just responded.

And thanks for your support!


Taskfile and Justfile are pretty solid.


I've never delivered a game anyone's really paid for, so take with a grain of salt, but imho the big win when you are writing your own stuff is you get to decide what not to include.

That sounds obvious but it really isn't. One example: maybe you don't need any object culling at all. Nobody tells you this. Anything you look up will talk about octrees, portals, clusters, and so on - but you might be totally fine just throwing everything at the renderer and designing your game with transitions at certain points. When you know your constraints, you're not guessing, you can measure and know for a fact that it's fine.
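The "no culling at all" render loop really can be that simple, and you can measure it rather than guess. A toy sketch of the idea (every name here is illustrative, not from any real engine):

```rust
// Toy illustration: skip octrees and frustum tests entirely, submit
// everything each frame, and measure to confirm it's fine for your
// known scene size.
use std::time::Instant;

struct Object {
    id: u32,
}

fn draw(_object: &Object) {
    // stand-in for a real draw call
}

fn render_all(scene: &[Object]) -> usize {
    // No culling: every object is submitted every frame.
    for object in scene {
        draw(object);
    }
    scene.len()
}

fn main() {
    let scene: Vec<Object> = (0..500).map(|id| Object { id }).collect();
    let start = Instant::now();
    let drawn = render_all(&scene);
    // When you know your constraints, a measurement like this can settle
    // the question instead of speculative optimization.
    println!("drew {drawn} objects in {:?}", start.elapsed());
}
```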

Another example: shader programming is not like other programming. There's no subclassing or traits. You have to think carefully about what you parameterize. When you know the look you're going for, you can hardcode a bunch of values and, frankly, write a bunch of shit code that looks just right.

The list goes on and on... maybe you don't need fancy animation blending when you can just bake it in an external tool. Maybe you don't need 3d spatial audio because your game world isn't built that way.

Thing is - when you're writing an _engine_ you need all that. You don't get to tell people writing games that you don't really need shadows or that they need to limit the number of game objects to some number etc. But when you're writing a _game_ (and you can call part of that game the engine), suddenly you get to tweak things and exclude things in all these ways that are perfectly fine.

Same idea applies to anything of course.. maybe you don't need a whole SQL database when you know your data format, flat files can be fine. Maybe you don't need a whole web/dom framework when you're just spitting out simple html/css. etc. etc.
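The flat-file point can be sketched in a few lines of std-only Rust (the file name and tab-separated format here are just an example of "you control the data format"):

```rust
// Toy illustration: when you control the data format, a line-per-record
// flat file can stand in for a database.
use std::fs;

fn save_scores(path: &str, scores: &[(String, u32)]) -> std::io::Result<()> {
    let body: String = scores
        .iter()
        .map(|(name, score)| format!("{name}\t{score}\n"))
        .collect();
    fs::write(path, body)
}

fn load_scores(path: &str) -> std::io::Result<Vec<(String, u32)>> {
    Ok(fs::read_to_string(path)?
        .lines()
        .filter_map(|line| {
            let (name, score) = line.split_once('\t')?;
            Some((name.to_string(), score.parse().ok()?))
        })
        .collect())
}

fn main() -> std::io::Result<()> {
    let path = "scores.tsv";
    save_scores(path, &[("Testy McTester".into(), 42)])?;
    println!("{:?}", load_scores(path)?);
    fs::remove_file(path)
}
```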

I think this headspace is pretty common among gamedevs (iiuc, large projects often copy/paste and tweak dependencies between projects rather than importing and supporting a generic API).


I agree with almost everything except:

> Thing is - when you're writing an _engine_ you need all that. You don't get to tell people writing games that you don't really need shadows or that they need to limit the number of game objects to some number etc. But when you're writing a _game_ (and you can call part of that game the engine), suddenly you get to tweak things and exclude things in all these ways that are perfectly fine.

When you're making an engine it's perfectly fine to bake in constraints. Probably most famously PICO-8 does that very intentionally and is written by just one person. Similarly RPGMaker and a bunch of other 'genre specific' game engines also do this. It's just that everyone tries to make something super general purpose which is really a Sisyphean task.


There's also some cool stuff brewing in the AVS space to allow nondeterminism, GPU access, storage, and more. For example, WAVS executes code off-chain in WASI components and brings the result on-chain (secured by re-staking via EigenLayer, etc.), so there's a roadmap to do things directly in wasi-gfx-powered shaders and similar low-level access.


Cool. Trying to understand the value-add here, how does this differ from executing via wasmtime?


A Wasm component running inside of Wasmtime is just fine. However, when you start to use resources from outside of Wasm, e.g. systems, network interfaces, GPUs, etc., Wasmtime uses OS resources from the host that it is running upon. If this host is part of your trusted compute base, then it implies you are trusting the host implementations in Wasmtime, which for some is just fine.

Hyperlight-Wasm, however, gives platform builders the ability to describe the interface between the guest and the host explicitly, so you can expose only the host functionality you want, with the trusted implementation you want. For example, if I'm building a FaaS, I may want to provide only an exported event handler and an imported key/value interface to the guest (for which I've built a safe, multi-tenant implementation) and strictly disallow all other host-provided functionality.


Good question. I think it’s the additional security? From [1]:

> Hyperlight is able to create new VMs in one to two milliseconds. While this is still slower than using sandboxed runtimes like V8 or Wasmtime directly, with Hyperlight we can take those same runtimes and place them inside of a VM that provides additional protection in the event of a sandbox escape.

[1]: https://opensource.microsoft.com/blog/2024/11/07/introducing...


I think they skipped the middleman and run Wasmtime directly under a hypervisor, without Linux in between.


It's the same reasoning that leads people to move things from running in an OS process to running in a VM. In theory, it adds security and better isolation. Hyperlight appears to substantially reduce the overhead of running VMs which makes it more appealing as a target if this fits your needs and you want the isolation of VMs.


bevy_math also uses glam, re-exported in the prelude: https://github.com/bevyengine/bevy/blob/cc69fdd0c63ea79fda4f...

