I want to be able to run code from untrusted sources (other people, users of my SaaS application, LLMs) in an environment where I can control the blast radius if something goes wrong.
Hey Simon, given it's you ... are you concerned about LLMs attempting to escape from within the confines of a Docker container or is this more about mitigating things like supply chain attacks?
I'm concerned about prompt injection attacks telling the LLM how to escape the Docker container.
You can almost think of a prompt injection attack as a supply chain attack - but regular supply chain attacks are a concern too, what if an LLM installs a new version of an NPM package that turns out to have been deliberately infected with malware that can escape a container?
When you use Docker you already have full control over the networking layer.
You can bind its networking to another container that acts as a proxy/filter. How does WASM offer that?
With a reverse proxy you can log requests, filter them if needed, restrict the allowed domains, or do packet inspection if you want to go crazy mode.
And if an actor is able to tailor a prompt to escape Docker, I think you have bigger issues in your supply chain.
I feel WASM is a bad solution here. What does it bring that a VM or Docker can't do?
And escaping a Docker container is not that simple; it requires a lot of heavy lifting and isn't always possible.
Aside from my worries about container escape, my main problem with Docker is the overhead of setting it up.
I want to build software that regular users can install on their own machines. Telling them they have to install Docker first is a huge piece of friction that I would rather avoid!
The lack of network support for WASM fits my needs very well. I don't want users running untrusted code which participates in DDoS attacks, for example.
You have the same lack of network support with cgroups containers if you configure them properly. It isn't as though it's connected and filtered; it's as though it's disconnected. You can configure it so that it has network support but is filtered with iptables, but that does seem more dangerous, though in practice that isn't where the escapes are coming from. A network namespace can be left empty, without network interfaces, and a process made to use that empty namespace. That way there isn't any traffic flowing from an interface to be checked against iptables rules.
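To make the empty-namespace idea concrete, here's a minimal Linux-only sketch (the `probe_empty_netns` helper is illustrative; it assumes unprivileged user namespaces are enabled, which some distros lock down):

```python
import ctypes
import socket

def probe_empty_netns():
    """Enter a fresh, empty network namespace, then test connectivity."""
    CLONE_NEWUSER = 0x10000000
    CLONE_NEWNET = 0x40000000
    try:
        libc = ctypes.CDLL(None, use_errno=True)
        # unshare(2): CLONE_NEWUSER lets an unprivileged process get its
        # own network namespace; some kernels/distros disallow this.
        if libc.unshare(CLONE_NEWUSER | CLONE_NEWNET) != 0:
            return "unshare unavailable"
    except (OSError, AttributeError, TypeError):
        return "unshare unavailable"  # non-Linux or locked down
    # The new namespace has only a DOWN loopback: no interfaces, no
    # routes -- there is no traffic for iptables to even inspect.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1)
    try:
        s.connect(("1.1.1.1", 443))
        return "connected"
    except OSError:
        return "no network path"
    finally:
        s.close()

print(probe_empty_netns())
```

Either the kernel refuses the unshare, or the connect fails immediately with "network unreachable" — nothing ever reaches a filtering layer.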
I think that threat is generally overblown in these discussions. Yes, container escape is less difficult than VM escape, but it still requires a major kernel 0day; it is by no means easy to accomplish. Doubly so if you have some decent hygiene and don't run anything as root or do anything else dumb.
When was the last time we heard of a container escape actually happening?
Just because you haven't heard of it doesn't mean the risk isn't real.
It's probably better to make some kind of risk assessment and decide whether you're willing to accept this risk for your users / business. And what you can do to mitigate this risk. The truth is the risk is always there and gets smaller as you add several isolation mechanisms to make it insignificant.
I think you meant “container escape is not as difficult as VM escape.”
A malicious workload doesn't need to be root inside the container; the attack surface is the shared Linux kernel.
Not allowing root in a container might mitigate a container getting root access outside of a namespace. But if an escape succeeds, the attacker could leverage yet another privilege escalation mechanism to go from non-root to root.
Better not to rely on unprivileged containers to save you. The problem is:
Breaking out of a VM requires a hypervisor vulnerability, and those are rare.
Breaking out of a shared-kernel container requires a kernel syscall vulnerability, and those are common. The syscall attack surface is huge, and much of it is exploitable even by unprivileged processes.
Both can be made very hard to escape. The podman community is smaller, but it's more focused on solving technical problems than Docker is at this point (Docker is trying to increase subscription revenue). I have arrived at a configuration for running something in isolation that I'm happy with in podman, and while I think I could do exactly the same thing in Docker, it seems simpler in podman to me.
Apologies for repeating myself all over this part of the thread, but the vulnerabilities here are something that Podman and Docker can't really do anything about as long as they're sharing a kernel between containers.
If you're going to make containers hard to escape, you have to host them under a hypervisor that keeps them apart. Firecracker was invented for this. If Docker could be made unescapable on its own, AWS wouldn't need to run their container workloads under Firecracker.
This same, not especially informative content is being linked to again and again in this thread. If container escapes are so common, why has nobody linked to any of them rather than a comment saying "There are lots" from 3 years ago?
Perspective is everything, I guess. You look at that three year old comment and think it's not particularly informative. I look at that comment and see an experienced infosec pro at Fly.io, who runs billions of container workloads and doesn't trust the cgroups+namespaces security boundary enough so goes to the trouble of running Firecracker instead. (There are other reasons they landed there, but the security angle's part of it.)
Anyway if you want some links, here are a few. If you want more, I'm sure you can find 'em.
Some are covered by good container deployment hygiene and reduced privileges, but from my POV it looks like the container devs are plugging their fingers in a barrel that keeps springing new leaks.
(To be fair, modern Docker's a lot better than it used to be. If you run your container unprivileged and don't give it extra capabilities and don't change syscall filters or MAC policies, you've closed off quite a bit of the attack surface, though far from all of it.)
But keep in mind that shared-kernel containers are only as secure as the kernel, and today's secure kernel syscall can turn insecure tomorrow as the kernel evolves. There are other solutions to that (look into gVisor and ask yourself why Google went to the trouble to make it -- and the answer is not "because Docker's security mechanisms are good enough"), but if you want peace of mind I believe it's better to sidestep the whole issue by using a hypervisor that's smaller and much more auditable than a whole Linux kernel shared across many containers.
I mean, Docker runs with root privileges for the most part. Yes, I know Docker can run rootless too, but Podman does it out of the box.
So if your Docker container is vulnerable and something can somehow break out of the container, I think that with default (root) Docker you might end up with root privileges, whereas with default Podman it runs as a user-level executable and the attacker might need another zero day or something to get root, y'know?
Everyday we grow closer to my dream of having a WASM based template engine for Python, similar to how Blazor takes Razor and builds it to WASM. I might have to toy with this when I get home.
Building packages with C/C++ extensions is still a bit tricky but you can see a list of all prebuilt packages for wasmer at https://pythonindex.wasix.org .
numpy is available there, scipy not (yet).
Wow, this is the key. If it just had Python that's not as useful, but the major frameworks are the real value. Definitely going to keep an eye on this. I built a sandbox with Deno for AI code generation. It works well enough, but there are some use cases where Python may make more sense. Nice!
How long should it take for "wasmer run python/python" to start showing me output? It's been hung for a while for me now (I upgraded to wasmer 6.1.0-rc.5).
"wasmer run python/python@=0.2.0" on the same machine gets me into Python 3.12.0 almost instantly.
OK got there in the end! I didn't time it but felt like around 10 minutes or more.
It did give me one warning message:
% wasmer run python/python
Python 3.13.0rc2 (heads/feat/dl-dirty:152184da8f, Aug 28 2025, 23:40:30) [Clang 21.1.0-rc2 (git@github.com:wasix-org/llvm-project.git 70df5e11515124124a4 on wasix
Type "help", "copyright", "credits" or "license" for more information.
warning: can't use pyrepl: No module named 'msvcrt'
>>>
The close to 'native' Python performance looks promising!
Just want to point out that this section avoids mentioning the best way to do it:
> AWS Lambda doesn't natively run unmodified Python apps:
>
> - You need adapters (such as https://github.com/slank/awsgi or https://github.com/Kludex/mangum) for running your WSGI sites.
> - WebSockets are unsupported.
> - Setup is complex, adapters are often unmaintained.
AWS provides https://github.com/awslabs/aws-lambda-web-adapter which is a) supported and b) written in Rust, translating Lambda requests back into HTTP so you can use your usual entry point to the WSGI app. It is simple to set up.
WebSockets are still not supported of course, but the issue of adapters is solved.
However, it's worth pointing out that due to the concurrency model of AWS Lambda (1 client request / WS message = 1 Lambda invocation; one process only ever handles one request at a time before it can handle the next), you would end up spawning many more AWS Lambda instances than you would with Cloudflare Workers or Wasmer Edge.
There are cost implications obviously, but AWS Lambda works this way partly to make concurrency and scaling "simpler" by providing an easier mental model. Though it is much more expensive in theory.
FFI support (like they have) is essential for any alternative Python to be worthwhile because so much of what makes Python useful today is numpy and keras and things like that.
That said, there is a need for accelerating branchy pure-python workloads too, I did a lot of work with rdflib where PyPy made all the difference and we also need runtimes that can accelerate those workloads.
Nice, but every time I look into WASM I have to wonder if containers and/or lightweight VMs wouldn’t be simpler and have fewer restrictions. We seem to have forgotten about microkernels and custom runtimes (like the various Erlang ones) as well…
Still, that close to native Python is an interesting place to be.
Are we at the point where I can store arbitrary scripts in a sql database and execute them with arguments, safely in a python sandbox from a host language that may or may not be python, and return the value(s) to the caller?
I'd love to implement customer supplied transformation scripts for exports of data but I need this python to be fully sandboxed and only operate on the data I give it.
Wasmer's approach hints at faster cold starts and better overall performance; the benchmarking against pyodide is a bit unclear, and it's unclear to me whether that would make or break viability for a use case like this.
But one thing this does make possible is if your arbitrary script is actually a persistent server, you can deploy that to edge servers, and interact with your arbitrary scripts over the network in a safe and sandboxed way!
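A sketch of that store-then-execute flow, under stated assumptions: the table schema and the `run_script` helper are hypothetical, a plain subprocess stands in for the real sandbox, and the `wasmer run` argv shown in the comment is purely illustrative.

```python
import json
import sqlite3
import subprocess
import sys

# Hypothetical setup: customer transformation scripts stored in SQL.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE scripts (name TEXT, body TEXT)")
db.execute("INSERT INTO scripts VALUES (?, ?)", (
    "double",
    "import sys, json; data = json.load(sys.stdin); "
    "print(json.dumps([x * 2 for x in data]))",
))

def run_script(name, payload, timeout=10):
    """Fetch a stored script, run it on JSON stdin, return its JSON stdout."""
    (body,) = db.execute(
        "SELECT body FROM scripts WHERE name = ?", (name,)).fetchone()
    # Stand-in for the sandbox: a plain subprocess. For real isolation you
    # would swap the argv for a WASM runtime invocation, e.g. something
    # like ["wasmer", "run", "python/python", "--", "-c", body]
    # (illustrative, not a verified CLI).
    proc = subprocess.run(
        [sys.executable, "-c", body],
        input=json.dumps(payload), capture_output=True,
        text=True, timeout=timeout, check=True)
    return json.loads(proc.stdout)

print(run_script("double", [1, 2, 3]))  # -> [2, 4, 6]
```

The script only ever sees the JSON you pipe in and can only answer on stdout, which is the "only operate on the data I give it" property; the sandbox runtime is what would make the isolation actually trustworthy.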
That's almost exactly what I want to do too. I've experimented a bit with QuickJS for this - there's a Python module here that looks reasonably robust https://pypi.org/project/quickjs/ - but my ideal would be a WebAssembly sandbox since that's the world's most widely tested sandbox at this point.
Depending on the language, GC is either implemented in userspace using linear memory, or using the new GC extension to webassembly. The latter has some restrictions that mean not every language can use it and it's not a turnkey integration (you have to do a lot of work), but there are a bunch of implementations now that use wasm's native GC.
If you use wasm's native GC your objects are managed by the WASM runtime (in browsers, a JS runtime).
For things like goroutines you would emulate them using wasm primitives like exception handling, unless you're running in a host that provides syscalls you can use to do stuff like stack switching natively. (IIRC stack switching is proposed but not yet a part of any production WASM runtime - see https://webassembly.org/features/)
Based on what I read in a quick search, what Go does is generate each goroutine as a switch statement based on a state variable, so that you can 'resume' a goroutine by calling the switch with the appropriate state variable to resume its execution at the right point.
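A minimal sketch of that lowering, with a state variable playing the role of the compiler-generated switch label (the `make_machine` helper is purely illustrative, not Go's actual codegen):

```python
# Each call to step() resumes at the saved "program counter", the same
# way a compiler can lower a coroutine to a switch on a state variable
# when the target (like the WASM MVP) has no native stack switching.
def make_machine():
    state = {"pc": 0, "total": 0}

    def step(value):
        if state["pc"] == 0:        # resume label 0: first entry
            state["total"] = value
            state["pc"] = 1
            return state["total"]
        elif state["pc"] == 1:      # resume label 1: second entry
            state["total"] += value
            state["pc"] = 2
            return state["total"]
        else:                       # finished: further calls are no-ops
            return state["total"]

    return step

step = make_machine()
print(step(5))   # -> 5
print(step(7))   # -> 12
print(step(99))  # -> 12 (machine already finished)
```

Each `return` is a suspension point; re-entering the switch with the saved `pc` picks execution back up exactly where it left off, which is all a "resumable" goroutine needs.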
Currently CPython's WASI build does not have asyncio support out of the box (at least according to [0]). This is, by my understanding, downstream of the asyncio implementation in the standard library being built on primitives around sockets and the like. And WASI, again by my understanding, does not support sockets.
In a browser environment there are theoretically ways you could piggyback off of the async support in the native ecosystem. But CPython is written to certain systems, so you're talking about CPython patches.
BUT the kind of beautiful thing is you can show up with your own asyncio event loop! So for example Pyodide just ships its own asyncio event loop[1]. This is possible thanks to Python's async infra just being built off of its generator concepts. async/await is, in itself, not something that "demands" I/O, just asyncio is.
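A tiny demonstration of that point: a coroutine that never awaits I/O can be driven to completion with the bare generator protocol, no event loop at all (the `drive` helper is illustrative, not Pyodide's implementation):

```python
# async/await sits on top of the generator protocol, so a coroutine can
# be driven entirely by hand -- no sockets, no OS event loop involved.
async def add(a, b):
    return a + b

def drive(coro):
    """A one-step 'event loop' for coroutines that never await I/O."""
    try:
        coro.send(None)          # start/resume the coroutine
    except StopIteration as done:
        return done.value        # coroutine finished; grab its result
    raise RuntimeError("coroutine suspended; a real loop would reschedule")

print(drive(add(2, 3)))  # -> 5
```

This is the property that lets a port supply its own loop: all asyncio adds on top of `send`/`StopIteration` is scheduling and I/O wakeups, and those parts are replaceable.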
Ideally, sure, but that would increase the already enormous burden of building a standards compliant web browser. For a healthy web ecosystem it's important that not only trillion dollar companies can contribute or compete.
Not every single website needs to support every single browser. That is a modern convenience; I was doing QA back in the day when we still had to support Internet Explorer.
Internet Explorer just didn't provide the same experience as Chrome.
You were supporting the tail end of an era that is universally agreed upon as an ecosystem failure. The internet didn't provide a consistent user experience for developers or for users, it generated mountains of legacy baggage, and it was frustrating for everyone.
For example if Firefox decides to add Rust support it doesn't mean every other browser needs to support it.
Just a handful of web experiences are going to be exclusive to Firefox. As is having Chrome as the only browser most people use isn't great for innovation.
Your comment is really relevant in the Helium browser discussion. It's so on point.
People want different browsers so that Chromium doesn't get to enforce its monopoly on web standards, but I mean, it's already happening. Like, if something runs on Chrome and it doesn't run on Firefox and is used by a lot of people...
Effectively Firefox is ALSO forced to have those Chromium features...
Basically web standards are held hostage by Chromium, and we need a very heavy migration of large swathes of people away from Chrome to something like Firefox; that's what's being advocated, I suppose.
I use Zen / Firefox because I also don't want Chromium. I mean, idk if I have a particular reason except the above logic that I shared.
You simply can't expect to run Roblox games inside of Chrome
Roblox can't generally be used to file your taxes.
But you're visiting user-created experiences.
The big problem is it's all controlled by one super company.
There's no reason we can't have an open source browser which allows you to play various games, or run other sandboxed applications. These applications could be programmed in a variety of different languages.
In this scenario, whatever; I still need Chrome to handle certain important business, but I can use this alternate browser to engage with tons of other content.
I was actually thinking of creating a Roblox alternative, or at least proposing the idea of modifying Luanti, which is open source, to have Roblox-esque graphics.
So it would be the open source browser which allows you to play various games in some sense.
If you want sandboxed applications, there is libriscv, created by the legendary fwsgonzo, which can run on any device or in WASM, I suppose.
Maintaining a browser is already hard enough, it's a very tough sell to convince 3+ browser vendors to implement a new language with its own standard library and quirks in parallel without a really convincing argument. As of yet, nobody has come up with a convincing enough argument.
Part of why WebAssembly was successful is that it's a way of generating javascript runtime IR instead of a completely new language + standard library - browsers can swap out their JavaScript frontend for a WASM one and reuse all the work they've done, reusing most of their native code generator, debugger, caches, etc. The primitives WASM's MVP exposes are mostly stuff browsers already knew how to do (though over time, it accumulated new features that don't have a comparison point in JS.)
And then WASM itself has basically no standard library, which means you don't have to implement a bunch of new library code to support it, just a relatively small set of JS APIs used to interact with it.
Every modern implementation I know of at least partially reuses the internals of the JS runtime, which enables things like cross-language inlining between WASM and JS.
Since they compiled the python interpreter to webassembly, yes you can now totally do a <python></python> webcomponent if you like.
Of course it requires the extra work of importing this interpreter.
Web browsers aren't going to come with multiple interpreters built-in, it would be too heavy.
I would be interested to see how short the time to run "Hello World" can be with python in a webpage, counting the time to load the whole page without cache.
If you transpile to JavaScript, the performance will never exceed that of JavaScript.
TypeScript is a bit silly in that respect because it removes all the types that the developers put in; they aren't used to improve time or memory performance at all.
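Python's own annotations behave the same way at runtime, which makes the point concrete (a minimal sketch):

```python
# Type annotations are erased from runtime semantics, much like
# TypeScript's types: recorded as metadata, never enforced, and never
# used by CPython to make anything faster or smaller.
def add(a: int, b: int) -> int:
    return a + b

print(add("type", "s"))              # annotations don't stop this -> types
print(add.__annotations__["a"])      # merely stored alongside the function
```

In both languages the types serve tooling (editors, checkers), not the runtime.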
I tried to understand what is "Wasmer Edge" but couldn't. They say on the front page "Make any app serverless. The cheapest, fastest and most scalable way to deploy is on the edge." and it seems like I can upload the source code of any app and they will convert it for me? Unlikely so.
Also it says "Pay CDN-like costs for your cloud applications – that’s Wasmer Edge." and I don't understand why I need to pay for the cloud if the app is serverless. That's exactly the point of serverless app that you don't need to pay for the servers because, well, the name implies that there is no server.
Confusingly, "Serverless" doesn't mean there's no server. It means that you don't have to manage a server yourself.
My preferred definition of serverless is scale-to-zero - where if your app isn't getting any traffic you pay nothing (as opposed to paying a constant fee for having your own server running that's not actually doing any work), then you pay more as the traffic scales up.
Frustratingly there are some "serverless" offerings out there which DO charge you even for no traffic - "Amazon Aurora Serverless v1" did that, I believe they fixed it in v2.
Still confusing, since infrastructure you don't have to manage yourself is sometimes called "managed". It makes sense from the perspective of "you are paying us to manage this for you".
It's a terrible name, but it's been around for over a decade now so we're stuck with it.
I mostly choose not to use it, because I don't like using ambiguous terminology if I can be more specific instead. So I'll say things like "scale-to-zero".
Normally, if you want to run your apps serverlessly you'll need to adapt your source code (both AWS Lambda and Cloudflare Workers require creating a custom HTTP handler).
In our case, you can run your normal server (let's say uvicorn) without any code changes required on your side.
Of course, you can already do this with Docker-enabled workloads: Google Cloud or Fly.io, for example. But that means your apps will have long cold-start times at a much higher cost (not serverless).
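As a concrete contrast, a plain WSGI app like the sketch below runs unchanged under any standard server, while Lambda needs an adapter to turn its event dicts into calls like this (the exercise harness at the bottom is just for illustration):

```python
from wsgiref.util import setup_testing_defaults

# An ordinary, unmodified WSGI app: any standard server (gunicorn,
# uWSGI, wsgiref) can host it directly. On AWS Lambda you'd need an
# adapter (awsgi, mangum, or aws-lambda-web-adapter) to translate
# Lambda events into invocations like this one.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from an unmodified app"]

# Exercise the app with a synthetic request -- no server needed.
environ = {}
setup_testing_defaults(environ)
status_holder = {}

def start_response(status, headers):
    status_holder["status"] = status

body = b"".join(app(environ, start_response))
print(status_holder["status"], body.decode())
```

The "no code changes" pitch amounts to hosting this exact callable behind your usual server instead of rewriting it as a platform-specific handler.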
Thank you for the explanation, now I can better see the differences between "serverless" platforms although I am still a little disappointed that so called "serverless" apps still require a (paid) server despite the name.
This bugs me all the time. Ethernet is serverless. Minesweeper is serverless. AWS Lambda is quite serverful, you're just not allowed to get a shell on that server.
I believe "serverless" in this sense means "like AWS lambda". Theoretically you upload some small scripts and they're executed on-demand out of a big resource pool, rather than you paying for/running an entire server yourself.
It seems like a horrible way to build a system with any significant level of complexity, but idk maybe it makes sense for very rarely used and light routes?
FastHTML requires apsw (SQLite wrapper) even if you don't use it.
We already compiled apsw to WASIX but it also requires publishing a new version of Python to Wasmer (with sqlite dynamically linked instead of statically linked).
We will release a new Python version by the end of this week / beginning of next one, so by then FastHTML should fully work in Wasmer! (both the runtime and Edge)
JupyterLite also does this. It uses local storage and a Pyodide kernel (Python on WASM). It has a special version of pip, and WASM versions of a lot of libraries which usually use native code (numpy etc). Super impressive.
Philosophically speaking I believe we should not require a special version of pip to install packages, nor a "lite" version of Jupyter to run in WebAssembly.
We should be able to run Jupyter fully within the Wasmer ecosystem without requiring any changes on the package (to run either in the browser or the server).
I’ve been looking at using lua for something like this: basically, users will be able to program robots in my lab (biotech) to do things, and I need a scripting language I can easily embed and control the runtime of in the larger system.
Lua is theoretically better in… almost every way, except everyone in bio uses Python. So Python could allow easier modification of LLM-generated scripts (I'm not worried about the libraries because I mostly want to limit them: the scripts are mainly just to run robots, and you can have them webhook out if you need complicated stuff).
My question would be: would running a python sandbox vs a lua sandbox actually be appreciably better? Not sure yet, but will have to investigate with this new package (since it has Go bindings!)
Curious given you looked at both why you considered Lua to be better. I'd like to use Lua to teach freshmen and I need some arguments as to why it's better than Python.
Much better embeddability, and it's much easier to embed safely. It doesn't require WASM compilation; it can just be raw C, and Lua can directly integrate with host functions and vice versa - something even these WASM implementations struggle with.
I am so excited about Python on the edge supported by WASM, because I used Python on Cloudflare Workers and there are so many limitations; only simple pure-Python code is supported.
The sandboxing benefits are real, especially for multi-tenant environments where you can't trust user code. Performance is still going to be hit-or-miss depending on the workload.
I'm not sure I understand correctly: is it a new serverless offering competing with the likes of Vercel and Fly.io, but with a different technology and pricing strategy? And does the WASM container mean that I can deploy my Streamlit or FastAPI ETL apps without the Docker overhead or the slowness of Streamlit Cloud?
Would it be possible to make it work on iOS or Android? I've always missed better support for Python on mobile. In the past I used PythonKit to rapidly prototype and interop with Swift, but it had a limited set of modules. I wish I could use this in React Native for interop between JS and Python.
> Now, you can run any kind of Python API server, powered by fastapi, django, flask, or starlette, connected to a MySQL database automatically when needed
I assume this is targeting the standalone WebAssembly use case, we're not...running MySQL in browsers right?
Yeah, when I see these kinds of headlines about Python, I'm always left wondering what they mean by "fast". In this case, "fast" means "still slower than Python usually is".
Wouldn't it be better to have sandboxing built directly into CPython? Why is there no such thing already included in CPython? Or maybe some way to create a limited, sandboxed venv?
Does your solution support interop between modules written in different languages? I would love to be able to pass POD objects between Python and JS inside the same runtime.
For a backend project in Java, I use Jep for Python interoperability and to make use of the Python ecosystem. It gives me a "non-restricted" Python to have in my Java code, something I'm quite happy with. Wondering how this compares to that.
Since LLMs have made me so lazy that I never bother to search or read on my own, can someone tell me whether I can use uv as my project management tool with wasmer? What's the story here?
I simply CANNOT go back to use packages without uv, it would be unthinkable to me.
Actually, now that I think of it, my laziness might have started when I learned perl 30 years ago.
thanks. it'd be great to have a quick tutorial on doing so.
this is close to my dream of creating Frankenstein apps with the web platform instead of graal :)
I've been trying to find a robust, reliable and easy way to run a Python process in WebAssembly (outside of a browser) for a few years.