I have observed this too, mostly for content that changes in a symlinked directory, but also in general. I'm on Fedora Silverblue running Zed Preview as a Flatpak. It works great in most other ways (snappy and beautiful), but the sandboxed environment adds some extra challenges.
It's not just a nice-to-have; it's a hard requirement. If I can't launch agents within a sandboxed container, I can't use your product - by security policy.
The Zed spirit is definitely to prefer a platform-native solution.
You're right that we may be able to get rid of our WGSL implementation, and instead use the HLSL one via SPIR-V. But also, at some point we plan to port Zed to run in a web browser, and will likely build on WebGPU, where WGSL is the native shading language. Honestly, we don't change our graphics primitives that frequently, so the cost of having the three implementations going forward isn't that terrible. We definitely would not use MoltenVK on macOS, vs just using Metal directly.
Good point that we should publish a symbol server.
> But also, at some point we plan to port Zed to run in a web browser, and will likely build on WebGPU, where WGSL is the native shading language.
Except that everything has effectively converged on HLSL (via Slang, which is essentially HLSL++) and SPIR-V (coming via Shader Model 7).
So, your pipelines, shader language, and IR code would all look mostly the same between Windows and Linux if you threw in with DX12 (which looks much more like Vulkan) rather than DX11. And you'd get the ability to multi-thread through the GPU subsystem via DX12/Vulkan.
And, to be fair, we've seen that MoltenVK gets you about 80-90% of native Metal performance on macOS, so you wouldn't have to maintain a Metal backend anymore.
And you'd gain the ability to use all the standard GPU debugging tools from Microsoft, NVIDIA, and AMD rather than just RenderDoc.
You'd abandon all this for some mythical future compatibility with WebGPU, which has deployment counts you can measure with a thimble?
Did you consider using wgpu instead of writing a new DX11 renderer? It has Metal, Vulkan, and DX12 backends, so it could have served as a single renderer for macOS, Windows, and Linux. (And WebGPU in the future.)
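For context, wgpu does expose that single code path, with the backend chosen when the instance is created. A minimal sketch (assuming roughly the wgpu 0.20 API plus the `pollster` crate to block on the async adapter request; signatures move around between wgpu releases, and this is purely illustrative, not Zed's renderer):

```rust
// Illustrative only: ask wgpu for an adapter, letting it choose the native
// backend per platform (Metal on macOS, Vulkan on Linux, DX12 on Windows).
fn pick_adapter() -> Option<wgpu::Adapter> {
    let instance = wgpu::Instance::new(wgpu::InstanceDescriptor {
        backends: wgpu::Backends::PRIMARY, // Vulkan | Metal | DX12 | browser WebGPU
        ..Default::default()
    });
    // Block on the async request for brevity; a real renderer would await it.
    pollster::block_on(instance.request_adapter(&wgpu::RequestAdapterOptions {
        power_preference: wgpu::PowerPreference::HighPerformance,
        compatible_surface: None, // normally the window surface you'll present to
        force_fallback_adapter: false,
    }))
}
```

The same adapter/device/queue handles then drive WGSL shaders on every platform, which is the appeal of the single-renderer approach described above.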
Isaac, the email you sent us (long after your internship ended) when Wasmtime first landed support for the Wasm Component Model was actually very helpful! We were considering going down the path of embedding V8 and doing JS extensions. I'm really glad we ended up going all in on Wasmtime and components; it's an awesome technology.
Yes, Wasm components rock! I'm amazed to see how far you've taken Wasm and run with it. I'm at a new email address now; apologies if I've missed any mail. We should catch up sometime; I'll be in SF this summer, and I might also visit a friend in Fort Collins, CO. (Throwing distance from Boulder :P)
You can still include Text Threads as context in the inline assist prompt with `@thread "name of thread"`, or by using the `+` button. And it should suggest the active text thread for you, so it's one click. Let us know if that isn't working; we wanted to preserve the old workflow (very explicit context curation) for people who enjoyed the previous assistant panel.
Yeah, we plan to revisit the collaboration features; it was painful, but we decided we needed to pause work on them while we built out some more highly requested functionality. We still have big plans for improving team collaboration!
It would be interesting to (optionally) make the AI agent more like an additional collaborative user, sharing the chat between users, allowing collaborative edits to prompts, etc.
The long game of agentic AI seems to be giving agents a working environment that is fast, that accurately (and safely!) tracks changes, and that enables humans to observe their edit history and thinking process. Zed's collaborative features seem serendipitously suited to this role.
Not sure what your budget looks like, but maybe it's time to look for a new developer, if that's feasible? So you don't neglect a feature that's already in production and broken.
I used `git add -p` until very recently (basically, until we built this feature in Zed). If you're using `add -p` and you notice a problem that you want to fix before committing, you need to go find that same bit of code in your editor, edit it, then restart your `add -p`. If you had chosen not to stage some preceding hunks, you need to skip over those again. Also, an editor can just render code much more readably than the git CLI can.
Just to clarify, you can run as many LSPs for a given file type as you want.
Common features like completions, diagnostics, and auto-formatting will multiplex to all of the LSPs.
Admittedly, there are certain features that currently only use one LSP: inlay hints and document highlights are examples. For which LSP features is multi-server support important to you? It shouldn't be too hard to generalize.
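To make "multiplex" concrete, here's a toy sketch of the fan-out idea (the `LanguageServer` trait and `Completion` type are invented for illustration and are not Zed's actual API): every server registered for the file type receives the same request, and the results are merged.

```rust
// Illustrative only: fan one request out to every language server configured
// for a file type and concatenate their answers.
struct Completion {
    label: String,
}

trait LanguageServer {
    fn completions(&self, path: &str, offset: usize) -> Vec<Completion>;
}

fn multiplexed_completions(
    servers: &[Box<dyn LanguageServer>],
    path: &str,
    offset: usize,
) -> Vec<Completion> {
    servers
        .iter()
        .flat_map(|server| server.completions(path, offset))
        .collect()
}
```

Features like inlay hints currently skip this merging step and just pick one server, which is the gap mentioned above.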