Check out superconductor.dev (I’m building it), if you want live app previews, docker-in-docker functionality, multiple agents in one mobile app, and more.
> AFAICT, there’s no service that lets me: give a prompt, write the code, spin up all this infra, run Playwright, handle database migrations, and let me manually poke at the system. We approximate this with GitHub Actions, but that doesn’t help with manual verification or DB work.
- set up your environment any way you want, including using docker containers
- run any number of Claude Code, Codex, Gemini, Amp, or OpenCode agents on a prompt, or "ticket" (we can add Cursor CLI too)
- each ticket implementation has a fully running "app preview", which you can use just like your locally running setup. Your running web app is even shown in a pane right next to the chat and diff
- chat with the agent inside of a ticket implementation, and when you're happy, submit to GitHub
(agents can even take screenshots)
happy to onboard you if that sounds interesting, just ping me at sergey@superconductor.dev
My org has built internal tooling that approximates this. It's incredibly valuable from a manual-testing perspective, though we haven't managed to get the agent part working well; app startup times (10+ min) make iterating hard.
Do you have customers who have faced/solved this problem? If so, how did they do it? It seems like a killer for the approach.
Our foundational design value was compute instance startup speed. We've made some design decisions and evaluated several "neocloud" providers with this goal in mind.
Currently, the time from launching an agent to that agent being able to run tests in our Rails docker-compose environment (and to the live app preview running) is about 30 seconds. If that agent finishes its work and goes to sleep, and hours later you come back to send a message, it'll wake up in about the same time.
(And, of course, you can launch many agents at once -- they're all going to be ready at roughly the same time.)
Will email! Your homepage doesn't make the environment part clear - it reads like it's akin to Cursor's multi-agent mode (which I think you had first, FWIW).
This is exactly why we built superconductor.dev, which has live app preview for each agent. We support Claude Code as well as Gemini, Codex, Amp. If you want to check it out just mention HN in your signup form and I’ll prioritize you :)
Not yet! We haven't shared them publicly because our internal dataset is super biased. Keep your eyes peeled, though - they'll be coming out in the next few weeks :)
If you find yourself in Madrid, you will not regret making time for the Prado. Whereas some museums, like the Hermitage, go for quantity, the Prado seems to pride itself on extreme curation. It did not feel like a huge museum (at least the tour of it that I did), but every single room had at least one true masterpiece.
I agree. Do also visit the Reina Sofia[1], mostly dedicated to (modern) Spanish art.
It's your chance to experience Picasso's Guernica[2] in person. To me it was one of the most moving and impressive, while also disturbing, exhibits I've ever seen in any museum.
This is a toy example of the kind of problem that the field of Computer Vision is actively working on: object detection. In a (tiny) nutshell, our best answer for general images and objects is:
1) Instead of using the full color pixel image, use an "edge image" with some simple additional normalizations. If color is important, do this per color channel.
2) Create a dataset with as many cropped examples of the target object as you can find (mechanical turk is useful for annotating large datasets); every other crop of every image is a negative example.
3) Train a classifier (SVM if you want it to work, neural network if you're so inclined) using this dataset.
4) Apply the classifier to all subwindows of a new image to generate hypotheses of the target object location. This can be sped up in various ways, but this is the basic idea.
5) Post-process the hypotheses using context (this can be as simple as keeping only the most confident hypothesis within a neighborhood, i.e. non-maximum suppression). A rough code sketch of steps 1)-5) follows below.
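Here's a minimal sketch of that pipeline, under some assumptions: it uses scikit-image's HOG features as the "edge image" and a linear SVM from scikit-learn, and `positives`/`negatives` are placeholder names for lists of same-size grayscale crops you'd have to build yourself (step 2). It scans a single scale; a real detector would also scan an image pyramid.

    # Sliding-window object detection: HOG edge features + linear SVM + NMS.
    # Assumes grayscale numpy-array crops in `positives` and `negatives`.
    import numpy as np
    from skimage.feature import hog
    from skimage.transform import resize
    from sklearn.svm import LinearSVC

    WINDOW = (64, 64)   # fixed detection-window size (rows, cols)
    STEP = 8            # sliding-window stride in pixels

    def features(patch):
        # Step 1: edge-based representation (HOG) instead of raw pixels.
        return hog(resize(patch, WINDOW), orientations=9,
                   pixels_per_cell=(8, 8), cells_per_block=(2, 2))

    def train(positives, negatives):
        # Steps 2-3: cropped positives vs. everything-else negatives, linear SVM.
        X = np.array([features(p) for p in positives + negatives])
        y = np.array([1] * len(positives) + [0] * len(negatives))
        clf = LinearSVC(C=0.01)
        clf.fit(X, y)
        return clf

    def detect(clf, image, threshold=0.5):
        # Step 4: score every subwindow of the image.
        hypotheses = []
        H, W = image.shape
        for r in range(0, H - WINDOW[0] + 1, STEP):
            for c in range(0, W - WINDOW[1] + 1, STEP):
                patch = image[r:r + WINDOW[0], c:c + WINDOW[1]]
                score = clf.decision_function([features(patch)])[0]
                if score > threshold:
                    hypotheses.append((score, r, c))
        # Step 5: greedy non-maximum suppression -- keep only the most
        # confident hypothesis within each neighborhood.
        hypotheses.sort(reverse=True)
        kept = []
        for score, r, c in hypotheses:
            if all(abs(r - kr) > WINDOW[0] // 2 or abs(c - kc) > WINDOW[1] // 2
                   for _, kr, kc in kept):
                kept.append((score, r, c))
        return kept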
If you're interested in object detection, an excellent summary of the last decade of research is due to Kristen Grauman and Bastian Leibe: http://www.morganclaypool.com/doi/abs/10.2200/S00332ED1V01Y2... (do some googling if you don't have access to this particular PDF).