I set up spec-kit first, then updated its templates to tell it to use beads to track features and all that instead of writing markdown files. If nothing else, this is a quality-of-life improvement for me, because recent LLMs seem to have an intense penchant for writing one or more markdown files per large task. Ending up with loads of markdown poop feels like the new `.DS_Store`, but harder to `.gitignore` because they'll name the files whatever floats their boat.
I usually just use a commit agent that has as one of its instructions to review various aspects of the prospective commit, including telling it to consolidate any documentation and remove documentation of completed work except where it should be rolled into lasting documentation of architecture or features. I've not rolled it out in all my projects yet, but for the ones I do, it's gotten rid of the excess files.
First I've heard of spec-kit; it looks very promising and I'm interested in trying it. My approach is to combine beads with superpowers skills:
https://github.com/obra/superpowers I'm wondering how it compares to this. Gonna give it a try, thanks!
Probably. But it is important to understand that this will matter to fewer and fewer people over time.
Because they don't edit the data to make a new objective truth that survives scrutiny, they edit the data to demonstrate their power over data.
People referring to the archived data will simply be denied access to the conversation moving forward; "our opponents keep fighting old battles when the world has moved on".
It works. And it will continue to work shockingly well even when the underlying phenomenon asserts itself in ways that are predicted by the archive data. Look at how Florida is torn between climate change denial and the actual reality of sea-level rises affecting the Keys.
And there are plenty of people who can still be liked, or appreciated at least, who also were racist and misogynist, or whatever other moral defect you like. It's okay to show affection for someone who didn't perfectly fit the strictures of what a specific type of virtue signalling labels as correct.
On the contrary. My worldview has been and still is shaped by an ever changing learning curve of the world's nature. That includes being flexible enough to show some affection even for that which doesn't fit rigorous dogmas of conduct. Can you say the same about your labels?
To be fair, I absolutely would take ocean liners for this purpose if such a thing existed.
The closest you can get on this front is basically seasonal route switches for cruise liners. The thing is, cruise lines price gouge on WiFi, so I'm not really able to work while taking the slow route, and I'm also having to pay for food, lodging, etc.
The latency itself / limited freedom for a week or two, I don't mind. But it's the other expenditures and tradeoffs that are rather hard to stomach.
I mean this nicely: I don't really feel like anything on the landing page besides one paragraph actually helps readers understand what problem you're trying to solve.
I think it'd be useful to focus on this more:
> Voiden turns your API definitions and docs into dynamic, purpose-built interfaces — no fixed UI, no rigid templates. Everything is composed in one place and rendered down to Markdown, tailored to the API it serves.
Hmm, I appreciate this, for real. We'll have some more discussions on how to optimize the landing page.
Curious, how do you feel about the tagline stuff?
Happy to learn anything else you'd be willing to add here. We're all for making it crystal clear: no fluff, just making devs' lives easier, which also means not wasting their time figuring out whether they need it or not.
Not GP, but the tagline above the fold doesn't say anything about the actual value prop. Modular, extensions, etc. are implementation details. Git-native? I had an "idea" of what that meant, but had to scroll down to confirm.
Not sure how else to reply to this one, but it is the same thing as in its definition. Think less client-server, think more Postman, but without a gazillion tabs and with docs in the same place as your API endpoint definition, headers, body, etc.
I just downloaded it and tried it and I still don't understand what it does.
It seems to be some kind of wysiwyg editor? With elements specific to API docs?
But then why does that make it an "API client". I'm guessing "API" specifically means HTTP API here. But "client" is completely throwing me. An API client is just software that talks to an API. So what's with the wysiwyg stuff?
The Jupyter comparison isn't completely off base if you'd like to rationalise it that way. The similarity would be blending the code and the docs in a single file, where you can then also execute something.
By definition, API Client is a devtool that makes it easier for devs (& co.) to design, test, document, and debug APIs. If it's confusing, we can take it to Postman, but it's an industry standard, been that way for a long while.
> By definition, API Client is a devtool that makes it easier for devs (& co.) to design, test, document, and debug APIs. If it's confusing, we can take it to Postman, but it's an industry standard, been that way for a long while.
That's not the definition of "API client" I'm familiar with. In fact it feels like a very specific definition of "API client" - which is a broad term that I am familiar with.
(Why does it sometimes feel like I'm not getting the memos everyone else is getting? It's like when a new job description for an old job suddenly appears and everyone pretends that's what it's always been called!)
Maybe you thought of an SDK-like client/wrapper for calling certain APIs, so it sounds natural to call it an API client?
Here you can check a list of currently OSS API clients (competitors to Voiden, the tool I posted about): https://github.com/stepci/awesome-api-clients We'll join the list soon after we go OSS too. :)
I guess "API client" is often shorthand for "API client library", but that seems less of a stretch than using it to mean "app for calling, testing and documenting an API". A quick GitHub search seems to indicate that the former usage is more common in any case.
It's an expression, meaning it's not just some git sync workaround: you can actually use git as you would in your terminal, respecting all of its commands and conventions.
I work on one of the largest Haskell codebases in the world that I know of (https://mercury.com/). We're in the ballpark of 1.5 million lines of proprietary code built and deployed as effectively a single executable, and of course if you included open source libraries and stuff that we have built or depend on, it would be larger.
I can't really speak to your problem domain, but I feel like we do a lot with what we have. Most of our pain comes from compile times / linking taking longer than we'd prefer, but we invest a lot of energy and money improving that in a way that benefits the whole Haskell ecosystem.
Not sure what abstractions you are wondering about, though.
What I'm wondering about is how maintainable programs of that size are over time. That you got over a million lines says it is possible. How difficult is it, though? Abstractions are just code for whatever is needed to break your problem up between everyone without conflicts. How easy/hard is this?
For example, I like Python for small programs, but I found that around 10-50k LOC, Python is no longer workable: you will make a change not realizing that a function is used elsewhere, and because that code path isn't covered by tests you won't know about the breakage until you ship.
It’s highly scalable. Part of the reason compile times are a bit long is that the compiler is doing whole program analysis.
Most of the control flow in a Haskell program is encoded in the types. A “sum type” is a type that represents choices and they introduce new branches to your logic. The compiler can be configured to squawk at you if you miss any branches in your code (as long as you’re disciplined to be wary about catch-all pattern matches). This means that even at millions of lines you can get away with refactorings that change thousands of lines across many modules and be confident you haven’t missed anything.
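As a tiny sketch of what that looks like (the `PaymentStatus` type and `describe` function are hypothetical illustrations, not Mercury's actual code): with GHC's `-Wincomplete-patterns` warning enabled (and typically promoted to an error with `-Werror`), adding a constructor to a sum type flags every pattern match that doesn't handle it.

```haskell
{-# OPTIONS_GHC -Wincomplete-patterns #-}

-- A hypothetical sum type: each constructor is a branch the
-- compiler knows about.
data PaymentStatus
  = Pending
  | Settled
  | Refunded
  deriving (Show, Eq)

-- Every case is written out explicitly, with no catch-all.
-- If someone later adds a `Disputed` constructor, GHC warns
-- (or errors under -Werror) at every match site like this one,
-- which is what makes large cross-module refactors safe.
describe :: PaymentStatus -> String
describe Pending  = "waiting on the processor"
describe Settled  = "done"
describe Refunded = "money returned"

main :: IO ()
main = mapM_ (putStrLn . describe) [Pending, Settled, Refunded]
```

The discipline mentioned above matters: a `describe _ = "unknown"` catch-all would silence the warning and hide new branches from the compiler.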
You can do these things in C++ codebases as well, but I find the analysis tooling there has to build models, whereas in Haskell the types are much more direct. You get feedback faster.
We have a pretty limited set of abstractions that are used throughout. We mostly serve web requests, talk to a PostgreSQL database, communicate with 3rd-party systems with HTTP, and we're starting to use Temporal.io for queued-job type stuff over a homegrown queueing system that we used in the past.
One of the things you'll often hear as a critique levelled against Haskell developers is that we tend to overcomplicate things, but as an organization we skew very heavily towards favoring simple Haskell, at least at the interface level that other developers need to use to interact with a system.
So yeah, basically: Web Request -> Handler -> Do some DB queries -> Fire off some async work.
We also have risk analysis, cron jobs, batch processing systems that use the same DB and so forth.
We're starting to feel a little more pain around maybe not having enough abstraction though. Right now pretty much any developer can write SQL queries against any tables in the system, which makes it harder for other teams to evolve the schema sometimes.
For SQL, we use a library called esqueleto, which lets us write SQL in a typesafe way, and we can export fragments of SQL for other developers to join across tables in a way that's reusable:
    select $
      from $ \(p1 `InnerJoin` f `InnerJoin` p2) -> do
        on (p2 ^. PersonId ==. f ^. FollowFollowed)
        on (p1 ^. PersonId ==. f ^. FollowFollower)
        return (p1, f, p2)
which generates this SQL:
    SELECT P1.*, Follow.*, P2.*
    FROM Person AS P1
    INNER JOIN Follow ON P1.id = Follow.follower
    INNER JOIN Person AS P2 ON P2.id = Follow.followed
^ It's totally possible to make subqueries, join predicates, etc. reusable with esqueleto so that other teams get at data in a blessed way, but the struggle is mostly just that the other developers don't always know where to look for the utility so they end up reinventing it.
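A rough sketch of what a blessed, reusable fragment can look like, using the legacy esqueleto syntax from the snippet above (the entity definitions and the `followersJoin` name are hypothetical, not our actual code):

```haskell
{-# LANGUAGE EmptyDataDecls, FlexibleContexts, GADTs,
             GeneralizedNewtypeDeriving, MultiParamTypeClasses,
             OverloadedStrings, QuasiQuotes, TemplateHaskell,
             TypeFamilies #-}

import Database.Esqueleto
import Database.Persist.TH

-- Minimal entities matching the snippet above (sketch only).
share [mkPersist sqlSettings] [persistLowerCase|
Person
  name String
Follow
  follower PersonId
  followed PersonId
|]

-- The join from the snippet, exported by the data-owning team as a
-- reusable fragment so other teams compose it instead of re-deriving it.
followersJoin :: SqlQuery ( SqlExpr (Entity Person)
                          , SqlExpr (Entity Follow)
                          , SqlExpr (Entity Person) )
followersJoin =
  from $ \(p1 `InnerJoin` f `InnerJoin` p2) -> do
    on (p2 ^. PersonId ==. f ^. FollowFollowed)
    on (p1 ^. PersonId ==. f ^. FollowFollower)
    return (p1, f, p2)

-- A caller layering its own filter on top of the blessed join.
aliceFollows :: SqlQuery (SqlExpr (Entity Person))
aliceFollows = do
  (p1, _f, p2) <- followersJoin
  where_ (p1 ^. PersonName ==. val "alice")
  return p2
```

The point is that the fragment is just a `SqlQuery` value, so composition is ordinary function reuse; the schema-owning team can change the join's internals without breaking callers.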
In the end, I guess I'd assert that discoverability is the trickier component for developers currently.
I lost my 3 1/2 year old daughter to sudden illness about 10 months ago. Be gentle to yourself and your family. There will be times where you aren’t actively feeling the grief, but they pull you into theirs or vice versa. There will be times where your love and grief for your lost child will make it easy to forget to cherish the loved ones in front of you.
As you figure out how to live life from here, may you find a path forward that is healthy, loving, and beneficial for you and those you care about.
The writing has been on the wall for 5+ years that government bodies would eventually legislate this, even for casual observers. If these auto manufacturers as industry insiders couldn’t plan ahead to handle this outcome, it sounds like they might deserve to be unseated.
Politicians often change or reverse laws; should they plan based on the law or on consumers? Personally, I think it quite likely that all these government deadlines for ICE will not be met and that the changeover will take much longer than hoped. We don't even know if the manufacturers can make enough EVs in these time frames, and we still have hurdles for battery production and electricity distribution. All that energy in liquid form now has to go over wires, which are showing strain already. Imagine the entire vehicle fleet needing to feed off that same system; we need to put a ton of resources into the grid.
Your comment reminds me of the Big Three's attitude towards automobile emissions and safety regulations. They thought they could force the politicians to reverse those laws. And they were very wrong about that. Even when they got their business friendly allies in power, nope.
What's happening is that aspirations are meeting reality. If the table stakes were in fact "oh so much higher", people would be demanding that politicians do something about the infra. There is "oh so much more" to do besides just the cars: electrical grid capacity, fire departments at many more car crashes because of battery fires, at-home charging (but what about all those who park on the street), battery production and disposal; the list goes on. It's not just the car makers; we the people need to stop making them the only target. How come we don't hear about the other stuff on HN?
Point is, the politicians can set some year because that gets them votes, but as has been predicted and is playing out, things are very behind schedule and won't make the deadline.
Neither can the current fossil fuel industry. Either through explicit subsidies, or through implicit ones by allowing them to use the environment as their free sewer.
While that would work as an estimate of total square meters, applying that scale in a localized area would have too much of an impact on wind/currents, inversion, and the downstream effects of those changes.
First targets would be urban heat islands and airports. Second would be industrial and manufacturing facilities. Regulations and NIMBY would likely mean manufacturing and industrial sites would be quicker to adapt for this use.
The case for urban heat islands is pretty straightforward, but the application at airports would be to mitigate the heat trapping that takes place, especially overnight. Exhaust from flights results in more heat being trapped, similar to an urban heat island but due to atmospheric effects and not solely to surface materials and lack of greenspace.
I'm not really sure why you'd say that OpenAPI isn't a JSON Schema document: there are published JSON Schema files on the official OpenAPI website. See for example:
One using draft-04 of JSON Schema: https://spec.openapis.org/oas/3.0/schema/2024-10-18.html
One using the 2020-12 version of JSON Schema: https://spec.openapis.org/oas/3.2/schema/2025-09-17.html