
It’s easy to pick on the logic that failed, for which you have a very detailed and well-written post-mortem.

Yet you fail to acknowledge that the remaining 99.99999% of the logic powering Cloudflare works flawlessly.

Also, hindsight is 20/20


You are less critical of CF than they are of themselves.

A system that is 99.99999% flawless can still be unusable.

optimism bias: 100/100


A website was set up to inform and facilitate contacting MEPs: https://fightchatcontrol.eu

Additionally, keep in mind that controversial laws or proposals, at least in France, are often announced or passed during summer vacation when people are away, limiting scrutiny and attention.

Expect to hear more outrage come September


Looks really neat! Starred on GitHub!

If you have heard of pg-boss [1], how does Sidequest compare to it? I’m about to embark on some « jobification » of some flows and I’d love to have your opinion (possibly biased, but still)!

[1] https://github.com/timgit/pg-boss


Thanks for the question! I just checked out pg-boss. Solid library if you're fully on Postgres. Sidequest.js takes a broader and more flexible approach. Key differences:

Database agnostic: Sidequest isn't tied to Postgres. It also works with MySQL, MongoDB, and SQLite, which helps if your stack isn’t Postgres-based.

Job isolation: Jobs run in worker threads, so heavy tasks won’t block your main Node.js process. Express APIs stay responsive (see the sketch after this list).

Distributed coordination: Designed to run across multiple instances. One node picks up the job, and if it fails, another can retry. This is built-in.

Built-in dashboard: Includes a web UI to inspect, retry, or cancel jobs.

More than queues: Supports cron jobs, uniqueness constraints, per-queue concurrency, and configuration. Some of this overlaps with pg-boss, but the intent behind Sidequest is to provide a complete solution for job processing.

If you just need simple queues in a Postgres-only setup, pg-boss is great. If you want more flexibility, tooling, and backend options, Sidequest may be a better fit.
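
To make the job-isolation point above concrete, this is the generic Node.js worker_threads pattern it relies on. This is an illustrative sketch, not Sidequest's actual API, and the worker file name is a placeholder:

    // Generic worker_threads pattern: run a heavy job off the main thread so
    // the HTTP event loop stays responsive. Illustrative only, not Sidequest's API.
    import { Worker } from "node:worker_threads";

    export function runJobInWorker(jobData: unknown): Promise<unknown> {
      return new Promise((resolve, reject) => {
        // job-worker.js (a placeholder) does the CPU-heavy work and reports
        // back via parentPort.postMessage(result)
        const worker = new Worker(new URL("./job-worker.js", import.meta.url), {
          workerData: jobData,
        });
        worker.once("message", resolve);
        worker.once("error", reject);
      });
    }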


Thanks for the thorough reply!

I'm all in with Postgres, but the job isolation + built-in dashboard seem really appealing. I'll definitely give it a try!

Keep up the great work, love to see such a high-quality codebase / documentation / tooling!


We do something similar that we call « Office a la Zoom »:

Twice a week, the standup is extended by an hour, from 15 min to 1h15.

People are welcome to jump in and out of that open Zoom, which acts as a water-cooler corner: any topic goes, from work to personal hobbies, etc.

We’re fully remote (US / EMEA / APAC)


Brilliant work!

From another fellow « pdf’er »


Wow, that really means a lot coming from you! I’ve come across SimplePDF quite a few times while researching! Super impressed by what you’ve built. Thanks for the kind words, fellow PDF’er


It’s both, in my opinion, and discussions can stem from the linked article

Many come to HN also for the comments


> Further, one of the issues with remote servers is tenancy

Excellent write-up and understanding of the current state of MCP

I’ve been waiting for someone to point it out. This is in my opinion the biggest limitation of the current spec.

What is needed is a tool invocation context that is provided at tool invocation time.

Such a tool invocation context allows passing information for authorization and authentication, but also for tracing the original “requester”: think of it as “tool invoked on behalf of user identity”

This of course implies an upstream authnz layer that feeds these details and more.
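
To illustrate, such a context could look roughly like the sketch below. None of these field names exist in the MCP spec today; they are purely hypothetical:

    // Hypothetical per-invocation context; none of these names come from the MCP spec.
    interface ToolInvocationContext {
      // "Tool invoked on behalf of user identity"
      onBehalfOf: {
        userId: string;
        tenantId: string;
      };
      // Short-lived credential minted by the upstream authnz layer
      accessToken: string;
      // Correlation id so the call can be traced back to the original requester
      traceId: string;
    }

    // A tool call would then carry the context alongside its arguments
    interface ToolInvocation {
      toolName: string;
      arguments: Record<string, unknown>;
      context: ToolInvocationContext;
    }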

If you’re interested in this topic, my email is in my bio: I’m one of the architects of our multi-tenant tool-calling implementation that we’ve been running in production for the past year with enterprise customers, where authnz and auditability are key requirements.


The way we've solved this in our MCP gateway (OSS) is that the user first needs to authenticate against our gateway, e.g. by obtaining a valid JWT from their identity provider, which will be validated using JWKS. Now when they use a tool, they must send their JWT, so the LLM always acts on their behalf. This supports multiple tenants out of the box. (https://wundergraph.com/mcp-gateway)
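
For illustration, JWKS-based validation with the jose npm package looks roughly like this. This is a generic sketch, not the gateway's actual code; the issuer, audience, and JWKS URL are placeholders:

    // Generic JWKS-based JWT validation with the "jose" package; illustrative
    // only. Issuer, audience, and JWKS URL are placeholders.
    import { createRemoteJWKSet, jwtVerify } from "jose";

    const jwks = createRemoteJWKSet(
      new URL("https://idp.example.com/.well-known/jwks.json"),
    );

    export async function authenticate(authorizationHeader: string) {
      const token = authorizationHeader.replace(/^Bearer /, "");
      // Verifies the signature against the IdP's published keys, plus exp/iss/aud
      const { payload } = await jwtVerify(token, jwks, {
        issuer: "https://idp.example.com/",
        audience: "mcp-gateway",
      });
      // Tool calls are then executed on behalf of payload.sub (the user)
      return { userId: payload.sub, claims: payload };
    }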


Is this really hard to code?

I mean, converting a tool-less LLM into a tool-using LLM is a few hundred lines of code, and then you can plug in all your tools, with whatever context you want.
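
For example, the core of such a loop is only a handful of lines. This is a provider-agnostic sketch: callModel stands in for whatever chat-completion client you use, and the tool registry and message shapes are illustrative:

    // Provider-agnostic sketch of a tool-calling loop. `callModel` is whatever
    // chat client you use; the tool registry and message shapes are illustrative.
    type Message = { role: "user" | "assistant" | "tool"; content: string };
    type ToolCall = { name: string; arguments: Record<string, unknown> };
    type ModelReply = { text?: string; toolCall?: ToolCall };

    const tools: Record<string, (args: Record<string, unknown>) => Promise<string>> = {
      // Plug in your own tools here, with whichever context you want closed over
      get_weather: async (args) => `Sunny in ${String(args.city)}`,
    };

    export async function run(
      prompt: string,
      callModel: (messages: Message[]) => Promise<ModelReply>,
    ): Promise<string> {
      const messages: Message[] = [{ role: "user", content: prompt }];
      for (;;) {
        const reply = await callModel(messages);
        if (!reply.toolCall) return reply.text ?? "";
        const tool = tools[reply.toolCall.name];
        if (!tool) throw new Error(`Unknown tool: ${reply.toolCall.name}`);
        // Dispatch the requested tool and feed its result back to the model
        const result = await tool(reply.toolCall.arguments);
        messages.push({ role: "tool", content: result });
      }
    }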


Indeed very easy to code!

My point is about the need for a spec for this mechanism: without one, every company / org will roll out their own, and we’ll end up with 500 flavors of the same concept.

That’s where MCP shines: tool calling and tool discovery are already 1.5 years old (an eternity in AI land).

The MCP spec ensures that we can all focus on solving problems with tool calling rather than wasting time cobbling together services that are not interoperable (because they were developed without a common spec / standard)


Sema4.ai | Fullstack Engineer | Remote (EMEA/India) | Full-time | https://sema4.ai

We're building an enterprise platform to develop, deploy, and run AI agents at scale — from local dev to production. Founded by ex-Hortonworks, Docker, and Cloudera folks, Sema4.ai is a well-funded startup with deep infra and OSS roots.

We're looking for a fullstack engineer to join our platform team. You'll help shape the core infrastructure that powers agent-based workflows for enterprises.

Technologies we use include:

- K8s, Helm, OTel

- Node.js, React

- AWS

- Snowflake SPCS

What we're looking for:

- Strong builder mindset — pragmatic, self-driven

- Comfortable in startup environments

- Clear communicator in distributed teams

- Bonus: experience with observability / infra / developer platforms

If you're interested, reach out to ben@sema4.ai with a short intro and (ideally) your GitHub or past projects.


Really surprised by some of the comments here: a mix of hatred, ignorance and lack of foresight

This is a great way to:

- Have a fail-safe / fallback in case the software does not work as intended

- Allow them to keep training on real data (as opposed to synthetic data), adding to their already immense dataset


I don’t want to sit in a taxi that’s remote controlled by someone who has low stakes (their body not in harm’s way). There will be an accident rate attached to the self-driving system, so I can make an informed decision, but there won’t be such a number for the person taking over remotely.


Really? You're surprised that I don't want to get in a vehicle driven by Greg in Florida while he's scratching his nuts? ... Or Javinder in India while he's got a side eye on how his cricket bet is going?

Count me out. This is a laughable version of the promised vision.


You'd think that if they can't get a reliable self driving car given how much training data they must currently have, perhaps they need to try a different strategy.


More compute.


Yes I learned that too a while ago. If you say anything slightly positive about Tesla, you'll be downvoted into oblivion. I can't wait for the day for this US/Western tribalism to cool down and we can have substantive discussions again.


I’ll try:

Returning a “Result” with a discriminated union on “success” is far superior to throwing in TypeScript, in my experience:

    { success: false, reason: "user_already_exists" } | { success: true, data: { user_id: string } }
- By design you are forced to “deal” with the sad path: in order to get to the data, you must handle the error state

- Throwing is not type-safe: you cannot know at build time whether you are handling all thrown errors
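
To make this concrete, here is a minimal sketch of the pattern (the names and error reasons are illustrative):

    // Minimal sketch of the Result pattern; names and reasons are illustrative.
    type CreateUserResult =
      | { success: false; reason: "user_already_exists" | "invalid_email" }
      | { success: true; data: { user_id: string } };

    async function createUser(email: string): Promise<CreateUserResult> {
      // Stub implementation, purely for illustration
      if (!email.includes("@")) return { success: false, reason: "invalid_email" };
      return { success: true, data: { user_id: "u_123" } };
    }

    async function handleSignup(email: string): Promise<string> {
      const result = await createUser(email);
      if (result.success) {
        // `result.data` only exists on this branch of the union
        return `Created user ${result.data.user_id}`;
      }
      // Here `result` is narrowed to the error variant; with `noImplicitReturns`
      // (or a `default: never` guard) the compiler flags any unhandled reason.
      switch (result.reason) {
        case "user_already_exists":
          return "That email is already registered";
        case "invalid_email":
          return "Please check the email address";
      }
    }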

