
This is deterministic: it validates the response with a JSON Schema validator and refuses to pass a non-conforming response on to LLM inference.

I can't guarantee that behavior will remain the same any more than I can for any other software, but all of this happens before the LLM is even involved.

> The whole point of it is, whichever LLM you're using is already too dumb to not trip when lacing its own shoes. Why you'd trust it to reliably and properly parse input badly described by a terrible format is beyond me.

You are describing why MCP supports JSON Schema. It requires parsing & validating the input using deterministic software, not LLMs.
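For concreteness, here's a minimal sketch of that kind of deterministic check in Python using the jsonschema package (the schema and payload are invented for illustration):

    from jsonschema import validate, ValidationError

    output_schema = {
        "type": "object",
        "properties": {"temperature": {"type": "number"}},
        "required": ["temperature"],
    }

    try:
        # validate() either passes or raises -- no LLM involved anywhere.
        validate(instance={"temperature": "warm"}, schema=output_schema)
    except ValidationError as e:
        # Surface an error instead of handing a malformed payload to the model.
        print(f"tool result rejected: {e.message}")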



> This is deterministic: it validates the response with a JSON Schema validator and refuses to pass a non-conforming response on to LLM inference.

No. It is not. You are still misunderstanding how this works. It is "choosing" to pass this to a validator or some other tool, _for now_. As a matter of pure statistics, on some run at some point in the future it simply will not do this.

It is inevitable.


I'd encourage you to read the MCP specification: https://modelcontextprotocol.io/specification/2025-06-18/ser...

Or write a simple MCP server and a client that uses it. FastMCP is easy: https://gofastmcp.com/getting-started/quickstart
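For example, a minimal FastMCP server looks roughly like this (adapted from the quickstart above; FastMCP derives the tool's input JSON Schema from the type hints):

    from fastmcp import FastMCP

    mcp = FastMCP("demo")

    @mcp.tool()
    def add(a: int, b: int) -> int:
        """Add two integers."""
        return a + b

    if __name__ == "__main__":
        mcp.run()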

You are quite wrong. The LLM "chooses" to use a tool, but the input (provided by the LLM) is validated against the JSON Schema by the server, and the output is validated by the client (Claude Code). The output is not provided back to the LLM if it does not comply with the JSON Schema; instead, an error is surfaced.
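Schematically, the client-side check amounts to something like this (names are illustrative, not Claude Code internals):

    import json

    from jsonschema import validate, ValidationError

    def handle_tool_result(result: dict, output_schema: dict) -> str:
        """Return what the model gets to see: the payload, or an error string."""
        try:
            validate(instance=result, schema=output_schema)
        except ValidationError as e:
            return f"Tool error: output failed schema validation: {e.message}"
        return json.dumps(result)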


> The LLM "chooses" to use a tool

I think the others are trying to point out that, statistically speaking, in at least one run the LLM might do something other than choose the correct tool; i.e., in 1 out of (say) 1 million runs it might do something else.


No, the discussion is about whether validation is certain to happen when the LLM emits something the frontend recognizes as a tool request and the frontend calls a tool on its behalf, not about whether the LLM can choose not to make a tool call at all.

The question is whether, having observed Claude Code validate a tool response before handing it back to the LLM, you can count on that validation happening on future calls, not whether you can count on the LLM calling a tool in a similar situation.


Why do you think anything you said contradicts what I'm saying? I promise you I'm probably far more experienced in this field than you are.

> The LLM "chooses" to use a tool

Take a minute to just repeat this a few times.


MCP requires that servers providing tools deterministically validate tool inputs and outputs against the schema.

LLMs cannot decide to skip this validation. They can only decide not to call the tool.
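Schematically, the server side looks like this (an illustrative pseudo-dispatcher, not actual MCP library code; the return value loosely follows the shape of MCP's CallToolResult):

    from jsonschema import validate, ValidationError

    ADD_INPUT_SCHEMA = {
        "type": "object",
        "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
        "required": ["a", "b"],
    }

    def dispatch_add(arguments: dict) -> dict:
        try:
            # Runs before the tool function; the model cannot opt out of it.
            validate(instance=arguments, schema=ADD_INPUT_SCHEMA)
        except ValidationError as e:
            return {"isError": True,
                    "content": [{"type": "text", "text": e.message}]}
        return {"isError": False,
                "content": [{"type": "text",
                             "text": str(arguments["a"] + arguments["b"])}]}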

So is your criticism that MCP doesn't specify if and when tools are called? If so, then you are essentially asking for a massive expansion of MCP's scope, turning it into an orchestration or workflow platform.


The LLM chooses to call a tool; it doesn't choose anything about how the frontend handles that call. Between the LLM making a tool request and the frontend calling the LLM again, the frontend does all of its own processing of the response (including any validation) and maps the result into a new prompt.


> It is "choosing" to pass this to a validator or some other tool, _for now_.

No, it's not. The validation happens in the frontend before the LLM ever sees the response. There is no way for the LLM to choose anything about what happens there.

The cool thing about having coded a basic ReAct pattern implementation (before MCP, or even models trained on a specific prompt format for tool calls, was a thing; none of that changes the basic pattern) is that it gives you a pretty visceral understanding of what is going on here. All that's changed since is per-model standardization of the prompt and response patterns on the frontend<->LLM side and, with MCP, of the protocol on the frontend<->tool side.
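The basic loop is roughly this (call_llm and run_tool are stand-ins, not a real API):

    def agent_loop(user_message: str) -> str:
        messages = [{"role": "user", "content": user_message}]
        while True:
            reply = call_llm(messages)           # model text in, model text out
            if reply.tool_call is None:
                return reply.text                # no tool request: done
            # Everything below is ordinary code the model has no control over:
            result = run_tool(reply.tool_call)   # parse, validate, execute
            messages.append({"role": "tool", "content": result})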


Claude Code isn't a pure LLM; it's a regular software program that calls out to an LLM via an API. The LLM is not making any decisions about validation.



