Hi HN — we're the team behind Arch (an open-source edge and service proxy for agents) [1], and today we're releasing Arch-Router (https://huggingface.co/katanemo/Arch-Router-1.5B), a 1.5B LLM router model designed to align with user-defined preferences, not public benchmarks and leaderboards.
As teams integrate multiple LLMs — each with different strengths, styles, or cost/latency profiles — routing the right prompt to the right model becomes a critical part of application design. But it's still an open problem. Most routing systems fall into two camps:
- Embedding-based routers use intent classifiers — label a prompt as “support,” “SQL,” or “math,” then route to a matching model. This works for simple tasks but breaks down in real conversations. Users shift topics mid-conversation, task boundaries blur, and product changes require retraining classifiers.
- Performance-based routers pick models based on benchmarks like MMLU or MT-Bench, or based on latency or cost curves. But benchmarks often can't capture what matters in production: domain-specific quality or subjective evaluation criteria. These routers are often opaque, difficult to debug, and their quality judgments can feel arbitrary, failing to capture the subjective nuance of what a “good” response actually means for a specific user’s intent.
Arch-Router takes a different approach: route to LLMs based on preferences written as policies in plain ol' English.
You write policies like “contract clauses → GPT-4o” or “quick travel tips → Gemini Flash.” The router maps the prompt (and the full conversation context) to those policies using a lightweight 1.5B auto-regressive model. The model handles intent drift, supports multi-turn conversations, and lets you swap models in or out with a one-line change to the routing policy. For more detail on the model's strengths, see our research paper: https://arxiv.org/abs/2506.16655
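If you want to poke at the model directly, outside of archgw, here's a rough sketch of what a call looks like with Hugging Face transformers. The route names and the system prompt are illustrative assumptions on my part; the exact prompt format the model expects is documented on the model card.

    # Minimal sketch: querying Arch-Router-1.5B directly via transformers.
    # The routes and system prompt below are illustrative; see the model card
    # for the exact prompt format the model was trained on.
    import json
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "katanemo/Arch-Router-1.5B"
    device = "cuda" if torch.cuda.is_available() else "cpu"
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto").to(device)

    # Routing policies written as plain-English descriptions (hypothetical examples).
    routes = [
        {"name": "legal_contracts", "description": "reviewing or drafting contract clauses"},
        {"name": "travel_tips", "description": "quick travel recommendations and itineraries"},
    ]

    conversation = [
        {"role": "system", "content": "Select the best route for the conversation from: " + json.dumps(routes)},
        {"role": "user", "content": "Can you tighten the indemnification clause in this agreement?"},
    ]

    # Ask the router which policy best fits the (multi-turn) conversation.
    prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=32)
    selected = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    print(selected)  # expect something like {"route": "legal_contracts"}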
Essentially, Arch-Router splits the routing process into two distinct parts:
Route Selection: This is the what. The system defines a set of human-readable routing policies using a “Domain-Action Taxonomy.” Think of it as a clear API contract written in plain English. A policy isn’t just intent_123; it’s a descriptive label like Domain: ‘finance’, Action: ‘analyze earnings report’. The router’s only job is to match the user’s query to the best-fit policy description.
Model Assignment: This is the how. A separate, simple mapping configuration connects each policy to a specific LLM. The finance/"analyze earnings report" policy might map to a powerful model like GPT-4o, while a simpler general/"greeting" policy maps to a faster, cheaper model.
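A toy sketch of the split in Python (the policy names and the mapping below are made up for illustration; archgw's actual config format lives in the repo):

    # Illustrative only, not archgw's real config schema.
    # Route selection decides *what* the user wants; model assignment decides *how* it's served.

    # 1) Route selection: human-readable policies (Domain-Action Taxonomy).
    policies = {
        "finance/analyze_earnings_report": "analyze a company's earnings report in depth",
        "general/greeting": "casual greetings and small talk",
    }

    # 2) Model assignment: a separate mapping from policy to LLM.
    #    Swapping a model is a one-line change here; the policies stay untouched.
    model_for_policy = {
        "finance/analyze_earnings_report": "gpt-4o",
        "general/greeting": "gemini-flash",
    }

    def assign_model(selected_policy: str) -> str:
        """Return the LLM configured for the policy the router selected."""
        return model_for_policy[selected_policy]

    # If the router matched the earnings-report policy:
    print(assign_model("finance/analyze_earnings_report"))  # -> gpt-4o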
Specs:
- 1.5B params — runs on a single GPU (or CPU for testing)
- No retraining needed — point it at any mix of LLMs
- Outperforms larger closed models on our conversational routing benchmarks (details in the paper)
Is model choice a core design consideration? Or will Kiro not let developers choose the underlying model?
What if I want to set preferences for the underlying LLM for different usage scenarios? For example, for a quick and snappy understanding of a single file I'd want to use a fast model that doesn't cost me an arm and a leg. Recent research on preference-aligned LLM routing here: https://arxiv.org/abs/2506.16655
There are a few critical differences. archgw is designed as a data plane for agents — handling ingress and egress (prompt) traffic to and from agents. Unlike frameworks or libraries, it runs as a single process that includes edge functionality and task-specific LLMs, tightly integrated to reduce latency and complexity.
Second, it’s about where the project is headed. Because archgw is built as a proxy server for agents, it’s designed to support emerging low-level protocols like A2A and MCP in a consistent, unified way—so developers can focus purely on high-level agent logic. This borrows from the same design decision that made Envoy successful for microservices: offload infrastructure concerns to a specialized layer, and keep application code clean. In our next big release, you will be able to run archgw as a sidecar proxy for improved orchestration and observability of agents. Something that other projects just won't be able to do.
Kong was designed for APIs. Envoy was built for microservices. Arch is built for agents.
MCP implementation is trivial - I agree. But A2A will require a mesh-like structure, meaning it's not just about north/south traffic. It will be about east/west traffic as agents coordinate with each other. That communication and coordination among agents will need to be robust, and that's where a sidecar proxy built on top of Envoy will offer certain properties in a first-class way that Kong can't easily support today.
This was the insight behind Envoy's initial design: handle north/south and east/west traffic equally well as a universal data plane.
fwiw, if I were evaluating these proxies against each other, I would be intrigued by the solution built by people from the Envoy team. Envoy is great software and I’m sure there are many lessons you took from building it.
It looks like you’re even building on Envoy as the foundation for the system, which just makes it more compelling.
It's a core dependency for rate limiting, traffic shaping, and failover detection. Its cluster subsystem is super convenient for local LLM calls too. We'll write up a blog post on the lessons because there were many. For example, for intelligent routing decisions we can't create an upstream connection to a cluster based on route paths or host - Envoy forces a more static binding. This doesn't work when you're making decisions about a prompt and have to inject more dynamic flow control.
We’re using proxy-wasm and compiling to wasm32-wasip1, then mounting the .wasm binaries into Envoy as HTTP filters via envoy.filters.http.wasm. The line you're referring to:
…is where the integration happens. There's no need to modify envoy.bootstrap.wasm; instead, Arch loads the WASM modules at runtime using standard Envoy config templating. The filters (prompt_gateway for ingress and llm_gateway for egress) sit in the request path and do things like prompt inspection, model routing, header rewrites, and telemetry collection.
What’s missing right now are guides showing how well ArchGW integrates with existing frameworks and tools. But the core idea is simple: it offloads low-level responsibilities—like routing, safety, and observability—that frameworks like LangChain currently try to handle inside the app. That means less bloat and more clarity in your agent logic.
And importantly, some things just can’t be done well in a framework. For example, enforcing global rate limits across LLMs isn’t realistic when each agent instance holds its own local state. That kind of cross-cutting concern needs to live in infrastructure—not in application code.
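A toy sketch of why that is (purely illustrative, not archgw code): three agent replicas each enforcing a 100-requests-per-minute limit locally still send 300 requests per minute in aggregate, while a single limiter sitting where the proxy sits caps it once.

    # Purely illustrative: why global rate limits can't live in per-agent local state.
    import time

    class WindowLimiter:
        """Simple fixed-window limiter; each instance only sees its own calls."""
        def __init__(self, limit_per_minute: int):
            self.limit = limit_per_minute
            self.calls = 0
            self.window_start = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            if now - self.window_start >= 60:
                self.calls, self.window_start = 0, now
            if self.calls < self.limit:
                self.calls += 1
                return True
            return False

    # Three agent replicas, each "respecting" a 100 req/min limit locally...
    replicas = [WindowLimiter(100) for _ in range(3)]
    allowed = sum(limiter.allow() for limiter in replicas for _ in range(100))
    print(allowed)  # 300 -- the provider still sees 3x the intended global rate

    # One limiter in front of all traffic (the proxy's position) enforces the cap once.
    shared = WindowLimiter(100)
    print(sum(shared.allow() for _ in range(300)))  # 100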
Links:
[1] Arch Proxy: https://github.com/katanemo/archgw