The housing and rental markets currently favour owners and landlords significantly, and that shows no sign of slowing down. I have zero hope that "charging infrastructure" will be installed to "attract tenants".
Here in Australia landlords seem to struggle with basic things like insulation or a split system aircon.
> idk why we need MCP servers when LLMs can just connect to the existing API endpoint
Because the LLM can't "just connect" to an existing API endpoint. It can produce input parameters for an API call, but you still need to implement the calling code. Implementing calling code for every API you want to offer the LLM is at minimum very annoying and often error-prone.
MCP provides a consistent calling implementation that only needs to be written once.
Technically it's not really much different from just giving the LLM an OpenAPI spec.
The actual thing that's different is that an OpenAPI spec is meant to be an exhaustive list of every endpoint and every parameter you could ever use. Whereas an MCP server, as a proxy to an API, tends to offer a curated set of tools and might even compose multiple API calls into a single tool.
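To make the "curated tools" point concrete, here's a minimal sketch in Python. The API (`get_user`, `list_orders`) and the tool name are invented for illustration, and the actual MCP protocol plumbing is assumed rather than shown; the point is that one curated tool can compose several raw endpoints and return only what the model needs.

```python
# Hypothetical backend API, standing in for real HTTP calls.
def get_user(user_id):
    # e.g. GET /users/{id}
    return {"id": user_id, "name": "Ada", "plan": "pro"}

def list_orders(user_id):
    # e.g. GET /users/{id}/orders
    return [{"sku": "A1", "total": 30}, {"sku": "B2", "total": 12}]

def user_summary(user_id: str) -> dict:
    """Curated tool exposed to the LLM: composes two API calls
    and returns only the fields the model actually needs."""
    user = get_user(user_id)
    orders = list_orders(user_id)
    return {
        "name": user["name"],
        "plan": user["plan"],
        "order_count": len(orders),
        "total_spent": sum(o["total"] for o in orders),
    }

print(user_summary("u123"))
```

An OpenAPI spec would describe both endpoints exhaustively; the MCP-style tool instead presents one purpose-built operation.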
It's a farce, though. We're told these LLMs can already perform our jobs, so why should they need something curated? A human developer often gets given a dump of information (or nothing at all), and has to figure out what works and what is important.
> I have a silly theory that I only half joke about that docker/containers wouldn't've ever taken off as fast as it did if it didn't solve the horrible python dependency hell so well.
I don't think this is a silly theory at all. The only possibly silly part is the idea that containers helped solve this problem just for Python. Lots of other software systems built with other languages have "dependency hell."
Back in the early days of Red Hat, RPMs didn't really have good dependency management. Yes there were RPMs, yes you could download them, but getting the full dep tree was a PITA. Most people installed the full Linux distro rather than a lightweight version because of this.
Debian's apt-get was very "apt" when it came out. It solved the entire issue for Debian. There was a point at which there was an apt-rpm for Red Hat. Yum tried to solve it for Red Hat, but didn't really work that well -- particularly if you needed to pin packages to certain versions.
I don’t know anything about go. But Rust is more of a competitor to C and C++, right? It is sort of bizarre if these languages are butting heads with a scripting language like Python.
Python compares fairly well to Bash or JavaScript or whatever, right? (Maybe JavaScript is better, I don’t know anything about it).
Changing license terms, aggressive changes to the API to disallow competition, horrendous user experience that requires a support contract. I really don't think there's a limit to what I've seen other companies do. I generally trust libraries that competitors are maintaining jointly since there is an incentive toward not undercutting anyone.
This is the real value here. Keeping a secure environment to run untrusted code alongside user data is a real liability for them. It's not their core competency either, so they can just lean on browser sandboxing and not worry about it.
Shared resources and multitenancy are how you get efficiency and density. Those are directly at odds with strict security boundaries. IME you need hardware-supported virtualization for a consistent security boundary around arbitrary compute. Linux namespaces ("containers") and language-runtime isolation are not it for critical workloads; see some of the early AWS Nitro/Firecracker work for more details. I _assume_ the cases you mentioned may be more constrained, or actually backed by VM partitions per customer.
One of the design principles of sqlc is that SQL queries should be static in application code so that you know exactly what SQL is running on your database. It turns out you can get pretty far operating under this constraint, although there are some annoyances.
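The constraint looks something like this. sqlc itself generates Go code from annotated SQL files, so this is just a sketch of the underlying principle in Python with sqlite3: every query is a static string constant, and only parameters vary at run time.

```python
import sqlite3

# Every query the application can run is written out in full, up front.
# You can grep the codebase for exactly what SQL hits the database.
GET_AUTHOR = "SELECT id, name FROM authors WHERE id = ?"
LIST_AUTHORS = "SELECT id, name FROM authors ORDER BY name"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO authors (id, name) VALUES (?, ?)",
                 [(1, "Borges"), (2, "Atwood")])

# Parameters change per call, but the SQL text never does.
row = conn.execute(GET_AUTHOR, (1,)).fetchone()
print(row)
```

One of the "annoyances" mentioned is that dynamic cases (optional filters, variable ORDER BY) force you to either enumerate query variants or push the conditionality into the SQL itself.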
Riza, Inc. (https://riza.io) | SWEs and DevRel Engineers | Full-time or part-time | San Francisco
We use WASM to provide isolated runtimes for executing untrusted code, mostly generated by LLMs. Our customers do things like extract data from log lines at run time by asking claude-3-5-sonnet to generate a parsing function on-the-fly and then sending it to us for execution.
* Our hosted and self-hosted runtime service (Rust, WASM)
* Integrations and demos with adjacent frameworks and tools (Python / JavaScript / TypeScript)
* New products
We have seed money, but the whole company is currently just me and Kyle working out of a converted warehouse on Alabama St. We’re second-time founders, so we know the risk we’re asking you to take and we’re prepared to compensate accordingly. Send an email to me at andrew at riza dot io or pop in our Discord (https://discord.gg/4P6PUeJFW5) and say hi.
Why do we have to "get there?" Humans use calculators all the time, so why not have every LLM hooked up to a calculator or code interpreter as a tool to use in these exact situations?
Presumably over time shared parking areas will get upgraded with charging infrastructure to keep attracting tenants.