Hacker News | stanleydrew's comments

> Still, my point is that my parking space isn't actually mine, so I can't modify anything in the garage.

Presumably over time shared parking areas will get upgraded with charging infrastructure to keep attracting tenants.


The housing and rental markets currently favour owners/landlords significantly, and that shows no sign of slowing down. I have zero hope that "charging infrastructure" will be installed to "attract tenants".

Here in Australia landlords seem to struggle with basic things like insulation or a split system aircon.


> idk why we need MCP servers when LLMs can just connect to the existing API endpoint

Because the LLM can't "just connect" to an existing API endpoint. It can produce input parameters for an API call, but you still need to implement the calling code. Implementing calling code for every API you want to offer the LLM is at minimum very annoying and often error-prone.

MCP provides a consistent calling implementation that only needs to be written once.
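To make the "write the calling code once" point concrete, here's a minimal sketch in plain Python (the tool name and schema are made up, and this stands in for a real MCP SDK rather than reproducing one): each tool registers itself with a schema, and a single generic dispatcher executes whatever call the model emits, so no per-API glue code is needed.

```python
import json

# Registry mapping tool names to (callable, parameter schema).
TOOLS = {}

def tool(name, schema):
    """Decorator that registers a function as a callable tool."""
    def register(fn):
        TOOLS[name] = (fn, schema)
        return fn
    return register

@tool("get_weather", {"city": "string"})
def get_weather(city):
    # Hypothetical stand-in for a real API call.
    return {"city": city, "temp_c": 21}

def dispatch(model_output):
    """One generic calling path for every tool the LLM might request."""
    call = json.loads(model_output)
    fn, _schema = TOOLS[call["name"]]
    return fn(**call["arguments"])

result = dispatch('{"name": "get_weather", "arguments": {"city": "Sydney"}}')
```

The dispatcher never changes as tools are added; only the decorated functions do, which is the consistency the comment is describing.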


Yup, that's what UTCP does as well: standardizing the tool calling

(without needing an MCP server that adds extra security vulnerabilities)


There's still an agent between the user and the LLM. The model isn't making the tool calls and has no mechanism of its own to accomplish this.


heh, relevant to the "do what now?" thread, I didn't recognize that initialism https://github.com/universal-tool-calling-protocol

I'll spare the audience the implied XKCD link


Technically it's not really much different from just giving the LLM an OpenAPI spec.

The actual thing that's different is that an OpenAPI spec is meant to be an exhaustive list of every endpoint and every parameter you could ever use. Whereas an MCP server, as a proxy to an API, tends to offer a curated set of tools and might even compose multiple API calls into a single tool.


It's a farce, though. We're told these LLMs can already perform our jobs, so why should they need something curated? A human developer often gets given a dump of information (or nothing at all), and has to figure out what works and what is important.


You should try to untangle what you read online about LLMs from the actual technical discussion that's taking place here.

Everyone in this thread is aware that LLMs aren't performing our jobs.


> I have a silly theory that I only half joke about that docker/containers wouldn't've ever taken off as fast as it did if it didn't solve the horrible python dependency hell so well.

I don't think this is a silly theory at all. The only possibly silly part is that containers specifically helped solve this problem just for python. Lots of other software systems built with other languages have "dependency hell."


Back in the early days of Red Hat, RPMs didn't really have good dependency management. Yes there were RPMs, yes you could download them, but resolving the full dependency tree was a PITA. Most people installed the full Linux distro rather than a lightweight version because of this.

Debian's apt-get was very "apt" when it came out; it solved the entire issue for Debian. At one point there was even an apt-rpm for Red Hat. Yum tried to solve it for Red Hat, but didn't really work that well, particularly if you needed to pin packages to certain versions.


> Is this just junior devs who never learned it

Seems more like it's fallen out of favor with senior devs who have moved to Go/Rust.


I don’t know anything about go. But Rust is more of a competitor to C and C++, right? It is sort of bizarre if these languages are butting heads with a scripting language like Python.

Python compares fairly well to Bash or JavaScript or whatever, right? (Maybe JavaScript is better, I don’t know anything about it).


Rust has language features (often inspired by functional programming languages) that allow you to write pretty high level code.


I consider this self-inflicted. Python's async functionality is … unfortunate.

JavaScript has much more intuitive async syntax, which was actually borrowed from a Python framework.

For whatever reason, the Python folks decided not to build on what they had, and reinvented things from scratch.
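For reference, here's what the resulting asyncio syntax looks like. It uses the same async/await keywords JavaScript settled on, though the event-loop plumbing differs (this is a minimal illustration, not a claim about which design came first):

```python
import asyncio

async def fetch(n):
    # Simulate an I/O-bound operation yielding to the event loop.
    await asyncio.sleep(0)
    return n * 2

async def main():
    # Run several coroutines concurrently, akin to Promise.all in JS.
    return await asyncio.gather(fetch(1), fetch(2), fetch(3))

results = asyncio.run(main())
```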


you'd be crazy (senior or not) not to use Go for Go stuff and Python for Python stuff


I use both, with a preference for Go but I feel like I should be doing more Python just to keep it fresh.

It seems like two of the main entries under “Python stuff” are “working with people who only know Python” and “AI/ML because of available packages.”

What are some others?


I mean... to oversimplify a bit, Python is for scripting and Go is for servers.


What risk do you foresee arising out of perverse incentives in this case?


Changing license terms, aggressive changes to the API to disallow competition, horrendous user experience that requires a support contract. I really don't think there's a limit to what I've seen other companies do. I generally trust libraries that competitors are maintaining jointly since there is an incentive toward not undercutting anyone.


Also means you're not having to do a bunch of isolation work to make the server-side execution environment safe.


This is the real value here. Keeping a secure environment to run untrusted code alongside user data is a real liability for them. It's not their core competency either, so they can just lean on browser sandboxing and not worry about it.


How is doing it server side a different challenge than something like Google Colab or any of those Jupyter-notebook-type services?


Shared resources and multitenancy are how you get efficiency and density, but those are directly at odds with strict security boundaries. IME you need hardware-supported virtualization for a consistent security boundary around arbitrary compute. Linux namespaces ("containers") and language-runtime isolation are not it for critical workloads; see some of the early AWS Nitro/Firecracker work for more details. I _assume_ the cases you mentioned may be more constrained, or actually backed by VM partitions per customer.


Google Colab sessions are all individual VMs. It seems Anthropic doesn't want to be in the "host a VM for every single user" business.


One of the design principles of sqlc is that SQL queries should be static in application code so that you know exactly what SQL is running on your database. It turns out you can get pretty far operating under this constraint, although there are some annoyances.
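sqlc itself generates Go, but the underlying principle (keep the SQL text static and pass only parameters) can be sketched in Python with the standard sqlite3 module; the table and query here are made-up examples:

```python
import sqlite3

# The query is a module-level constant: what runs on the database is
# known at code-review time, and only the parameter values vary.
GET_USER_BY_ID = "SELECT id, name FROM users WHERE id = ?"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'ada')")

row = conn.execute(GET_USER_BY_ID, (1,)).fetchone()
```

String-building the query at run time would break the "you know exactly what SQL is running" guarantee; parameter binding preserves it.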


Riza, Inc. (https://riza.io) | SWEs and DevRel Engineers | Full-time or part-time | San Francisco

We use WASM to provide isolated runtimes for executing untrusted code, mostly generated by LLMs. Our customers do things like extract data from log lines at run time by asking claude-3-5-sonnet to generate a parsing function on-the-fly and then sending it to us for execution.

Things we need help with:

* Our janky account management dashboard (Postgres / Go / React / TypeScript)

* Our hosted and self-hosted runtime service (Rust, WASM)

* Integrations and demos with adjacent frameworks and tools (Python / JavaScript / TypeScript)

* New products

We have seed money, but the whole company is currently just me and Kyle working out of a converted warehouse on Alabama St. We’re second-time founders, so we know the risk we’re asking you to take and we’re prepared to compensate accordingly. Send an email to me at andrew at riza dot io or pop in our Discord (https://discord.gg/4P6PUeJFW5) and say hi.


Hi,

Are there any opportunities for developers with no experience but great skills?


Why do we have to "get there?" Humans use calculators all the time, so why not have every LLM hooked up to a calculator or code interpreter as a tool to use in these exact situations?
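A sketch of such a calculator tool: rather than handing the model eval(), the tool walks the expression's syntax tree and permits only whitelisted arithmetic operators (the function name and interface are hypothetical).

```python
import ast
import operator

# Whitelist of arithmetic operators the calculator tool accepts.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def calculate(expression):
    """Evaluate an arithmetic expression string without eval()."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expression, mode="eval").body)

answer = calculate("(3 + 4) * 2 ** 3")
```

The LLM emits the expression string; the tool does the arithmetic deterministically, which is exactly the calculator division of labor the comment suggests.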

