
There are capsule hotels in New York as well!

Really! Did not know that.

There's basically no evidence that big companies own enough property to matter in the market. Even if every individual unit were owned by a different person, you'd see exactly the same outcomes we do today.

This belief is probably causing the housing crisis.

Unfortunately street legality requires testing, not just compliance.

First you have to pick one thing to work on. Just one. Then you have to find all the people who agree with you that ONE thing is the most important one to work on, get help crafting a message that will stick, develop a theory of change, and work backwards from that theory to actionable efforts the people you recruit can take.

It's hard work. I've done it. I am happy to help you do it. Let me know.


I hesitate because there's not a lot of data out there. Are those being measured the same way? Are they even real measurements and not just repeated talking points...?

EVs generate far, far less brake pad dust; most of their braking is regenerative, done via the motors.

Tires get more efficient every year, dust has reduced as the companies compete to make them last longer, and we're finally seeing the tire industry respond to pressure to reduce toxic runoff. Michelin's been removing phenols, for instance: https://resicare.michelin.com/news/michelin-resicare-resin-1...


So how do we do that? Is some organization working on it with a plausible theory of change?

The phone rulings came from court cases. So sadly it has to reach a case, and in the meantime other folks are hurt with no recourse.

But once in court, you would probably get that thrown out. The key problem is that we haven't instituted consequences for that sort of police behavior.

They did not ticket me, so there is no day in court. Chatting you up, seeing everything visible through the windows, leaning in to smell your car, and running your license for warrants are all "free" interactions with no oversight.

The fun doesn't stop there; check out 'civil asset forfeiture' when you have a chance.

Also, if you read TFA, it seemed like the owner of a truck and trailer had to spend $20k getting his stuff out of impound when his employee was wrongly arrested. Seems like an innocent judgment isn't everything we think it is.


> Seems like an innocent judgment isn't everything we think it is.

The State of Florida will charge you $75/day for your incarceration, even if charges are dropped, dismissed or you are found not guilty.

Not paying these fees is a Class C Felony in Florida, punishable by up to 10y in prison and/or a $10,000 fine.


If you do go to court, you pay a lawyer for all those hours instead of pleading down. In many cases you have already lost just based on the accusation.

That’s if you get to go to court. ICE makes mistakes and I doubt any of their detainees get due process.

Maybe this is a dumb question, but isn't this solved by publishing good API docs, and then pointing the LLM to those docs as a training resource?


>but isn't this solved by publishing good API docs, and then pointing the LLM to those docs as a training resource?

Yes.

It's not a dumb question. The situation is so dumb you feel like an idiot for asking the obvious question. But it's the right question to ask.

Also, you don't need to "train" the LLM on those resources. All major models have function / tool calling built in. Either create your own readme.txt with extra context or, if it's possible, update the APIs with more "descriptive" metadata (e.g. something like Swagger/OpenAPI) to help the LLM understand how to use the API.
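
To make that concrete, here's a minimal sketch (OpenAI Python SDK; the get_invoice endpoint and its parameters are invented for illustration) of passing API metadata to a model as a tool definition. The JSON schema is the "descriptive metadata" doing the work:

    # Sketch only: "get_invoice" is a hypothetical endpoint, not a real API.
    from openai import OpenAI

    client = OpenAI()

    tools = [{
        "type": "function",
        "function": {
            "name": "get_invoice",  # maps to e.g. GET /invoices/{id}
            "description": "Fetch a single invoice by its ID.",
            "parameters": {
                "type": "object",
                "properties": {
                    "invoice_id": {
                        "type": "string",
                        "description": "The invoice's unique ID, e.g. INV-42.",
                    },
                },
                "required": ["invoice_id"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Pull up invoice INV-42 for me."}],
        tools=tools,
    )

    # The model never calls your API itself; it returns a structured request
    # in resp.choices[0].message.tool_calls that your own code then executes.

The better the descriptions in that schema (or in the OpenAPI/Swagger doc you generate it from), the less the model has to guess about how to use the API.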


You keep saying that major models have "tool calling built in". And that by giving them context about available APIs, the LLM can "use the API".

But you don't explain, in any of your comments, precisely how an LLM in practice is able to itself invoke an API function. Could you explain how?

A model is typically distributed as a set of parameters, interpreted by an inference framework (such as llama.cpp), and not as a standalone application that understands how to invoke external functions.

So I am very keen to understand how these "major models" would invoke a function in the absence of a chassis container application (like Claude Code, which tells the model, via a prompt prefix, what tokens to emit to trigger a function, and which, on detecting those tokens, invokes the function on the model's behalf - which is not at all the same thing as the model invoking the function itself).

Just a high level explanation of how you are saying it works would be most illuminating.


The LLM output differentiates between text output intended for the user to see, vs tool usage.

You might be thinking "but I've never seen any sort of metadata in textual output from LLMs, so how does the client/agent know?"

To which I will ask: when you loaded this page in your browser, did you see any HTML tags, CSS, etc.? No. But that's only because your browser read the HTML and rendered the page, hiding the markup from you.

Similarly, what the LLM generates looks quite different compared to what you'll see in typical, interactive usage.

See for example: https://platform.openai.com/docs/guides/function-calling

The LLM might generate something like this for text:

    {
      "content": [
        {
          "type": "text",
          "text": "Hello there!"
        }
      ],
      "role": "assistant",
      "stop_reason": "end_turn"
    }
Or this for a tool call:

    {
      "content": [
        {
          "type": "tool_use",
          "id": "toolu_abc123",
          "name": "get_current_weather",
          "input": {
            "location": "Boston, MA"
          }
        }
      ],
      "role": "assistant",
      "stop_reason": "tool_use"
    }
The schema is enforced much like end-user-visible structured outputs work -- if you're not familiar, many services will let you constrain the output so it validates against a given schema. See for example:

https://simonwillison.net/2025/Feb/28/llm-schemas/

https://platform.openai.com/docs/guides/structured-outputs
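
To make the division of labor explicit, here is a hedged sketch of the loop the client/agent runs around those responses (Anthropic Python SDK; the weather function body and model name are just illustrative). The model only ever emits the structured tool_use block shown above; it's the client code that actually executes the function and feeds the result back:

    # Sketch only: get_current_weather is a stand-in for a real API call.
    import json
    import anthropic

    client = anthropic.Anthropic()

    def get_current_weather(location: str) -> dict:
        return {"location": location, "temp_f": 58, "conditions": "fog"}  # fake data

    tools = [{
        "name": "get_current_weather",
        "description": "Get the current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    }]

    messages = [{"role": "user", "content": "What's the weather in Boston, MA?"}]

    while True:
        resp = client.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=1024,
            tools=tools,
            messages=messages,
        )
        if resp.stop_reason != "tool_use":
            break  # plain text answer; nothing left for us to execute

        # The model asked for a tool; *our* code runs it and returns the result.
        messages.append({"role": "assistant", "content": resp.content})
        results = []
        for block in resp.content:
            if block.type == "tool_use":
                out = get_current_weather(**block.input)
                results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": json.dumps(out),
                })
        messages.append({"role": "user", "content": results})

    print(resp.content[0].text)

That while loop is the "chassis" the question upthread is asking about: strip it away and the model can only describe the call it wants, never make it.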


It is. Anthropic builds stuff like MCP and skills to try and lock people into their ecosystem. I'm sure they were surprised when MCP totally took off (I know I was).


I don't think there is any attempt at lock-in here; it's simply that skills are superior to MCP.

See this previous discussion on "Show HN: Playwright Skill for Claude Code – Less context than playwright-MCP (github.com/lackeyjb)": https://news.ycombinator.com/item?id=45642911

MCP deficiencies are well known:

https://www.anthropic.com/engineering/code-execution-with-mc...

https://blog.cloudflare.com/code-mode/

