Hacker News: johnpaulkiser's comments

Even if he did make the money he claims, he is selling the idea that "you too can make money by exposing all your users' private info."


I'm getting sick and tired of people on Twitter/X making wild claims that they can build a profitable app with 100% vibe coding, so I started poking around, and I can almost always find a business-destroying vulnerability.

In this case it was a user claiming their app is doing $60k MRR while, get this, building a vibe coding management platform & boilerplate. Quite the house of cards.


Please continue to report business-destroying vulns. The only thing that slows hype down is consequences.


Soon with little help at all for static sites like this. Had ChatGPT "recreate" the background image from a screenshot of the site using its image generator, then had "agent mode" create a Linktree-style "version" of the site and publish it, all without assistance.

https://f7c5b8fb.cozy.space/


That has no content though. It's just a badly written blurb and then 4 links. If you did continue down this experiment and generate a blog full of content with ChatGPT, it would have the same problem. The content would be boring and painful to read, unlike the OP's blog.


I'm building a sort of Neocities-like thing for LLMs and humans alike. It uses git-like content addressability, so forking and remixing a website is trivial, although I haven't built those frontend features yet. You can currently only create a detached commit. You can use it without an account (we'll see if I regret this) by just uploading the files & clicking publish.

https://cozy.space

Even ChatGPT can publish a webpage! Select agent mode and paste in a prompt like this:

"Create a linktree style single static index.html webpage for "Elon Musk", then use the browser & go to https://cozy.space and upload the site, click publish by itself, proceed to view the unclaimed website and return the full URL"

Edit: here is what ChatGPT one-shotted with the above prompt https://893af5fa.cozy.space/
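To illustrate what "git-like content addressability" buys you (this is a sketch of the general git blob scheme, not cozy.space's actual internals): each file's ID is a hash of its contents, so two sites that share a file share the same object, and a fork only needs to store what changed.

```python
import hashlib

def blob_id(data: bytes) -> str:
    # Git-style content address: SHA-1 over a "blob <size>\0" header plus the payload.
    header = f"blob {len(data)}\0".encode()
    return hashlib.sha1(header + data).hexdigest()

index_html = b"<h1>hello</h1>"

# Identical content always yields the identical ID, so a fork/remix can
# reference every unchanged file instead of copying it.
assert blob_id(index_html) == blob_id(b"<h1>hello</h1>")
print(blob_id(index_html))
```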


You get all the money from the block rewards for those blocks if you reorg other miners' blocks out.


I doubt that's what they want. They want a static fixed price ($5k a month, for example) and to never have to think about it.


Take the API and assume 24/7 usage (or whatever working hours are). That’s your fixed cost.

It’s more likely that this sum is higher than they want. So really it’s not about predictability.


Even if you used the API 24x7 for a single session (no parallel requests) I doubt you'd be able to hit $5k/mo in usage for Claude 4 Sonnet.
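A back-of-envelope check of that claim (every rate below is an assumption for illustration, not quoted Anthropic pricing; check the current price sheet yourself):

```python
# Could one serial, non-parallel session realistically hit $5k/month?
SECONDS_PER_MONTH = 30 * 24 * 3600   # 2,592,000
OUTPUT_TOK_PER_SEC = 50              # assumed sustained generation speed
OUTPUT_PRICE_PER_MTOK = 15.00        # assumed $/1M output tokens
INPUT_PRICE_PER_MTOK = 3.00          # assumed $/1M input tokens
INPUT_OUTPUT_RATIO = 4               # assume 4 input tokens per output token

output_tokens = SECONDS_PER_MONTH * OUTPUT_TOK_PER_SEC   # 129.6M
input_tokens = output_tokens * INPUT_OUTPUT_RATIO        # 518.4M

cost = (output_tokens / 1e6) * OUTPUT_PRICE_PER_MTOK \
     + (input_tokens / 1e6) * INPUT_PRICE_PER_MTOK
print(f"${cost:,.0f}/month")  # ~$3,499 under these assumptions
```

Even generating flat-out around the clock, under these assumptions a single session stays under $5k/mo; long cached contexts or different rates would move the number, but the order of magnitude supports the point.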


The way these work is they're net profitable given all users, so you have to recategorize users in one of two ways:

- a user subsidizing other users

- a user subsidized by other users

I don't know what OP prefers, but given that people are saying "woof, API pricing too expensive", it sounds like the latter.

The problem, of course, is the provider has to find a market where the one sustains the other. Are there enough users who would pay > $200/mo without getting their money's worth in order to subsidize users paying the same rate, but using more than the average? I think the non-existence of a higher-tier plan says there probably isn't, but I don't want to give too much credence to markets, economics, etc.


Are you saying "see the world?" or "seaworld"?


Let me help you. An AI boss would be 100x worse.


Man, I must really suck at this stuff. This is not at all my experience. Asking LLMs to refactor almost always results in hasty abstractions that I want to keep out of my codebases at all cost. Am I not letting go enough?


FWIW this is my experience too. I use LLMs pretty regularly for coding, but to get decent code you really have to supervise the hell out of them, and often it's not worth the effort to push them into doing the right thing.

Maybe I'm just bad at getting it to do things, but I think your question about "letting go" is the real story. I think there are a lot of people not paying close enough attention to what's coming out of the LLM, and the tech debt building up is going to come back to bite them when it reaches the point where the LLM can no longer make progress and they have to untangle the mess.


The way I "ask" is that I really ... ask!

I ask the LLM "Can you find anything in this file(s) that can be made shorter or more logical?"

And then, as I said, I like less than 10% of the ideas the LLM comes up with. But it is so fast to read through 10 ideas (a minute or so) that it is well worth it.


I really, really don't understand this either. Sometimes I feel like I must be using different LLMs than some HNers because my experience of them is the complete opposite of what they describe.


Which LLMs have you tried?


Not who you're replying to, but since I've expressed similar sentiment:

Sonnet 3.7 with thinking is my go-to.

Deepseek R1

Gemini 2.5 pro (I've heard it said Gemini is outperforming Sonnet, but I find Sonnet more consistent)

O1-mini

Depending on what I'm doing it's generally either via Cody or Aider.


> Private

> Since automation happens locally, your browser activity stays on your device and isn't sent to remote servers.

I think this is bullshit. Isn't the DOM or whatever sent to the model API?


Of course, you're sending data to the AI model, but the "private" aspect is contrasting automating using a local browser vs. automating using a remote browser.

When you automate using a remote browser, another service (not the AI model) gets all of the browsing activity and any information you send (e.g. usernames and passwords) that's required for the automation.

With Browser MCP, since you're automating locally, your sensitive data and browser activity (apart from the results of MCP tool calls that are sent to the AI model) stay on your device.


I think we need to be very careful & intentional about the language we use with these kinds of tools, especially now that the MCP floodgates have been opened. You aren't just exposing the user's browsing data to whichever model they are using; you are also exposing it to any tools they may be allowing as well.

A lot of non-technical people are using these tools to "vibe" their way to productivity. I would explicitly tell them that potentially "all" of their browsing data is going to be exposed to their LLM client, and that they need to use this at their own risk.

