Tokens are a fine-grained billable attribute that lets you add micro-transactions to your service.
Not in all cases, but in many we exist in a complicated world of enshittification + inflation.
Inflation means you need to somehow make more money.
You can either: raise prices (unpopular), make your product cheaper (unpopular), or add new features and raise the price on the basis of “new value!”.
You see major organisations doing this: same product, but now with AI! …and it’s more expensive. Or it’s a mandatory bundle. Or it’s “premium”.
Long story short, a lot of companies see the way that cloud providers do billing (usage-based billing, no caps, you get the bill after using it) as the ideal end state.
Token-based billing moves towards that world; which isn’t just “profit!” …it’s companies trying to deal with the reality of a complicated marketplace that will punish them for raising prices.
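The mechanics of that “ideal end state” are trivially simple, which is part of the appeal. A minimal sketch of uncapped, token-metered billing — the function name and per-1k pricing model here are my own illustration, not any real provider’s API:

```python
# Hedged sketch of usage-based token billing: no cap, charge computed
# after the fact from whatever was consumed.

def bill_for_usage(tokens_used: int, price_per_1k_tokens: float) -> float:
    """Uncapped usage-based charge: you get the bill after using it."""
    return tokens_used / 1000 * price_per_1k_tokens

# 2.5M tokens at $0.50 per 1k tokens -> $1250 owed, no ceiling applied.
print(bill_for_usage(2_500_000, 0.50))  # 1250.0
```

Note there is no budget parameter anywhere — that absence is the business model.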
…and it is bad. I’m just saying that it’s kind of naive to think so many companies are doing this just as a “me tooooo!”. Come on; even if you’re hunting a funding round, the people running these companies are (mostly) not complete idiots.
No one is adding AI features because it’s fun, or they’re bored.
…
…ok, there are some idiots. Most people have a bigger vision for these features than just annoying their users.
> We argue that systematic problem solving is vital and call for rigorous assurance of such capability in AI models. Specifically, we provide an argument that structureless wandering will cause exponential performance deterioration as the problem complexity grows, while it might be an acceptable way of reasoning for easy problems with small solution spaces.
I.e. thinking harder still samples randomly from the solution space.
You can allocate more compute to the “thinking step”, but they are arguing that for problems with a very big solution space, adding more compute is never going to find a solution, because you’re just sampling randomly.
…and that it only works for simple problems because if you just randomly pick some crap from a tiny distribution you’re pretty likely to find a solution pretty quickly.
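The core of the argument can be made concrete with a toy model. Assuming uniform random sampling with replacement (my own simplification, not the paper’s formalism), the expected number of draws to hit a solution is geometric, and grows exponentially with problem depth:

```python
# Hedged sketch: expected uniform random draws (with replacement) to
# hit a solution. Space size N = branching**depth grows exponentially
# with problem complexity; k is the number of acceptable solutions.
# Illustrative names; the uniform-sampling model is my assumption.

def expected_draws(branching: int, depth: int, k: int = 1) -> float:
    """Expected samples until a hit: geometric with p = k / N."""
    n = branching ** depth
    return n / k

# Easy problem: tiny space, random guessing finds a solution fast.
print(expected_draws(branching=4, depth=3))    # 64.0

# Same branching, deeper problem: exponential blowup.
print(expected_draws(branching=4, depth=10))   # 1048576.0
```

Which is their point: more “thinking” compute buys you more draws, but the number of draws you need outruns any budget once the space is large.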
I dunno. The key here is that this is entirely model inference side. I feel like agents can help contain the solution space for complex problems with procedural tool calling.
So… dunno. I feel kinda “eh, whatever” about the result.
If someone opens a PR to one of my repos with no context, I ban them.
There’s too much AI spam out there right now.
Publishing ‘@provenance-labs/lodash’ as a test, I suppose. Ok. Leaving it up? Looks like spam.
Badgering the author in a private email? Mmm. Definitely not.
This isn’t a bug, it’s a feature. There’s a contributing guide which clearly says: unless a feature gets community interest, it’s not happening. If you want a feature, talk about it and rouse community interest.
Overall: maybe this wasn’t the right way to engage.
Sometimes you just have to walk away from these situations, because the harder you chase, the more it looks like you’re in the wrong.
…it certainly looks, right now, like the lodash author wasn’t out of line with this, to me.
> Overall: maybe this wasn’t the right way to engage
Lex Livingroom. If you are among friends you can surely criticize a sweater, but if you come barging in uninvited and criticize the same sweater, you’re in for a bad time.
Anyone seriously using these tools knows that context engineering and detailed specific prompting is the way to be effective with agent coding.
Just take it to the extreme and you’ll see: what if you autocomplete from a single word? A single character?
The system you’re using is increasingly generating some random output instead of what you were either a) trying to do, or b) told to do.
It’s funny because it’s like,
“How can we make vibe coding even worse?”
“…I know, let’s just generate random code from random prompts.”
There have been multiple recent posts about how to direct agents using a combination of a planning step, context summary/packing, etc. to craft detailed prompts that agents can effectively act on in large code bases.
…or yeah, just hit tab and go make a coffee. Yolo.
This could have been a killer feature about using a research step to enhance a user prompt and turn it into a super prompt; but it isn’t.
What’s wrong with autocompleting the prompt? There exists entropy even in the English language, and especially in the prompts we feed to the LLMs. If I write something like “fix the ab..” and it autocompletes to AbstractBeanFactory based on the context, isn’t it useful?
Admittedly, more detail would be better, but this high-level stuff is mostly the level that engineering leaders are discussing this topic currently (and it is by far the most discussed topic).
They actually revealed an interesting tidbit about where they are with AI adoption and how they are positioning it now to new hires, e.g. "we made AI fluency a baseline expectation for engineers by adding it to job descriptions and hiring expectations".
It seems inevitable now that engineering teams will demand AI fluency when hiring; curious, though, what they are doing with their existing staff who refuse to adopt AI into their workflow. Curious also if they mandated it or relied solely on incentives to adopt.
This was just our first post FWIW, and we definitely want to follow up with more concrete demos/details/etc. here. I am working on another post specifically about how we leverage our internal RPC system to make adding AI tools super easy, so expect more from us.
To be fair, if you read the incident report, it is a better-than-average one on details, and it was a 20-minute outage without data loss. I've seen many major companies simply not acknowledge that level of outage on their public status page, especially lately.
Anyway… watch the videos the OP has of the coding live streams. That’s the most interesting part of this post: actual real examples of people really using these tools in a way that is transferable and specifically detailed enough to copy and do yourself.
I unironically look forward to the world where this is solved by unsupervised AI agents incrementally upgrading these apps to keep them evergreen...
...and the Lovecraftian gradual drift as incremental recursive hallucinations turn them into still... mostly working... strange little app-like-bundles of Something Weird.
I don't know why I have to take a selfie of myself to start my washing machine. I also don't know why it requires me to stare at it for 30 seconds afterward, or the machine shuts off. The face is my own, for the first 15 seconds or so, but then it's not. I've checked, it's a pixel perfect copy, it's not being slowly adjusted as I watch it, but for the rest of the day, the face I see in the mirror isn't my own, either.