Hacker News | JasonSage's comments

ChatGPT recommended this to me recently when I was trying to get some assistance with a usable Tailwind palette. I ended up not needing it right away but it's first in line next time I need to make one.

Any details about what problems you were having getting a usable Tailwind palette? There seem to be lots of different use cases, so I'd love to hear more.

That's awesome. I haven't kept up with what helps a site get into AI recommendations. Guessing it's related to search result rankings? Not sure if the site would be in the training data. Curious whether you asked about accessibility, as that's my focus.


You're right.

People want to pretend fundamentals of economics don't exist AND the company has moral obligations to fulfill to consumers. It's laughable.

It's not just Nvidia; I've seen the same sentiment directed at other expensive consumer brands.


I suspect that the non-Rust improvements are vastly more important than you're giving credit for. I think a Go version would get 5x or 8x of that 10x, maybe closer. It's not that the Rust parts are insignificant, but the algorithmic changes eliminate huge bottlenecks.


Though Rust probably helps with getting the design right, instead of fighting it.

Everything from having sum types to having a reasonable packaging system helps there.


> Composed resolvers are the headache for most and not seen as a net benefit, you can have proxied (federated) subsets of routes in REST, that ain't hard at all

Right, so if you take away the resolver composition (this is graph composition and not route federation), you can do the same things with a similar amount of effort in REST. This is no longer a GraphQL vs REST conversation, it's an acknowledgement that if you don't want any of the benefits you won't get any of the benefits.


There are pros & cons to GraphQL resolver composition, not just benefits.

It is that very compositional graph resolving that makes many see it not as a benefit but as a detriment. You seem to imply that the benefit is guaranteed and that graph resolving cannot be done within a REST handler. It can be, and it's much simpler and easier to reason about: I'm still going to go get the same data, but with less complexity and reasoning overhead than with GraphQL's resolver-composition concept.

Is resolver composition really that different from function composition?
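To make the comparison concrete, here's a minimal sketch (all names hypothetical, not any real GraphQL library's API) of how a GraphQL-style resolver is essentially just a function of `(parent, args)`, and "resolver composition" is ordinary function composition where the parent's result feeds the child resolver:

```typescript
// A resolver is just a function (parent, args) => result.
type Resolver<P, A, R> = (parent: P, args: A) => R;

// Top-level resolver: fetch a user (stubbed data for the sketch).
const getUser: Resolver<null, { id: number }, { id: number; teamId: number }> =
  (_parent, args) => ({ id: args.id, teamId: 7 });

// Child resolver: resolve a user's team from the parent result.
const getTeam: Resolver<{ teamId: number }, {}, { name: string }> =
  (parent, _args) => ({ name: `team-${parent.teamId}` });

// "Composed resolving" = feed the parent result into the child
// resolver -- plain function composition, nothing more exotic.
function resolveUserWithTeam(id: number) {
  const user = getUser(null, { id });
  return { ...user, team: getTeam(user, {}) };
}

console.log(resolveUserWithTeam(1));
```

The same data fetch inside a REST handler would look almost identical; the difference is mostly who drives the composition (the GraphQL engine walking the query vs. the handler author writing it out by hand).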


Local non-utility does not imply global non-value. Of course there are costs and benefits, but it's hard to have a good-faith comparison when it rests on "many see it as overly complex": that analysis completely ignores problem fit, which you then want to generalize onto all usage.


People can still draw generalizations about a piece of technology that hold true regardless of context or problem fit.

One of those conclusions is that GraphQL is more complex than REST without commensurate ROI.


Yeah, that’s a huge over-generalization


I'm building yet another AI chat app.

My initial goal is to make a functional SillyTavern (AI roleplaying) replacement. SillyTavern builds prompts from a few rigid buckets (character, scenario, lore, system prompt, author's note...), which makes complex setups hard to manage. Content gets duplicated, settings have to be toggled in multiple places, and it’s easy to accidentally carry or modify state across conversations. Over time, it becomes difficult to tell what context is actually in effect.

I’m building an alternative that treats context as small, reusable pieces that can be composed and organized flexibly, rather than locked into fixed categories. Characters, settings, and behaviors can be mixed, reused, or temporarily enabled without duplication or manual cleanup, and edits preserve clear history instead of rewriting the past. The goal is to make managing complex context deliberate and controlled instead of fragile.

Although I’m trying to get the functionality required for roleplaying done first, the app is generic enough for other AI workflows where fine-grained, explicit context control is an improvement over existing chat interfaces. Think: start a new conversation with an assistant and start checking off rules, documents, and instructions to apply to the chat. Regenerate responses with clarifications or additional one-time context layers.
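The composition idea above could be sketched roughly like this (a toy illustration with made-up names, not the app's actual design): each context item is a small labeled fragment, and a conversation simply selects which fragments to compose into the prompt, with no duplication even if a fragment is selected twice.

```typescript
// A reusable context piece: a label plus the text it contributes.
interface ContextPiece {
  label: string;
  text: string;
}

// Compose the selected pieces, in order, deduplicating by label so
// toggling a piece on twice never duplicates its content.
function composePrompt(library: ContextPiece[], selected: string[]): string {
  const seen = new Set<string>();
  const parts: string[] = [];
  for (const label of selected) {
    if (seen.has(label)) continue;
    const piece = library.find((p) => p.label === label);
    if (piece) {
      seen.add(label);
      parts.push(piece.text);
    }
  }
  return parts.join("\n\n");
}

const library: ContextPiece[] = [
  { label: "character", text: "You are a terse wizard." },
  { label: "rules", text: "Answer in one sentence." },
];

console.log(composePrompt(library, ["character", "rules", "character"]));
```

The point of the structure is that enabling, disabling, or reordering context becomes an explicit selection over a shared library rather than editing text inside several fixed buckets.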


When there's a license you're either violating the license agreement or you're not. That's not an honor system.


No, "honor system" is very frequently used and understood to refer to a system where there are explicit rules but where the rules are not enforced via active surveillance.


It sounds like you want to make a judgement call: "they're too small to enforce this license agreement," so you get to pretend it's an honor system and not a license agreement.


The context was whether there is automatic enforcement, not whether you need to abide by the license.


Who's going to verify whether or not you're violating the license?


God


My question is what happened between when they went in the water and when they got off-site medical treatment. 7 hours seems like a long time. Is there on-site medical that would be doing something during that time?


Anecdote: My house mate in grad school was working in a national lab when an experiment caught fire and the fire consumed a certain amount of radioactive material. (Tiny little buttons used for calibrating detectors). He was on shift and was the person who discovered the fire and pulled the alarm.

Among other things, he had to sit inside an enclosure made of scintillator material for a period of time, to make sure he wasn't contaminated. Then he also got blood tests for heavy metals etc. They pretty much went by the book for all of these tests.

Also, the facility is the only place that's equipped for this kind of situation.


Realistically, there is little to do besides decontamination which I'm sure they're equipped to do on site.


It's a process to come into a high-radiation area, as well as a process to come out. I'm sure the worker was not injured, so they processed them out, decontaminated the individual, and did a whole-body count, then released them to medical for evaluation, which is itself a process.


But "it's not possible for xxx to be used securely" is a better premise if it deflects people who can't do it correctly.


Lying to people because you think you're smarter than them is bad policy.


That’s fine.


I have to agree. My experience working on a team with mixed levels of seniority and coding experience is that everybody got some increase in productivity and some increase in quality.

The ones who spend more time developing their agentic coding as a skillset have gotten much better results.

In our team people are also more willing to respond to feedback because nitpicks and requests to restructure/rearchitect are evaluated on merit instead of how time-consuming or boring they would have been to take on.


> My experience working on a team with mixed levels of seniority and coding experience is that everybody got some increase in productivity and some increase in quality.

Is that true? There have been a couple of papers showing that people perceive they are more productive because the AI feels like motion (you're "stuck" less often) when in reality it has been a net negative.

