Depends on what country you're in. In the UK, the banks are often held liable for various scams that involve the transfer of money, so they up the security over and over again. A bank will rightly ask why it should be responsible for an old granny sending her life savings to her new lover in Namibia, so it seeks to block such transactions in the first place.
Some of that liability is fair but most of it is the government telling the banks to account for the loss when someone is scammed. They are obviously going to mitigate that as much as they can.
I think Go's concept of error wrapping is probably unusual to newcomers to the language, who might be used to, say, pulling in a dependency for error handling (logrus or whatever), when it's all there in the stdlib in what Go has decided is the idiomatic way to do errors and logging.
It's nice when you understand how to do it well and move on from, say, printing errors directly where they happen to building, essentially, a stack of wrapped errors that gets dumped at an interface boundary, giving you much more context.
I think it's probably confusing until you understand interfaces, which you might not be familiar with coming from other languages (my guess for what happened in this blog post). If you don't know what an interface is, then maybe you assume err.Error() means err is some kind of string, without realizing Error() is just a required method and err could be any type that implements it.
My understanding was that using `error` as a return type performs type erasure and that the only information given to the caller is the return value of `e.Error()`. I now understand that this isn't the case.
I get that impression too - but also it's HN and enthusiastic early adoption is unsurprising.
My concern, and the reason I would not use it myself, is the all-too-frequent skirting of externalities. For every person who says "I can think for myself and therefore understand if GPT is lying to me," there are ten others who will take it as gospel.
The worry I have isn't that people are misled - this happens all the time especially in alternative and contrarian circles (anti-vaxx, homeopathy, etc.) - it's the impact it has on medical professionals who are already overworked who will have to deal with people's commitment to an LLM-based diagnosis.
The patient who blindly trusts what GPT says is going to be the patient who argues tooth and nail with their doctor about GPT being an expert, because they're not power users who understand the technical underpinnings of an LLM.
Of course, my angle completely ignores the disruption angle - tech and insurance working hand in hand to undercut regulation, before it eventually pulls the rug.
Reminds me a bit of Snaplet before it embarked on its incredible journey to get acquired by Supabase and shut down.
I like the concept, but the pain point has never been creating realistic-looking emails and the like; it's creating data that is realistic in terms of the business domain and in terms of volume.
Appreciate the Snaplet comparison, they were doing good work. You're right that realistic looking strings are the easy part. We're focused on relational integrity first (FKs, constraints, realistic cardinality), but business domain logic is the next layer. What kinds of rules would be useful for you? Things like weighted distributions, time-based patterns, conditional relationships?
The realistic cardinality is actually a good start (the problem with things like using Faker for DB seeds being that everything is entirely too random).
If one were able to use metrics as a source then, depending on the quality of the metrics, it might be possible to distribute data in a manner similar to what's observed in production? You know, some users being far more active than others, for example. Considering a major issue with testing is that you can't accurately benchmark changes or migrations against a staging environment that is 1% the size of your prod one, that would be a huge win, I think, even if the data is, for the most part, nonsensical. As long as referential integrity is intact, the specifics matter less.
Domain specific stuff is harder to describe I think. For example, in my setup I'd want seeds of valid train journeys over multiple legs. There's a lot of detail in that where the shortcut is basically to try and source it from prod in some way.
This is useful. What if you ran a CLI locally that extracts just the statistical profile from prod (cardinality, relationship ratios, etc.) and uploads that? We'd never touch your database; you just hand us the metrics and we match the shape.
Hey! Snaplet founder here. Want to clarify that it was not acquired by Supabase; I shut down the startup and found roles for some of the team at Supabase.
One of the most frustrating and perhaps thought-terminating clichés on the internet and social media at large is alluded to in this reply:
“I personally could not view this page [because I turned off JS], therefore I will dismiss it out of hand as it didn’t cater to my needs.” A choice made by the consumer somehow makes the author accountable for it.
Or more succinctly, “but what about me [or people I’ve anointed myself as spokesperson for]?” spoken by someone who is not the intended audience for the piece, trying to make the author responsible for their need.
The answer to which, I think, is either, “it’s not for you then so move on,” or perhaps even “misery is optional, just enable JS ffs.”
The idea that the creator of a work must bend to the will of those that consume it seems to be highly prevalent, and is pretty much at odds with creativity itself.
I have found that HN is, ironically, a horrible place to post experimental work, with a few exceptions - e.g. things "written in Rust" etc. I think it's because the majority of the commenters here haven't really made anything from scratch.
Personally, I think if your JSON needs comments then it's probably for config or something the user is expected to edit themselves, and at that point you have better options than plain JSON with commentary shoehorned into the actual payload.
If it's purely for machine consumption then I suspect you might be describing a schema and there are also tools for that.
I am basically rawdogging Claude these days, I don’t use MCPs or anything else, I just lay down all of the requirements and the suggestions and the hints, and let it go to work.
When I see my colleagues use an LLM they are treating it like a mind reader and their prompts are, frankly, dogshit.
It shows that articulating a problem is an important skill.