
Depends on what country you're in. In the UK, the banks are often held liable for various scams that involve the transfer of money, so they up the security over and over again. A bank will rightly question why it should be responsible for an old granny sending her life savings to her new lover in Namibia, so it seeks to block that transaction in the first place.

Some of that liability is fair but most of it is the government telling the banks to account for the loss when someone is scammed. They are obviously going to mitigate that as much as they can.


Rooted devices don't enable that transaction. That's all social engineering.

It's all social engineering now but that's because phones are secure and remote attestation infrastructure is in place.

Go back fifteen years and malware is absolutely submitting bank transactions after the user does a 2FA.

https://krebsonsecurity.com/2010/03/crooks-crank-up-volume-o...


and grandmas don't root their devices.

To play devil's advocate, grandma would have no idea whether she bought a rooted device or had her device rooted by someone else.

> so they up the security

They're upping the surveillance, not the security, quite demonstrably.

This is meant to protect /them/ from liability and not /you/ from loss.


Knowing what total comp is like for those companies, I'm sure Facebook more than exceeded the price one might put on ethics.

I've personally resigned from positions for less and it hasn't cost me much comfort in life (maybe some career progression, but meh).


I think Go's concept of error wrapping is probably unfamiliar to newcomers to the language, who might be used to, say, pulling in a dependency for error handling (logrus or whatever), when it's all there in the stdlib in what Go has decided is the idiomatic way to do errors and logging.

It's nice once you understand how to do it well and move on from, say, printing errors directly where they happen to building, essentially, a stack of wrapped errors that gets dumped at an interface boundary, giving you much more context.
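A minimal sketch of what I mean (the names here are invented for illustration, not from any particular codebase): wrap with %w at each layer, then log and branch on the whole chain once, at the boundary.

    package main

    import (
        "errors"
        "fmt"
        "log"
    )

    // Sentinel error that callers can test for with errors.Is.
    var ErrNotFound = errors.New("record not found")

    // Hypothetical low-level lookup: returns the sentinel with context.
    func loadUser(id string) error {
        return fmt.Errorf("loading user %q: %w", id, ErrNotFound)
    }

    // Next layer wraps again, adding its own context.
    func handleRequest(id string) error {
        if err := loadUser(id); err != nil {
            return fmt.Errorf("handling request: %w", err)
        }
        return nil
    }

    func main() {
        // The interface boundary: dump the whole wrapped chain once,
        // and branch on the sentinel buried inside it.
        if err := handleRequest("42"); err != nil {
            if errors.Is(err, ErrNotFound) {
                log.Printf("404: %v", err)
                return
            }
            log.Printf("500: %v", err)
        }
    }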


I think it's probably confusing until you understand interfaces, which, coming from other languages, you might not be familiar with (my guess for what happened in this blog post). If you don't know what an interface is then maybe you assume err.Error() is some kind of string, without realizing Error() is just a required method and err itself could be whatever type implements it.

My understanding is that using `error` as a return type performs type erasure and that the only information given to the caller is the value of the `e.Error()` function. I now understand that this isn't the case.
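To show why there's no erasure (a small hypothetical example, not taken from the post): the concrete type travels along with the interface value, and errors.As recovers it, fields and all.

    package main

    import (
        "errors"
        "fmt"
    )

    // Hypothetical concrete error type carrying extra fields.
    type QueryError struct {
        Query string
        Code  int
    }

    func (e *QueryError) Error() string {
        return fmt.Sprintf("query %q failed with code %d", e.Query, e.Code)
    }

    // Returns the plain error interface, wrapped once for good measure.
    func run() error {
        return fmt.Errorf("running job: %w", &QueryError{Query: "SELECT 1", Code: 53})
    }

    func main() {
        err := run()
        fmt.Println(err) // the Error() string is not all the caller gets

        // errors.As digs the concrete *QueryError back out of the chain.
        var qe *QueryError
        if errors.As(err, &qe) {
            fmt.Println("code:", qe.Code, "query:", qe.Query)
        }
    }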

I get that impression too - but also it's HN and enthusiastic early adoption is unsurprising.

My concern, and the reason I would not use it myself, is the all-too-frequent skirting of externalities. For every person who says "I can think for myself and therefore understand if GPT is lying to me," there are ten others who will take it as gospel.

The worry I have isn't that people are misled - this happens all the time especially in alternative and contrarian circles (anti-vaxx, homeopathy, etc.) - it's the impact it has on medical professionals who are already overworked who will have to deal with people's commitment to an LLM-based diagnosis.

The patient who blindly trusts what GPT says is going to be the patient who argues tooth and nail with their doctor about GPT being an expert, because they're not power users who understand the technical underpinnings of an LLM.

Of course, my take completely ignores the disruption angle - tech and insurance working hand in hand to undercut regulation, before it eventually pulls the rug.


Growth hacking is a polyp on the arsehole of technology, to paraphrase

Reminds me a bit of Snaplet before it embarked on its incredible journey to get acquired by Supabase and shut down.

I like the concept, but the pain point has never been creating realistic-looking emails and the like; it's creating data that is realistic in terms of the business domain and in terms of volume.


Appreciate the Snaplet comparison, they were doing good work. You're right that realistic looking strings are the easy part. We're focused on relational integrity first (FKs, constraints, realistic cardinality), but business domain logic is the next layer. What kinds of rules would be useful for you? Things like weighted distributions, time-based patterns, conditional relationships?

The realistic cardinality is actually a good start (the problem with things like using Faker for DB seeds being that everything is entirely too random).

If one were able to use metrics as a source then, depending on the quality of the metrics, it might be possible to distribute data in a manner similar to what's observed in production? You know, some users that are far more active than others, for example. Considering a major issue with testing is that you can't accurately benchmark changes or migrations against a staging environment that is 1% the size of your prod one, that would be a huge win I think, even if the data is, for the most part, nonsensical. As long as referential integrity is intact, the specifics matter less.
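A rough sketch of the kind of skew I mean (purely illustrative, using Go's built-in Zipf generator rather than anything your product actually does): a few users soak up most of the event volume instead of Faker-style uniform randomness.

    package main

    import (
        "fmt"
        "math/rand"
    )

    func main() {
        // Hypothetical seed plan: 1,000 users, 100,000 events, with activity
        // skewed so a handful of users account for most of the volume.
        const users = 1000
        const events = 100000

        r := rand.New(rand.NewSource(1))
        zipf := rand.NewZipf(r, 1.2, 1, users-1)

        counts := make([]int, users)
        for i := 0; i < events; i++ {
            counts[zipf.Uint64()]++
        }

        fmt.Println("most active user:", counts[0], "events")
        fmt.Println("median-ish user: ", counts[users/2], "events")
    }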

Domain specific stuff is harder to describe I think. For example, in my setup I'd want seeds of valid train journeys over multiple legs. There's a lot of detail in that where the shortcut is basically to try and source it from prod in some way.


This is useful. What if you ran a CLI locally that extracts just the statistical profile from prod (cardinality, relationship ratios, etc.) and uploads that? We'd never touch your database, you just hand us the metrics and we match the shape.

We do exactly that in one of our products. It's called data profiling.

I'd be willing to try that out :) a CLI would be great, even as a sandbox tool

Really appreciate the input. I'll make sure to give you early access once we implement this, I'll keep you posted.

Hey! Snaplet founder here. Want to clarify that it was not acquired by Supabase; I shut down the startup and found roles for some of the team at Supabase.

The code remains:

- https://github.com/supabase-community/seed
- https://github.com/supabase-community/copycat
- https://github.com/supabase-community/snapshot

This looks like a great project, wishing them all the best on the journey.


Thanks!! Means a lot coming from you. Best of luck at Supabase.

Thanks, but I am not at Supabase! I ended up going back to building RedwoodJS and took over the project, and now have a consultancy.

I'll put money on Windows 12 being rebranded to either Windows Copilot or Copilot OS for Windows.

The concept of Windows and operating systems is so 90s. The next version will be Copilot Portal, your access point to the Copilot ecosystem.

How long until the marketing geniuses at Microsoft launch “Copilot Copilot for Copilot?”

I think it’ll be called “Copilot Subsystem for Linux” or something incomprehensible like that.

It's okay, you have to use cloud copilot anyway since you'll barely have enough RAM for your browser moving forward.

Windows 13 is sure to be another avoided version (like Windows 9 was skipped).

The marketers will want to rename 12 with the argument it has to change before 13 anyways.

Then again Apple are just as annoying, releasing 26 in 2025.


Windows 14 will just be called Pilot, where the user just sits back and watches the computer “do its thing.”

Microsoft Restarting would be more fitting

Microsoft Copilot Recall Ai Pro.

Microsoft Windows X Copilot One Series Y

One of the most frustrating and perhaps thought-terminating clichés on the internet and social media at large is alluded to in this reply:

“I personally could not view this page [because I turned off JS], therefore I will dismiss it out of hand as it didn’t cater to my needs.” A choice made by the consumer somehow makes the author accountable for it.

Or more succinctly, “but what about me [or people I’ve anointed myself as spokesperson for]?” spoken by someone who is not the intended audience for the piece, trying to make the author responsible for their need.

The answer to which, I think, is either, “it’s not for you then so move on,” or perhaps even “misery is optional, just enable JS ffs.”

The idea that the creator of a work must bend to the will of those that consume it seems to be highly prevalent, and is pretty much at odds with creativity itself.


I'm going to have to bite at the bait here: your post is guilty of what it's critiquing, and to a larger degree than the post being replied to.

I have found that HN is, ironically, a horrible place to post experimental work, with a few exceptions - e.g. things "written in Rust" etc. I think it's because the majority of the commenters here haven't really made anything from scratch.

Personally, I think if your JSON needs comments then it's probably for config or something the user is expected to edit themselves, and at that point you have better options than plain JSON and adding commentary to the actual payload.

If it's purely for machine consumption then I suspect you might be describing a schema and there are also tools for that.


I am basically rawdogging Claude these days: I don’t use MCPs or anything else, I just lay down all of the requirements, the suggestions, and the hints, and let it go to work.

When I see my colleagues use an LLM they are treating it like a mind reader and their prompts are, frankly, dogshit.

It shows that articulating a problem is an important skill.

