Hacker News: _vertigo's comments

Every action is self-interested if you squint enough

There is a difference between hiding your identity for the purpose of privacy and for the purpose of deception.

There’s a difference between writing anonymously and assuming a false identity.


>assuming a false identity

Whose identity is being faked?

I always grew up with the assumption that everything on the internet is most likely fake.

That girl you're talking to? Probably a dude. The Nigerian prince asking for money. Probably a scammer.

Unless it's a government website with secure ID login, every account you see online is probably fake.


If you invent a person and assume their identity, that's still a false identity.

> I always grew up with the assumption that everything on the internet is most likely fake.

How you interact with the internet is not really relevant to the discussion. The average person does not interact with the internet in the same way that people on this forum do, so that should not be the yardstick by which we judge this.


>assuming a false identity

Is vertigo22 a real identity or a fake one?

>The average person does not interact with the internet in the same way that people on this forum do

How other "average" people (am I not average too?) choose to use the internet is irrelevant to whether everyone needs to doxx themselves online from now on.


That’s stupid. We can have a law that protects users from bullshit like this. Lacking technical savvy should not mean you forfeit the right to your data.


I think this is speaking to the absence of sufficient protections like laws. How does a hypothetical law protect you if that law doesn't exist?


I think you’ve missed some context here: we’re responding to someone who is being dismissive of the pain experienced by users in these cases.

Lies, damned lies, and statistics


Some guy writes a whimsical blog post about wanting to work at a company that takes pride in their work and the top comment criticizes the blog post because they didn’t specify explicitly in their 300 word post that the company has to not be evil too.

Jesus, we’re fucked aren’t we?


Yes, obviously, everyone knows that. When all you have to file is a 1040, reading one of the instruction documents is fine. When you have to use several forms, it starts to add up.


I've filed my own taxes for years and have a complicated setup: real estate, stocks, RSUs, ESPPs, private shares, AMT, etc. It's extremely straightforward and takes less time than using TurboTax if you've done it before. The instructions are obvious.

You can also call the IRS and be told for free what the rules are. People pay H&R Block and Intuit even though the IRS is extremely responsive and will connect you with an actual American IRS rep to answer your questions.

People pay for the software because they've been marketed to, not because they need it. For the situations that are actually hard, software like TurboTax is useless anyway.

Also, if you get the numbers wrong, the IRS just corrects it.


> Yes, obviously, everyone knows that.

It's pretty clear that daemonologist did not know that. Which is weird, given that all the tax law the average USian needs to know is "Read and follow the instructions for Form 1040.".

(RIP 1040-EZ. You were a good form.)

Also, I've had to file several forms in the past. It 'adds up', but it's all mechanically following instructions... not anything difficult.


Following forms is easy. The hard part is knowing if you need to file a form.


> The hard part is knowing if you need to file a form.

In my experience, the form instructions tell you clearly when you should and should not file a form. They also clearly indicate which other forms are to be filed when you meet specific conditions (income limits, possession of specific other forms, etc.).

Granted, I don't run a business, nor do I have exceptionally complex finances, so there are a great many IRS form instructions that I have never seen. Because I've not seen them all, I'd never say that every such form instruction is clear, but the ones I've encountered have been.


Please. Are you implying we need AI to the same degree we need clean water?

Your chemicals-in-the-river analogy only works if there were also a giant company straight out of “The Lorax” siphoning off all the water in the river... and further, the chemicals would have to be harmless to humans but would cause the company’s machines to break down so they couldn’t make any more thneeds.


The problem is:

1. The machines won't "break"; at best you slightly increase how often they answer with incorrect information.

2. People are starting to rely on that information, so when "transformed", your harmless chemicals are now potentially poison.

Knowing this is possible, it (again, "to me") becomes highly unethical.


The onus to produce correct information is on the LLM producer. Even if it's not poisoned information, it may still be wrong. The fact that LLM producers are releasing a product that produces unverified information is not a blogger's fault.


Honestly, AI could have written this.


That tldr table at top looks a lot like what perplexity provides at the bottom...


So your take is that if they are therapists, it’s a conflict of interest, and if they aren’t therapists, they’re not qualified to make the assessment?


That is correct. I don’t think this study can be made in a reliable way.


This is an interesting take. By this perspective, it's essentially impossible to ever gauge the efficacy of AI in doing anything, because the people who know how to measure the quality of a thing are also the people who would be displaced by showing that AI can do that thing. In fact, you could probably argue that every study ever is worthless, because studies are generally performed by people who know the subject matter, and it's basically impossible to be unbiased on a topic if you're also highly knowledgeable about said topic.

In reality, what matters is the methodology of the study. If the study's methodology is sound, and its results can be reproduced by others, then it is generally considered to be a good study. That's the whole reason we publish methodologies and results: so others can critique and verify. If you think this study is bad, explain why. The whole document is there for you to review.


I think you are correct, and incorrect. However: set and setting. Another of Lanier's observations, which he relates to LLMs, is the Boeing "smart" stall preventer which crashed two <strike>Dreamliners</strike> [correction:] 737 MAXes.

Who can argue with a stall preventer, right? What one can argue with, and what has been exposed, is that information about the operation of the stall preventer, training on it, and even the ability to effectively control it all depended on how much the airline was willing to pay for this necessary feature.

So in reality, what matters is studying the methodology of set and setting, not how the pieces of the crashed airship ended up where they did.


I'm not exactly sure how this relates to my comment above. An analysis of an airline crash and a study are not the same thing.

As it relates to study design, controlling for set and setting are part of the methodology. For example, most drug studies are double-blinded so that neither patients nor clinicians are aware of whether the patient is getting the drug or not, to reduce or eliminate any placebo effect (i.e. to control for the "set"/mental state of those involved in the study).

There are certainly some cases in which it's effectively impossible to control for these factors (i.e. psychedelics). That's not what's really being discussed here, though.

An airline crash is an n of 1 incident, and not the same as a designed study.


> it's essentially impossible to ever gauge the efficacy of AI in doing anything...

... compared to humans? Yes. This is a philosophical conundrum which you tie yourself up in if you choose to postulate the artificial intelligence as equivalent to, rather than a simulacrum of, human intelligence. We fly (planes): are we "smarter" than birds? We breathe underwater: are we "smarter" than fish? And so on.

How do you discern that the "other" has an internal representation and dialogue? Oh. Because a human programmed it to be so. But how do you know that another human has internal representation and dialogue? I do (I have conscious control over the verbal dialogue but that's another matter), so I choose to believe that others (humans) do (not the verbal part so much unfortunately). I could extend that to machines, but why? I need a better reason than "because". I'd rather extend the courtesy to a bird or a fish first.

This is an epistemological / religious question: a matter of faith. There are many things which we can't really know / rigorously define against objective criteria.


This, similar to your other comment, is unrelated to my comment.

This is about determining whether AI can be an equivalent or better (defined as: achieving equal or better clinical outcomes) therapist than a human. That is a question that can be studied and answered.

Whether artificial intelligence accurately models human intelligence, or whether an airplane is "smarter" than a bird, are entirely separate questions that can perhaps serve to explain _why/how_ the AI can (or can't) achieve better results than the thing we're comparing against, but not whether it does or does not. Those questions are perhaps unanswerable based on today's knowledge. But they're not prerequisites.


Well, that’s helpful to know, so that other people can know to ignore what you write on this.


I think that depends on whether your definition of “doomerism” is the same as theirs.

