Hacker News | tsss's comments

You don't need an educated workforce if you have machines that can do it reliably. The more important question is: who will buy your crap if your population is too poor due to a lack of well-paying jobs? A look towards England or Germany has the answer.


The top 10% of households already account for more than half of consumer spending in the US


Hmmm, that doesn't seem right. I'm having a hard time finding an actual consumption number, but I am confident it's well below 50%.

The top 10% of households by wage income do receive ~50% of pre-tax wage income, but:

1) our tax system is progressive, so actual net income share is less

2) there's significant post-wage redistribution (Social Security/Medicaid)

3) it is well established that high-income households consume a smaller percentage of their net income.
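The three points above can be combined into a quick back-of-envelope calculation. All of the numbers below are made-up illustrative assumptions, not real data; the point is only that a 50% pre-tax income share can easily imply a consumption share well below 50% once taxes and a lower propensity to consume are applied.

```python
# Illustrative sketch only: every rate here is an assumption, not a statistic.
top_pretax_share = 0.50   # assumed: top 10% share of pre-tax wage income
top_avg_tax = 0.30        # assumed average effective tax rate, top decile
rest_avg_tax = 0.15       # assumed average effective tax rate, bottom 90%
top_consume_rate = 0.60   # assumed: top decile spends 60% of net income
rest_consume_rate = 0.95  # assumed: bottom 90% spends 95% of net income

# Net income shares after (assumed) progressive taxation
top_net = top_pretax_share * (1 - top_avg_tax)
rest_net = (1 - top_pretax_share) * (1 - rest_avg_tax)

# Consumption by each group, given the assumed spending rates
top_spend = top_net * top_consume_rate
rest_spend = rest_net * rest_consume_rate

top_consumption_share = top_spend / (top_spend + rest_spend)
print(f"implied top-10% consumption share: {top_consumption_share:.0%}")
```

Under these particular assumptions the implied consumption share of the top decile comes out around a third, i.e. noticeably below its pre-tax income share; different assumed rates would of course shift the number.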



For what it's worth, I have been using Gemini 2.5/3 extensively for my master's thesis and it has been a tremendous help. It's done a lot of math for me that I couldn't have done on my own (without days of research), suggested many good approaches to problems that weren't on my mind and helped me explore ideas quickly. When I ask it to generate entire chapters they're never up to my standard, but that's mostly an issue of style. It seems to me that LLMs are good when you don't know exactly what you want or you don't care too much about the details. Asking one to generate a presentation is an utter crapshoot, even if you merely ask for bullet points without formatting.


> It's done a lot of math for me that I couldn't have done on my own (without days of research),

Isn't the point of doing the master's thesis that you do the math and research, so that you learn and understand the math and research?


I bet people said the same thing about long division when the calculator first came out. Is using MATLAB and Excel OK but AI not? Where do we draw the line with tools?


OP said they "generated entire chapters"


Apparently not. This is the most perfect example of "I can recite it, but I don't understand it, so I don't know if it's really right or not" that I've seen in a while.


I do understand it. I just don't have the overview of all the algorithms that LLMs have.


You never got a properly dimensioned wireframe model from your UI designer? That's a specification too.


I don't do a lot of frontend work. But when I did, the mocks were almost always best case; no mocks for when there was missing data (which was often). I did work on one project with an interaction designer, which was great --- having all the flows laid out was awesome, but it was only the happy path.


And those social skills, confidence, emotional vulnerability appear where exactly on the profile?

Please. Online dating is 80% looks, 10% height and 10% money.


You have photos and text. Try making them funny and showing that you're not afraid to share things about yourself that others would hide: maybe emphasize a weird hobby or interest that other people might be worried to reveal. Of course this only works if you're actually funny or actually have unusual hobbies. One guy I know literally posted silly photos of himself awkwardly pole dancing and got tons of matches. Even if you're tall and rich, I advise being interesting instead and making it impossible to tell those things, if you want to match with people who are not boring and shallow. Have a good time and be lighthearted; a bad attitude about how dating is unfair will be impossible to hide and looks like a major personality flaw.


There has never been a functioning and just legal system in the history of mankind. Not to mention that what is "just" is very much up for debate.


Well, sure, it's all relative and no system is perfect. Not every mother is perfect, doesn't mean I escort mine around the house at gunpoint whenever she visits.


If this code is too complicated for you then you should reconsider your career.


Agreed. Especially if they chose to use Scala...


TFA is indignantly reacting to the least charitable interpretation of what his employer has asked him to do. I'd like to know the honest shape of the code his manager rejected before judging the employer over this.


> less of a therapist and more of a personal validation machine.

But that's exactly what a therapist is.


Sometimes? A lot of the time the point of therapy is to challenge your statements and get you to the heart of the issue so you can mend it or recognize things that are off base and handle things differently. A lot of the relationship is meant to be a supportive kind of conflict so that you can get better. Sometimes people really do need validation, but other times they need to be challenged to be improved. As it stands today, AI models can't challenge you in the way a human therapist can.


Therapists are incentivized to tell the people who paid them what they want to hear.


Any field has hacks. Telling someone what they want to hear and helping get someone where they want to be are different things. Quality professionals help people reach their goals without judgment or presumption. That goes for mental health professionals as well as any professional field.


Not every field has quacks tho


A bad one. A good therapist will figure out what you need to hear, which does not always overlap with what you want to hear.


Absolutely true but I don't think a person should rely on an LLM alone for that reason. It's just not smart and insightful enough.

It's more like that really good friend that's not a therapist but always tells you what you want to hear and makes you feel a bit better until you get to your actual therapist.


I should’ve been clearer on that, but I absolutely agree: an LLM is a bad therapist at best, and a hallucinating, ego-stroking algorithm eventually.


Yes but still, talking to someone helps. No matter what they say back. If an LLM is the only thing around at the moment (e.g. in the middle of the night) this can be useful for therapeutic purposes.

Therapy isn't only about what the therapist says to you. There is a lot about you talking to them and the process that creates in the mind. By sharing your thoughts with someone else you view them from a different perspective already.


Then it’s actually better to talk to yourself. Not an LLM that’s trained on all of the internet’s combo of valuable and unhinged takes on all manner of trauma.


It's not the same. An internalised conversation doesn't have the same effect.

And I have good experiences with the LLM for this purpose. It's probably my prompt and RAG that I provided with a lot of my personal stuff but even the uncensored model I use is always supportive and often comes up with interesting takes / practical suggestions.

I don't rely on it for advice but for talking to when real friends aren't around and there's something urgent I'm worried about it's really good.


Would you be willing to email me at me@xeiaso.net? I have some questions I'd like to ask you as part of research for a followup piece. No judgement, I just want to know how it's affected your life.


Sure!


I honestly would recommend against that but we’re all free to do with our brains as we please. I just hope it’s not as destructive as I intuit it to be…


Anyone interested in better understanding a complex system can benefit from a qualified professional’s collaboration, often and especially when an outside perspective can help find different approaches than what appear to be available from inside the system.


Not really. Good therapy is uncomfortable. You are learning how to deal with thought patterns that are habitual but unhealthy. Changing those requires effort, not soothing compliments and validation of the status quo.


These people would get hospitalized for any reason.


Good for them.


This paper is so important. Just imagine how much pain could have been avoided if the Gitlab and Github developers read this before making the steaming shit pile of Github Actions.

