damnitbuilds's comments | Hacker News

I don't mind minor mistakes, as long as the news site corrects them when readers point them out.

What is not acceptable is that news sites don't make these corrections.


"Who cares the server side software is open source if you still can't submit your taxes with your own python script?"

The government - and taxpayers - should care: closed-source software ties them to the company that wrote it forever, so every change and bugfix will be much more expensive.
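To make the parent's point concrete: "submitting your taxes with your own python script" only requires an open, documented submission API, not the server's source. A minimal sketch, assuming a purely hypothetical endpoint and schema (nothing here is a real government service):

    import requests  # pip install requests

    # Hypothetical filing endpoint and field names, for illustration only.
    FILING_URL = "https://tax.example.gov/api/v1/returns"

    return_data = {
        "tax_year": 2024,
        "taxpayer_id": "000-00-0000",  # placeholder ID
        "income": 52000,
        "deductions": 13850,
    }

    resp = requests.post(FILING_URL, json=return_data, timeout=30)
    resp.raise_for_status()  # fail loudly on a rejected submission
    print("Confirmation:", resp.json()["confirmation_id"])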


Many colleges and employers unfairly favor women. That must stop. DEI must be made illegal and its removal enforced.

Because society needs fairness.

But many employers also unfairly discriminate against mothers. That must also stop, and its removal must be enforced.

Because society needs fairness and children.


Headline: "The first privately funded space-based telescope"

Story: "[...] could be the largest privately funded space telescope in history"

Great journalism, theverge.


Social Science is a collection of bad, unreplicated "research" that is used to justify the whacky opinions of its practitioners.

It does not "know" anything. It cannot tell us anything.


Well, he would say that, wouldn't he?

The actual reason:

The BBC will stay on X because it has been caught being appallingly left-wing, and now it has to pretend to be more centrist until the fuss dies down and they can be trendy-lefty student politicians again.


Not buying it.

- How many of the people saying "6,7" know (or believe when told) that it comes from a comment on someone's height?

- How many of the people saying "6,7" think that it still applies to someone's height?

- It comes from a comment on basketball, where 6'7" might indeed be a good height to be, but how many people extrapolate that to a comment on career progression, as the author does?

- How many of the people saying "6,7" have even left school, let alone encountered a career progression they are unhappy with?

TL;DR: Nonsense.


If people don't like something, I think their motivation to act is stronger than if they like it or are neutral. Human nature.

People will downvote a positive headline about something they don't like.

But what do they do with a negative headline about something they don't like? I guess they upvote it to show they also don't like it.

So negative wins.

"ran GPT-4 sentiment analysis on full article text." I think most people vote based on headlines, not on article text.


You're right that most voting is headline-driven - that's definitely a limitation worth calling out.

I went with full article text because I wanted to capture what the content actually delivers, not just what the headline promises. A clickbait negative headline with a balanced article would skew results if I only looked at titles.

That said, you've got me thinking. It would be interesting to run sentiment on headlines separately and compare. If headline sentiment correlates strongly with article sentiment, your point stands. If they diverge, there might be something interesting about the gap between promise and delivery.

Might be a good follow-up analysis. Thanks for pushing on this.
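For anyone who wants to try that follow-up, a minimal sketch assuming the OpenAI Python client (openai >= 1.0) and Python 3.10+; the model name, scoring prompt, and sample data are all illustrative, not the original study's code:

    from statistics import correlation  # Pearson r, Python 3.10+
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def sentiment(text: str) -> float:
        """Ask the model for a single sentiment score in [-1, 1]."""
        reply = client.chat.completions.create(
            model="gpt-4",  # illustrative choice
            messages=[
                {"role": "system",
                 "content": "Rate the sentiment of the following text from "
                            "-1 (very negative) to 1 (very positive). "
                            "Reply with the number only."},
                {"role": "user", "content": text},
            ],
        )
        return float(reply.choices[0].message.content.strip())

    # Placeholder data; in practice, load (headline, article_text) pairs.
    stories = [
        ("Startup shuts down after 10 years", "Full article text here ..."),
        ("New open-source tool released", "Full article text here ..."),
    ]

    head_scores = [sentiment(h) for h, _ in stories]
    body_scores = [sentiment(a) for _, a in stories]

    # High correlation -> headline sentiment is a fair proxy for the article's.
    print("headline/article correlation:", correlation(head_scores, body_scores))

If the two score lists track each other closely, headline-only voting and article-level sentiment tell the same story; a big gap would point at the promise-vs-delivery effect described above.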


Nice! Very true: show a group a button with a given color, for example, and they will waste the meeting discussing the color rather than what the button should do, so ASCII is a nice way to avoid that.
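For illustration, a color-free mockup like this keeps the conversation on what the button does rather than how it looks (layout invented here):

    +----------------------------------+
    |  Invoice #1042                   |
    |  Amount due: $120.00             |
    |                                  |
    |  [ Pay now ]   [ Download PDF ]  |
    +----------------------------------+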

I concur.

Just using AI to write boilerplate with a simple "Do what I did for this for these" request is like what I hoped The Future would be five years ago. It's great!

But get over-ambitious with your requests and you get over-complex almost-solutions that, indeed, take up all your time to fix. I find this takes all the fun out of development: you have to restart from scratch something you had "almost" finished a day ago.

But when you get to know the AI's limits, it is definitely a time-saver.

Hmm, I think I will trademark "Over-complex almost-solutions".


> But get over-ambitious with your requests

Have you first asked the bot if the request is overambitious? :)

> But when you get to know the AI's limits

That would be when you know whether its output can be trusted, right? And there's the problem. Software is way beyond the point where the product can be adequately proved. We rely on process control. And stochastic parroting does not cut it.


I suspect the AI Dunning-Kruger effect would come into play.
