
AI should help people achieve their ultimate goals, not their proximate goals. We want it to provide advice on how to alleviate their suffering, not how to kill themselves painlessly. This holds true even for subjects less fraught than suicide.

I don't want a bot that blindly answers my questions; I want it to intuit my end goal and guide me towards it. For example, if I ask it how to write a bubblesort script to alphabetize my movie collection, I want it to suggest that maybe that's not the most efficient algorithm for my purposes, and ask me if I would like some advice on implementing quicksort instead.
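As an illustration, here's a minimal Python sketch of what the assistant could show (the movie titles are made-up placeholders). Bubble sort is O(n^2); for a real collection, the built-in sorted(), which uses O(n log n) Timsort, is probably the advice a goal-aware assistant would actually give, rather than any hand-rolled algorithm:

    # Hand-rolled bubble sort: quadratic in the number of titles.
    def bubble_sort(items):
        items = list(items)  # sort a copy, leave the input alone
        for end in range(len(items) - 1, 0, -1):
            for i in range(end):
                if items[i] > items[i + 1]:
                    items[i], items[i + 1] = items[i + 1], items[i]
        return items

    movies = ["Solaris", "Alien", "Metropolis", "Brazil"]  # placeholder data
    assert bubble_sort(movies) == sorted(movies)  # built-in Timsort, O(n log n)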


I agree. I also think this ties in with personalization: being able to understand people's long-term goals. The current personalization efforts in these models feel more like a hack than what they should be.

I could be mistaken, but my understanding was that the people most likely to interact with the suicidal or near-suicidal (e.g., 988 suicide hotline attendants) aren't actually mental health professionals; most of them are volunteers. The script they run through is fairly rote and by the numbers (the Question, Persuade, Refer framework). Ultimately, of course, a successful intervention will result in the person seeing a professional for long-term support and recovery, but preventing a suicide and directing someone to that provider seems well within the capabilities of an LLM like ChatGPT or Claude.

Outdoor lighting is a lot cheaper now than it was in the 1970s. I think we can give it another shot after 50 years. And it's worth pointing out that Arizona has gone without DST for the last 50 years and seems to be doing fine.

Interestingly, part of the UK approach at the time was to make street lighting more efficient: a lot of low-pressure sodium lamps were installed around then. They used so little energy that they were only beaten for efficiency by LEDs this decade, but the monochromatic yellow light was seen as unacceptable by some countries, which continued to use inefficient high-pressure mercury and, later, high-pressure sodium lamps.

I miss the humble SOX lamp, to be honest; it made night look like night rather than a poor approximation of day. It also had benefits for wildlife, much of which is insensitive to the 589 nm wavelength, and for astronomy, where the light is easily filtered out.


That's only because it's so hot in Arizona: they want the sun to set earlier so it's cooler on summer evenings.

Arizona is permanent standard time rather than permanent DST, and is thus unaffected by the permanent-DST winter mornings issue.

> And it's worth pointing out that Arizona has gone without DST for the last 50 years and seems to be doing fine.

Arizona observes year-round Standard Time:

* https://en.wikipedia.org/wiki/Time_in_Arizona

Most legislation seems to be proposing year-round Daylight Saving Time, e.g.,

* https://en.wikipedia.org/wiki/Sunshine_Protection_Act
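The difference is easy to check programmatically. A minimal sketch using Python's zoneinfo (the zone names are real IANA identifiers; the dates are arbitrary):

    # America/Phoenix holds UTC-7 year-round; America/Denver shifts for DST.
    from datetime import datetime
    from zoneinfo import ZoneInfo

    for zone in ("America/Phoenix", "America/Denver"):
        jan = datetime(2025, 1, 1, 12, tzinfo=ZoneInfo(zone)).utcoffset()
        jul = datetime(2025, 7, 1, 12, tzinfo=ZoneInfo(zone)).utcoffset()
        print(zone, jan, jul)  # Phoenix: same both months; Denver: one hour later in July

Permanent DST would make the winter offset match the summer one, which is what pushes winter sunrises so late.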


It wasn't the best definition of AGI, but I think if you had asked an interested layman 5 years ago whether a system that could pass the Turing test was AGI, they would have said yes.

An interested but uninformed layman.

When I was in college ~25 years ago, I took a class on the philosophy of AI. People had come up with a lot of weird ideas about AI, but there was one almost universal conclusion: that the Turing test is not a good test for intelligence.

The least weird objection was that the premise of the Turing test is unscientific. It treats "this system is intelligent" as a logical statement and seeks to prove or disprove it in an abstract model. But if you perform an experiment to determine whether a real-world system is intelligent, the right conclusion when the system passes is that it may be intelligent; a different experiment might still show that it isn't.


Douglas Hofstadter wrote Gödel, Escher, Bach nearly 50 years ago; it won a Pulitzer Prize and the National Book Award and was featured in the popular press. It's been on lots of college reading lists, and from 2007 MIT made online coursework based on it available to high school students. The FBI concluded that the 2001 anthrax attacks were in part inspired by elements of the book, which was found in the attacker's trash.

Anyone who's wanted to engage with the theories and philosophy surrounding artificial intelligence has had plenty of material that asks and explores these same questions in some depth. A lot of people seem to think this is all bleeding-edge novelty (at least, the underlying philosophical and academic ideas getting discussed in popular media), but the industry is really predicated on very old philosophy + decades-old established technology + relatively recent neuroscience + modern financial engineering.

That said, I don't want to suggest a layperson is likely to have engaged with any of it, so I understand why this will be the first time a lot of people have ever considered some of these questions. I imagine what I'm feeling is fairly common for anyone whose very niche interest blows up and becomes the topic of interest for the entire world. There are probably some very interesting, as-yet undocumented phenomena occurring as a product of the unbelievably vast resources sunk into what is otherwise a fairly niche kind of utility (LLMs specifically, and machine learning more broadly). I'm optimistic that some very transformational technologies will come of it, but whether it will produce anything like "AGI", or ever justify these levels of investment? Both seem rather unlikely.

How are installers able to discourage competitors from driving down prices?

Are we crying tears over Muammar Gaddafi here? The man was a butcher, and NATO was completely justified in imposing a no-fly zone and supporting the National Transitional Council in Libya. There was a UN Security Council resolution authorizing it.

Lots of things to criticize Sarkozy for but his support for the intervention is not one of them.


>Are we crying tears over Muammar Gaddafi here?

Yes, because removing Gaddafi from power after he yielded to international pressure to give up his nuclear-weapons ambitions makes it less likely that leaders will agree to give up nuclear ambitions in the future.

All leaders of countries know that no one would do to the leader of North Korea what France, Britain and the US did to Gaddafi -- because North Korea has nukes.


As a result, the country entered a dark age of suffering unseen before. Of course Gaddafi was betrayed by the French, just as France is betraying all of its former colonies.

Except that Libya was never a French colony, I think.

Ottoman, then Italian.

AWS GovCloud East is actually located in Ohio IIRC. Haven't had any issues with GovCloud West today; I'm pretty sure they're logically separated from the commercial cloud.

>The model isn't getting this capability by training on WikiPedia or Reddit

I don't know about the former, but the latter absolutely has sexually explicit material that could make the model more likely to generate erotic stories, flirty chats, etc.


OK, maybe that was a bad example, but it would be easy to create a classifier to identify stuff like that and omit it from the training data if they wanted to. And now that they're going to be selling this, I'd assume they're explicitly seeking out and/or paying for the creation of training material of this type.
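Not the vendors' actual pipeline, obviously, but a toy sketch of that kind of filter using scikit-learn; every document and label below is an invented placeholder, and a real pipeline would train on far more data and add human review:

    # Toy training-data filter: fit a tiny text classifier, then drop
    # any document it flags. All strings below are invented placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    labeled = [
        ("wholesome cooking blog post", 0),
        ("family travel diary entry", 0),
        ("explicit adult story excerpt", 1),
        ("graphic erotica passage", 1),
    ]
    texts, labels = zip(*labeled)
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)

    corpus = ["a new soup recipe", "another explicit adult excerpt"]
    kept = [doc for doc in corpus if clf.predict([doc])[0] == 0]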

The harms associated with someone creating a deepfake of you are real, but they're pretty insignificant compared to the harms associated with being sex trafficked, being exposed to an STI, or being unable to find traditional employment after working in the industry.

You couldn't just photoshop that before AI came out?

What if you get a model that is 99% similar to your "target"? What do we do with that?


Think about the change we saw in combat death tolls when things went from flintlock muskets to machine guns, or when battleships gave way to aircraft, and how many people died unnecessarily because generals were slow to update their tactics. Deepfakes are like that: they lower the cost and improve the success rate enough to be transformative, and they cause harm which can't easily be countered. We're not going to instantly train society to be better at media literacy, and the police can't just ignore reports of sex crimes, so we're left accepting that it's easier to hurt people than it used to be.

Sure, someone skilled could spend an hour or so photoshopping someone nude. But any teenager can do that to a classmate in 30 seconds with AI.

So now that the poor can do what only the rich could do before, what does that mean?

Before, only the rich could afford to pay a pro to do the photoshopping. Now any poor person can get it done.

So why was it fine when only the rich could do it, but a problem when everyone can?


Uhh, it wasn't fine when the rich did it?

Would you support installing public spy cams in everyone's bedrooms so as to end the demand for human trafficking in porn?

No? And I didn't suggest deepfakes should be legal.

I was just pointing out that when you're talking about the scale of harm caused by the existing sex industry compared to the scale of harm caused by AI-generated pornographic imagery, one far outweighs the other.


>Firstly, they are not coming for my job, they're coming for all jobs.

They're not coming for all jobs. There are many jobs that exist today that could be replaced by automation but haven't been, because people will pay a premium for the work to be done by a human. There are a lot of artisan products out there which are technically inferior to manufactured goods, but people still buy them. Separately, there are many jobs which are entirely about physical and social engagement with a flesh-and-blood human being; sex work is the most obvious, but live performances (how has Broadway survived in an era of mass adoption of film and television?) and personal care work like home health aides, nannies, and doulas are all at least partially about providing an emotional connection on top of the actual physical labor.

And there's also a question of things that can literally only be done by human beings, because by definition they can only be done by human beings. I imagine in the future, many people will be paid full time to be part of scientific studies that can't easily be done today, such as extended, large cohort diet and exercise studies of people in metabolic chambers.


>They're not coming for all jobs.

So we are all going to just do useless bullcrap like sell artisan clay pots to each other and pimp each other out? Wow, some future!

I just don't know how this goofball economy is going to work out when a handful of elites/AI overlords control everything you need to eat and survive, and everyone else is weaving them artisan wicker baskets and busking (jobs which are totally automatable and redundant, mind you, but for which the elites would keep us around out of sentimental value).

>I imagine in the future, many people will be paid full time to be part of scientific studies that can't easily be done today, such as extended, large cohort diet and exercise studies of people in metabolic chambers.

Yeah, this is one plausible future: we could all be lab rats testing the next cancer medicine or donating organs to the elites. I can't imagine the conditions will be very humane, given that the unwashed masses will have basically no leverage to negotiate their pay.


These AI apologists are also totally ignorant of the actual usefulness of these models. The tech billionaires have been building bunkers while pumping up the bubble. Tell me: why would one need a bunker if AI brings us that utopia? Non-rich humans are just consumers to them; they will exploit the planet until it's uninhabitable for their own "gain". We are a waste product to those who hold power. They are showing this to us openly, all the time... how can people be so blind?
