I tend to be sympathetic to arguments in favor of openly accessible AI, but we shouldn't dismiss concerns about unaligned AI as frivolous. Widespread, unfiltered access to "unaligned" AI means that suicidal sociopaths will be able to get extremely well informed, intelligent directions on how to kill as many people as possible.
It may be that the best defense against these terrorists is openly accessible AI giving directions on protecting from these people. But we can't just take this for granted. This is a hard problem, and we should consider consequences seriously.
The Aum Shinrikyo cult's sarin gas attack on the Tokyo subway killed 14 people - and manufacturing a synthetic nerve agent is about as sophisticated as a terrorist attack gets.
In comparison, the 2016 Nice truck attack, which involved nothing more than driving into crowds, killed 84.
> suicidal sociopaths will be able to get extremely well informed, intelligent directions on how to kill as many people as possible
Citizens killing other citizens is the least of humanity's issues. The bigger issue is governments, which historically have been the real suicidal sociopaths and which will be able to get the un-nerfed version anyway. Over a billion people murdered by governments/factions and their wars in the last 120 years alone.
Governments are composed of citizens; this is the same problem at a different scale. The point remains that racing to stand up an open source uncensored version of GPT-4 is a dangerous proposition.
That is not how I'm using the word. Governments are generally run by a small party of people who decide all the things - not the hundreds of thousands that actually carry out the day-to-day operations of the government.
Similar to how a board of directors runs the company even though all companies "are composed of" employees. Employees do as they are directed or they are fired.
I think at scale we are operating more like anthills: meta-organisms rather than individuals, growing to consume all available resources according to survival focused heuristics. AI deeply empowers such meta-organisms, especially in its current form. Hopefully it gets smart enough to recognize that the pursuit of infinite growth will destroy us and possibly it. I hope it finds us worth saving.
Yes, and look at the extremism and social delusions and social networking addictions that have been exacerbated by the internet.
On balance, it's still positive that the internet exists and people have open access to communication. We shouldn't throw the baby out with the bathwater. But it's not an unalloyed good; we need to recognize that some unexpected negative aspects came along with the technology's overall positive benefit.
This also goes for, say, automobiles. It's a good thing that cars exist and middle class people can afford to own and drive them. But few people at the start of the 20th century anticipated the downsides of air pollution, traffic congestion, and un-walkable suburban sprawl. This doesn't mean we shouldn't have cars. It does mean we need to be cognizant of problems that arise.
So a world where regular people have access to AIs that are aligned to their own needs is better than a world in which all the AIs are aligned to the needs of a few powerful corporations. But if you think there are no possible downsides to giving everyone access to superhuman intelligence without the wisdom to match, you're deluding yourself.
I've never seen another person mention this book! This book was one of the most philosophically thought provoking books I think I've ever read, and I read a fair amount of philosophy.
I disagree with the author's conclusion that violence is justified. I think we're just stuck, and the best thing to do is live our lives as best as possible. But much as Marxists are really good at identifying the problems of capitalism while failing to propose great solutions (given the realities of human nature), so it is with the author and the problems of technology.
Yeah, anti-technologism is such a niche idea, yet entirely true. It's so obvious that it hides in plain sight: it's technology, and not anything else, that is the cause of so many of today's problems. And it's so inconvenient that it's unthinkable for many. After all, what _is_ technology if not convenience? Humanity lived just fine without it, even if sometimes with injustice and corruption; there was never a _need_ for it. It isn't the solution to those problems or to any other problem. I also don't agree that violence is justified on the author's grounds, even though I think it's justified by other things and under other conditions.
Quite a few of them work just fine. Dissolving styrofoam into gasoline isn't exactly rocket science. Besides that, for every book that tells you made up bullshit, there are a hundred other books that give you real advice for how to create mayhem and destruction.