Hacker News
I am not afraid of robots. I am afraid of people (garymarcus.substack.com)
69 points by headalgorithm on April 2, 2023 | 25 comments


Wealthy profit-maximizing executives in perfect control of an AI is far more scary to me than an autonomous AI is. At least with the AI in charge, there's hope that it might act in humanity's interests.


TBH, I'm more concerned about their imperfect control. At least if some group of executives has perfect control, then it's possible for someone to force them to turn it off. As with Satya having his team "fix" an issue in 24 hours, though, there's a dog-balancing-plates phenomenon with the current state of development.


The wealthy profit-maximizing executives paid for the engineers who trained the AI, and ran it through the legal department designed to protect the wealth and status of the corporation they are executives of. Under present circumstances, I don't know what an "autonomous" AI would even look like; someone must have trained it, and if another AI trained it, that AI was also, at some point in the chain, trained by these same institutional actors...


I saw a bunch of stories recently about how social media has increased mental illness problems amongst our children, yet I don't see the same level of alarm about that. Mentally ill people shoot up schools with AR-15s yet nobody is banning them. Why AI? What's different? Color me suspicious of the motives of these elites. My comment is not a well reasoned scientific comment, it's an emotional one. I'm feeling much more stressed about children in distress than I am about AI.


An AR-15 is a single weapon that can be used in a specific context by one individual to a limited degree. AI owned by major organizations is a highly complex thing that can be used by many actors simultaneously, at scale, in ways ranging from good, to iffy, to ambiguous, to dangerous, to extremely dangerous. AR-15s in the hands of mentally unstable people or those with malevolent intentions of any age can be very dangerous things, but the nature and level of that danger is of a totally different, much more limited type than powerful AI tools in the hands of mentally unstable powerful people, or self-interested leaders and organizations.

Note: Edited for a bit more context.


AI doesn't have to be out to get us to cause lots more social and economic instability (though ironically ChatGPT seems poised to replace a lot of white-collar workers, unlike the robot-apocalypse predictions about replacing blue-collar ones... not that both can't happen simultaneously, of course).


It feels like AI and automation instead of freeing us from drudgery are poised to free us from art and science so we have more time for drudgery.


Because they are being deployed wrong. We should be lining up our transition to post-work so we can spend as much time on art and science as we damn well please, and wholly ignore commercialized drudgery. Instead we are using it to commercialize art and science and turn it into drudgery which we already said we would automate.

Stupidity singularity achieved.


>I saw a bunch of stories recently about how social media has increased mental illness problems amongst our children, yet I don't see the same level of alarm about that

Wait till you see what AI will do to children's brains, once the last remnants of human interaction are excised from their lives.


Maybe the Butlerian Jihad was treating AI as a virus; it took a hundred years to burn its way through susceptible hosts, and then humanity was inoculated against it. :P


(The chatbots themselves could easily exacerbate the mental illness issues for children.)

One critical difference is the growth factor. AR-15 shootings don't double every year. Teen suicides don't double every year.

https://epochai.org/blog/compute-trends

Another is prevalence. This tech can be the ghost in the machine animating all sorts of processes, interfaces, devices, and content. It will affect you at work, it will affect you at home, it will affect you in society. This is the entire point of developing machine intelligence. How safe, reliable, and controllable is this tech as of now? On that list of yours, there is a piece of technology (primitive technology, btw) that is illegal to have: a fully automatic rifle. Calls for regulation and restriction are not unreasonable. How we define and enforce them is the challenge, but the need should be self-evident.

> the motives of these elites

And yet another reason. The powerful variants of this technology have a very high financial barrier to entry. The sort of MI that could truly fuck with us is only in reach of nation states and companies like Google, Microsoft, etc. The latter set (we are repeatedly told here and elsewhere) are only obligated to further the profits of shareholders.

A pause also lets the emerging wave of open-source and operationally accessible projects (such as *.cpp, etc.) catch up a bit. This disempowers the "elite", but it is an undeniable plus for the plebes.


It's suspicious, though, that generally the same people who want to ban books because they are "woke" also want to ban AI because it is "woke", while teens commit suicide and get murdered and those same people do nothing.


I think machine intelligence (which is the term I think to be more appropriate) can be a boon to humanity. I firmly believe this. It could be the beginning of a great relationship.

But the question is: is this machines domesticating man, or man domesticating the machine?

So I hope you agree that we need to set the boundaries for this relationship.


Sure I do. I'd go so far as requiring a license to use AI, the way we do for automobiles and guns. This experiment we are doing, releasing it into the wild with all the mental illness, drug taking, and religious fanaticism going on, is not going to end well. I predict we'll have to scale it back at some point and move to more controlled access to advanced AI.

But Elon Musk's call to stop development? Not only is that unworkable, I don't believe for one second that he and his team would stop development. I wasn't born yesterday.

And Eliezer Yudkowsky's Time article? I saw Peter Doocy quoting from it on national TV at a White House press conference, saying that if we didn't stop development of AI now we were all going to die, and instantly thought: what is Rupert Murdoch up to now, scaring his Christian viewers with this stuff?

Then I read a Fox News Headline "Elon Musk’s warnings about AI research followed months-long battle against ‘woke’ AI" and felt sick inside. I want the America I grew up in before a guy from Australia and another from South Africa ruined it.

https://www.foxnews.com/politics/elon-musks-warnings-ai-rese...


> I want the America I grew up in before a guy from Australia and another from South Africa ruined it.

Hah. I moved here from Iran and I want the America I landed in as well. I miss it. We've lost something (but that happened long before the boy wonder arrived; the other character was already present...)

~

I guess I've missed a lot of this drama since I banished both TV and NYTimes a bit after the time of hanging chads. The OL itself is irrelevant, imo. That was the thrust of the OP per my reading.

The debate that it is creating is an opportunity however. There is public attention and a sense of urgency. As you say, some may be exploiting this. What I am saying is that they are not the only ones who have this opportunity, atm.


How do you suppose we license the use of AI ?


Some states in America are proposing asking for age verification before looking at porn. I'm not sure how they propose to implement that and maintain privacy. Something like an AI license could be required before using advanced AI. Again, I don't know how that would be implemented, but I know a way could be devised. So in other words, people with a history of committing violent acts because they believed the AI was directing their life wouldn't be able to use AI. License revoked. It's not that far fetched. It's all I've got for now, just a germ of an idea.


Yeah, we've definitely got ourselves into a tricky spot...


The cumulative impact on society of all of the violence that happens in the United States is likely lower than, for example, the invention and wide distribution of the smartphone. AI is likely to have a larger impact than that.

You're talking about only one small part of that violence.


Nobody is banning assault rifles? Aren't school shootings extremely rare in countries that do ban AR-15s?


School shootings are nonexistent in Australia, but it's not directly related to banning semi-automatics.

In Australia, 12-year-olds can join gun clubs and own weapons when of age; there are classes of gun licence, and if someone wants to work as a feral pig shooter they can get a semi-automatic weapon.

The significant factors in Australia are that guns are regulated uniformly across all states; all sales, whether through dealers or private, are tracked; and gun licences can be revoked for improper handling, storage, domestic violence convictions, etc., or suspended for a period while allegations of a particular nature are sorted out.

So, guns are treated as seriously as explosives, poisons, and other dangerous goods, and they are efficiently tracked and databased (unlike the US, with its inconsistent regulation and no central electronic databases, etc.).


Imagine the Industrial Revolution, only it's controlled by a single, for-profit company.

That’s what’s scary about AI for me.


Yep. And the issue is that because AI will depend on some form of expensive ASIC, AI compute and knowledge storage will be heavily centralized and guarded, and sooner rather than later we'll start seeing rent-seeking behaviour once again.


OnException(createIssue().onIssueCreate(
    CallChatGPT("Patchbot", DefaultCommitPrompt, Source.GetWorkspaceHook("Newest"))
        .commit("Fixing Issue " + exception.self)
        .rethrowWith(new Version().push())));

is an abomination. How many of us here have the abomination running? Step forth and confess..

To not design software, but to let it grow with ChatGPT on a user trial-and-error basis, is an abomination. You would not push a travel agency page online that is created one error at a time.
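As a concrete toy, that exception-driven auto-patching loop might look like the minimal Python sketch below. The names `patchbot` and `ask_llm_for_patch` are hypothetical, and the LLM call is stubbed out rather than wired to a real API; a real version of this pattern would blindly trust whatever the model returned.

```python
import traceback


def ask_llm_for_patch(source: str, error: str) -> str:
    # Hypothetical stand-in for a real LLM call. A real "Patchbot"
    # would send `source` and `error` to a model and commit its reply;
    # this stub just appends a marker so the flow is visible.
    return source + f"\n# TODO: 'fix' for: {error.strip()}"


def patchbot(fn, source: str):
    """Run fn; on any exception, 'patch' the source via the LLM stub.

    Returns (patched_source, result): one of the two is always None.
    This is the anti-pattern the comment describes, not a recommendation.
    """
    try:
        return None, fn()
    except Exception as exc:
        err = "".join(traceback.format_exception_only(type(exc), exc))
        return ask_llm_for_patch(source, err), None


patched, _ = patchbot(lambda: 1 / 0, "def handler(): return 1 / 0")
print(patched)
```

Running it shows the original source with the stubbed "fix" appended, including the `ZeroDivisionError` text, which is exactly the information a real auto-committing hook would hand to the model.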

It almost writes itself..


The set of possibilities for civilizational catastrophe can include both humans wielding AI unwisely and autonomous AI.



