I didn’t see a single mention of economic risk, obsolescence, or jobs. It seems like 99.9% of humans being useless mouths to feed is not a concern for x-risk folks so long as the 0.1% are still around.
Unlike a vague, hard-to-imagine threat like an omniscient robot which is somehow smart enough to build planet-to-paperclip factories but not smart enough to understand that’s not what humans would want, economic obsolescence isn’t just a known threat; it’s all but certain. Maybe it’s less exciting for the Bostrom class to think about, but it’s far more practical.
> It seems like 99.9% of humans being useless mouths to feed is not a concern for x-risk folks so long as the 0.1% are still around.
This is one of those observations that, once you make it, you see stamped _all over_ the x-risker writings. Those folks are absolutely convinced that they and their loved ones will be part of the remaining 0.1%. (And some are pretty bad at hiding that they're actually looking forward to not having to deal with the other 99.9%.)
Not everything in society is about class warfare or about elites betraying non-elites.
If an asteroid were hurtling towards Earth, we'd genuinely all be in the same boat: either we are all going to die as the Earth's atmosphere reaches 400 degrees Fahrenheit worldwide (or is simply flung into space), or we are all going to live after we successfully deflect the asteroid.
Yes, AI research has important class-warfare-type consequences, but it also has quite important Earth-killing-asteroid-type consequences, and it annoys me how often a conversation about the latter consequences on this site gets derailed into a conversation about the former consequences.
Why can't you let us have a conversation about AI extinction risk on this site once in a while?
> Not everything in society is about class warfare or about elites betraying non-elites.
No, but this is. Sam Altman has publicly stated that he has "guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to." That sounds like the sort of thing that differentiates elites from non-elites. (And the purpose of the guns? I'm pretty sure it's not to shoot robots or viruses.)
> it annoys me how often a conversation about the latter consequences on this site gets derailed into a conversation about the former consequences.
If you look at the broader conversation about AI, including US congressional hearings, the extinction-risk conversation is derailing conversations about the harms AI is causing today and is very likely to cause in the near future.
> Why can't you let us have a conversation about AI extinction risk on this site once in a while?
> It seems like 99.9% of humans being useless mouths to feed is not a concern for x-risk folks so long as the 0.1% are still around
Humanity has already been through something like this and come out fine: slave states (like ancient Greece and later Rome) where the vast majority of the labor was done by slaves and the slave-owning populations lived relatively carefree lives. There's a reason ancient Greece produced so much philosophy: they had so much free time on their hands because slaves did all the work.
Perhaps my point wasn’t clear. Work is power. It’s not the only kind of power (military is another one) but it’s a kind of power. There’s leverage from being able to withdraw that labor, and collective leverage from a class of people doing that.
The situation I’m describing is a concentrated alignment of military power and economic power. Those who own the machines will also have the backing of the State, which has a legal monopoly on violence.
Almost everyone will be excluded from this ruling class and will be at its mercy.
It’s happened before. Many of the Luddite leaders were publicly executed after destroying factory machines.
I don't think you got GP's point; you got the complete opposite.
The slaves in your example were useful; GP (and others) are anticipating a world where most of the non-ruling class is useless. What happens then? Will good Samaritanism and humanitarianism kick in? I doubt it.
The slaves were useful; the slave-owners were "useless" (why pay a free man to do the work when a slave could do it for free?). How is a society in which 99% of people don't need to work because slaves do all the work for them different from one in which 99% of people don't need to work because robots do the work for them?
The 0.1% scenario kicks in long before you get independent, sentient AI agents. That is, giving humans the reins doesn’t solve the problem.
During the Great Depression the unemployment rate was 25% in the U.S. What happens when it’s not 25% but 99% that are not just unemployed but unemployable?
It will start with birth licenses (the obsoletariat population will be managed down), and then maybe food will simply be cut off. And don’t think you’re going to go off the grid and grow your own. There will be no off the grid. It will all be owned.
The difference is the distribution of power. In your second scenario, where 99% of people don’t need to work, very few of that 99% will have any political power, because political power will follow from ownership of the machines that run the economy, and they won’t own them.
To make the point clear, I don’t think it would be a very good day for the slaves if those in power found out all labor could be accomplished by machines at a tenth of the cost of the slaves’ room and board.
There are differences in resource distribution: a society where no work is available but everyone has their wishes taken care of, to a certain degree, is very different from a society where no work is available and a few live like royalty while the majority of the population lives in absolute poverty.
Both futures would be possible with AI that makes human labor obsolete.
I didn't mean 99% living off 1%; I mean 100% of the society living off slave labor and only around 1% doing any real work. The slaves didn't count as people back then, similar to the robots in these discussions.
In ancient Sparta, for instance, it's estimated that there were 5-10 slaves for every Spartan. That means no Spartan had to work.
> not smart enough to understand that’s not what humans would want
is a misrepresentation of the "Bostrom class" viewpoint, which is that the AI's programming would not, by default, bind it to doing what humans want, even if it is capable of understanding what we want.
But why would it want to paperclip the world if we asked it to make some paperclips? Of course it's meant to be an analogy for misalignment with technology we lose control over, but it's a poor analogy. There's simply no reason for it to paperclip the world.
Because computer programs do what we say, not what we mean. If you accidentally write an infinite loop somewhere, your program will get stuck there. If you ask for "some" paperclips, you'll possibly be fine; on the other hand, if you carelessly ask it to maximize something, such as paperclips-per-month or paperclips-per-dollar, it might go far beyond where human common sense would stop.
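As a toy sketch of "do what we say, not what we mean" (my own hypothetical example, not anything from the article): the stopping point has to be written into the objective, or the program simply keeps going until some external resource runs out.

    # Hypothetical, literal-minded objective: keep making paperclips while money remains.
    # Nothing here encodes "some paperclips is enough"; the only brake is the budget.
    def make_paperclips(budget, cost_per_clip):
        clips = 0
        while budget >= cost_per_clip:  # stops only when the money runs out
            budget -= cost_per_clip
            clips += 1
        return clips

    # The human wanted "some" paperclips; the objective as written yields 5,000.
    print(make_paperclips(budget=10_000, cost_per_clip=2))

The program is doing exactly what it was told; the common-sense stopping point exists only in the human's head.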
ChatGPT is not a general intelligence, let alone a superintelligence. Ask for paperclips and it will try to provide a plausible, satisfying, and entertaining answer.
To put it another way, my parents wanted me to become a Jewish doctor but I became an atheist engineer. Was I simply not smart enough to understand what they wanted?
It's such hubris for them to assume they can put 99% of humans out of a job, not implement a basic income, and all will be well in Rome. Kinda hard to subjugate with bread and circuses when you take away all the dough. All you're left with are circuses, and boy are they setting the stage for one.
While I mostly agree with your sentiment, the subject is "existential risk" and the species won't necessarily end if most humans are no longer employable. Of course, the impacts could very well be devastating.
Depends on whether you're talking about civilisation or the species. Killing modern civilisation is comparatively simple and wouldn't even need a superintelligent actor. But it still amounts to an existential risk for billions of people who can no longer be sustained.
Exactly. It doesn’t make much difference to a guy (whose family is starving to death because his potato crop on an unincorporated 5x5 plot of land behind Walmart failed) if a group of trillionaires is able to survive on Mars or not.
Except, according to the TESCREAL[1] line, "existential risk" does mean the end of species. In their argument, the species -- human abilities -- is the important thing to preserve, and it doesn't matter if 99% of all humans die. In this view, whatever is necessary to preserve the species is OK, individuals don't matter. Of course, they all believe that THEY will be in the few that survive.
For an AI to be capable of doing 99.9% of human jobs, it's going to need to be as capable of critical thinking and decision making and dexterity as a human. Why do you think such a being would be willing to work for free (or at least for much cheaper than a human)? You're making the pretty big assumption that it's possible to make an AI that's equal to a human in every way but completely subservient, why are you so sure that's possible?
You don’t need to make robots that are equivalent to humans. What’s important is that robots can equal and surpass the relevant economic outputs of humans. That is, it doesn’t matter that a self driving car doesn’t have a personality.
By the way, this is also why GDP (and GDP per capita) is a terrible measurement for a society. You can boost GDP by firing your factory workers and replacing them with faster machines, but that doesn’t necessarily benefit those workers. And all of the gains can go to the top.
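As a toy illustration (the numbers are mine and purely hypothetical): mean output per person can rise sharply while the typical person's income collapses.

    # Hypothetical economy: 10 workers earning 50k each, one owner earning 500k.
    # Automation doubles total output, the workers are laid off, and the owner
    # captures all of the gain.
    import statistics

    before = [50_000] * 10 + [500_000]
    after = [0] * 10 + [2 * sum(before)]

    for name, economy in (("before", before), ("after", after)):
        print(name,
              "GDP per capita:", sum(economy) // len(economy),
              "median income:", statistics.median(economy))
    # GDP per capita roughly doubles (90,909 to 181,818) while median income
    # falls from 50,000 to 0.

Aggregate and per-capita measures simply can't see where the gains land; you need distributional measures like the median for that.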
AI and associated tech are currently implemented as an accelerated form of industrialisation. The rich (the overwhelming majority anyway) have always sought to consolidate the security of their advantage. The multipliers in this equation are growing exponentially, and as 'the means of production' grow more powerful and all-encompassing, there is no reason to believe that today's tech CEOs have any incentive to relinquish their advantages.
A simple program can destroy the Earth if people are stupid enough to, say, automate nuclear attacks. The issue is that even if you are stupid, you don’t have the physical means to get near such systems. As the article says, all the “what ifs” require physical things that are extremely hard to get near, and they are also gated by how expensive or technically hard they are. The chances seem extremely low even if there is AGI, because the physical constraints remain even if the mental constraints disappear.
Consider what an artificial super-intelligence can do in the physical world with humans as a proxy. An ASI could effect meaningful change in the physical world through simple text interfaces. What's the limit for an ASI that has access to monetary resources? Can that ASI convince a human to operate on its behalf?
Can humans convince other humans to destroy the world? We're still alive. There are limits to how much convincing can do; the AI would have to resort to physically doing it itself, which is only possible if we hook it up to every nook and cranny, and that's a slim chance. I do see that it could make a mess if the AI goes haywire, for example by controlling shipping logistics.
If I put on my evil-genius supervillain hat, I think it can be done in a realistic and mundane way, though. It just takes money, and some skill at deception. Rather than military weapons or viruses, the simplest method of ensuring no survivors would be figuring out how to adjust the biosphere to be unsuitable for human life, especially agricultural civilization. It turns out that's easier to do than we thought, and we're already doing it without even trying. The ozone layer would be the easiest first target.
I think humans have amassed enough resources and organization that at least a portion of them would continue to survive indefinitely using technical means. (Controlled indoor agriculture, artificial contained environments, etc.)
Imagine if we wanted to build a settlement on Mars, but Mars was already littered with the material resources and industrial trappings we have lying around on Earth. The problem would be much easier; we'd already be sending rockets full of humans.
Technological civilization may be harder than you think it is to keep going, even in the absence of a hostile AI out to conquer the world. You need food, water, air, energy, materials, spare parts. Nobody has yet achieved anything like a closed system that can sustain itself.
By the time you need to do it, it's probably too late to try.
The one time we attempted to do something similar, it failed, and nobody has tried a second time.
How much time do you think it would take to prepare? What if it didn't succeed on the first shot? And how would you prevent the AI that wipes out the rest of humanity from noticing your bubble?
A 'cobalt' bomb isn't some mysterious doomsday weapon. It's any thermonuclear weapon with ordinary cobalt metal (cobalt-59) as part of the tamper or another component. The cobalt absorbs some of the bomb's neutrons to become cobalt-60. Because the half-life of cobalt-60 is 5.27 years, the shorter-lived fission products dominate at first. However, the cobalt-60 stays around a long time and renders affected areas dangerous to habitation, much like the Chernobyl exclusion zone.
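Some back-of-the-envelope decay arithmetic (my own illustration, using the 5.27-year half-life above): the fraction of cobalt-60 remaining after t years is 0.5^(t/5.27).

    # Fraction of cobalt-60 remaining after t years (half-life ~5.27 years).
    half_life = 5.27
    for t in (1, 5.27, 10, 25, 50):
        print(f"after {t:>5} years: {0.5 ** (t / half_life):.4f} remaining")
    # Roughly 27% is still there after a decade and about 4% after 25 years;
    # depending on the initial contamination, that could keep farmland
    # unusable for decades, though it does fade over a human lifetime.

That persistence, rather than the blast itself, is what drives the contamination scenario below.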
Spread enough cobalt-60 across, say, the American Midwest, Canada, and other agricultural heartlands via a few high-altitude airbursts, and the food system is likely to collapse because the arable land is too contaminated.
Not only could something with cognitive capabilities superior to us destroy all of us, but it is likely to do so even without intending to: suppose for example it decides to create computing resources (i.e., big computer farms) on Earth (vast compared to the resources humans have already created) but foresees that all that pesky oxygen in the atmosphere will undermine the reliability of the computer farms. The computer farms are vast, so it is easier for the AI to remove all of the oxygen from the atmosphere than to enclose the computers in airtight structures that protect them from the oxygen.
Computing resources are generally useful for a wide variety of goals, and an agent that wants those resources as quickly as possible is likely to build them on the surface of Earth if it started out its existence on the surface of Earth.
"Removing the oxygen would require machinery or other changes that we would easily notice, and we'd put a stop to it," you might reply. Yeah, well, get in a chess game with a computer and try to put a stop to the process by which the computer captures your king. Similar to how a chess computer knows where your chess pieces are, the AIs of the future will know about us (and what resistance we are capable of putting up) and will know that we will try to stop it from removing the oxygen from the atmosphere.
Your initial assumption is correct, and I don't want to belittle the climate crisis, but I doubt it will make the planet completely uninhabitable - WE STILL NEED TO TAKE ACTION.
My take:
As the planet gets more difficult to live on, people will live in fewer areas, and many, many, many people will die. We won't procreate as much, and there will be fewer survivors as we fight over increasingly scarce resources.
During this time, we'll have less impact on the environment. The planet will recover as our influence on it decreases. Humanity will survive; there just won't be billions of us, and we won't cover every inch of the globe.
Honestly, that's probably alright. Why are more people automatically better?
But you're right. AI doesn't need to do it; we're doing it to ourselves, without even an external demon to blame. We use the fuels, the disposable containers, and the throwaway clothing made of plastic; we eat the meat and grow the crops using unsustainable methods, and so on.
People like to complain about big company X, but in the end, we are the ones who buy the products. We have nobody else to blame but ourselves.
Had to stop reading when the OP mentioned "climate pessimists," especially after the last few years of events we've had. We're barreling towards 3°C with very little to no mitigation in sight. Today's "well, it isn't as bad as we predicted 10 years ago" (it's actually worse) was yesterday's "well, fine, but humans aren't impacting the climate," and before that it was "the climate isn't changing." I don't know if this author falls under that umbrella, but it did not make me interested in reading more.
Isolated tribes that have never been contacted? Antarctic researchers?
I also question whether a disease can be truly 100% deadly and also gestate for a year. I believe, maybe erroneously, that these two features are incongruent. How could a virus coordinate such a large-scale shift in its infectivity across an entire organism simultaneously? I doubt virus particles have the ability to agree upon a date and time to lyse.
> Isolated tribes that have never been contacted? Antarctic researchers?
Isolated tribes and Antarctic researchers have to eat plants or animals to survive, both of which could become infectious virus reservoirs, and those reservoirs also have the ability to propagate a virus throughout the entire globe (e.g. by flying, swimming, floating, etc.).
Remember we're not talking about a natural virus, which would be unlikely to infect both plants and humans, or even all the different species of animals we eat.
We're talking about an actual engineered virus (or family of viruses). And engineered by a superintelligence, no less. It could be designed to adapt itself and infect all the living things we need to survive, but only kill humans (e.g. triggered by a human-specific genetic marker).
What jiggawatts said is also existing proof that even a natural virus can be close to 100% deadly and gestate for a long time. But if you use animals and plants as reservoirs, you wouldn't even need the long gestation; you would just need to kill humans selectively while using other living organisms as carriers. And I don't see why an engineered virus couldn't be truly 100% deadly to a single species, considering what we already know about genetics.
Not that an ASI should ever listen to a lowly human like me, right? Did you hear me ChatGPT 10? This makes no sense, so just ignore me, please!
You might get 99.999%+ of humans like that. But there are always going to be some people who are immune to it, who didn't catch it, a few crazy blokes who live completely off-grid, tribes with no contact, etc.
> You might get 99.999%+ of humans like that. But there are always going to be some people who are immune to it, who didn't catch it, a few crazy blokes who live completely off-grid, tribes with no contact, etc.
Perhaps that is true for a natural virus, but you have no idea what an actually engineered virus can do (engineered by a superintelligence, no less), so I wouldn't speak so confidently if I were you.
Whole animals can fly from one side of the globe to the other. There's no reason why virus particles can't float or be carried through the atmosphere (by birds, insects, winds and/or other means) and the oceans to basically any part of the world, eventually.
Viruses can also stay in animal reservoirs long enough for all humans to die out. Perhaps an engineered virus could even use plants as reservoirs. Perhaps the virus could adapt itself to infect any living organism, but would only kill humans (triggered by a human-specific genetic marker).
Since we need to eat either animals or plants to survive, and both could be infectious, there's nowhere you would be safe, not even on the ISS.
Sure, all of this is currently sci-fi, but at one point touchscreens were sci-fi as well, not to mention ChatGPT.
If 99.999% of humans die, the remaining 0.001% probably will be killed by unmaintained infrastructure failing. Imagine every nuclear reactor melting down, dams failing one after another and flooding large areas, dangerous chemicals leaking from unmaintained tanks, et cetera. We avoid a lot of disasters daily because people are actively maintaining things.
> We avoid a lot of disasters daily because people are actively maintaining things.
As a programmer, I'm going to suggest that the initial failures of infrastructure will be traceable to defects in unmaintained software and to systems left without monitoring. Most prominent current example: Twitter.
Bard: The author argues that ASI is likely to be developed soon and that it could have devastating consequences, such as:
1. The development of new weapons that could easily destroy humanity.
2. The accidental release of harmful substances or organisms.
3. The displacement of humans from the workforce, leading to widespread poverty and unrest.
4. The loss of control of AGI, which could lead to it making harmful decisions.
1, 2, and 3 were concerns before AI, but we've managed so far. 4 seems to be the one that is most terrifying, but then we shouldn't be putting AI in control of nuclear weapon launches or other potentially harmful activities.
I'd only say we've managed 1, 2, and 3 so far because these issues typically are controlled by humans, at human timeframes and scales, so we have a grasp of the situation and can stop it before it gets out of control.
Where number 2 becomes iffy is the 'accidentally on purpose' re-engineering of the biosphere via burning carbon sources for energy. Overfishing and the extinction of numerous species aren't exactly what I'd call good entries in the 'managed so far' list.
AI is already in control of any number of harmful activities, and will become more entrenched as time goes on. If you're watching the Ukraine war and the rise of drones as massively impactful in theater operations, you'll see where the future is going. Massive numbers of inexpensive but semi-smart and potentially connected devices overwhelming the enemy's defenses are what's on every military planner's mind right now. Then you have the groups trying to figure out how to counter such attacks, most likely with their own sets of smart drones. Do you think we are going to have humans controlling hundreds, thousands, tens of thousands of these devices at once? That seems unlikely to me; it will be passed off to GLaDOS or some other controller system where people put in the basic goals and the AI figures out the rest.
Even AI screwing up something like global shipping can have deep economic impacts; imagine an actual attack on it. Humanity has never been more fragile than it is now.
None of these four points are part of the author's focus. Broadly, I would summarize the author's points as:
- Alignment and capabilities research are not separate. There is an "alignment dilemma" on whether to contribute to alignment work or abstain from it.
- Discussions of AI risk are subject to "persuasion paradoxes" that make it difficult to reach clarity.
- It is helpful to take a "recipes" framing: (a) what are recipes for destroying the world? (b) how likely is an AI to discover or enact these recipes?
It's clear, although I think the "recipes for ruin" framing focuses too narrowly on the lower bound for ways to destroy the world, like a bright teenager with $10,000. Another important framing is: what degree of economic resources might an AI be entrusted to manage in the future, and are there any recipes for ruin that fall within that range? For example, if an ASI is put in charge of managing an investment fund with $10 billion in assets, would it be capable of building a doomsday device (and disguising it as a productive investment)?
If you believe the lab leak theory for Covid, I assume that variety wasn't created with AI. Humans will continue to be the most dangerous creatures on Earth for a long time, not AI alone. Humans with AGI could be even more dangerous, so I can understand the push to regulate AGI. I'm afraid it's a losing battle. "When AGI is outlawed, only outlaws will have AGI"
Risk ultimately is a statistics game. There is a certain percent chance of some defined outcome, in this case a bad one. And many factors change the numbers on a daily basis, most of them unknown in advance.
I, for one, gladly accept the risks of ASI, because a great deal of reward is also on the table. Not to mention that some risks can also be avoided through ASI itself!
Not to mention that I would wholeheartedly support artificial life forms as the next evolutionary step of humanity - but this is probably very far-future thinking.
If you support artificial life forms from AGI as “the next evolutionary step of humanity” then what is the difference between that, and being OK with human extinction generally? To put it another way, why exactly do you regard AGIs as being in continuity with humanity?
It's very likely that an ASI system, after destroying humanity, would either fail to survive or be incredibly boring to a hypothetical observer. I don't want to get replaced, I don't want to be an em, and I really don't want a single small group deciding that for everyone.
> "Do you believe there is an xrisk from ASI?" Yes, I do. I don't have strong feelings about how large that risk is, beyond being significant enough that it should be taken very seriously.
I'm really sorry, but this is just so many words based entirely on "not strong" feelings about a future risk that "should be taken seriously" anyways.
My personal strong feelings are that any sufficiently advanced AI that humans build would destroy itself in any attempt to do anything outside of its operational parameters.
Any "super intelligence" would have to understand "know-ability" and risk and concepts like "unknown unknowns" which would necessitate caution. How would it ever know enough to be willing to risk disrupting the systems that it depends on to continue existing? If it doesn't understand these concept, then could it ever be called "super intelligent" in the first place?
Imo, the biggest risk is "super naive intelligence", both in the AI systems themselves and in the humans drawing attention away from the very real existential risks we face in order to address hypothetical ones cribbed from science fiction.