> “This idea of surpassing human ability is silly because it’s made of human abilities.”
Shows the level of insight from this "guru". The truth is we don't know how far the work being done on artificial intelligence is going to go. For now it will continue to develop and acquire more and more autonomy, just because that is the nature of our existence: the better and more efficient replaces the lesser.
So, we may have potentially given birth to a new sentient being that will go on to live its own "life" (within 100, 500, 1000 years?), or we might be able to constrain it so that it will always be in the service of humans. We simply don't know at this stage, but my money is on the former TBH.
This quote is taken out of context and is perhaps not a charitable reading of what the author means. Here's the whole paragraph:
> Lanier doesn’t even like the term artificial intelligence, objecting to the idea that it is actually intelligent, and that we could be in competition with it. “This idea of surpassing human ability is silly because it’s made of human abilities.” He says comparing ourselves with AI is the equivalent of comparing ourselves with a car. “It’s like saying a car can go faster than a human runner. Of course it can, and yet we don’t say that the car has become a better runner.”
The author, Jaron Lanier, is a reasonably accomplished technologist, with some pretty groundbreaking work on VR in the 80s. He is most certainly aware that humans have been surpassed by computers in many ways. I think that line is arguing semantics about the word "intelligence" and clearly he knows that computers do many things far better than humans.
That clarification didn't do it for me; it felt like juggling semantics. Let's rephrase his comparison: "It's like saying a robot can run faster than a human runner. Of course it can (soon), and yet we don't say that the robot has become a better runner". It's just nonsense.
If you built a bipedal (or possibly N-pedal) robot that moved roughly similarly to how humans or dogs or cats or horses run, and it was faster than humans over all the terrains that humans can run over, I'm absolutely certain that everyone would agree that the robot is a better runner.
But a car is not that thing. Neither is a helicopter, or a train, or a bicycle, or a jet aircraft or a hang glider or a skateboard.
A tractor is not better than humans at plowing; it is a plowing machine, so it can do the job at scale without suffering the fatigue people experience. But it's not better at plowing; it simply does it mechanically, in a way only a machine could.
Running and plowing are not simply about doing it as fast as possible or as extensively as possible.
So maybe what you are looking for is a definition of "better"; it depends on what you mean.
In my book a tailor-made suit is always better than a machine-made suit, because people are better tailors than machines, for some definition of better.
Yes, this is verily what I objected to. It's called "semantics": similar to when people say "hair", everyone knows what that means. But sooner or later someone will point out that this hair is different from that hair, and if you split one hair, now what do we have? This process is always a possibility in any discourse, but it is largely frowned upon, rightly so.
My opinion is that it is not about semantics; it's about looking at the whole picture and not only at some specific outcome (running faster, for example).
Firstly, faster doesn't necessarily mean better.
Secondly, why do people run?
Nobody can say for sure in general.
Why do machines do it? (Or why would they, if they were able to?)
This is not a sensible comparison. A mass-produced machine-made suit wasn't made using your exact measurements. If you compared a human sitting at a sewing machine on a factory production floor with a fully automated machine, you wouldn't be able to tell the difference.
>“It’s like saying a car can go faster than a human runner. Of course it can, and yet we don’t say that the car has become a better runner.”
That's a pointless argument. We might not say it, but for all intents and purposes the car does go faster than any human runner.
We just don't say it because running when it comes to humans mainly means using your feet. If it was a more generic term, like "fast-mover", we could still use it to compare humans and cars, and say cars are better "fast-movers" than humans.
No it's not pointless, language is important. Cars are not runners. "For all intents and purposes" is a cop out here. We're talking about LLMs, you know, large language models.
Not that important, and not for this purpose. Things still work the same, even in languages with widely different semantics and ways to refer to them (I don't mean the trivial case where a house is called talo in Finnish etc., but languages where semantics and terms differ).
Using language-specific (e.g. English-specific or German-specific) word definitions and etymology to prove some property of the thing referred to is an old, cheap philosophical trick that sounds more profound than it is insightful.
Even more so, we might not say it for a car, but if we've built a human-looking robot with legs, we'd very much say it's a "better runner" if it started surpassing humans at running. Hell, we used to call employees doing manual calculations "calculators" in the past. Later, when machines doing that became available, we used the same term for them.
So the idea that because "a human is a runner but a car is not a runner", it also follows that "a human is a thinker, a machine is not a thinker", and that this marks some profound difference, doesn't make sense anyway. Human running is associated with legs, a certain way of moving, etc. Thinking is more abstract and doesn't have such constraints.
>Cars are not runners.
That's just an accidental property of having a dedicated word for "runner" in English that doesn't also apply to a car going fast. The term "running" though is used for both a human running and a car going fast ("That car was running at 100mph").
>"For all intents and purposes" is a cop out here.
For all intents and purposes means "in practice". Any lexicographical or conceptual arguments don't matter if what happens in practice remains the same (e.g. whether we decide an AGI is a "thinker" or a "processor" or whatever, it will still be used for tasks that we do via thinking, it will still be able to come up with stuff like ideas and solutions that we come up with via thinking, and effectively it will quack, look, and walk like a duck). The rest would be semantic games.
>We're talking about LLMs, you know, large language models.
Which is irrelevant.
LLMs being large language models doesn't mean the language used to describe them (e.g. as "thinkers" or not) will change their effectiveness, what they're used for, or their ability to assist or harm us. It will just change how we refer to them.
Besides, AI in general can go way beyond LLMs and word predictors, eventually fully modelling human neural activity patterns and so on. So any argument that just applies to LLMs doesn't cover AI in general or "the danger that AI destroys us" as per TFA.
That reminds me of the very old arguments that people can't program computers to play chess better than they themselves did. Obviously false, as is this. There is no reason we can't build something that is smarter than we are.
> “This idea of surpassing human ability is silly because it’s made of human abilities.”
It's not made OF human abilities, it's made BY human abilities - a completely different thing.
And, of course, Boston Dynamics will be delivering the "better runner" very soon.
"we don’t say that the car has become a better runner"
We would if the car was to race against human runners. It's just word play. Cars are not used like runners, so we use different words. They definitely are better runners.
Now that technology is touching our core business we get scared, but this has been going on for a long, long time. When it was our legs, we brushed it off. But when it touches our ability to think we squirm.
Cars go faster than humans can by themselves, under some specific conditions.
Cars go slower than humans, or rather cannot go at all, under other specific conditions. Two weeks ago my wife ran 30 miles on trails in southern Texas. A car could not have traversed any of the distance she travelled, because a car cannot run.
Cars make it easier for people to move themselves and stuff when there are appropriate roads to travel on. They have enhanced our abilities to do this, but they cannot run.
You're squashing the meaning out of words by trying to suggest that "running" is somehow equivalent to "any other method of a person moving from A to B". But that's not true.
We can acknowledge the greater ease of cars for moving people and stuff without squashing the meaning out of words.
Finally, even the notion that cars are "better" at moving people and stuff needs careful examination. Thus far I have said "make it easier" because I am aware that by a certain set of metrics (related to energy use, material use, impact on the environment) cars are actually worse most of the time.
>You're squashing the meaning out of words by trying to suggest that "running" is somehow equivalent to "any other method of a person moving from A to B". But that's not true.
That's just an accidental property of the English language.
We can imagine a language where "runner" and "thing that moves from A to B fast" used the same term T, and whether people were referring to T in the English sense of "runner" (e.g. a person running in a marathon) would just be deduced from the context. There are many cases like that.
In any case, the point is moot, as "thinking" doesn't have the same constraints. We might not call what a car does "running" or a car a "runner" (though we do use the former term), but we absolutely have considered AI as "thinking" and called AIs "thinking machines", even before AI (never mind AGI) even existed.
>You're squashing the meaning out of words by trying to suggest that "running" is somehow equivalent to "any other method of a person moving from A to B". But that's not true.
This depends on the level of abstraction of the discussion. At some level of abstraction it's irrelevant if the move happened via running or via horse buggy or via a car. Sometimes we just care about the act of moving from A to B, and different methods to do so are only differentiated by their speed or other effectiveness.
In that case we can compare man and machine, though, and just care about speed (the machine can answer in 0.1 secs, a man needs to think for 1-2 minutes to answer such questions) or effectiveness (e.g. the machine is better at juggling many things at the same time when thinking, or the man is better at subtle semantic nuance).
Are car parts car parts? Not according to an auto mechanic, but according to the layman. A radiator is not a battery or an engine. Are games games? Not according to a game theorist, but according to the layman. A game is not a play or a history.
This isn't an accident of language. An example of an actual accident of language would be giving tanks instead of giving thanks.
Are runners runners? Yes, according to you. A walker is a runner is a missile is a bowling ball rolling between places is light moving through a medium. No, according to a fitness coach, because a runner is not a tank is not a plane. When they say that a person should take up running they don't mean the person should melt down their body in a furnace and sprinkle their atoms into metal which is then pressed into iron plates that are attached to a tank which will then go running.
Sometimes we need to be careful in language. For example, we probably don't want to confuse the process of being incinerated and pressed into iron plates with the process of a human exercising their muscles. The choice to be careful in this way is not an accident of language. It is a very deliberate thing when, for example, John von Neumann carefully explains why he thinks the layman's use of the word game has perilous impact on our ability to think about the field of game theory which he starts in his book on the same.
I think you should make your point so as to disprove Neumann, not pick on the straw man of running. Or you should argue against the use of the term radiator instead of car parts. It will better highlight your fallacy, because with running I have to make your position seem much more farcical than it is. We do gain something from thinking imprecisely. We gain speed. That can really get our thoughts running, so long as we don't trip up, but it calls to attention that when someone chooses to stop running due to the claim that the terrain isn't runnable, the correct response is not to tell them that running is an accidental property. It is to be careful as you move over the more complicated terrain. Otherwise you might be incinerating yourself without noticing your error.
>This isn't an accident of language. An example of an actual accident of language would be giving tanks instead of giving thanks.
By "Accident of language" I don't mean "slip of the tongue" or "mistake when speaking".
I mean that the kind of word we use to describe someone who runs, i.e. "runner", is an accidental, not essential, property of English, and can be different in other languages. It doesn't represent some deeper truth, other than being a reflection of the historical development of the English vocabulary. I mean it's contingent in the sense the term is used in philosophy: "not logically necessary".
Not just in its sounds (which are obviously accidental, different languages can have different sounds for a word of the same meaning), but also in its semantics and use, e.g. how we don't call a car a "runner".
That we don't call it that doesn't express some fundamental truth, it's just how English ended up. Other languages can very well call both a car and a running man the same thing, and even if they don't for this particular case, they do have such differences between them for all kinds of terms.
> I think you should make your point so as to disprove Neumann, not pick on the straw man of running.
I'm not here to disprove Neumann. I'm here to point out that Lanier's argument based on the use of "runner" doesn't contribute anything.
You are arguing on the basis of possibility of imprecision in language that the choice to be more precise does not contribute anything. That structure - whether you want it to or not - as a direct consequence of logic applies to every thinker who ever argued for precision due to the possibility of ambiguity. It is an argument against formal systems, programming languages, measurement, and more. Some of the time it will turn out that your conclusion was true. Other times it will not. So the argument structure itself is invalid. Your conclusions do not follow from your premises.
Try your blade - your argument structure - against steel rather than straw. I saw you slice through straw with it. So I picked up the blade after you set it down and tried to slice it through steel. The blade failed to do so. The blade is cheap, prone to shattering, and unsuited for use in a serious contest between ideas.
For what it is worth - I do happen to agree with you that Lanier is making a mistake here. I think it is in the logical equivalence mismatch. He wants intelligence to be comparable to running, not to motion more generally, but since intelligence is actually more comparable to compression we can talk of different implementations of the process using terms like artificial or natural intelligence without being fallacious for much the same reason we can talk about different compression algorithms and still be talking about compression. So instead of trying to argue from his distinction between motion in general and motion in humans, I would think the place to point to for contradiction is the existence of cheetah runners versus human runners. Directly contradicting his insinuation is that we actually do say that cheetahs are faster runners than humans.
Cars are an easier method to move people and stuff when there are suitable routes, where easier means "the journey will take less time, will require almost no human exertion by those moved, and will likely include weather protection".
Nobody is going to disagree with this (they may raise the objections I did that cars are energetically, materially and environmentally less efficient than other means, but that doesn't invalidate "cars are easier for moving people+stuff").
But that's not running. I will concede that even in English, there are idioms like "Can you run me to town?" meaning "Can you drive me to town?", or "I'm just going to run to the store" meaning "I'm going to take a short journey to the store". But this doesn't mean that cars are better at running than humans, it means that the English word "run" can be used in different ways. And you know exactly which way Lanier meant it.
> But when it touches our ability to think we squirm.
I think that's not the point. We're in awe of the machines' performance and then confused about how that compares to our abilities.
The actual threat is that in our minds we narrow our own capabilities and limit the comparison such that the computer is in fact better.
When computers were first doing math quicker than humans, that might have touched some humans, sure. Similarly now that "AI"s produce convincing spam faster, or photorealistic creative images: that hurts some jobs, maybe a lot of them. But it doesn't come close to being "human" or "intelligent".
Quite the opposite, the point is that we are getting dumber by focusing on human traits that can be measured or emulated by machines.
I think another general problem is that metaphors are quietly forgotten. The notion that computers "think" is something of a metaphor, but it is a superficial one that cannot be taken seriously as a literal claim.
For example, when we say computers can "do math" more quickly than human beings can, this is fine as a matter of loose or figurative common speech. But strictly speaking, do computers actually do math? Do they actually compute? No, they don't. The computation we say a computer is doing is in the eye of the beholder. A better way to characterize what's happening is that human beings are _using_ computers _computationally_. That is, the physical artifacts we call computers participate in human acts as instruments, but _strictly speaking_, it makes about as much sense to say computers compute as it does to say that pencils write, hammers nail, vacuum cleaners clean, or cars drive. These things participate in the human act, but only as instrument. Whereas when human beings compute they are objectively computing; computation is not what computers are objectively doing (both Kripke and Searle make good arguments here). These artifacts only make sense in light of human intentions, as instruments of human intention and act.
Human writing can be viewed similarly. Objectively, we only have some pigment arranged on some material. No analysis of a piece of written text will ever divulge its signification. Indeed, no analysis of a piece of text will demonstrate that what is being analyzed is a piece of text! Text, and even that something is a piece of text, needs to be interpreted as text to function as text in the eye of the reader. But the semantic content of the text is objectively real. It just exists in the mind of the reader.
So we need to be careful because we can easily commit category mistakes by way of projection and confusion.
Cars don't run. And even if they did, or you tortured the definition to include rolling on fairly straight prepared paths as running, it is only better for specific definitions of better.
Cars are faster on reasonably traversable terrain. Are they more or less energy efficient? Under what circumstances? Do they self-navigate the best path around obstacles? Better is really subjective.
And this applies to the large language models too. Just like calculators, they are going to do some things better, or maybe cheaper. But I've played with them trying to get them to write non-trivial programs, and they really do fail confidently. I suspect the amount of source code online means that any common problem has been included in the training data, and the LLM reconstitutes a program. So, at this point for programming, it's fancy Google. And that has value, but it is not intelligence.
I am not saying we (as a society) shouldn't be worried about these developments. Near as I can tell, they will mostly be used to further concentrate wealth among the few, and drive people apart because we already can't settle on a common set of (reasonably) objective facts about what is going on -- both problems are probably the same thing from different perspectives...
Yep. This whole argument hinges on the fact that the word “runner” in this context happens to be used almost exclusively to refer to humans. Rephrase it even slightly and it falls apart. We do say “cars can move faster than humans.” Likewise we do say “machines can lift weights better than a human,” but we don’t say “machines are better weightlifters” because that particular word “weightlifter” is coincidentally only used to refer to humans.
> We would if the car was to race against human runners. It's just word play. Cars are not used like runners, so we use different words. They definitely are better runners.
This tendency on HN to annihilate discussions by stating that, for instance, flying is the same as running because your feet also touch the ground at some point when flying (it happens only at take off and landing but it still counts as running, right?) is really something. Stop torturing definitions, it makes Carmackgod sad and they randomly switch off a bit on the mainframe every time you do that.
It's the other way around. Focusing on walking and running not being good comparisons rather than making valid comparisons is a distraction.
Like a lot of the stuff being done with large models certainly isn't thinking, but they can clearly characterize sets of data in ways that an unassisted human can't.
Until the machine needs to run or think, and then "characterizing sets of data" won't cut it.
Being able to answer based on a probabilistic assumption is not that great in general. They do it fast, on a frozen knowledge base; it can be useful, and sometimes surprisingly good, but not that great in general.
When I asked for the 3 best wood shops near me it replied with a shop that does not sell wood, a shop that does not exist and a broken website of a former now closed wood shop.
Now can an AI train another AI to become "smarter" than it is?
It can't.
Can an AI train another AI to become better at "characterizing sets of data" than it is?
It can't.
An unassisted AI is as helpless as the unassisted person, but can't even rely on the intelligence of the species.
> When I asked for the 3 best wood shops near me it replied with a shop that does not sell wood, a shop that does not exist and a broken website of a former now closed wood shop.
It’s not a search engine; if you give it the necessary tools, it can use a search engine for you to find these answers.
You unintentionally point out the flaw of this argument by rephrasing it to eliminate the word “runner.” That’s the only word here that coincidentally strongly implies humans. By rephrasing it to “run” you end up with an even more clearly incorrect statement. My car can run. It runs pretty good. Sometimes I let it run for a few minutes to warm up.
Walking and running are modes of movement. A car can move.
Focusing on the "how" feels like you'd arrive at "a calculator isn't as good at calculating as a human, because it doesn't do it the same way, it doesn't have a brain".
The hilarious thing is that we do say a car has power equivalent to that of, say, 887 horses, but when it’s about humans it suddenly becomes nonsensical to make a comparison.
> “This idea of surpassing human ability is silly because it’s made of human abilities.” He says comparing ourselves with AI is the equivalent of comparing ourselves with a car. “It’s like saying a car can go faster than a human runner. Of course it can, and yet we don’t say that the car has become a better runner.”
The analogy to running is flawed because rolling and running are different types of locomotion.
It's not at all clear that computing and thinking are meaningfully different forms of information processing. In fact, we know that we can compute by thinking since I can reduce lambda calculus terms in my head. We also know computers can compute all computable functions, and we know that all physical systems like the brain necessarily contain finite information (per the Bekenstein Bound), therefore they can in principle be simulated by a computable function. There are therefore strong reasons to suspect an underlying equivalency that would suggest that "artificial intelligence" is a sensible term.
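To make the "compute by thinking" point concrete, here's a minimal sketch (my own illustration, not from the comment): the same Church-numeral reduction you can work out on paper, carried out mechanically by an interpreter. The encoding and the names (zero, succ, plus) are the standard textbook ones, chosen purely for illustration.

```python
# Church numerals: a lambda-calculus encoding of numbers that can be reduced
# by hand or executed by a machine; both are computing the same function.
zero = lambda f: lambda x: x                                  # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))               # apply f one more time
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # apply f m + n times

def to_int(church):
    """Decode a Church numeral by applying (+1) to 0."""
    return church(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)

# (plus two three) reduces to the numeral for 5, whether you do the beta
# reductions in your head or let the interpreter do them.
print(to_int(plus(two)(three)))  # 5
```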
We don't say it because we don't care. Machines moving faster than a human runner have not posed a threat to any industry or jobs in our lifetime. It's a silly comparison. I bet you there was someone at one point who was unhappy that a machine was a better or faster welder than them though. At least that person may have had the opportunity to keep working at the factory alongside the welding machine, doing QA and repairs. Most knowledge workers will not get to switch to that kind of replacement job vis-à-vis AIs.
Beyond explaining what the author meant, and beyond the hype and hypotheticals which are rampant, this is a valid concern which I also share personally. This is more imminent than “AI overlords ruling us”, and I am afraid the motivation behind creating this particular system is to bring on the automation (the creators don’t even hide this). Therefore I think the point you are making is actually important too.
It's not about the semantics of the sentence he said. This is obvious. He is pointing out a difference in the nature of the attributes/properties of a human and a human creation. Not about something being more or less dangerous. He is trying to tell the reporter, or perhaps the reader, that they're asking the wrong question.
> This idea of surpassing human ability is silly because it’s made of human abilities
At some point in history we were just "chimp abilities", so the argument would become "it's silly to imagine that something made of chimp abilities could surpass chimp abilities".
I'm with you on this. People in these chains seem to be looking at all the wrong metrics.
Single-mode LLMs are made of human abilities, but we're already moving to multi-modal, though with what I would call rather limited interconnections. What does an LLM look like that takes language and mixes that with sensor data from the real world? You're no longer talking about human abilities, you're going beyond that.
> Lanier, 62, has worked alongside many of the web’s visionaries and power-brokers. He is both insider (he works at Microsoft as an interdisciplinary scientist
And his unique perspective on AI is all the more valuable (and courageous) considering that Microsoft recently laid off their AI ethics team. It's super important we don't let human considerations fall by the wayside in this rush. The potential of AI is limitless, but so are the potential risks.
Even without autonomous enhancement of AI, the argument that "[the] idea of surpassing human ability is silly because it’s made of human abilities" is BS...
A theoretical AI which thinks like a person, but (due to computing power) can think through and evaluate 1,000,000 ideas in the time it takes a person to think through 10 of them, has already surpassed human ability by a big margin. Same for memory capacity etc.
That the input the machine is trained on is the output created by "human abilities" is irrelevant to whether it can surpass human ability.
I think the argument is more that they only work from past inputs; they interpret the world the way they are told to. The argument is not about whether 'AI' can do things humans can't (otherwise it would fail for many technical things, like a car at speed).
If your bet is on the former, how does it create an entirely new, irrational thought?
Again, this seems like a weird argument. Not that long ago I was told AI would 'never' be able to perform some of the actions that LLMs are performing now. I have about zero faith in anyone that says anything along the lines of "AI won't be able to perform this human like action because..."
The AIs we are using now are nearly one-dimensional when it comes to information. We are pretraining on text, and we're getting "human like" behavior out of them. They have tiny context windows when working on new problems. They have no connection to reality via other sensor information. They have no means of continuous learning. And yet we're already getting rather insane emergent behaviors from them.
What does multi-modal AI that can interact with the world and use that for training look like? What does continuous-learning AI look like? What does a digital mind look like that has a context window far larger than the human mind ever could? One that can input into a calculator faster than we can realize we've had a thought in the first place? One that's connected to sensory systems that span a globe?
But even if the first AGI does end up perfectly simulating a human (which seems somewhat unlikely), a human given the ability to think really fast and direct access to huge amounts of data, without being slowed down by actually using their eyes to read and hands to type, would still be dangerously powerful.
Assuming they don't drown in the information overload and they don't take in any kind of garbage we also put out there.
We also have some pharmaceutical tricks to tweak up processing capabilities of the mind, so there's potentially no need to simulate.
The capabilities of the big ball of sentient goop have not been plumbed yet.
Now imagine a technology that could obviate the need for sleep or maybe make it useful and productive.
Almost certainly true, but there's a huge difference. We're the result of forces that have played out within an evolutionary process that has lasted for millions of years.
Current "machine learning"-style AI (even when it uses self-driven iteration, like the game playing systems) is the result of a few ideas across not much more than 100 years, and for the most part is far too heavily influenced by existing human conceptions of what is possible and how to do things.
Refer to my point on past inputs. If a human suddenly says to the machine, "change of rules, now you have to play by these new rules", the AI suddenly gets immensely dumber and will apply useless solutions.
This no longer appears to be the case. Self-trained systems, which play themselves extremely rapidly and can even infer the rules just from being told which moves are illegal, are now commonplace.
How is that relevant? A human will also get immensely dumber. Of course, a lot less than an AI right now. The point is AI absolutely can do things a human can't.
> it's like saying that machines can never be stronger than humans because they're built by humans.
Did you even read the article?
“This idea of surpassing human ability is silly because it’s made of human abilities.” He says comparing ourselves with AI is the equivalent of comparing ourselves with a car. “It’s like saying a car can go faster than a human runner. Of course it can, and yet we don’t say that the car has become a better runner.”
Jaron Lanier's point is much more interesting in this context, though I felt that it was overall a brief quote near the introduction to capture attention rather than the main argument of the article.
In fuller context, Lanier argues that software using AI won't make human sport or competition useless, because it will use different processes to achieve the same result—the same way that competitive running (or top-level chess, Go, or certain video games) will still happen, even if human inventions can beat the best human at the task.
For all these tasks, the software will take a different process for doing well at the task (e.g. a car doesn't "run," and a chess engine "thinks" differently than a human). In these activities, the process matters.
A different interpretation of the argument is then a bit more interesting. If Lanier is also saying that software using AI won't be better than humans at activities outside of competitions, I would disagree—though to be fair, I don't think this is his argument. For lots of work, the result matters more than the process. If someone wants to make a funny poem as a one-off joke in a story, the result may matter more than the process of production. And if a worker wants to summarize lots of short texts where speed is the most important factor, the result may also matter more than the process. In the same sense, it's still true that a car is usually better at letting humans travel over long distances for work than running, because the result matters more than the process.
We put far too great an emphasis on the human specifics of an activity. For most uses of running (delivering goods or information, hunting prey, etc.) the car, or helicopter, or airplane far exceeds the human runner. This is poetic nonsense like "speed of thought". When Boston Dynamics gets a robotic runner that sprints faster than a human, then what?
The ML systems are not made of human abilities. They are made of software processes. Jaron is a smart and informed guy, but that sentence is just nonsensical.
Right, sorry, I was directing my question at the "does it surpass human runners" train of thought. Obviously it won't feel a pounding heart, or a thrill of victory if it wins a race, or die of hypernatremia during a marathon, so it won't surpass our specific cares. Not sure those make a significant difference in the arc of development.
It absolutely goes to the military with built-in weapons.
>Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".
Jaron Lanier is being called "guru" by the article, but he's much more than that.
As a pioneer and intellectual he's been arguing about the commoditization of human knowledge for a long time; he's not simply saying that "machines won't surpass humans", and it's not accurate to describe him as someone who would say something like that.
Please take the time to research what he's published over the last 4 decades.
Lanier is brilliant, but sadly there are many brilliant people who've long seen the shifting sands and set out to capitalize first, rather than strategically build a future we fleshbags would like to be in.
"AI" is not currently autonomous; its algorithms that do exactly what their creators tell them to do. They run on binary computers that only do exactly as they are told.
That’s not true; current machine learning algorithms involve no manual programming past the training and inference code, and it’s extremely difficult to predict what they will do without just trying it and seeing.
I think this video is a nice introduction to the basic concept to how a computer can figure things out automatically without being manually programmed and without the creators understanding the “why”: https://youtu.be/qv6UVOQ0F44
ChatGPT is much more complicated than the AI in that video but it shows some of the basic concepts
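A minimal sketch of that point (illustrative only; the variable names and the toy target y = 3x + 2 are my own, not from the video or ChatGPT): the only thing written by hand is a generic training loop, and the specific behaviour is learned from examples rather than programmed.

```python
import random

# Examples of the behaviour we want, not rules describing it.
data = [(x, 3 * x + 2) for x in range(-5, 6)]

w, b = random.random(), random.random()  # arbitrary starting parameters
lr = 0.01

for _ in range(5000):
    x, y = random.choice(data)
    err = (w * x + b) - y
    # Generic gradient-descent update on squared error; the same code would
    # learn a different behaviour if the examples were different.
    w -= lr * err * x
    b -= lr * err

# The constants 3 and 2 appear nowhere in the program above, yet the model
# ends up close to them.
print(f"learned w={w:.2f}, b={b:.2f}")
```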
LLMs generate text. They're built to generate text. That they generate some kind of textual output is entirely predictable. Same with image generators. They will generate some kind of image given a prompt. They're not Skynet.
That an AI will have some kind of output is obvious, it doesn’t mean that you can predict what that output will be. It’s like saying that you have solved physics by saying “something will happen”
I think the point he's trying to make is that AI does not have an independent Will. It lacks desires and the ability to operate in opposition to its programming. This makes it no different from any other tool we use to enhance our abilities.
Whether or not you can predict a tool's output is irrelevant. I can't predict the output of a random number generator, but that doesn't make it sentient.
This is not necessarily true, however. For example, in reinforcement learning there is a lot of work on "intrinsic motivation", i.e., creating systems that set and pursue their own goals.
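For a concrete flavour of what "intrinsic motivation" can mean in practice, here is a minimal sketch of one common variant, a count-based novelty bonus (my own illustration; the function names and the 1/sqrt(count) form are just one textbook choice, not a reference implementation):

```python
from collections import defaultdict
import math

visit_counts = defaultdict(int)

def intrinsic_bonus(state, scale=1.0):
    """Self-generated reward: novelty that decays as a state becomes familiar."""
    visit_counts[state] += 1
    return scale / math.sqrt(visit_counts[state])

def shaped_reward(state, extrinsic_reward):
    # Even with zero external reward, the agent still has a reason to act:
    # it is partly pursuing a goal (novelty) that it generates for itself.
    return extrinsic_reward + intrinsic_bonus(state)

print(shaped_reward("room_A", 0.0))  # 1.0 on the first visit
print(shaped_reward("room_A", 0.0))  # ~0.71 on the second, and so on
```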
I think it should be possible to build a sentient AI, but it hasn't been done yet. What remains to be seen is whether our current techniques will be suitable for making that self-retraining process efficient, or if we'll need to find better math to use as the basis for it. Part of what makes the brain so useful is that it fits in our skull, and is fast enough to learn in real time.
But, either way, I think that's what's on the line for people who disagree about how to use the word "intelligence." They mean it as a synonym for sentience, and the people arguing against them are using it differently. Before we can evaluate the truth of an argument, we should first agree to use words the same way.
With LLMs you say “you want to do X” and voila, personality.
What is indeed missing from current implementations is continuous looping. Doing actions and taking stock of the results. I guess that’s kind of expensive right now. We’ll get there. I don’t see the fundamental problem.
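Something like the loop below is all that's structurally missing. This is a rough sketch with hypothetical stand-ins (`llm_complete` and `run_action` are placeholders I made up, not any real API), just to show the act-then-take-stock shape:

```python
def llm_complete(prompt: str) -> str:
    # Hypothetical stand-in for a call to some language model.
    return "try step " + str(prompt.count("Result:") + 1)

def run_action(action: str) -> str:
    # Hypothetical stand-in: "execute" the proposed action, report what happened.
    return f"did '{action}', nothing broke"

goal = "you want to do X"
history = [f"Goal: {goal}"]

for step in range(10):  # bounded here; a persistent agent would just keep looping
    action = llm_complete("\n".join(history) + "\nNext action:")
    observation = run_action(action)
    # Taking stock of the result before deciding the next step.
    history.append(f"Action: {action}\nResult: {observation}")

print(history[-1])
```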
To be fair, humans exist only because of a long chain of organisms that started with "DNA generates proteins." Granted, it took billions of years for that process to create humans, but it shows that what seems to be a constrained process can have wild outcomes when it feeds itself. And text commands are how these models are defined, trained, deployed, and used.
I mean, if I was paying for the power bill every month and had a limited amount of computing capacity, I wouldn't want my AI behaving like my teenage daughter busy daydreaming when I ask her to clean her room.
But I have no reason to believe this will always be the case. As these machines become more capable and our compute power grows, someone will give one a server cluster and some free time to 'think' on its own.
Given that the algorithms are "how to learn" and "show me what you infer", that's the same kind of overly reductionist view as saying you don't need to worry about being eaten by a tiger, because it's just a set of chemical reactions that merely follow the laws of quantum mechanics.
The tiger is dangerous because whether you consider it a sentient, intentional killing machine or a bunch of atoms, it exists and manipulates the same physical space that you do (indeed, as the tweeted image points out implicitly, it is only a tiger when you consider at the same sort of physical scale that we exist at).
Software, however, does not have this property. Ultimately it does exist as something in the physical world (voltages on gates, or whatever), but at that level it's equivalent to the "bunch of atoms" view. Software (by itself) does not operate in the physical space that we do, and so it cannot pose the same kind of threats to us as other physical systems do.
The question is therefore a lot more nuanced: what types of control (if any) can (a given piece of) software exert over the world in which we operate? This includes the abstract yet still large scale world of things like finance and record keeping, but it also obviously covers the physical space in which our bodies exist.
Right now, there is very (very) little software that exists as a sentient, intentional threat to us within that space. When and if software starts to be able to exert more force on that space, then the "it's just logic and gates and stuff" view will be inappropriate. For now, the main risk from software comes from what other humans will do with it, not what it will do to us (though smartphones do raise issues about even that).
Software has been killing people since at least Therac-25, so "sentience" is a red herring.
The idea of harm from the unemotional application of an unthinking and unfeeling set of rules, which is essentially what algorithms are, predates modern computing by some margin as it's the cliché that Kafka became famous for.
Yes, the software may be part of the apparatus of a cold unfeeling bureaucracy (private or state), but it is the decision of human beings to accept its output that causes the damage.
I should have probably dropped the term "sentience" - I agree it is not really relevant. I will need to think about examples like Therac-25. Not sure how that fits in my ontology right now.
When a software system says "this person must have their property foreclosed", it is following rules at several levels - electronics, code, business, legal. But ultimately, it is a human being that makes the choice to "apply" this "rule" i.e. to have consequences in the real world. The software itself cannot do that.
Thanks, that clears up which word we differ on: "apply".
With your usage, you are of course correct.
Given how often humans just do whatever they're told, I don't trust that this will prevent even a strict majority of possible bad real-world actions, but I would certainly agree that it will limit at least some of the bad real-world actions.
This is a flawed analogy; it certainly breaks down even in the simple case of random number generation. Computers could use an external source like minor heat changes for that.
> its algorithms that do exactly what their creators tell them to do
This is very much in doubt :)
> They run on binary computers that only do exactly as they are told.
This is true to a first approximation. Every CPU instruction runs exactly as it is written, that is true. This is probably the interpretation of "only do exactly as they are told" for someone strictly technology minded. But even with much simpler systems the words "huh, that should not have happened" and "I wonder why it is doing that" are uttered frequently.
The interpretation most humans would attach to "only do exactly as they are told" is that the maker can predict what the code will do, and that is far from the truth.
After all, if it is so simple, why did the Google engineers tell their computer to tell lies about the James Webb Space Telescope? Couldn't they have just told it to only tell the truth?
I think the machine code–level understanding is what's important. We can, in theory, put a person in a Chinese Room–style scenario and have them manually perform the code, and it will generate the same outputs (It would probably take millions or billions of years, but it is true in principle). A major difference is that we created the machine and the code and, at least as low as the level of digital logic design, we understand and control its behavior. The person in the room has a human mind with thoughts and behaviors completely out of the program designers' control and unrelated to the program; if they want to, they can break out of the room and punch the operator. The "unpredictability" of the machine is still constrained by the fundamental capabilities we give to it, so it might generate surprising outputs but it can't do things like punch people or launch nukes unless we connect it to other systems that have those capabilities.
> A major difference is that we created the machine and the code and, at least as low as the level of digital logic design, we understand and control its behavior.
The moment the software gets to interact with the world, whether via robotics or handling a mouse button event or some other type of sensor, we no longer fully understand or control its behavior.
Pure computation (the dream of functional programming) is fully understandable and entirely predictable. When you add interaction, you add not just randomness but also time: when something happens can lead to different outcomes, and this can rapidly cause predictability to spiral away from us.
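A toy illustration of that contrast (my own example, not from the parent comment): a pure function is fully determined by its inputs, while an interactive one also depends on when it runs, so the same call can produce different results:

```python
import time

def pure_total(prices):
    """Same inputs, same output, every time; nothing else can influence it."""
    return sum(prices)

def interactive_total(prices, get_discount):
    # The result now depends on an external interaction: whatever
    # get_discount happens to return at the moment it is called.
    return sum(prices) - get_discount()

def time_based_discount():
    # Toy stand-in for "the outside world": the answer depends on the clock.
    return 5 if int(time.time()) % 2 == 0 else 0

print(pure_total([10, 20, 30]))                              # always 60
print(interactive_total([10, 20, 30], time_based_discount))  # 55 or 60, depending on when you run it
```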
One of my concerns is what happens when machines start making their own money. This could be possible with cryptocurrencies (another reason to loathe them). Machines can do things online, make sex-working 3d-modelled chat-bots for instance, or do numerous other types of work, like things you see people do on Fiverr. If machines start making their own money and deciding what to do with it, they could then pay humans to do things. At this point they are players in the economy with real power. This doesn't seem too far out of an idea to me.
It is very easily possible with normal currencies too. Obviously banks will need a human, or a legal entity, to be the “owner” of the account, but it is very easy to imagine someone hooking up an AI to an account to automate some business. Maybe initially it would involve a lot of handholding from a human, so the AI doesn’t have to learn to hustle from scratch, but if the money is flowing in and the AI is earning more than it is spending, it is easy to imagine that the human checks out and doesn’t double-check every single service or purchase the AI makes.