Am I the only one feeling that the press + big tech narrative of "driverless vehicles = safety" is way ahead of itself? OK, "driver error is blamed for 94% of crashes", but what is the validation strategy to show where driverless cars can quantifiably improve? In 2016, Philip Koopman of CMU discussed the non-trivial engineering challenges of achieving safety in NHTSA Level 4 vehicle automation: https://users.ece.cmu.edu/~koopman/pubs/koopman16_sae_autono...
If we are building the safest transportation system, what role do driverless vehicles play? Wouldn't that be the narrative that actually saves the most lives?
That's not the argument at all. In the linked paper, the author suggests validating driverless vehicle safety within constrained environments, such as roads equipped with beacons.
You're making an argument as if you have a hammer in your hand and are looking at a row of nails. I'm suggesting it is short-sighted to look at the problem through only that lens.
For instance, with a more holistic lens you might explore the right place for human drivers vs. autonomous vehicles. You might simulate entire cities from the ground up, specifically optimized for transportation. What if the real breakthrough in transportation systems actually comes from the ability to quickly construct / deconstruct roads? Or dynamic city zoning? Or any number of non-vehicle innovations? What if we were actually optimizing for safety in transportation and not just trying to compare machine ability to human ability in limited contexts?
The problem, though, is that humans can adapt, and AI in general can't. A Go AI can play Go perfectly, but if a player overturns the board, it can do nothing to right the situation and resume playing. It's just following a list made for it, albeit a fairly substantial list.
The worry is this: the cars will not fail like a human will, and that may even be dangerous. The car may not even be able to move in certain situations (like another car blocking it in front, either purposefully or accidentally). Or it may not be able to see certain conditions a human could.
>It's just following a list made for it, albeit a fairly substantial list.
You're operating on a completely archaic understanding of AI. Modern systems can and do adapt to situations on the fly. What's more, they can adapt in aggregate - the experiences gathered by a single car can be shared with all cars. Very quickly, every car in the fleet will have billions of hours of cumulative driving experience.
The overwhelming majority of motor vehicle accidents aren't weird and unpredictable edge cases. They're tragic but mundane events that result from a handful of root causes - inattention, excess speed, poor judgement and unnecessary risk-taking. "Driver/rider failed to look properly" is the key contributory factor in nearly half of road traffic accidents. Computers utterly dominate humans in this respect. For every accident caused by some bizarre and unpredictable set of circumstances, there are thousands caused by someone doing something obviously stupid.
A computer can be programmed to be ultra-cautious in difficult situations. A computer can maintain 100% vigilance 100% of the time. Humans can't. Self-driving cars will undoubtedly fail in new and unexpected ways, but it's abundantly clear that they'll fail much less often than humans.
I don't think so. I think when you introduce computers to anything, you add more edge cases, not less. They are just different ones though. And I'm also skeptical about the quality of this software once it hits an actual mass-market. People here are assuming high levels of rigor when for all we know, it could be similar to Android in the long term.
>I don't think so. I think when you introduce computers to anything, you add more edge cases, not less.
That's sort of my point. Accidents are rarely caused by edge cases, but by predictable human failures. All the information needed to avoid the typical accident was plainly available, but the driver just didn't see it, process it or react to it appropriately. Driving is generally very predictable, but human attention and perception is hugely fallible. An AI that gets the basics consistently right but occasionally freaks out in an unpredictable situation would be a huge improvement over human drivers.
One potential flaw here is that our current driving experience is based on humans. Perhaps humans are relatively good at handling unexpected situations, meaning they don't cause many accidents. Isn't it possible that by switching to AI we shift from mundane accidents to unexpected situation accidents?
The main way AlphaGo got around the issue of never having seen a position before, while still playing good moves, is search (Monte Carlo tree search).
In the same way, if the world is modelled accurately and succinctly, can a car predict the various scenarios and adapt, even if the current situation was never seen before (extremely high speeds, an unusual number of cars, etc.)?
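For readers who haven't seen MCTS, here's a minimal sketch of the idea in Python (a toy illustration, not AlphaGo's actual code; legal_moves, apply_move, and rollout are placeholder hooks you would supply):

    import math, random

    class Node:
        def __init__(self, state, parent=None):
            self.state, self.parent = state, parent
            self.children, self.visits, self.value = [], 0, 0.0

    def ucb1(node, c=1.4):
        # Balance exploiting good moves against exploring rarely-tried ones.
        if node.visits == 0:
            return float("inf")
        return (node.value / node.visits
                + c * math.sqrt(math.log(node.parent.visits) / node.visits))

    def mcts(root, legal_moves, apply_move, rollout, iterations=1000):
        for _ in range(iterations):
            # 1. Selection: walk down to a leaf, taking the highest-UCB child.
            node = root
            while node.children:
                node = max(node.children, key=ucb1)
            # 2. Expansion: add a child for each legal move from the leaf.
            for move in legal_moves(node.state):
                node.children.append(Node(apply_move(node.state, move), parent=node))
            if node.children:
                node = random.choice(node.children)
            # 3. Simulation: play out randomly and score the final position.
            reward = rollout(node.state)
            # 4. Backpropagation: update statistics all the way up.
            while node:
                node.visits += 1
                node.value += reward
                node = node.parent
        # Best move = the most-visited child of the root.
        return max(root.children, key=lambda n: n.visits)

The point is that nothing in the loop depends on having seen the position before; the tree is built fresh from the current state every time.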
This. When these systems fail they will fail in surprising and inhuman ways. The accidents will appear to average people as easily avoidable which will give the impression of low quality engineering. e.g. Who doesn't slow down when a semi crosses their path on the highway?
Here's a good example: a self-driving car sideswipes a bus at ~25 mph. You can see the driver of the bus throw his hands up in the air in disbelief: https://www.youtube.com/watch?v=I9T6LkNm-5w
From a human's perspective it's a stupid mistake. It seems stupid because computers think differently than people do. On the other hand a computer would probably think a human were stupid if it saw a human make a mistake when calculating the logarithms of every number between one and a million.
When I was a kid, we used to have to replace chess pieces we lost while playing during lunch break with ersatz ones...pennies, nickels, etc. I'd like to see Deep Mind try to tell a game from that.
This is not rain. This is a drizzle on a very well maintained, center-divided freeway, where the radar and ultrasonic sensors have a very nice reference (the divider), and with cars tracked in front. For reference, here is an example of (non-autonomous) driving in real rain: https://www.youtube.com/watch?v=L3xKT98a3og
Because there was almost no rain at the beginning, I jumped forward a bit and directly hit a point where the autosteer shuts off (a bit before the 10-minute mark).
You seem to be under the delusion that a vehicle purchased in the year 2017 doesn't have any software in it, and isn't fully electronically actuated (steering, brakes, throttle).
They sell these features as "parking assist" or "smart cruise control" or similar, but don't kid yourself into thinking that there's not a very sophisticated software system that could, if hacked, take over control. Has this ever happened? Have there been catastrophic bugs documented?
More advanced vehicles (i.e. Teslas) have enough hardware to be nearly fully autonomous today (i.e. they could wrest control from a driver and continue to pilot the vehicle to an alternate destination if hacked).
> Lack of software bugs due to careless programming
This is very much not true of humans.
> Vastly decreased likelihood of being tricked by vandals exploiting flaws in recognition algorithms
This is not particularly true of humans. The human eye is very easily tricked. It's true that there are things that would not fool us and would fool a computer, but the inverse is also true.
Quick, what's the reaction time for Waymo's driverless cars? Support your answer with evidence.
For extra credit: describe what the reaction time of Waymo's driverless cars is during various points of the garbage collection cycle.
People here have a tendency to imagine an idealized version of driverless cars, assume that actual driverless cars match that ideal, and then compare the idealized cars to very non-idealized humans.
I think that the real world is going to be more complicated than that.
It is easy to write code that takes a non-trivial amount of time to produce output on any given hardware; I imagine that all of us have done that at various times. We can certainly imagine that reaction time could be a virtue of driverless cars -- even 99th percentile reaction time -- given sufficient hardware (and not just the core CPU/RAM, but hardware out to the sensors and the subprocessors associated with each of them).
Similarly, we could imagine putting enough sensors on the vehicle that it truly has 360 degree awareness and visibility. But we do see real cars in the real world that have surprising blindspots.
There are cost tradeoffs to all of these things, and complexity tradeoffs. Is there a happy medium? I think that there very likely is, even without any fundamental advances in the state of the art. Are we within a year or two of that happy medium? Maybe.
>Quick, what's the reaction time for Waymo's driverless cars? Support your answer with evidence.
>For extra credit: describe what the reaction time of Waymo's driverless cars is during various points of the garbage collection cycle.
Aight, I'm not familiar with Waymo's cars specifically, but with <10K in hardware (maybe <5K now) you can design and build an autonomous vehicle kit that has a reaction loop that runs at a soft floor of 15 adjustments per second. I'm sure that with better hardware you can increase that to 60+, and 15 per second is already about 5x better than human reaction time, and fast enough that it can react to changes every 6 feet at highway speeds, and every foot at neighborhood speeds. A 60 aps loop means that it's recalculating everything every 2 feet your car travels on the highway.
And yes, this isn't incredibly difficult.
For reference, in my experience, running 2 cameras and a low power lidar (along with associated outputs to hardware actuators and such) off of a 2013 (or maybe a 2011) macbook pro, the limiting factor was always the lidar's 15fps framerate.
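To sanity-check those numbers (my own back-of-the-envelope arithmetic, not measured data), here's the distance a car covers per control-loop cycle:

    MPH_TO_FPS = 5280 / 3600  # 1 mph = ~1.47 ft/s

    for speed_mph in (10, 25, 65):
        for loop_hz in (15, 60):
            ft_per_cycle = speed_mph * MPH_TO_FPS / loop_hz
            print(f"{speed_mph} mph at {loop_hz} Hz -> {ft_per_cycle:.1f} ft/cycle")
    # 65 mph at 15 Hz -> ~6.4 ft/cycle; at 60 Hz -> ~1.6 ft/cycle.
    # 10 mph (neighborhood speed) at 15 Hz -> ~1.0 ft/cycle.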
You've asked an interesting question. I can't find a good source on what is the range of reaction times for autonomous vehicles. However, in the test videos I've seen, they appear to be human-level or better. For reference, human reaction times (to start braking) are typically between .5-2 seconds. It's difficult for humans to react to anything in under .25 seconds.
I don't think it's controversial to say that machines can easily beat those human reaction times.
Most or all autonomous vehicle software is written in C++, so garbage collection shouldn't be an issue. Of course, these programs need to be tested carefully and extensively before we trust them.
Don't you think the engineers building self-driving cars would consider reaction times and make adequate safeguards? If your thesis requires a group of very smart people to be completely oblivious to a very basic issue for multiple years, then you should reconsider your thesis.
> we could imagine putting enough sensors on the vehicle that it truly has 360 degree awareness and visibility. But we do see real cars in the real world that have surprising blindspots.
Not sure I see the point you are making... all cars have blind spots with drivers now. Human drivers are always blind to the spots they are not looking at the moment anyway. I can imagine cars being much safer than they are now.
But when you look at how you imagine driverless cars to be, you might want to temper your imagination with how driverless cars actually behave, such as the Tesla running into a trailer because it was too high off the ground for its ultrasound sensors to detect.
(Note that this is not the famous case of the Tesla being unable to detect a white semi against a bright sky, it's a low speed no-injuries collision).
Obviously, Tesla is not the be-all and end-all of driverless cars. But when you're dealing with sensors combined with AI where each is in many ways less capable than the human eye, with different failure modes, it gets expensive and difficult to design really good fields of vision.
Imagine it's April 2018 and Waymo releases this statement:
"In the past year our autonomous cars have driven a total of 50 million miles in real-world conditions. Over that period there were 4 collisions, plus an estimated 12 which would have occurred if our trained staff in the drivers seat had not intervened. Insurance analysts estimate that if human drivers had driven under similar conditions there would have been 51 collisions."
Would that be sufficient for you to support the introduction of Level 4 automation?
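For what it's worth, even with counts that small the difference wouldn't be a fluke. A quick back-of-the-envelope check (my own, assuming collision counts are Poisson-distributed):

    from math import exp, factorial

    expected, observed = 51, 4
    # P(X <= 4) when 51 collisions are expected under the human-driver model:
    p = sum(exp(-expected) * expected**k / factorial(k)
            for k in range(observed + 1))
    print(p)  # ~2e-17: vanishingly unlikely to be chance, data quality aside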
Is there a standard for "real-world conditions"? I could easily see Waymo driving their cars up and down the highway near their headquarters for 50 million miles and calling that "real world".
That depends on the nature of the collisions. Are those 4 fatal accidents vs 51 fender benders? The devil is in the details, not the executive summary.
Doesn't change my opinion much because I am not a domain expert in self-driving cars.
All this would do is make me question the nature of the data collected. For instance, is that 50 million miles of diverse roads and weather conditions or the same mile over and over again on a sunny day? Or were the human-driven cars equipped with contemporary collision-avoidance equipment or a motley representative assortment of cars from the past half-century?
What would sway me is auto insurers offering a discount for relying on self-driving car technology. I can wait patiently for that to happen.
It is. We've seen some spectacular self-driving vehicle failures in the past while. Teslas running into half-opened garage doors, vehicles pulled over on the shoulder, the side of a semi, navigating roundabouts by driving into the ditch, driving on the wrong side of the road, yelling at the driver for staying in their lane...
No you are not! HN seems to think the same way, and it is one of the worst cases of rose-colored glasses I have ever witnessed. What about the cost? Who is going to be able to afford this? Also, what about people that rely on a vehicle for work? Retrofit $75K work trucks with another $75K in self driving gizmos? Find me a practical plumber, carpenter, A/C tech, locksmith, or self-employed guy/gal that will sign up for that!
Edit to add... Have the crazy expensive and downright hostile self-driving John Deere tractors eliminated farming deaths?
Serious question: why is cost such a common argument made here against emerging tech? This argument seems to come up every time some new expensive technology is transitioning closer to consumers' hands, and the current price, which is obviously expected to go down at scale as it does for nearly all widely useful technology, is used as evidence nobody will be able to afford it. I've seen this argument made for technology that is still in the R&D phase, for goodness sake!
Obviously, the assumption of all those involved is that the price will plummet as the technology becomes more adopted, so the goal is to get to a price point where the earliest adopters will be willing to pay. For self driving cars, talking about plumbers not buying this at the current price point is a bizarre argument.
Serious counter-question: why is nobody talking about cost? Cars are very expensive... is self-driving tech going to increase or decrease the cost of a vehicle? Do people generally want more freedom of choice or less?
Why does everyone assume the cost will decrease with scale? How many people can afford a Tesla right now? Let alone a fully autonomous car?
If flying were more affordable and less strict more people would own their own plane. It isn't and they don't. I see self-driving cars a lot like airplanes and John Deere tractors. That is, expensive proprietary tech that you have no control over. No thank you.
Why does everyone assume that most people want to share a vehicle?
"Serious counter question; Why is nobody talking about cost? Cars are very expensive... is self-driving tech going to increase or decrease the cost of a vehicle? Do people generally want more freedom or less freedom choice?"
Because you don't need to own the car - you could feasibly rent one on demand (like a taxi), since you can now rent a car without the necessary wage overhead of the taxi driver, or the minimum costs of car-rental (i.e. car rental typically is per-day, for logistical reasons) or the logistical problems of car-rental (i.e. you need to drive to and from the car-rental place, to pick up and drop off the car - and by "drive" I mean "get a lift or use public transport").
Plus, the self-driving stuff shouldn't be particularly expensive, and will be counterbalanced by lower insurance premiums, and likely tax incentives once voters realise that they really do save lives.
"Why does everyone assume the cost will decrease with scale? How many people can afford a Tesla right now? Let alone a fully autonomous car?"
Because the cost of just about everything decreases with scale, and it's not inherently hard tech. Tesla's main problem is its lack of scale. The same goes for autonomous cars.
"Why does everyone assume that most people want to share a vehicle?"
Money. It's not "want", it's "can tolerate so as to save money". Your car, when adding together all the costs (purchase, fuel, rego, maintenance, interest payments if you borrow instead of buying outright) is one of the biggest expenses that most people have. Sharing a car between several people should decrease the costs by an order of magnitude, which could be used to buy whatever people spend their money on these days, or saved.
I live in the suburbs. I have a family with two grade school kids that participate in seasonal team sports and year-round gymnastics. They attend a charter school with no school bus system.
Let's say each ride costs $10... that is $20 to $40 per day just for the kids on most weekdays. Add in birthday parties, Saturday games, doctor/dentist, trips to grandma and grandpa who live 7 miles away... and we are talking $1000 per month in "ride-sharing costs" for my children alone. We could add in my wife's and my costs, but it would be silly.
Maybe this service is less than $10 per ride; even at $5 per ride it is wayyyyyy more than I currently spend on a 4-year-old Toyota Highlander with low miles, including insurance and fuel.
Who says you need to be the rider, and not the car owner? You could be making that $10 a ride by renting your Toyota Highlander to other suburbanites when your kids' sports aren't in season, or when you're on vacation, or at work for an 8-hour chunk. To say nothing of the convenience of a car that can drop off/pick up the little one from soccer practice on its own if you have a scheduling conflict.
This type of naive suggestion totally ignores the way that most middle-class American families actually use their cars not just as transportation but for storage and as a mobile base of operations. Please explain how I'm supposed to let others use my car as a short-term rental during the day while it's full of child seats, snacks, extra clothes, sports equipment, etc. Plus I don't want random people coughing / eating / having sex / using drugs in my nice clean cars. Outside of maybe a few affluent urban areas populated largely by young childless people it's just a silly unrealistic idea.
If you'd think instead of being naively condescending:
a.) Snacks and supplies in trunk - lockable containers inside. All could be accessible via app/key fob only.
b.) Who says the suburbanite you're renting to won't want to use a car seat?
c.) Camera, ID, cleaning deposits, car self-drives to detailer on your lunch break.
$1,000 a month is a heck of a carrot to keep a car clean - and even if that's not for you, is your worldview really so small that you cannot imagine it being for someone not all that dissimilar from you?
No those suggestions are even more ridiculous and impractical. There's no way you're getting $1000 per month; you obviously just made that number up. SUVs/CUVs (which most people prefer now) don't have separate lockable trunks. I don't want other random kids in my child seats either: think puke and head lice. There are at least three totally different types / sizes of child seats anyway (rear facing, front facing, booster with or without back) and the straps have to be adjusted to fit the child so even if the renter also has kids the type and number of seats are unlikely to match what they legally need.
Just tons of drone cars picking up kids and delivering them to their parents. Owners of said cars collecting monthly AirBnB style rent from the people that can suddenly no longer afford a $3000 beater to get them around, but will happily pay $10 a ride in a self driving car. Oh yeah, don't forget about the vomit and drinks spilled all over the place in your ride share. Also sick people touching all the things. Where do I sign up? </sarcasm>
If the car you use is giving, say, 15 rides per day, then what you spend for 3 per day only needs to be 20 percent the cost of owning, fueling, and maintaining the car.
If the total cost of ownership is, say, $1000 per month, you might only pay $200, or about $2.20 per ride.
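Spelling out that arithmetic (illustrative numbers from the comment above, not market data):

    total_cost_per_month = 1000.0  # ownership, fuel, maintenance
    rides_per_day_total = 15       # rides the shared car gives per day
    your_rides_per_day = 3

    your_share = your_rides_per_day / rides_per_day_total     # 0.20
    your_monthly_cost = total_cost_per_month * your_share     # $200
    per_ride = your_monthly_cost / (your_rides_per_day * 30)  # ~$2.22
    print(f"{your_share:.0%} share, ${your_monthly_cost:.0f}/month, ${per_ride:.2f}/ride")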
Your lifestyle is being subsidized by the negative externalities your transportation causes that you don't pay for, so it's not a great argument against advancing technology, because your comfort only matters to you.
We've already seen price drops for the most expensive self-driving components. LIDARs, GPUs and storage have already fallen in cost, and the products aren't even on the market yet. There is every reason to believe these price drops will continue as more resources are put into scaling production and more companies enter the game of component design and production.
> There is every reason to believe these price drops will continue as more resources are put into scaling production and more companies enter the game of component design and production.
Why? We literally never see that with top-shelf products. This isn't an Arduino solder job. This is going to be an integrated software and hardware package that will rival an aircraft autopilot/navigation system. Please check your facts on the cost of these things.
I don't want walmart brand, everday low price components on my self-driving vehicle. I want top of the line if my life is in the balance.
How much does your 'top of the line' lane-keeping sensor cost? How about your backup camera? How about automatic braking sensors? How about all of the components required to integrate these sensors into your vehicle?
These are all things that started high and fell low. The historical prices for each of these systems reflects this fact, as does their gradual movement through the available product line-up. They all started in top of the line vehicles that cost well north of the average person's yearly salary, and are steadily creeping into the lowliest models.
I see nothing in self-driving systems that make them special in this regard. They're already largely an amalgamation of commodity components whose prices are trending downwards.
You can bet on them staying a niche product forever, if you wish. I just think it flies in the face of historical evidence.
>Why? We literally never see that with top-shelf products. This isn't an Arduino solder job. This is going to be an integrated software and hardware package that will rival an aircraft autopilot/navigation system. Please check your facts on the cost of these things.
Google claims that it dropped the price of lidar by a factor of 10x once it started producing them. That seems like a significant, factual, drop in cost.
The iPhone is how much to produce? Very important point you missed in what I wrote; Hardware and software package. A lidar without software is useless.
The price dropped from something ridiculous (~100K) to something manageable for a luxury car (~10K), and certainly reasonable as a cost offset by multiple consumers (like in a rideshare environment).
No... in addition to being prohibitively expensive, the other model being put forth to make it less expensive, ride-sharing, also has drawbacks which make the idea a non-starter for most Americans.
If ridesharing was awesome, we would see it now with luxury cars. We don't because it is stupid and will not reduce the price of the car dramatically enough to outweigh the negatives.
Airplane ownership and operations costs are probably higher than in the 90s, but the factors driving that are diverse, and I'm not sure they'd apply in any meaningful way to self-driving cars.
A lot of the regulation that drives the cost of planes up won't apply to cars because of the different economies of scale.
The most popular light aircraft in history is the Cessna 172, and since it was introduced in 1955, only 43,000 have been built. Ever.
By contrast, the most popular vehicle on the road today in the US is the Ford F-series pickup truck. Since being introduced in 1977, Ford has sold 26 million of them. So roughly 600x as many as Cessna has sold 172s over a period of time 20 years longer.
The US market alone was responsible for 17.5 million new cars and trucks being sold last year. Globally, it's approaching 100 million per year. Pretty incredible economies of scale available with that kind of market.
We still have $5000 personal computers in the '10s! Hell, a new MacBook Pro with everything and some upgrades is $3499 to $4299... very few people can truly afford that.
That's not accounting for inflation - $5000 then was more like $10000 now. And a bare-bones minimum machine for running business software (i.e. no color graphics or floating point hardware) was still close to $2000 (more like $4000 in today's dollars). So, roughly speaking a maxed-out gaming rig today is about as affordable as a bare-minimum box in 1990, without even accounting for the massive difference in absolute power.
The late Bill Machrone coined Machrone's Law which stated that "the computer you want always costs $5,000." This hasn't really been true for 5-10 years. You can build a $5K computer but you have to really push it. But saying that the figure is still at least $2,500-$3,000 is probably fair.
Sure, but you can also build a perfectly serviceable computer for < $1000. Even $500 would (obviously) get you something far, far superior to that 90s machine.
> Do people generally want more freedom or less freedom choice?
I'd quite like the freedom to be able to commute to work, go out for drinks with people after work and come home again at the time of my choosing without relying upon someone else to cart me around, paid driver or otherwise.
It usually just comes down to the person being against something without really having any good argument for why they're against it, and that's the only thing they can think of.
In my opinion, the primary use case of self-driving cars isn't personal ownership. Society as a whole wastes a lot of resources when every household has a car that stands still 95% of the time. With self-driving cars, we can plausibly improve that.
We could significantly improve on that now with more Zipcar-type services (at least in cities where people live close together), but the fact that most people want guaranteed access to a vehicle at 8am and 5:30pm on weekdays isn't easily solved just by making cars mobile when not in use. Nor, at least in the short term, is people's tendency to use cars, especially cars with no previous owners, as a status symbol (many people could save more of their income by not buying new models and premium marques than by letting their car be used by someone else on weekends and some evenings, but they choose the car that takes up more of their disposable income all the same).
Agreed. I also think another big reason for personal car ownership is having kids. With the increasing ages/weights recommended and/or required by law for car seats, that can be a hassle if trying to use a ride sharing service. It increases difficulty with increased number of kids at varying ages. For those without kids, while it's not crazy difficult to install a car seat, it can still take upwards of 5-15 minutes (depending on the style of seat). This amount of time lost can be a big hindrance.
Yes, I know that Uber has started to tackle this problem, but as far as I know it only provides 1 car seat, and a car equipped this way may not always be nearby. Maybe this will be a completely solved problem soon, which would be great!
Lots of people are single. Lots of people's kids are grown. Lots of people could own one car for the kids stuff and car-share the second vehicle as needed. Even with private ownership for cases like children, the total number of cars on the road could decrease considerably.
I don't really buy this argument. Most of the stuff I own is sitting idle 95% of the time (except for my computer, I guess ;) and it is similar for other people I know. The outcome will depend on many factors, including the cost of the car and the secondary functions it will get (will it become an extension of our living space?).
Maybe. There's an equipment rental place about a 10 minute drive away and, while I have rented from there, it really needs to be a once every multiple years sort of thing for the economics to work out. Something I use even just a few times a year doesn't make sense.
That's why society needs rich people. They buy emerging tech first, then the tech drops in price as initial capital costs are recouped, manufacturing costs come down, and the tech improves.
Remember when LCD screens came out and very few people could afford them? We were complaining then too about how could people possibly afford this and who will buy it??!
1. A Tesla with full self-driving hardware can be had today for ~$75k. And the Model 3 will cut that in half. And the model after that will be even less. Millions of people can afford this.
2. Who said anything about retrofitting old cars? As far as I know, nobody is planning to do this.
3. This isn't a requirement, and it won't happen all at once. The plumber with an old F150 will continue to drive as he always has until the cost/benefit makes sense for him to upgrade.
4. Self driving tractors move at 3mph, and I don't think farm-field traffic fatalities are the big driver behind automated farming. Different use case.
Additionally maximizing profit (and this may read as condescending but I do not intend for it to be so) does not necessarily mean maximizing price. Obviously if you are not selling at a loss you can potentially make more money with a cheaper price point on volume.
I do agree, however, that first adopters will likely pay a premium.
Eventually I imagine insurance companies will help subsidize these things too - if the price does not become trivial.
You seem to be arguing against technological progress in order to protect people. Most of us 'self-driving car' skeptics are just pointing out that the timelines presented in much of the press are wildly optimistic.
> Retrofit $75K work trucks with another $75K in self driving gizmos?
Absolutely they will. That's [at least] $60K worth of driver's salary/benefits costs you've just eliminated. $75K is cheap to eliminate the cost of a human employee.
I don't know where to start with this. The whole point of the truck is to get the workers to the site! Said vehicle is also there to support the workers during the day with storage for tools and materials. So no, the damn truck isn't going to replace a plumber or A/C tech. Wow.
You said: "Also, what about people that rely on a vehicle for work?"
Most people who rely on a vehicle for work are local or long-haul trucking drivers, not local trade workers. The important work a plumber or a/c tech performs is separate from the driving. You're right, those people probably won't move to automated vehicles immediately.
But theoretically, in the future where they could have an automated van meet them at the job site with their tools? And return the tools to a secure storage facility afterwards? I don't see why they wouldn't go for that.
Sure people will buy self-driving cars. People want to own things for themselves. If a car has a higher value with this feature, even more people will want to have a car. Consumers don't think economically.
I'm skeptical about the "no one will own" meme. That said, there's no reason cleaning would be any more of an issue than it is with Zipcar today. If you go to pickup a car and it needs cleaning, you refuse it and report the problem.
Then what? What if you have something pressing to get to? All these rideshare arguments fall flat on their face when it is anything other than "going out" without much of a timeline.
If improving safety is the goal then the driverless vehicle companies are going at it backwards. Instead of having the computer drive the car most of the time and telling the human to take over in challenging situations they should have the human drive all of the time with the computer constantly monitoring and occasionally overriding the human's control inputs when necessary. Some mainstream vehicles already do this to a limited extent by controlling throttle, brakes, and even steering to prevent crashes. Let's keep expanding on that foundation; it's far more achievable than a true autonomous vehicle and would save more lives in the medium term.
There is a successful precedent in aviation. The latest fly-by-wire flight control systems treat the pilot's inputs as merely suggestions and will modify them as necessary to prevent departure from controlled flight, midair collisions, flight into terrain, and overstressing the airframe.
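A minimal sketch of that supervisory architecture, assuming hypothetical sensor fields and thresholds (the real logic would be far more involved):

    def supervise(driver_cmd, sensors):
        """Pass the human's inputs through, vetoing only when necessary."""
        cmd = dict(driver_cmd)  # start from the human's steering/brake/throttle
        # Forward-collision veto: if time-to-collision is too short, brake.
        if sensors["time_to_collision_s"] < 1.5:
            cmd["throttle"] = 0.0
            cmd["brake"] = 1.0
        # Lane-departure veto: nudge steering back if drifting without signaling.
        if abs(sensors["lane_offset_m"]) > 0.5 and not sensors["turn_signal_on"]:
            cmd["steering"] -= 0.1 * sensors["lane_offset_m"]
        return cmd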
> In 2016, Philip Koopman of CMU discusses that there are non-trivial engineering challenges of achieving safety in NHTSA Level 4 vehicle automation:
That seems like the most obvious statement I've read in the last year or so at least. I think the fact that Alphabet has already spent nearly a decade developing these self-driving cars is a pretty strong indicator that there are non-trivial engineering challenges involved.
What is crucial is data. I am not talking about training on road conditions; Google has been doing that for years. What is missing is traffic data. I have made this argument about self-driving before. One night, maintenance took place at a busy intersection in Flushing, NY, and police officers were asked to direct traffic. This was a planned event, but a driverless car does not know that; it doesn't have this data until it shows up at the scene.
Yes, map services like Google's have been receiving some data from governments, in addition to collecting users' current locations and users' generous feedback (e.g. Waze users), to determine the best route. But we are nowhere near being able to know what's happening ahead of us. What about weather conditions? At an intersection, who goes first?
A safe driverless vehicle should be able to communicate (check, for as long as the communication is stable), and the government and cars should share feedback with other cars. This is a crowd-sourcing effort to make fully autonomous cars possible on the road. If we just learn as we go on the road, these driverless cars will not work well in very complex road conditions.
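To make the idea concrete, here's the kind of road-event message a city or fleet could broadcast ahead of time (an invented schema, purely illustrative):

    import json, time

    event = {
        "type": "manual_traffic_control",  # e.g. an officer directing traffic
        "location": {"lat": 40.759, "lon": -73.830},  # the Flushing, NY intersection
        "starts": time.time(),
        "ends": time.time() + 6 * 3600,   # planned 6-hour maintenance window
        "lanes_closed": [1, 2],
    }
    print(json.dumps(event))  # cars subscribed to this area reroute in advance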
Waaay ahead of itself. There are just some people who see a catastrophe around every corner, despite all the progress we've made in the last century (so-called "catastrophists"). At the same time, there's wild optimism about how we're going to have fully autonomous, Level 5 vehicles doing our errands by 2030.
The case that I have heard is that the decisions will be made (or rather, have been made) by a team of software engineers whose problem-fixing methods are meaningfully more effective at getting the rate of accidents down than someone promising themselves they'll drive sober and well rested "next time".
In reality, the problems encountered by cars are fairly classic and precisely the sort of thing computers can be very good at: computer vision, 3D projection and modeling, projecting patterns, control loops (steering vs. drifting). The beginning was expectedly bad, but it is not a stretch to imagine those problems being solved, the same way betting on Moore's law and cheaper mass-market electronics "makes sense".
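As an example of the "control loops" point, lane-keeping is classically handled by a PID controller (a toy sketch with made-up gains, not any vendor's code):

    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral, self.prev_error = 0.0, 0.0

        def step(self, error, dt):
            # error = lateral offset from lane center (m)
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    controller = PID(kp=0.8, ki=0.05, kd=0.3)
    steering_correction = controller.step(error=0.4, dt=1 / 15)  # one 15 Hz tick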
Imagine ransomware infecting your car that gives you 30 minutes to pay or you'll be rammed into a tree at 90mph. I don't care about any contrived and well-funded narratives about how unthinking, unfeeling, driverless cars will somehow save lives. No thanks, I'd rather take my chances continuing to share roads with human drivers who are careful because they fear their own death just as much as I do.
You're not looking at the whole picture here. What are your odds of being killed by a drunk/inattentive/inexperienced driver vs. your odds of being killed by ransomware? Road fatalities are already the number 1 killer of some age groups; it's estimated that over a million people worldwide die on the roads every year, around 40,000 in the US last year. If we trade 40,000 road deaths for 10,000 ransomware deaths, it's a massive improvement.
Setting that aside, that ransomware won't even exist, for a number of reasons, the biggest being that it wouldn't be profitable. Ransomware authors aren't going to get a payout; very few people are going to be able to buy Bitcoin in 30 minutes while unable to leave their car, even if they wanted to. Not only that, it isn't remotely difficult to have a manual, physical kill switch that physically stops the car in a life-or-death situation. Even if you didn't have a kill switch, you could simply call the police and they can lay down a spike strip.
>who are careful because they fear their own death just as much as I do.
LOL! That's a good one! Have you actually even been on the roads before!?! When I worked drive thru I had people who were so drunk they could hardly form a sentence come through my drive thru. That's just ONE example.
If you're going to actually get people to change their opinion you should really give your own position the most uncharitable phrasing.
> Road fatalities are already the number 1 killer of some age groups
You're hiding a weakness in your argument by not specifying the age groups. You're hiding a weakness in your argument by restricting to cross-sections of the population that support your argument. You're misleading your listener by hiding the fact that this statement isn't because traffic accidents are unusually high, but that other causes of death are unusually low.
> Around 40,000 in the US last year.
Careful using absolute numbers: 40,000 is actually a very small number compared to other kinds of deaths, like heart disease (800,000 deaths last year), yet quoting it can be misleading because 40,000 sounds like a big number.
> its estimated that over a million people worldwide die on the roads every year
You're hiding the fact that the US has 4% of the world's automobile deaths but 20% of the world's cars. You're using aggregate data to smooth over deaths that are caused by dangerous driving environments, roads, or ineffective traffic laws and misleading your listener into assuming that they're caused by human error.
Total deaths is not a useful statistic. The deaths that could have been prevented by L4 autonomous cars but could not have been prevented by L1-L3 or more conventional safety practices is what you should be presenting.
Give as little surface area as possible to people who would use it to push back.
I agree that self driving cars have the potential to be much safer in the long run, but the concern that a single error could have outsized consequences is too serious to just be swept under the rug because "lol driving is already scary". How many drunk programmers are out there?
The ransomware scenario is serious, but merely opting out of a connected car yourself won't be enough. You could still get a text demanding payment, or other cars on the road will ram into your manually driven car.
As for sharing roads with others who fear death as much as I do: that's a wonderful idea, but in my case a pipe dream. I share the roads with people who have no problem speeding, driving under the influence, or even refusing to use seatbelts.
Ultimately I'm more optimistic about solving the ransomware problem than I am about putting the fear of death into other humans.
The correct answer to that is very simple, and very expensive: just make correct software.
Such malware is only possible because the targeted system has vulnerabilities to begin with. One just has to ensure the absence of such vulnerabilities, possibly using machine-checked proofs.
One obvious approach is to properly isolate the driving software and sensors from external input.
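To illustrate what a machine-checked proof looks like in practice, here's a toy example in Lean 4 (my own trivial property, nothing to do with any actual vehicle codebase): a saturating addition provably never exceeds its cap.

    -- A saturating add capped at 255, and a machine-checked proof of the cap.
    def satAdd (a b : Nat) : Nat := min (a + b) 255

    theorem satAdd_le (a b : Nat) : satAdd a b ≤ 255 :=
      Nat.min_le_right _ _

Scaling that style of guarantee up to a full driving stack is exactly the "very expensive" part.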
It's not ransomware i worry about. What about targeting the lidar/radar directly? Some people still have radar/laser jammers, and only ten states ban them. What on earth will your programming do to counter that?
It is likewise quite easy to throw a rock on the highway from a bridge. What you are really worrying about is new creative ways to kill and destroy.
Yes, hostile input will have to be taken into account. Fortunately, this is easily detectable (just look at the logs). And if people die as the result, it will count as murder. Finally, one does not simply jam sensors from across the planet. You need a physical presence on site, and that's riskier than a remote hack.
Self-driving cars can make it easier to do this systemically, and not even purposefully. By jamming sensors I meant that another car passes by with a radar jammer on, for its intended purpose (jamming police radar guns used to catch speeders), and it hits the self-driving car too.
A lot of the current driving issues do suck, but they still will suck with self-driving cars, as any computerized system has its own vulnerabilities. It's just they might be systemic and affect a lot more people in ways they can't compensate for.
Yeah, let's just ask the engineers who are solving one of the toughest robotics problems ever to rewrite the entire house of cards that is their technology stack without any vulnerabilities.
Of course even machine checked proofs will have trouble finding vulnerabilities that creep in during the specifications stage or exist in the realm of hardware.
How do you isolate your sensors from external input and expect to do anything?
It's not as if embedded systems work the same way as the jumbled mess of technology that web servers use. It's perfectly possible to firewall off core system functions into a separate network and separate individual processors for each function. Existing vehicles today already have a strict hardware firewall between the CAN bus[1] and the IVI[2] system to prevent bugs in the more vulnerable and less strictly designed IVI software from sending data that disrupts safety-critical systems.
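A toy sketch of that gateway idea, with invented CAN IDs (real gateways are dedicated hardware, not Python, but the whitelist principle is the same):

    # Whitelist firewall between the infotainment (IVI) side and the
    # safety-critical drive bus.
    ALLOWED_FROM_IVI = {
        0x244,  # e.g. a climate-control request
        0x3D0,  # e.g. radio volume for the instrument-cluster display
    }

    def gateway_forward(frame_id, payload, send_to_drive_bus):
        # Drop anything the IVI has no business sending to the drive bus,
        # such as braking or steering command IDs.
        if frame_id in ALLOWED_FROM_IVI:
            send_to_drive_bus(frame_id, payload)
        # else: silently drop (and ideally log the attempt)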
> Yeah, let's just ask the engineers who are solving one of the toughest robotics problems ever to rewrite the entire house of cards that is their technology stack without any vulnerabilities.
I did say "very expensive". But yes, if we're serious about correctness, safety, and security, I don't see any other choice. We should scrap the crap and spend the $billions necessary to rebuild it right.
> Of course even machine checked proofs will have trouble finding vulnerabilities that creep in during the specifications stage or exist in the realm of hardware.
I'm no hardware specialist, but you can still prove the correctness of the design of such and such hardware. The actual chips can still fail the specifications, but that's easier to test once you know the design itself is correct.
Also, specifications themselves can be checked. Only the high-level properties must ultimately be decided and reviewed by hand. Not that they won't be complicated or numerous, but that's still much smaller and easier to deal with than the entire implementation.
> How do you isolate your sensors from external input and expect to do anything?
I wasn't talking about the normal sensor input, which of course can be tricked the same way human sensory input can (for instance with a big flash or something). I was thinking about sensor command, such as which way they should be oriented or something. Though I expect most sensors will have no such input, and will only output to the system.
> The correct answer to that is very simple, and very expensive: just make correct software.
I used to think that was unlikely. Then I had a few discussions with people about the state of C, and the things that could be done to make it default to slightly more deterministic behavior by changing how undefined behavior is dealt with in regard to optimization. Now I think it's impossible, because nobody is willing to give up even theoretical unknown performance increases for more security. At least for C. You would have to use something that's much more strict about behavior, like Ada with SPARK, or possibly Rust.
I'm willing to bet most of the code already written for these projects is C or C++. Good luck getting that changed if so.
Forcing a change away from C/C++ is easy: regulation. If a software error (or vulnerability) kills anyone, the vendor is to stop all sales and recall all cars immediately. Sales can resume pending an evaluation period, and the evaluation is to be made by independent experts, at the expense of the vendor. (We should heavily imply that such an evaluation can take months, killing the company in the process.)
Correct software is expensive. But if we make sure incorrect software is even more expensive, we'll get correct software.
Once the industry is forced to get serious about correctness, they will move away from the C/C++ minefield real quick - or at least come up with safe ways of using C and C++.
> The correct answer to that is very simple, and very expensive: just make correct software
Correct software can be compromised via the update process, or social engineering. It's necessary, but not sufficient to ensure integrity.
A cheaper, correct option would be a mechanical lever/breaker labelled "manual override" for licensed drivers or "emergency stop" like they have on every industrial robot or heavy machinery since the 60's. Sometimes the answer to a software problem isn't more/better software.
There are no perfectly safe computers... even when air-gapped, they can get compromised. But one problem with modern cars is that they are connected whether you like it or not.
Since 1970, total airplane casualties have halved, while there are now 7 times the number of passengers. Much of this improvement has been due to taking decisions away from pilots and putting them in the hands of 'unfeeling' automated systems.
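Using those round figures, the implied per-passenger improvement (my arithmetic):

    casualty_ratio = 0.5   # total casualties halved since 1970
    passenger_ratio = 7.0  # 7x the passengers
    print(passenger_ratio / casualty_ratio)  # ~14x safer per passenger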
But commercial airliners were never intended for private ownership, right? I would say a significantly large proportion of the world is unable to own a large bus and these are quite manual still.
In all seriousness, ransomware that would kill someone won't be a remotely common thing - for one thing, it will be represented as "$COMPANY's lack of security KILLS PERSON X", and the bill for the ransom will need to be paid by them.
And then the cops/secret-service would get involved. If there's a death involved (and reason to believe more will occur in the future, which would be necessary for ransom as a prolonged business model), then they will come down on any ransomware (and responsible negligent parties, such as auto companies with shitty security) like a TON OF BRICKS.
Plus, there's still a lucrative malware model that goes like this: Infect the car, then quietly make it steal itself in the middle of the night (or at your moment of choice). Bam, you now have a stolen car that you can't be connected to, and nobody will notice for a while.
I think this is correct. Criminals aren't stupid. It'd be much safer (in terms of risk of authorities coming after them) to have ransomware where the car just refuses to go anywhere, or drives you to a far-away location, until you pay up.
People will pay a fair amount of money to avoid a major inconvenience, but "my car won't start" is a lot less likely to have the FBI beating down your door.
It can actually be a lot worse - entire countries could be held ransom when car networks get compromised. Beyond disabling a country's transportation ability, a remote hacker could weaponize every car to attack passengers and pedestrians.
what is the validation strategy to show where driverless cars can quantifiably improve?
One big way in which automated drivers could improve traffic: by not being a jackass. I remember at one point driving down Westheimer, which is four lanes wide and one of the major thoroughfares of Houston, when I had to slow down for someone making a right turn into a middle lane to merge into traffic while their head was buried deep in the passenger-side footwell, looking through their stuff.
Then, there are the people who feel like they have to tailgate you within 6 feet during rush hour traffic on the highway. Also, the people who won't let you in for some personal justice you can't possibly understand.
It would only take a smallish fraction of cars implementing the "stay between" algorithm that CGP Grey mentions in his video to significantly improve traffic.
I've implemented this algorithm manually. (Much easier to do since I have the instant accelerator response of an electric car.) It does seem to improve traffic flow. Also, jackass tailgaters are sometimes confused by this, and decide to pass.
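For the curious, my reading of the rule from the video boils down to something like this (a toy sketch with a made-up gain, not anyone's actual traffic model):

    def target_speed(current_speed, gap_ahead_m, gap_behind_m, k=0.5):
        # Aim for the midpoint between the car ahead and the car behind:
        # speed up a little when the gap ahead is larger, slow a little
        # when the gap behind is larger. This damps stop-and-go waves.
        imbalance = (gap_ahead_m - gap_behind_m) / (gap_ahead_m + gap_behind_m)
        return current_speed * (1 + k * imbalance)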
> what is the validation strategy to show where driverless cars can quantifiably improve?
That seems like an odd question to ask. There are huge reams of traffic safety data collected every year by the NTSB and others. Traffic accidents are one of the top two or three treatable public health issues in the modern world, and they get very significant public funding for their study.
Honestly I think the question has to be the reverse: what is it about "driverless" safety data that makes you think it won't be well-measured by the existing "validation" regime?
I feel like the question is not if "driverless vehicles = safety" but when. So, all other things being equal, the quicker we get to that point, the more lives we can save.
Obviously not all other things are equal, and potentially we could create greater loss of life by attempting to get self-driving cars on the road too quickly. But that is the nature of the problem: how do we avoid moving too quickly while recognizing that the current danger that is human drivers should be removed from the equation as quickly as feasible?
Is Waymo clear about what the end goal is? I can't tell if they plan to launch an Uber like service, or a direct-to-consumer lease type service, or just to be suppliers (whole vehicles, components like lidar, software) to companies like Uber. Or maybe they are leaving all of that undecided for now?
I'm not sure this rider trial thing strongly signals any of them. You would want real world end-customer experiences regardless, I would think.
I wonder this about autonomous cars in general, from any company. They're cool tech, and in Europe they can really help solve several last-mile problems and cut down on car ownership (I really don't think people should be able to own self driving cars. They really need airplane style maintenance for sensors and companies need to work together so all fleets have the latest safety and security updates).
However, I don't really like hearing about tax money going into these projects. At least in America, self-driving cars won't solve gridlock. It will be 15-30 years before we can have autonomous-car-only highways, and even then, self-driving cars don't come close to the capacity of a real train-based mass transit network. Singapore has had self-driving trains for years (it's a much easier problem), and London is automating more of its lines.
Self-driving cars are cool tech, but they're not going to solve gridlock or many of the major transport problems we face today.
Self driving cars solve the last mile problem. You get off a train close to your destination and a cheap self driving taxi finishes the journey.
And you could replace the train with a self-driving bus. It can carry multiple people efficiently for the majority of their journey. And unlike current buses or trains, it can automatically adapt its route based on where the occupants want to go. The buses can cooperate with the taxis to bring people to and from convenient pick-up points.
The main problem with gridlock is that there is no limit to the number of vehicles on the road. There needs to be an economic advantage to things like buses that transport multiple people. A high tax on using the road in the city might work, if it could be enforced.
Turn major streets into toll roads that are free for self-driving vehicles. It becomes a small tax on drivers and will convince them to use self-driving vehicles to save money. Work with companies so that self-driving buses arrive more frequently but only follow specific routes, while self-driving cars arrive less frequently but can take you anywhere you need to go. People who are more time-pressed will take the bus. This would require cooperation between competing businesses and the city government, though. Or a government-allowed monopoly / being made into a public utility.
The benefits would be:
1) It concentrates traffic on minor roads, encouraging people either to fund the city via road tolls or to start using self-driving vehicles to escape the traffic, which is now worse for human drivers.
2) It gets human drivers away from self-driving vehicles on major roads, which should reduce accidents. Fewer accidents means less traffic and more throughput.
Note: I don't think this would be realistically implementable. The regulations alone would be a nightmare to try and get passed.
> I don't really like hearing about tax money going into these projects
It might be better to think of it as an investment in public health rather than public transportation. Preventing the huge number of deaths caused by human drivers is more important than speeding up traffic.
Trains? Trams? Good rail networks in cities would greatly reduce the amount of drunk driving. It's simple tech. It's been around for decades. It works, and Americans block it at every chance because ... I still don't understand why.
Right, I didn't ask if other options existed but if Americans would choose them. You seem to have answered that question.
I think one reason Americans don't is because they like autonomy and comfort and cars are more autonomous and comfortable than other modes of transportation.
The problem was caused by the underlying political situation - namely, people didn't necessarily want to ban it; there was just a very ruthless political organisation that essentially destroyed any politician who opposed the bans, regardless of what the constituents wanted. Politicians were forced to pass it.
If the alcohol ban had arisen organically - namely, as a genuine result of democratic support, the prohibition wouldn't have been such a spectacular failure.
> If the alcohol ban had arisen organically - namely, as a genuine result of democratic support, the prohibition wouldn't have been such a spectacular failure.
Isn't that begging the question? If everyone had supported it, then everyone would have supported it? If you want to ban alcohol to save lives, what needs to be done is still the same... getting everyone to support it.
And we still have trouble with people using drugs today with popularly supported bans. War on Drugs, etc.
> if you take out alcohol and cell phones, you've reduced 70-80% of driving related fatalities with those two alone
I don't believe that's supported by data. A quick search finds alcohol involved in ~30% of all fatal crashes and cell phones involved in ~20% of all crashes (though probably significantly underreported). Even boosting the cell phone rate by quite a bit, there will still be considerable overlap between the two groups, so likely nowhere near 70% in total.
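The inclusion-exclusion arithmetic behind that (using the rough shares from the comment, not precise statistics):

    alcohol = 0.30     # share of fatal crashes involving alcohol
    cell_phone = 0.20  # share involving cell phones (likely underreported)

    no_overlap_bound = alcohol + cell_phone      # 0.50 at absolute most
    with_overlap = alcohol + cell_phone - 0.10   # e.g. if 10% involve both
    print(no_overlap_bound, with_overlap)        # 0.5, 0.4 -- nowhere near 70-80%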
The root cause is that we're letting a bunch of primates steer fast moving, heavy metal objects around a highway (and really the fact that we easily distract ourselves and make bad decisions about drinking and driving is a great example of why the whole thing is generally a bad idea).
Haven't really thought about the safety and maintenance part of this before. If people own their self-driving cars I suppose there's also a chance they can opt-out of software updates, which would potentially be very dangerous. I do believe that the autonomous car future will have room for owned vehicles, but at this stage I don't think any of us really know how it will shake out.
> If people own their self-driving cars I suppose there's also a chance they can opt-out of software updates, which would potentially be very dangerous.
Auto insurance companies would probably have a clause saying that their policy is void if the vehicle isn't updated within X days of the software update becoming available and that would be the end of that.
Brings to mind the situation the SodaStream company is in in most countries: pressurized gas canisters aren't safe for untrained people to do maintenance on (e.g. refill) themselves, so it's mostly illegal to own them without a license. Thus, the company operates a 'club' that lends them to people, so that the people are never the legal property-owners of the bottles. But this allows the 'club' to be able to recall the bottle at any time, and since it's not yours, you have to comply.
I guess "DRM" for self driving cars is something people will complain about, but for vehicles that have controls it doesn't seem real horrible to have an autonomous system do a check to see if it is still licensed.
There could be an emergency override if the person that needed the vehicle didn't know how to drive, an override that automatically issued a fixit ticket.
The thing is that today the average age of cars on the road is something like ten years. Many people don't buy new vehicles, and many can't afford to. They may not have the money to deal with that fixit ticket, which probably has to be handled by a licensed service station with a lot of fancy equipment.
So, if all this comes to pass, you'll end up with a system where the wealthy lease or ride-share their self-driving cars while the plebes are stuck with trying to keep their manually driven clunkers running.
However, if autonomous vehicles are just for the wealthy end of society whether directly or by way of limo and taxi services, they're not going to be particularly welcomed by everyone else (aka the majority of voters).
> I really don't think people should be able to own self driving cars. They really need airplane style maintenance for sensors and companies need to work together so all fleets have the latest safety and security updates
Couldn't this be solved as easily as your car telling you that while you're at work it will head over to the repair shop to get its sensors checked?
Even if vomit isn't a common occurrence, there's wear and tear and empty soda cans and it means you can't leave stuff in your car.
The broader issue is that, if this sort of thing were to become commonplace someday, why wouldn't it just become a race to the bottom where your income/mile =~ operating costs per mile. (Or, even more likely as with Uber today for many people, operating costs per mile as calculated by a lot of people who don't really understand their full operating costs.)
Maybe fewer people will buy cars and more will use robo-taxis. But loaning out your car to compete against fleet operations does not sound like it would ever be a good deal.
IMHO, it may be a little short-sighted to look at Waymo and self-driving cars in isolation; I believe it fits into the broader DL-AI initiatives within Alphabet as yet another DL-AI application. Whether it becomes an end in itself or simply feeds into other DL-AI initiatives is most likely still undecided.
I think it's safe to assume that, while self-driving tech will have broad-reaching DL-AI applications for Alphabet, there is a plan to directly sell the self-driving tech somehow. Otherwise why create the brand Waymo? If it was simply left as experimental Alphabet/Google research I think you would have a stronger point.
I think they're still early enough in the process that they want to leave their options open. I'm sure they're going to be capturing a lot of information about how the riders, community, and press handle this first step — and use that to determine where they want to go from there.
"We're at the point when it's really important to find how real people, outside the Google environment, will use this technology," said John Krafcik, Waymo's chief executive officer. "Our goal is that they will use this for all their transportation needs."
It seems they are split on whether car ownership or ride-sharing will be more viable:
"Yes, self-driving technology makes sense for ride-sharing," said Krafcik, [...] "It also makes sense for personal car ownership." Transportation to and from transit hubs and logistics also made his list.
I have a lot of respect for Waymo's approach to this.
While Tesla and Uber have both just recklessly (imo) jumped in and started setting loose self driving cars and making bold claims, Google/Waymo has really taken a slow and measured approach and given great care to making sure their cars are actually safe.
Actually I think Waymo is too cautious. Tesla has thousands of customers effectively acting as test drivers for their autonomous software, which allows them to collect data at a much faster rate and discover more 'edge cases' than with a more cautious test environment.
33,000 people die each year on US roads and self driving cars offer the chance to dramatically reduce that figure over time. The more aggressively we can test self-driving software now, the faster the software can be improved.
So as long as the accident rate for autonomous Teslas is initially no more than for human drivers, the Tesla approach will lead to fewer deaths in the long run.
Tesla no longer sells cars with automatic driving capability. The hardware is there on newly shipping cars (although the sensor suite is weak) but it's not active. There's a lawsuit.[1]
It's unlikely that Tesla gets much useful data from vehicles. To debug a vision system, they'd need the camera data from all the cameras, and that's too big to upload over the cell phone data link.
They don't need all the vision data. Most likely they're running their beta self-driving features in the background all the time, and only sending data back (1) when the driver's input differs dramatically from what the self-driver would have done itself, presumably because the driver saw something the software missed, and/or (2) when a driver gets into a crash (at which point they can check whether their algorithm would have prevented the accident).
They just need to send back data from a few seconds before either of those scenarios to quickly accumulate a giant library of one-in-a-million edge cases they can test future algorithm tweaks against. I think it's a reasonable strategy.
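A minimal sketch of what that "shadow mode" trigger logic might look like. This is entirely speculative: the buffer size, the threshold, and the shadow_model/upload_clip names are all invented for illustration, not Tesla's actual system:

```python
from collections import deque

BUFFER_SECONDS = 10         # assumed retention window for the rolling clip
FRAMES_PER_SECOND = 10
STEERING_THRESHOLD = 0.5    # assumed disagreement threshold (normalized units)

# Rolling buffer of recent sensor frames; old frames fall off automatically.
recent_frames = deque(maxlen=BUFFER_SECONDS * FRAMES_PER_SECOND)

def on_frame(sensor_frame, driver_steering, shadow_model):
    """Run the self-driving stack in shadow mode: compute what it *would*
    have done, and flag the buffered clip only when it disagrees with the
    human driver."""
    recent_frames.append(sensor_frame)
    planned_steering = shadow_model.predict(sensor_frame)  # hypothetical API

    # Trigger 1: the human did something dramatically different from the plan.
    if abs(driver_steering - planned_steering) > STEERING_THRESHOLD:
        upload_clip(list(recent_frames), reason="divergence")

def on_crash_detected():
    # Trigger 2: airbag/impact event; the clip shows what led up to it.
    upload_clip(list(recent_frames), reason="crash")

def upload_clip(frames, reason):
    # Compress and send only this short clip over the cellular link,
    # rather than streaming all camera data continuously.
    ...
```

The point is that only a few seconds of data around each interesting event ever leaves the car, which fits within a cellular data budget.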
Thanks. I found that when they transitioned to Hardware2/AP2 they cut out most autopilot functionality, and since then have been gradually releasing updates that bring back that functionality.
If you believe that self driving cars will make roads much safer you should be for lots of testing and a measured approach. It only takes a few more high profile accidents before society turns against them.
Self driving cars also offer the chance to dramatically increase that figure. They have failure modes that do not exist with human drivers. Cautious is good.
Arguably. Tesla has not done a good job educating their customers on the limits of their "Autopilot" system, which has led to at least one preventable death.
Is it possible to extrapolate from the recorded reduction in accidents after the introduction of Autopilot and work out whether fewer people have died as a result of introducing it when they did?
Here's an article which describes a 40% drop in the accident rate:
The NHTSA's figures show a drop from 1.3 crashes per million miles before Autosteer to 0.8 crashes after Autosteer.
Would any of those crashes have been fatal? I don't have the numbers to answer that but I posit that introducing Autopilot has prevented more deaths than it has caused.
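For what it's worth, the arithmetic on those two rates checks out (a trivial sketch, using only the figures quoted above):

```python
before = 1.3  # crashes per million miles before Autosteer (NHTSA figure)
after = 0.8   # crashes per million miles after Autosteer

reduction = (before - after) / before
print(f"reduction: {reduction:.0%}")  # 38%, i.e. the reported ~40% drop
```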
The utilitarian approach is such an interesting way to frame problems like this. It's obviously wrong for me to murder my neighbor for their organs, even if those organs could save five other lives. But is it okay for a car company to roll out an AI program if it saves more lives than it costs? When viewed from afar, the utilitarian mindset is always so alluring.
I think you're making a bit of a strawman here that people usually do with respect to utilitarianism. The utilitarian answer with 5 people on an island, where 1 is a doctor, 3 need organs, the remaining person has available organs, and the 3 are necessary to keep the group alive -- is to kill the one person and take their organs.
However, in a society where people can observe the actions of others and form motivations in response to policies, you'll find that because society reacts fairly poorly to organ harvesting, and because organ harvesting is implausible to do at scale without extra bad things happening, the utilitarian solution is actually not to go about doing it.
Only a naive utilitarian wouldn't try to also remain consistent with something like a Kantian imperative of global self-coherence.
Now, as for cars and testing self-driving on real folks, well, this may be something where the water is pretty murky. I think that society will react poorly enough to early bad events in self-driving that a measured approach is actually the best for saving lives in the long term.
The critique here shouldn't be that "well, utilitarianism sure looks good from afar, but would you murder your neighbor?" It should be "The problem is too difficult to address with utilitarianism because it involves complex societal factors and responses."
It's a bit contrarian, but if you will lose 100% of your population by not sacrificing 10%, then your society is doomed in a way where normal ethics can't really apply. The guy refusing to give up his organs is going to be condemned as the naysaying nihilist who "wants us all to die".
Well, every time this discussion comes up, HN spends >100 posts splitting hairs as to what calling their system 'autopilot' implies.
Given that the technology is useless if you use it as 'intended' (Be aware, and in control of your vehicle 100% of the time[1]) I don't think this debate will be settled anytime soon.
[1] My 97 Avalon drives just fine when operated under those conditions.
Not to mention basing this on a single data point where a fatality occurred. How about the tens of thousands of other drives vs standard? That would at least be statistically relevant.
Even though what they released isn't a self-driving car but a glorified cruise control. I personally don't really see how that is reckless at all.
Which is kind of funny, because every pilot knows the limitations of each kind of autopilot. I think a lot of it is the public's expectations of that kind of name.
The large print says to keep your hands on the steering wheel and be prepared for the autopilot to fail or disengage at any time. I don't see how this is promising a self-driving car?
There was only one accident I'm aware of in which the Waymo vehicle was at fault; the vast majority have been humans doing stupid things like rear-ending them at stop lights and t-boning them by running a red light.
"At fault" is one relevant metric, but not the only one. An autonomous vehicle should at least meet, and ideally surpass, human-driven ones in avoiding accidents for which they're not at fault as well.
I'd be curious to know if Waymo vehicles experience more, less, or the same number of not-at-fault accidents per mile driven as human-driven ones.
That's a tricky one. Avoiding an accident for which you are not at fault could easily cause an accident for which you are at fault. Human drivers have a pretty bad record with that particular situation.
Also, there are some network effects at play here, the higher the percentage of self driving cars the more rare that situation should become.
I agree that there's a tricky calculus involved in some scenarios, but I think there's a much wider domain of not-at-fault scenarios where autonomous vehicles should be able to surpass humans unambiguously. Example: an autonomous car being tailgated by a human-driven car, with a stop sign or light up ahead. Different braking patterns have different likelihoods of causing an accident.
Autonomous cars really have to be able to handle these scenarios at least as well as, and ideally better than, humans: there's no complicated split-second balancing act to worry about.
> "How do you avoid being rear ended at a stoplight?"
The same way defensive (human) drivers have been doing for a long time: leave room between you and the car in front of you. Gauge cars coming up behind you and roll forward if they need extra room to stop.
It does not prevent 100% of rear-endings, but it prevents a large portion of them. Most rear-enders are not cars plowing full-speed into you; it's someone misjudging their braking and not being fully at rest when they needed to be. Leaving some margin for this kind of error helps avoid the whole incident.
Change lanes, allow the other driver to overtake, start braking a little earlier and more gently, touch the brakes to flash the brake lights and let the driver behind know that you are stopping soon. All ways I deal with tailgaters in different situations - it's a judgement call though.
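To put rough numbers on the "brake earlier and more gently" tactic, here's a toy kinematics sketch; the speed and deceleration rates are invented for illustration:

```python
# How much reaction time a gentle, early stop buys the driver behind you.
# All figures are illustrative, not measured values.
speed = 20.0       # m/s, roughly 45 mph
hard_decel = 8.0   # m/s^2, a late panic stop
soft_decel = 3.0   # m/s^2, braking early and gently

def stop_time(v: float, a: float) -> float:
    """Seconds from first brake light to standstill at constant decel."""
    return v / a

print(f"panic stop:  {stop_time(speed, hard_decel):.1f} s")  # 2.5 s
print(f"gentle stop: {stop_time(speed, soft_decel):.1f} s")  # 6.7 s
```

Stretching the stop from 2.5 to 6.7 seconds gives a tailgater several extra seconds of brake lights to react to.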
That also could be because autopilot is only usable on highways (IIRC) and the fatality rate per accident on highways is much higher than standard town/city driving.
Either way there's not nearly enough data to draw conclusions from.
There are a lot more Tesla cars than Google cars.
And I don't think there was a single death caused directly by Autopilot (not 100% sure though).
Also, autopilot is safer, so it actually _saved_ lives.
Yes, I know about that white truck death. By 'directly' I meant situation when autopilot 'went crazy' because of some bug and sent your car in a spin on a highway etc. Probably should have used different wording.
I imagined as much. Even so, "autopilot sent your car in a spin" vs "autopilot ran into a truck" is a distinction without a difference for the people inside the car.
Tesla warned that the driver should still stay alert, implemented warnings so the driver would keep hands on the wheel, etc. The NHTSA reported that the driver had 7 seconds to react to the truck. In a spin situation caused by some bug, Autopilot would create the problematic situation itself, even if the driver were paying attention to the road.
Yeah, the problem was letting people get the expectation that it was an autopilot when it wasn't. Decades of Human Factors research in aviation had already established that people can't maintain focus on a sufficiently-monotonous task.
> The outfall is that it led to bad PR for self-driving cars.
Bad PR? Example? I haven't seen much negative spin on self-driving cars in mainstream coverage recently. If anything it's been highly optimistic while still being honest by mentioning the difficulties the companies are trying to overcome.
I don't think people are underestimating the challenge here, even the layman. But the potential payoff when it does work would be dramatic: safer, more environmentally friendly, and more efficient with human time.
Given that the current state of driving is highly dangerous, we'd benefit from iterative progress towards self-driving cars, along the lines of what Tesla is doing. Controlling risks doesn't have to mean holding tech back until it's perfected.
>While Tesla and Uber have both just recklessly (imo) jumped in and started setting loose self driving cars and making bold claims
Do we know how many lives have been saved from Tesla's autopilot? I've only seen anecdotes but I get the impression that it's already a big net win for safety.
Killing a few extra people today to bring self driving cars to market faster could actually save more lives in total. I have much more respect for Tesla for shipping something.
The system that killed people didn't represent a point of progress for their self driving tech. They decided to abandon the hardware package it was using.
Their current implementation doesn't seem to be learning any higher-level behaviors (there's a YouTube video of a guy using it in a park where it repeatedly accelerates towards islands in the road and then flails off the road as it goes too fast around the island).
The problem seems to be that Tesla's system is extremely dependent on a nice clear white line at the outer edge of the road. On this road, near the traffic islands, the roadside grass sometimes overgrows the road edge and obscures the white line. The white line has also been scuffed by cars near the edge of the road. [1] The Tesla runs off the road in that situation.
There are two kinds of self-driving. One came up from the DARPA Grand Challenge, which was off-road. That kind first looks at the terrain and obstacles, and figures out where it can physically drive safely. There's LIDAR profiling of terrain. Then it looks at road markings and figures out where it's supposed to go. If the road markings lead into an obstacle or drop-off, it will stop or go around the obstacle. That's Waymo.
The second kind came up from lane-keeping and auto-brake systems. Those are very dependent on highway markings, and only work right on freeways. That's Tesla's original system.
It's hard to tell about the others. Volvo has an extensive sensor suite. Otto seems to be mostly a lane-keeping system for freeways. Uber hasn't released much detail.
Imagine if the video didn't have the friendly European accent and patches of green. Imagine if that was a real life scenario and the greens were a hazard.
I wonder if there are more such videos that show Tesla Autopilot freaking out. It would be an eye opener, Tesla's stock price is dependent on Autopilot among other innovations.
Using a billion miles of real world experience to decide that that technology was insufficient is, in fact, progress.
It may have gone the other way. Remember, the primary sensor technology in 99.999% of cars out there is two front-facing visible-light cameras pointing the same way on a swivel and a couple of mirrors.
> Killing a few extra people today to bring self driving cars to market faster could actually save more lives in total. I have much more respect for Tesla for shipping something.
There's no evidence to suggest that is the case. If anything, Tesla has made things more dangerous by letting people falsely assume the system is autonomous and safe.
When you create new drugs, the FDA definitely doesn't look too kindly on people who think like Silicon Valley. "Killing people is product testing" won't fly in pharmaceuticals.
I'm glad that we don't have people like you designing drugs for the masses!
What's the equivalent of animal trials for autonomous vehicles? Closed-circuit tests?
Is there any evidence that these companies haven't gone through closed circuit tests and passed before testing in public roads?
The next step in drug trials is, in fact, testing on humans and some do suffer serious injury or death. I don't think there's any way around it and the greater good of having a pharmaceutical industry at all outweighs those unfortunate incidents.
IIRC, 30-40k people die every year in car crashes in the US. Based on the real-world testing Waymo has done over the last few years (where they tested the cars, but had a (not-driving) driver in the driver's seat at all times and weren't doing any commercial service), that figure would drop to essentially zero.
There's little evidence that anything Tesla has deployed is an important step towards viable self driving vehicles.
They are apparently mapping the locations of odd situations, but that is pretty meh (and doesn't require the mapping system to be in control of the vehicle).
Definitely, Waymo is ahead of everybody, but the race is closer than one would think. In addition to Google and Uber, there are about a dozen smaller players with competitive technology.
For example, nuTonomy, the start-up I work at, deployed a similar trial in Singapore in August 2016. (btw, we are hiring in everything!)
It's not a "race", it's more like a "marathon": there is a big difference in making something that works 99% of the times (sufficient for a trial like the above), and something that works 99.9999...% of the times (a product that can be actually deployed).
I would argue Waymo is way behind everyone else, but the fundamental point is that companies like Tesla and Comma.ai and most of the car companies have a totally different approach than Google.
Namely, Google relies heavily on their mapping services to make their cars work. This makes a lot of sense for Google, because if self-driving cars require their mapping data, there's a big new market for them. The maps they run on are significantly more detailed than the public-facing Google Maps/Street View. They work in Phoenix, AZ, Mountain View, CA, and Austin, TX and like... nowhere else. (Was there one other city in Oregon, maybe?)
As an aside, note the "As an early rider, you’ll be able to use our self-driving cars to go places you frequent every day, from work, to school, to the movies and more." I am curious if this suggests you have to tell Waymo in advance a set list of destinations they can ensure work correctly or something similar, or if I'm reading too much into it.
Tesla, Comma.ai, etc. are not relying on special map data as much as they are relying more heavily on road signals and lane lines and such, and then having machine learning decide how to navigate them locally.
While they may not have the same driving record, everyone else's approach works nationwide (and wouldn't be exceptionally hard to extend to a global scope, presuming you taught them different countries' markers and signage), whereas Google's approach currently does not scale.
The Waymo cars have more sensor data and do more local computation than anything else out there. Instead of saying Waymo relies on map data, I'd say they have a defense-in-depth solution that integrates a multitude of different sensors and databases.
Do you want a system that only works on forward-facing vision? Or only LIDAR? Or only GPS and road databases? Or only machine learning? We know all of these have holes and blind spots, and a safe system would have redundancy.
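As a toy illustration of that redundancy argument - not any vendor's actual architecture - a defense-in-depth system might gate a braking decision on agreement between independent channels, so a blind spot in one channel can't blind the whole system:

```python
# Toy redundancy sketch (illustrative only; not any vendor's real design).
# Each channel independently reports whether it sees an obstacle ahead,
# plus a confidence score. A single blind channel can't veto the others.
from dataclasses import dataclass

@dataclass
class Detection:
    obstacle: bool
    confidence: float  # 0.0 - 1.0

def should_brake(camera: Detection, lidar: Detection, radar: Detection) -> bool:
    channels = [camera, lidar, radar]
    positives = [c for c in channels if c.obstacle]
    # Two independent channels agreeing is a strong signal...
    if len(positives) >= 2:
        return True
    # ...but fail safe on a single high-confidence hit rather than ignore it.
    return any(c.obstacle and c.confidence > 0.9 for c in channels)

# Example: camera is blinded by glare, but LIDAR and radar still agree.
print(should_brake(Detection(False, 0.2),
                   Detection(True, 0.8),
                   Detection(True, 0.7)))  # True
```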
I think the actual distance between Waymo and those that evolved from lane keeping is far bigger than people realize. The biggest danger is people shipping MVPs that kill someone or run into a school playground; if that happens, Congress will ban these cars or regulate them so heavily that progress will be slowed.
Surely your employer would state or demonstrate that their cars are capable of a long-distance trip if they were technically capable of one.
Bear in mind, with this announcement they aren't even committing to "it works in the Phoenix metro area". There's that "parts of" phrasing, which indicates that only parts of the area are mapped and hence the cars are only capable of operating in parts of the Phoenix metro area.
If you have evidence to the contrary, please, by all means, feel free to share.
You're making lots of evidence-free statements, driven by your anti-Google bias, that have little basis in reality. You seem to think that Google is using less machine learning than Tesla. Think about what you're saying: Google, the company which uses machine learning for everything, which has long demonstrated the ability to read street signs and crowdsourced massive ML data on them using reCAPTCHA, and which makes purpose-built self-driving vehicles, is doing less than Tesla, whose self-driving capability was an afterthought and whose vehicles were not designed from the get-go for Level 3 or 4 capability.
How would Waymo possibly build a car with Level 3 or 4 capability if they weren't doing a hell of a lot of onboard visual-field processing? Accurate map data won't let you deal with bicyclists or pedestrians properly, and it won't deal with all kinds of hazards.
There's a reason Waymo cars are festooned with LIDAR and cameras and other HW in a giant boxy minivan, and it isn't purely for show to make them ugly on purpose.
Trying to accuse me of "anti-Google bias" is amusing when you are literally paid by Google, and have an inherent conflict of interest/bias of the highest order. Pot, meet kettle. Although, really, I'm not either because I've never been paid by any company with stakes in this discussion. I know you can do better than ad hominem.
I am saying there is no evidence, or even a claim by Google, that their cars are capable of functioning outside a pre-mapped area. You have presented zero evidence of it whatsoever, and are trying to use Google's marketing-speak about how fancy their machine learning is to insinuate that I should assume Google's technology is more advanced than they can demonstrate.
You are highlighting particular capabilities of their collision avoidance on that mapped area to suggest they don't need a mapped area, which is also a non-supported claim. It is entirely plausible, and in fact, likely, that Google's visual mapping for collision avoidance is sophisticated, and yet still fully dependent on a map as a baseline.
Google's marketing claims continue to be extraordinary, and often untrustworthy, and you're going to have to do better than that.
Find me any, literally any, evidence that a Google self-driving car can operate outside the very tiny fenced-in areas Google says they can operate in. As far as I can tell, there is none, because you are making a claim that even Google itself is not making about its cars.
Google's marketing claims are extraordinary for running careful, incremental rollout trials? What company is shipping 'Autopilot' software to drivers today that has actually killed someone, doesn't function in less-than-perfect highway conditions, whose own fine print says basically not to use it as its name implies, and which faces numerous calls for it to be disabled?
What does it mean "cars are capable of functioning <in an area>"? What does "Functioning" mean?
Tesla Autopilot doesn't function off highways, and barely functions on highways. So you think someone should get credit for functioning at scale everywhere if they ship a consumer product with no restrictions that fails badly when people actually try it? You're comparing a broken product to one that purportedly works, and calling the latter smoke and mirrors?
I love Tesla cars and I plan to buy a P90D, but in my opinion their whole self-driving program is incredibly reckless. (https://www.youtube.com/watch?v=fQxIhMBKblY) Waymo's approach is careful, over-engineered, defense-in-depth. Slow by Silicon Valley standards, but you're dealing with public safety. Yes, they use maps as one of their sources of truth - they'd be reckless not to - but they also use LIDAR and vision systems with machine learning, because of course no map can be real time.
Tesla is trying to sell an upgraded lane-keeping system as a self driving system. Maybe you should be more concerned about that.
I can only say what is public already, but you can look at the disengagements data to see Waymo cars are three orders of magnitude better than their competitors.
I don't disagree Tesla's marketing is dishonest as well. But if Google's system only works on predefined destinations, essentially within a controlled environment, it isn't even really in the game.
If the lane lines are hard to read in a part of Phoenix, Google can ask for them to be repainted before approving the cars for that area but everyone else just has to assume bad lane lines are something they'll contend with.
Is there an NDA these early riders will have to sign?
Do they have to provide a list of destinations ahead of time?
I feel this announcement makes it seem like these cars are ready for public use. But I don't see the evidence to suggest they really are. And as you indicate, Googlers aren't talking.
Autoplaying video. Here's the full text so you don't have to endure that:
-----------------------------
After almost a decade of research, Google's autonomous car project is close to becoming a real service.
Now known as Waymo, the Alphabet Inc. self-driving car unit is letting residents of Phoenix sign up to use its vehicles, a major step toward commercializing a technology that could one day upend transportation.
For the service, Waymo is adding 500 customized Chrysler Pacifica minivans to its fleet. Waymo has already tested these vehicles, plus other makes and models, on public roads, but only with its employees and contractors as testers. By opening the doors to the general public with a larger fleet, the company will get data on how people experience and use self-driving cars -- and clues on ways to generate revenue from the technology.
"We're at the point when it's really important to find how real people, outside the Google environment, will use this technology," said John Krafcik, Waymo's chief executive officer. "Our goal is that they will use this for all their transportation needs."
Waymo is letting people across parts of the Phoenix metropolitan area apply for the service as part of an "early rider program." Initial users will be able to book Waymo's minivans using an app, but won't have to pay. Dollars will flow eventually, Krafcik said, yet he declined to share details. The company is signing up hundreds of people with diverse backgrounds and transportation needs.
Google is a pioneer in autonomous cars, launching its research program in 2009. After mostly ignoring the project for several years, the auto industry has recently rushed to catch up, pumping billions of dollars into similar technology and engineering talent. A bevy of newcomers have joined too, including some founded by former Waymo engineers, making the field incredibly competitive before anyone has made money.
Uber Technologies Inc. has emerged as a particularly bitter rival. Last year, autonomous vehicles run by the ride-hailing giant began picking up paying customers in Pittsburgh. Earlier this year, it started doing the same in Tempe, a town in the eastern part of the Phoenix metro area. (Waymo is currently suing Uber over the technology.) Yet Waymo insists its business model will be broader than Uber's.
"Yes, self-driving technology makes sense for ride-sharing," said Krafcik, a former executive at Hyundai Motor Co.'s U.S. operations and Ford Motor Co. "It also makes sense for personal car ownership." Transportation to and from transit hubs and logistics also made his list. In Phoenix, Krafcik said participants will use the autonomous minivan fleet every day, at any time, to go anywhere within an area twice the size of San Francisco.
Last year, Waymo inked a deal with Fiat Chrysler Automobiles NV for 100 Pacifica vans outfitted with Waymo's software and tailored hardware. Waymo added the fleet to the 70 other cars it is testing in California, Texas, Washington and Arizona, which it entered in 2016. Since Google started its program, those vehicles have racked up nearly 3 million test miles on public roads, primarily to refine the autonomous software and ensure the system could handle rare but potentially dangerous edge cases.
Waymo has faced criticism for not launching a commercial service sooner. This was especially true last year, when it lost several top engineers and Uber launched its limited test service. Krafcik has often responded by pointing to safety concerns and technical obstacles to deploying fully driverless cars.
The Phoenix service answers some of these concerns. It's a clear move beyond the research phase that focuses on passenger experience and business model development. Waymo's staff has worked on new displays and controls to get people comfortable being inside self-driving cars. The Phoenix passengers will be the first to see these tools in action.
Waymo is still moving cautiously. Chosen users for the Phoenix service will sit in passenger seats, and Waymo will put contractor or employee testers in the driver seat -- although Krafcik said the goal is to remove them eventually.
The company has quietly been testing the service with a handful of Phoenix residents for two months. From those trials, he noted one behavior trait when no one has to drive. "People have a better opportunity to bond and connect inside the vehicle," he said.
Just want to say it feels pretty cool that Phoenix gets to be the testbed for this!
Self driving waymo/uber cars around Tempe (the suburb where Arizona State University is) have become part of the landscape.
Uber has also done a really good job of using their self-driving cars for marketing. My partner and I have taken quite a few uber trips that we wouldn't have otherwise, just because we've wanted to try and get a ride in one of the self-drivers.
Interesting that they only let you borrow for now. Will car ownership be killed with self-driving cars? Will Waymo let you actually buy and own a car that isn't network connected and fully yours? Seeing how value extraction works in today's enterprise, i.e. via rent-seeking, I'm guessing we're headed to not owning cars any more.
> Will car ownership be killed with self-driving cars?
Probably. It seems way more convenient and efficient to be able to summon a car to any location on demand than to have just one car that you use for everything.
No more parking, no more refueling, no more vehicle maintenance; just push a button and a car shows up within minutes (or in some cases, seconds) to pick you up, and step out of the vehicle when you reach your destination. Want a specific model or paint color? It's just a couple taps on your phone away. Car breaks down and needs a tow? Replacement gets summoned automatically, and the car company credits a couple bucks to your account for the inconvenience.
If that kind of service becomes available at a reasonable price point, I suspect it won't be long before car ownership becomes much less common.
Presumably slightly less screwed if you reserve beforehand, though, and I suspect what would actually happen is massive carpooling during peak times (also, small one-person cars that are cheaper)
Same way any other business that has brief periods of high demand (electric companies, ISPs, restaurants, etc) does; you anticipate the demand and make sure you have enough capacity available at those times to meet it.
With a self-driving car fleet there are probably lots of other little tricks you can do too, like offer discounts for sharing a vehicle with another passenger, or use machine learning to predict what locations will have the highest demand during particular parts of the day and move portions of the fleet to cover those areas.
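Here's a sketch of that rebalancing idea. The zone names, counts, and the naive "forecast = last week's pickups at this hour" rule are all invented for illustration; a real system would use an actual demand model:

```python
# Minimal fleet-rebalancing sketch: forecast demand per zone for the next
# time slot from historical counts, then move idle cars toward shortfalls.
from collections import Counter

historical_pickups = {  # pickups per zone observed at this hour last week
    "downtown": 120, "airport": 80, "suburbs": 40,
}
idle_cars = Counter({"downtown": 30, "airport": 10, "suburbs": 60})

total_demand = sum(historical_pickups.values())
total_idle = sum(idle_cars.values())

# Allocate idle cars in proportion to forecast demand.
# (Rounding may leave a car or two unassigned; a real system would fix that.)
targets = {zone: round(total_idle * d / total_demand)
           for zone, d in historical_pickups.items()}

for zone, target in targets.items():
    surplus = idle_cars[zone] - target
    action = "send away" if surplus > 0 else "bring in"
    print(f"{zone}: have {idle_cars[zone]}, want {target} -> {action} {abs(surplus)}")
```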
My non-ML prediction of which locations will have the biggest demand in Seattle:
People on the West Side who want to get to the East Side, and people on the East Side who want to get to the West Side. (You'd figure that maybe optimizing this would bring more value to society than making a car drive itself...) 8am-9am, 4pm-6pm. You don't need ML to figure this out - just look at a Google Maps traffic heatmap at rush hour.
Carpools and vanpools already exist. For some reason, though, the vast majority of the cars parked on the freeway are single-occupant vehicles. I can't imagine why people aren't keen on sharing their vehicle with a stranger, or on paying triple surge pricing to get home when it starts raining.
This community is highly opposed to relying on web services that they don't control... Yet it is perfectly OK with ditching their car, and instead relying on a car-on-demand service to get to and from work?
No, it won't be efficient. People here simply are not thinking realistically.
I want to go out to shop. Right now, I get in my car, drive for twenty minutes to the mall, and leave when I want to. My kid is with me, but on the way back he gets sick, so I have to cut the trip short and head home. He throws up in the car. I clean it when I get home, and go back out to the store five minutes away by car to pick up some medicine for him while my wife watches him.
HN "solution."
I page a car. It takes maybe ten to fifteen minutes to drive to me from the last passenger, since it only is designed to head back to dispatch every so often. The car reeks of weed, because why not light up a bud? It's not like they can arrest you or anything. I punch in a complaint on the app.
We get to the mall. I shop, and summon a car. This time it takes a little less, maybe five to ten minutes, because the mall is closer to dispatch or a place where a lot of cars get dumped off. I program my next shopping route. My kid gets sick mid-ride though.
I have to hit the emergency button and reroute back to my house. My kid gets sick in the car. When we get home, I have to hit another button to "rent" the car for another period so I can clean it, but it still gets sent back due to the emergency button being hit, and I can be hit with a nice big fee due to cleanliness issues. I now need to go back out to get medicine, but five minutes by car is 30-45 minutes by walking. So it's another 15 minutes to get a car.
Even if all goes well, I could be looking at an additional 45 minutes just getting stuff done, as well as cost and loss of passenger quality because I don't own the vehicle. Hell, it could show up at my door full of shit, semen, and vomit, because some homeless guy is using it as his own personal hangout to sleep in. Or it could take even longer because I need to rent a driverless suv so I can bring furniture home to me, and those are relatively rare and command a premium to rent.
No one here is thinking of day-to-day use by all kinds of people, or of what benefits ownership provides. I own my car; that means I don't have to worry about it showing up in a horrid state, I don't have to worry about paying large amounts for minor cosmetic damage, I don't have to budget extra time for summoning the vehicle (which escalates the farther out from a city hub you are - you're doubling your travel time, and this would heavily penalize rural people), and a host of other issues are eliminated as well. Just the fact that I don't need to put in a destination is one thing. I can change my mind en route, I can wander, and if my wife calls me and tells me to come home, I can do so near-instantly.
> It takes maybe ten to fifteen minutes to drive to me from the last passenger, since it only is designed to head back to dispatch every so often.
What makes you say that? The median wait time for an Uber in many cities these days is [around 3 minutes][1]. Were self-driving cars to become widespread, I imagine that time would be significantly reduced due to a drastically higher density of available cars. (Instead of just a few Ubers, imagine if nearly every car on the road could potentially come pick you up.)
> The car reeks of weed, because why not light up a bud? It's not like they can arrest you or anything. I punch in a complaint on the app.
Plausible. Passengers who do that though would most likely get fined the same way you would if you tried that in a rental car, so I don't imagine it'll be widespread. And if you prefer you can always summon another car. Should be there in another 60 seconds or so, if not less. The weed-scented car will head back to dispatch for cleaning so none of its future passengers have to deal with the smell.
> We get to the mall. I shop, and summon a car. This time it takes a little less, maybe five to ten minutes, because the mall is closer to dispatch or a place where a lot of cars get dumped off.
Probably more like seconds because there would be 5 or so cars already waiting in the parking lot for shoppers who might want to leave soon. In fact, it might even be faster than having your own car in this case since the self-driving car can pull right up to the mall exit to pick you up.
> I have to hit the emergency button, and reroute back to my house.
No emergency button necessary, the app lets you change your destination in-flight. Actually, you might even just be able to call out "okay Google, take me home" and the car will hear you and reroute.
> When we get home, I have to hit another button to "rent" the car another period so I can clean it
Well, no. You just report a mess in the car and it drives back to dispatch for cleaning. A bit expensive for you due to cleaning fees sure, but necessary to ensure the car is clean for other passengers, and as a bonus you don't have to clean up the vomit yourself. Hopefully this isn't a regular occurrence for you.
> I now need to go back out to get medicine, but five minutes by car is 30-45 min by walking. So it's another 15 minutes to get a car.
Again, probably 3 at most. More likely under a minute.
> Hell, it could show up at my door full of shit, semen, and vomit, because some homeless guy is using it as his own personal hangout to sleep in.
Again, probably a pretty rare occurrence, and if it does occur all that means is another minute or so of waiting for another car to show up.
> Or it could take even longer because I need to rent a driverless suv so I can bring furniture home to me, and those are relatively rare and command a premium to rent.
SUVs aren't nearly that rare on today's streets. In a driverless car future they might be somewhat more rare because people will only request them when they need the extra trunk space, but even then it's hard to imagine the wait time being more than 10 minutes in even a small city. If you're requesting one from the furniture store it might even show up in seconds (because lots of people coming from that location want SUVs, so a computer algorithm has arranged to have several nearby at all times). If not, you can always schedule a pick up a few minutes in advance to make sure the car is there waiting for you by the time you get out of the store.
Unfortunately, the concept of something being fully yours seems to be what's going away. So much of what we "buy" now is locked down by the manufacturer to the point where you cannot even hope to modify or repair it. Automobiles, in particular are getting less and less modifiable every generation. More and more electronic gadgets in the home are locked into only running manufacturer-selected software. Everything is network-tethered to the manufacturer, who ultimately controls whether/how their products operate. Sad times.
Self-driving cars have been in the development phase for so long, I'm really excited to see them start rolling out in these beta tests.
I'm curious what sort of unconsidered edge cases they'll find out in the real world. I'm sure test passengers are much more "disciplined" than real world ones.
I would like to see little leased pod cars which trundle around at local speeds within a certain range, can use a bus lane (I presume the bus is dead), and can join a magnetic track of sorts where they go 200 miles an hour.
That would be pretty amazing. Why are they sticking shit all over 2-ton vehicles? If they are solving fuel, emissions, and the driver, can't they just take the final step of a revolution? I am pretty sure governments would be throwing notes at it.
Hyperloop One is working on this. Small self driving cubicles drive around the city and pop into larger pods that go 700 mph in the Hyperloop between cities.
Seems a bit gimmicky (it's the hyperloop after all). If the self-driving car is the vehicular equivalent of a hot desk, why not just exit it and walk into the hyperloop pod, instead of transporting tonnes of weight of interchangeable self-driving car? Then just hop into a different one that's waiting at the other end.
Disclaimer: I'm starting a job at Hyperloop soon, but haven't actually been told anything about the hardware yet (I'm joining as a software dev), so this is all not-NDA-violating speculation:
I believe the cars are also pressure vessels and life support systems. My student team looked into this side of the problem last year, and making the entire ~40 ft long Hyperloop pod a pressure vessel is really tricky. The two competing forces are the obvious premium on extra mass, and the need for a large opening to load cargo into the pods.
Dealing with all the "human compatibility" problems at the scale of the cars lets you use much smaller pressure vessels (lighter), and makes loading unpressurized cargo drop dead easy.
The other big benefit is that loading the Hyperloop pods is really difficult. You'd ideally like to be launching a pod into the tube every 30 seconds or less, to get the best return on your infrastructure investment. Achieving that by walking people right up to the 40 ft vehicles and having them all file in like a commercial flight is just about impossible. With mobile sub-pods you can have a far smaller station, and just load sub-pods as they arrive into the next departing pod.
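For a sense of why that 30-second cadence matters, here's the rough throughput arithmetic (the pod capacity is an assumed figure, not a spec):

```python
# Throughput implied by the launch cadence; capacity is illustrative.
headway_seconds = 30      # one pod into the tube every 30 seconds
passengers_per_pod = 28   # assumed capacity per 40 ft pod

pods_per_hour = 3600 / headway_seconds
passengers_per_hour = pods_per_hour * passengers_per_pod
print(f"{pods_per_hour:.0f} pods/hour, ~{passengers_per_hour:.0f} passengers/hour")
# 120 pods/hour, ~3360 passengers/hour in one direction
```

Miss the cadence - say, one pod every five minutes because boarding is slow - and the same tube moves an order of magnitude fewer people.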
I think the size of vehicles is related to safety features and mass being safer in collisions in addition to the expectation for capacity.
But I agree that it is ludicrous to continue with these massive vehicles if we can find a way to move forward. I think the other massive vehicles present a safety hazard to small vehicles, so you may need to separate them physically.
So now what do we tell our kids? "Don't get into a van even when nobody is driving it." ? :-)
I think it is great that they are getting additional exposure to nominally real world users here. However, I'm not exactly sure what they are learning in user behaviors. Is it "Can we make a less expensive livery service?" or is it "How freaked out do people get in self driving cars?" or is it something else?
I went to a conference last week where there were several talks that were pretty critical of self driving 'hype' given the HLS[1] issues and the ability to inexpensively 'spoof' the AI[2] to see something that isn't actually there (road signs being particularly vulnerable). It left me thinking I might be more optimistic about the technology than I should be.
[1] "Health, Life, Safety" the general basket of things that are super critical to minimizing injury and death.
After seeing the movie Logan, I was curious if the movie was taking a jab at the autonomous vehicle trend by portraying them as a danger to ma & pa drivers on the highway. It showed a crowded highway full of shipping containers pushing a truck hauling horses off the road. It seems the message was not that the technology can't be courteous, but that once it's accepted, corporations will abuse the roads to help their bottom line. It makes me wonder if that is a valid concern.
> but that once it's accepted, corporations will abuse the roads to help their bottom line
Those evil corporations, being all corporate-y!
I haven't seen the movie Logan but the gist is that you think BigCorp will modify their cars so they drive dangerously fast, threatening the other non-AI cars on the road? And they will risk killing people or damaging vehicles in order to improve shipping time? And you think this will be prevalent? There won't be economic, legal, hiring, or social consequences from police, shareholders, politicians, employees, and the AI companies? Because they're big corporations run by greedy CEOs who get away with anything?
That was what I gathered from the movie, though apparently I was wrong, as someone explained. Nowhere did I say that I felt that way. I do find a lot of people's desire to take everyone's keys away puzzling, but I don't foresee any cartoonish supervillainy on the highway. It is odd how hyperbolic everyone gets whenever any concern is expressed; you'd think people who've seen the piss-poor code many companies run on would be a little more cautious about completely surrendering to them. It will be a great tool, but Johnny Depp hasn't transcended yet.
> you'd think people who've seen the piss-poor code many companies run on would be a little more cautious about completely surrendering to them.
Someone brings this up in every self-driving car thread. But have there been any incidents caused by code released too early? Google has been working on this for almost a decade and hasn't released anything. Tesla released what is basically a glorified cruise control and deployed an advanced sensor package that is disabled by default.
I'm not seeing this recklessness that everyone is so worried about.
Again, the bar is a system better than current human drivers - as we know for a fact that driver error is one of the leading causes of accidents (over 1/3 of accidents).
By all accounts, autonomous vehicles will be much safer than human drivers. Companies will be sued up the ass if they program their vehicles to take risks and end up killing people.
Define "taking a risk." Every time we get in a car, we're taking calculated risks based on our evaluation of the physical environment, how we expect other drivers to behave, and expected behaviors of pedestrians/bikes/etc. At least up to the point where you create a traffic hazard, there's always the option to take things slower and more conservatively.
While that came about as a form of corporate abuse, the scene in the movie was less about corporations abusing public roads to improve bottom line, and more about a specific corporation trying to kill a specific farmer to take his land!
CTRL+F for "So they were on highways today on
those trucks"
The whole scene is about Big Corn trying to take a specific plot of land whose owner would not sell.
In that situation, it's less about the dangers of autonomous vehicles and more about corporations being willing to murder others. The next scene involves guns and fighting.
Of course it is valid. I doubt that any major company will countenance literally forcing people off the road, but they will probably lobby to have the law do it for them.
My fear is MORE urban sprawl, which means even MORE miles of highways to pave and care for, and MORE pollution in the short term until we get very eco-friendly electric cars.
Commutes are one of the only things that seem to limit urban sprawl.
I do wonder if there is some vehicle ownership path dependence that feeds into sprawl though. If vehicle access becomes less of a binary choice, people don't double down on choices related to ownership.
There is an advantage to electric cars here though. If you don't need to wait while the vehicle charges electric vehicles are a lot more attractive even with relatively small batteries.
Yes, there are still issues with electricity generation. But expressing this as just "polluting upstream" ignores or dismisses the efficiency improvements of large-scale power generation over per-vehicle internal combustion engines.
Not quite true - generating power upstream is much more efficient, since the power plant doesn't have the weight restrictions of a car (which needs to haul itself around on four wheels efficiently). So even if power plants didn't ditch fossil fuels, electric cars would largely be a step up.
That said, we are moving heavily towards renewables, and charging isn't time-critical, so intermittent power sources like wind will be more than good enough.
I prefer to state it as: more freedom to live where you want and how you want. I still find it odd that people complain about commute times by car when rail/bus can be just as bad in many places.
While many choose to live outside of cities, many are forced to because of bad city planning, regulations, and other government practices, which only serve to protect vested business and political interests.
Still, I say you should be free to live wherever you want and can afford to.
People think the autonomous car is going to solve a lot of today's problems: accidents, parking, congestion. It kind of makes me giggle as I ponder what may happen. That is, the conclusions drawn about "self-driving" vehicles are possibly way overcooked.
The certification process for these cars is going to be very, very expensive. Think about the software involved in the operation of aircraft and its review/testing process, then apply that to the road. That is the level of scrutiny that will be placed on "self-driving" vehicles.
Hacking, equipment failures, environmental surprises and more will be sure to keep a healthy portion of the public skeptical and possibly downright hostile to the idea for at least a decade, post certification, in my opinion. "Self-driving" will be contained and tightly regulated.
What people on the coasts forget about is, really, the rest of the country: the people who actually enjoy driving, where parking isn't a problem, and congestion takes place in the morning and after work but only lasts about 5 minutes. Metro areas of > 2 million people don't have many problems with traffic in flyover country... at least not when compared to the coasts.
The counter arguments to the piece are plentiful for the foreseeable future.
If driverless cars work at all, they'll be substantially safer than human-driven ones. There's a strong moral case for banning human-driven cars on public roads if it'll save tens of thousands of lives. There's also a strong possibility of media-driven moral panics. Think of poor little Billy, hit by one of those irresponsible human drivers!
Insurance companies will charge steep premiums to the risk takers who want to drive their own car. Alphabet etc. will have a well-oiled team of lobbyists in every state and country to pass laws to make driving safer for everybody (i.e. more optimized for Waymo cars).
There will be a lot of people who join the NRA (National Retrovehicle Association), who'll make a stink about things. And they'll fight a rearguard action and probably win some guarantees in the more rural states. But where 90% of Americans live, the economics and legalities will work out to push everyone to self-driving cars.
And then economies of scale will be lost, and even in the remaining areas human car driving will become a luxury for people who own their own private roads.
> If driverless cars work at all, they'll be substantially safer than human-driven ones.
That sounds like a bold claim. I mean, the current standard of technology works to quite a good extent already, performing many aspects of driving with considerably greater acumen than humans and being able to operate for extended periods without human intervention. It's also fairly well documented that the number of situations in which the software would have caused an accident without human intervention is above the average accident rate for human drivers, despite being tested in favourable conditions with sensibly conservative rules around humans intervening for a wide range of reasons before a critical situation can develop.
So it already works whilst being substantially less safe than the average human driver (without human intervention). They're working on the making-it-safer part, but given the number of edge cases they've got to deal with for Level 5 driving capability I'm not sure we should be so sure it'll surpass human driving abilities in all road conditions.
Fair enough. I agree that there's no way they'll be allowed to be sold as a consumer product to operate hands-free on regular routes unless they become regarded as substantially safer than the median driver.
>Insurance companies will charge steep premiums to the risk takers who want to drive their own car.
People make this claim. Why would there be a premium over what is paid today? Maybe there's a big premium relative to much lower premiums associated with new types of vehicles although there will still be insurance needed. But that's relative to a future state, not relative to today.
And as fewer and fewer insurers cater to your now-fringe activity, you will be herded into specialty insurance, and the price will go up. It will be a 180 from 100 years ago. The rich will pay to drive themselves.
> What people on the coasts forget about is, really, the rest of the country. The people that actually enjoy driving, where parking isn't a problem
Doesn't something like 80% of the US live in urban areas? And hasn't that number been continuously increasing for decades? People in the cities enjoy driving too; hell, I love driving when there isn't any traffic. But I'd give it up in a heartbeat if it meant fewer needless traffic deaths and a far more streamlined lifestyle.
>Doesn't something like 80% of the US live in urban areas?
Technically speaking by US Census classifications but the definition of urban is very broad. You basically need a population of over 2500 to be in an urban cluster.
So where I live is "urban" and between myself and two neighbors, we're on about 70 acres of land.
>The certification process for these cars are going to be very, very expensive. Think about the software involved for the operation of aircraft and the review / testing process then apply it for the road. That is level of scrutiny that will be placed on "self-driving" vehicles.
This is one of the scariest points to me. Are any entities lobbying Congress about this? I've seen nothing on it, yet we already have a handful of these technologies on the road. One can retort that what offerings there are are "safer" than humans, but that's missing the point entirely.
Transportation in general will become much cheaper. Because driverless cars will get into far fewer accidents than humans the cost of insuring them will be a fraction of what it costs to insure a human driver today. And when transportation gets cheaper basically every product that needs to be moved gets cheaper (assuming the companies using this technology don't just pocket the savings).
Also, drivers of basically all kinds will no longer have jobs.
> Transportation in general will become much cheaper. Because driverless cars will get into far fewer accidents than humans, the cost of insuring them will be a fraction of what it costs to insure a human driver today.
Insurance is a small fraction of the cost of transport by car, so dropping the insurance cost by some fraction won't drop the total cost by that much.
Actually insurance in the trucking industry is pretty expensive [1]. In addition to commercial vehicle insurance for the actual truck there are frequently other types of insurance like cargo insurance and trailer insurance which would be affected by self-driving cars. Because trucking is already such a low-margin business any reduction in cost is actually a pretty big deal.
> Because driverless cars will get into far fewer accidents than humans, the cost of insuring them will be a fraction of what it costs to insure a human driver today.
The average US driver files a collision claim once every 18 years. I'd like to know the distribution on that one, but anyway, it doesn't seem like there is a reason to believe this.
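For what it's worth, one hedged way to read that figure: if claims arrived as a simple Poisson process at a rate of one per 18 driver-years (an assumption for illustration, not data; real claim rates vary a lot between drivers, which is exactly why the distribution matters), then over an 18-year window:

    # Poisson sketch of "one claim per 18 years" (assumed model, not data)
    from math import exp, factorial

    rate = 1.0  # expected claims per driver per 18-year window
    for k in range(4):
        p = exp(-rate) * rate**k / factorial(k)
        print(f"P({k} claims in 18 years) = {p:.0%}")
    # ~37% of drivers would file none, ~37% one, ~26% two or more.
    # Heterogeneity between drivers would skew this even further.

So even under the blandest possible assumption, a substantial minority of drivers accounts for multiple claims, and the headline average tells you very little.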
How are you so sure driverless cars will get into fewer accidents? Furthermore, how is this technology going to make everything less expensive? Please explain.
For the record, I think this technology will make the prices for these vehicles prohibitively expensive for the mass market.
> How are you so sure driverless cars will get into fewer accidents
Same reason robots in manufacturing are better than humans: they don't have bad moods or bad days, they don't need to talk on the phone while driving, they don't drink, etc.
> how is this technology going to make everything less expensive
Because in Europe and the USA, the largest part of the cost of light-vehicle transportation is the driver's salary. Self-driving cars will also be less costly to maintain (fewer accidents, better monitoring), and big fleets operated by companies get the benefits of scale, etc.
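A back-of-the-envelope sketch of that claim, with invented per-mile numbers (purely illustrative, not real fleet data): if the driver really is the biggest line item, removing the driver cuts the per-mile cost roughly in half.

    # Hypothetical per-mile cost split for light-vehicle transport
    costs_per_mile = {
        "driver": 0.60,        # wages + benefits (assumed dominant share)
        "fuel": 0.20,
        "maintenance": 0.10,
        "insurance": 0.08,
        "depreciation": 0.15,
    }
    total = sum(costs_per_mile.values())
    driverless = total - costs_per_mile["driver"]
    print(f"with driver:    ${total:.2f}/mile")
    print(f"without driver: ${driverless:.2f}/mile "
          f"({1 - driverless / total:.0%} cheaper)")
    # -> $1.13/mile vs $0.53/mile, about 53% cheaper under these assumptions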
Well for one, you don't need to own your car. You just use it. If you (or anyone you know) owns a traditional automobile, it sits unused for 90% of its life. In a driveway, on the sidewalk, in a parking lot, etc. You pay for 100% of the car, but only use 10% of it. If you could rent it out to others ('the grid') while you're not using it, they would share some of the burden, lessening yours.
They make things safer because they don't drive drunk.
You know there is a whole legion of jobs that require a vehicle? All construction jobs for example. Most service techs require a bunch of expensive tools and need a secure place to keep them on site. The list literally goes on and on...
Can a hunter ride-share the Pacifica out to the forest with his or her hunting gear? How about bicyclists? People with kids know about activity shuffling... I am not sure that sharing will work for my family's schedule.
Drunk driving isn't behind the bulk of the accidents I go to on a regular basis. If we could just eliminate texting while driving, we would see a significant drop in accidents.
Once they're available (and I see no reason not to continue thinking that door-to-door self-driving is decades out), the ownership question definitely gets interesting. But I don't think it's as obvious as some think in part for the reasons you say.
Furthermore, a lot of people use their cars at the same time of day. It's also the case that a lot of auto depreciation is mileage-based rather than time-based (especially in non-snowy/salty environments).
The questions others raise about the maintenance/update requirements for these sorts of vehicles are interesting ones.
My guess is that, like Zipcar and Uber, they will make a difference at the margins but won't cause a wholesale shift. But who knows? Especially if, as I believe, this happens over a number of decades with demographic shifts happening in parallel.
> Can a hunter ride-share the Pacifica out to the forest with his or her hunting gear? How about bicyclists? People with kids know about activity shuffling... I am not sure that sharing will work for my family's schedule.
Easy: many households now have two (or more) cars, but self-driving cars will allow many of them to cut down to a single car: one people carrier for family outings, plus on-demand robot taxis for commutes and quick errands. Maybe not a total revolution, but a 50% reduction in household car ownership would be a pretty big deal (it'd make neighbourhood parking a lot easier, for a start).
Do people not get that this also shortens the life of the car?
I mean, let's say I use it for a total of 12,000 miles per year; that's 10% of the time. If I rent it out for an additional 50% of the total time, it's going to hit 72k miles in a year! The average person would have to replace cars ridiculously fast, which increases, not lessens, the burden of car ownership.
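Spelling that arithmetic out (with the illustrative numbers above):

    # If 12,000 miles/year corresponds to 10% of the car's time, renting
    # it out for another 50% of the time at a similar average speed
    # multiplies annual mileage sixfold.
    owner_miles = 12_000
    owner_time_share = 0.10
    rented_time_share = 0.50

    miles_per_full_time = owner_miles / owner_time_share   # 120,000
    total_miles = owner_miles + rented_time_share * miles_per_full_time
    print(total_miles)  # 72,000 miles/year: the car wears out ~6x faster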
The additional hardware and software will be expensive as hell to develop, but once it's invented, the marginal cost should be fairly cheap compared to the price of a car.
processing power = cheap
software (marginal cost) = just gotta pay for updates and fixes
sensor package = expensive now because nobody is making 6 million of them a month. Once they do, it'll be a lot more affordable.
That doesn't mean it'll pay off for Google, Uber, Tesla, Ford, etc., but if they build it, it'll move to all cars rather quickly.
> The additional hardware and software will be expensive as hell to develop, but once it's invented, the marginal cost should be fairly cheap compared to the price of a car.
Go talk to a pilot and ask them about Garmin products and the autopilot of their plane. Also ask them about updating the software on these devices, and how much money they are asked to fork over for software...
Have a look at the self-driving tech of the John Deere tractors and how well that has gone for the farmers. I don't think the bulk of people here have really thought about this in a negative light.
AIUI, there are about 300,000 aircraft in the entire world. There are more cars than that in just about any city in the world. I'd expect tens or hundreds of millions of self-driving cars.
Aeroplanes aren't even close to the scale we're talking about.
Yet it's cheap enough that every commercial plane has it and every tractor has it. Sure, they'll try to take their pound of flesh, but not to the extent that it would make the products unaffordable.
GM, Apple, and Uber are all testing in the Phoenix metro area, so I don't think that's valid - more likely good wide roads, weather, and regulatory environment.
This is an important step for testing, and I'm surprised that this is being spun as a move towards commercialization.
As of right now, Waymo has only been doing what could metaphorically be called unit testing. That is, they test the car's behavior in very controlled but unrealistic environments, looking for very specific responses. The accident rate that they've incurred is likely ridiculously skewed: they've been driving in good weather, on meaningless routes (not chosen by destination, but by route features), at relatively safe times of day, at slow speeds, and they've been doing it extremely cautiously with engineers ready to take over at a moment's notice.
This is exactly what they should have been doing, but politically it is misleading. Most human drivers, given those same constraints, would also do extremely well and way better than average. They've done well, but we have little basis for comparison with the average driver.
This is the first step towards integration testing. They get to see how the car's behaviors integrate across various scenarios that are much closer to real life. They are driving on actual routes that real people travel on... routes that aren't chosen in order to test a specific behavior.
Accident rates are going to go up. That's a good thing... it's a move towards the things humans find more difficult too. We should, however, expect slowing progress towards Level 4 autonomy. This is typical of system capability growth: exponential in the beginning, asymptotic near the end. People who are rushing this are out of line; it's akin to immediately commercializing a success in lab rats. Give them time.
Pretty damn sure. Even if an engineer drives his normal car to work, hops in a Waymo car, drives 9-5, and then drives a normal car back home, they've already constructed a scenario that is heavily time-weighted towards lighter-than-representative traffic. Even small biases that you don't think are meaningful can have a big impact on outcomes.
But it doesn't stop there. Google hasn't tested snow at all. I'd be surprised if their testing locations see rain representative of more than 10% of the US. They don't test freeways, and by extension don't test on-ramps and off-ramps. In California, they don't even go over 25 mph. Under those conditions they have been ostensibly pretty good at not causing accidents, but how good are they at avoiding accidents that are not their fault, i.e. those caused by other drivers and acts of God? Safety is about far more than just not causing accidents.
But more importantly, their miles traveled can easily exclude the small snippets of time where an engineer takes over, while still counting the rest of the journey where the computer did fine. How representative is that? How good are humans when we let Jesus take the wheel the moment driving gets slightly complicated? If a human overrides the computer only 1% of the time, that could still mean anywhere from 1% to 100% of trips had at least one human override.
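To make that concrete, here's a quick sketch under the simplifying assumption that overrides strike independently per mile at a 1% rate (a made-up model; real overrides almost certainly cluster around hard spots):

    # How a 1% per-mile override rate translates into trips touched
    p_override_per_mile = 0.01

    for trip_miles in (1, 10, 50, 100):
        p_touched = 1 - (1 - p_override_per_mile) ** trip_miles
        print(f"{trip_miles:>3}-mile trip: {p_touched:.0%} chance of >=1 override")
    # 1-mile trips: ~1% touched. 100-mile trips: ~63% touched. And if
    # every trip contains one tricky merge the computer can't handle,
    # 1% of miles can mean 100% of trips needed a human.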
It's very easy to catastrophically mislead yourself when working with averages [0]. Statisticians and economists go to absurd lengths to make comparisons robust. Even if we had access to 100% of the data on Waymo's self-driving cars, I don't know a single statistician who would be comfortable making a comparison to extremely coarsely aggregated data on human driving. Only through extensive testing in situations that are representative of how humans (and not just test engineers) drive will we be able to conclude that one is safer than the other. And that's why Phoenix is important to Waymo.
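A toy illustration of that point (all numbers invented): a mileage mix can make a worse driver look better on aggregate.

    # Aggregate accident rates hide the conditions the miles were driven in
    human_rate = {"easy": 1.0, "hard": 10.0}   # accidents per million miles
    av_rate = {"easy": 2.0}                    # the AV only drives easy miles

    human_mix = {"easy": 0.5, "hard": 0.5}     # humans drive a 50/50 mix
    human_avg = sum(human_rate[c] * human_mix[c] for c in human_mix)  # 5.5

    print(f"human aggregate: {human_avg} per million miles")
    print(f"AV aggregate:    {av_rate['easy']} per million miles")
    # Headline: the AV looks ~3x safer overall, yet it is twice as
    # accident-prone as a human in the only conditions it actually drives.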
I dream of the day when driving a vehicle yourself is forbidden by law unless you can prove a need for human driving. So many deaths would be avoided.
No, I'm very serious: we are unable to protect smart TVs. Networked smart cars (because, let's be real, Google and co. would want to monetize this with Uber-like services, ads in cars, etc.) are an order of magnitude more likely to be hacked, by both state actors and rogue individuals (although I don't see why we make this distinction... both are criminal acts).
The worst that can happen if your smart TV is hacked is that someone can listen to you and maybe watch you in your living room. If your car is hacked, they can probably kill you, and hurt others while they're at it. If this happens at a large scale, it's extremely dangerous.
So while I have no problem with self-driving in itself (or with Google, for that matter), I have a problem with banning human-driven cars from the road without a viable (non-intrusive) alternative.
We are not "unable" to protect smart TVs. If anyone gave a shit those things would be locked down tighter than a bank vault, but we don't, so they aren't.
But, as evidenced by terrorism paranoia, people do care about their personal safety--perhaps excessively and irrationally so. Security in a networked self-driving car won't be treated as an afterthought, if nothing else because of the massive legal liability.
> We are not "unable" to protect smart TVs. If anyone gave a shit those things would be locked down tighter than a bank vault, but we don't, so they aren't.
Every year, every major browser and major OS gets cracked in Pwn2Own. The cost of these zero-days is going up thanks to good security practices like sandboxing in Chrome et al., but they still happen. Every. Single. Year. Think about that for a second.
The fact is, we can't write secure software, especially secure networked software. I can't imagine writing secure software for cars using current best practices---let alone imagine car manufacturers doing that. To solve this, we'll need (in addition to everything we're doing now) something along the lines of Rust to prevent these sorts of bugs. And even then, it's likely that the net effect will be similar to sandboxing: the cost of an attack will go up, but not enough to prevent random individuals (let alone larger organizations) from finding zero-day security vulnerabilities on a regular basis.
It's extremely important not to be cavalier about this security risk. The combination of zero-days and self-driving cars is a scary one.
I seriously doubt this. People do care if their car gets stolen. Unfortunately, most (if not all) built-in electronic systems for protecting modern cars from theft are crap, and thieves somehow get around them. Many of these systems (e.g. KeeLoq) used roll-your-own insecure cryptography based on security by obscurity, despite the availability of much stronger algorithms at the time.
It sounds like a nightmare to me. God forbid I live in a state that won't allow me the enjoyment of safely operating a vehicle. Maybe they can choose what I'm allowed to eat as well. Please relieve me of all this self-reliance big brother...
Nobody will take away the enjoyment of driving, but, just as now, you will probably be required to insure yourself against the externalized risk of operating your vehicle, and that risk will be measured relative to your other transportation options.
It's the same reason that you pay relatively high car insurance premiums when you are in your teens and twenties.
We'll manage. Already a lot of critical technology (electric grids, military weapons) is connected to the internet. Civilization hasn't collapsed in spite of that.
Well, nowadays someone can easily hijack a truck and drive it in a crowd of people. And maybe self-driving cars don't need to be connected to the internet?
Yeah, because computers never crash right? <pun intended>
I think way too many people are viewing self driving vehicles with rose colored glasses. They underestimate the cost and overestimate the utility to the public.
If a computer crashes, you can identify the software or hardware flaw that caused the crash and fix it once and for all.
Improving human driving skills collectively is much more difficult. Even if one driver learns from their own mistake, other drivers and the next generation of drivers will still be just as prone as before to make that mistake. Better education can only do so much anyway, since people often don't think rationally and don't necessarily make safety the number one priority (e.g. drunk driving, trying to get to the destination as quickly as possible, or reckless driving for entertainment purposes).
> If a computer crashes, you can identify the software or hardware flaw that caused the crash and fix it once and for all.
Right. Except that 40 years later computers still crash. The more complex a system the harder it becomes to debug and there is no complex software system without bugs. It's like rocks in a field, you'll never get the last one out even if you think you did.
And 100 years later humans still do stupid things in traffic (not to mention they also "crash" - fall asleep, get strokes, whatever). Why does a computer driver have to be flawless when human drivers are anything but?
It doesn't have to be flawless, just better than humans.
And that's not a simple thing: it will have to be better than humans in every situation. Being better in some but clearly worse in others will still end up looking bad for self-driving cars. People don't deal in statistics; they simply look at a situation, say 'that would never happen to a human', and keep on driving themselves.
So it's a very fine line to roll this sort of thing out, do it too early and you end up turning off your potential early adopters.
Regular software, yes, crashes all the time. But I would expect these systems to be handled more carefully. Nuclear energy, air traffic control, bank systems and automated train lines don't crash nearly as often, for example.
Not crashing being such an important feature, it shouldn't be ignored. I am also skeptical, but we have examples of reliable systems, so I believe they can work.
> Nuclear energy, air traffic control, bank systems and automated train lines don't crash nearly as often, for example.
Bank systems crash with great regularity. Source: worked for a bank, customer of a bank.
Industrial systems and anything related to flight is typically much better from a design point of view than your average bit of firmware. Redundancy is built in from the first day and all failure modes are tested in so much as feasible.
Also, and this is very important, such software is kept as simple as possible to reduce the surface area bugs can hide in.
Also, all of these are much simpler problems than fully-autonomous operation of a vehicle on a regular road surrounded by other users, which involves teaching the system how to deal with an enormously long list of edge cases.
I don't envy the people tasked with balancing two needs: the software must handle a sufficiently large set of possibilities to justify eliminating the driver, yet keep a small enough surface area, and a sufficient degree of tractability, to minimise bugs.
We build lots of computing systems that rarely have components crash and almost never (eg, not even once over decades) have the whole system go down. We know how to do it.
It's just not worth the effort in the vast majority of circumstances -- so you get computer crashes.
I know we know how to do it. It's just that the cost-benefit analysis usually falls to 'let it crash' rather than 'let it crash and put a supervisor on top of it'. Besides the fact that most developers don't care too much about their stuff crashing (for instance: one valid excuse for not digging in and solving a problem is 'can't reproduce', whereas those are the worst problems to allow to persist).
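For anyone unfamiliar with the 'supervisor on top' idea, here's a minimal sketch (Erlang-style supervision loosely approximated in Python; the crash is simulated, and the restart policy is deliberately naive):

    import multiprocessing
    import random
    import time

    def worker():
        # A unit of work that sometimes dies with an unhandled error.
        if random.random() < 0.5:
            raise RuntimeError("simulated crash")
        print("work completed")

    def supervise(target, max_restarts=5):
        # Instead of making the worker crash-proof, restart it on failure.
        for _ in range(max_restarts):
            p = multiprocessing.Process(target=target)
            p.start()
            p.join()
            if p.exitcode == 0:
                return True
            time.sleep(0.1)  # brief back-off before restarting
        return False  # persistent failure: escalate

    if __name__ == "__main__":
        print("recovered" if supervise(worker) else "escalated")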
In the end it is all economics. But I really would like to see some real life stats on how these self driving cars perform over the longer term in less controlled situations and with an honest accounting of the root cause of any issues. Fat chance that will ever happen but that's the sort of rigor that would get me to switch from driving myself to being driven by a pile of software.
That's the good bit about Google being involved: at least they really 'grok' software and reliability engineering. As opposed to your average car manufacturer where software is a dirty word best outsourced to some contractor.
There's a flip side to that though. You may have a systematic vulnerability or bug that puts thousands of vehicles at risk. Bruce Schneier has spoken about this issue. An expert lockpick can still only pick one lock at a time. A super-skilled hacker or team can penetrate many devices in one shot.
So far the statistics are overwhelmingly in favour of computers making fewer mistakes in traffic than humans. Furthermore, if we had mostly self-driving cars, fixes for bugs could easily be deployed to every car, solving each problem once and for all. Getting human drivers to drive more safely requires cultural change, which is far harder to deploy.
Yeah, no Heartbleed for cars will ever exist! We can just push an update... like, mid-trip... I am dumbfounded by the rose-colored glasses on HN regarding self-driving.
It's wrong for the state to declare human driving illegal. I would prefer that we voluntarily adopt self-driving cars because of their huge benefits. I think that's the likely scenario; consider that horse-drawn buggies are now used only as tourist attractions or by isolated communities like the Amish.
If there were so many deaths of innocent people with knives involved as there are deaths with cars involved, and if there were a safer alternative to knives, sure.
I find it smart that Waymo waited until the snowbirds left to start this. It will be comical when the blue-hairs come back next winter and smoke a few of these self-driving cars... or vice versa. Obviously I don't want anyone to get hurt, but... it is going to happen.
I don't know how it is in the USA, but in Poland the police and insurance companies actually have real data on who causes the most accidents and the most severe ones. And the facts are brutal: young, inexperienced drivers under 25 are responsible for more severe accidents than the blue-hairs, and this is reflected in the insurance prices they pay.
Uber is a rival. That they sort of suck at functioning as a consequence of their culture doesn't change the fact that the entire bet is that they get to a dominant position in self-driving cars.
The current human-driven model is a stopgap; they lose a lot of other people's money on every ride. Possible outcomes:
- Uber achieves a credible 1st-3rd place dominance in self-driving vehicles. Investors, frat boys win.
- Uber doesn't. Keeps human drivers, prices surge, Uber tanks.
- Uber doesn't. Contracts with Waymo or someone else for cars. Survives, but doesn't have much more than a crappy brand-name. Joins the ranks beside Yellow Cab.
- Uber implodes in a puff of lawsuits, acrimony and skunked beer kegs. Sucks for the investors, but, eh. Uber is a cancer.
It's not obvious to everyone, but Uber is betting big on self-driving cars as well, in the form of a massive data-science effort and route algorithms. There's a great write-up on this by Ben Thompson.
Because Uber is a ride-sharing company that is getting into self-driving vehicles, their money comes from the rides we take. Alphabet (Google) is literally just testing and developing its own self-driving car technology. As far as we know, they have no business model yet.
They are not as separate as you are implying. Alphabet could easily block Uber from using their self driving cars by preventing remote control and then set up their own competing service. Uber could sell their self driving tech to anyone.
However, the reality is Uber is only looking into self driving cars because otherwise it's going to destroy their business long term and they don't want their stock to tank today.
I guess, except that it's not about some stolen files, but rather about people. Uber has the guy who built Google's self-driving tech, which is what really matters.
One implies the other, so in this unholy pedantic discussion, a winning move is to point out that being involved in selling is the same as being involved in buying.
Another tack: Otto is accused of being involved in selling secrets. Uber bought Otto. Otto is now Uber. Therefore, Uber is accused of being involved in selling secrets.
They also spend VC bucks funding operations (so their huge losses aren't just because of investments in technology, they are a direct result of operating at a loss).
Google's car clearly isn't going to work in the real world (can it follow detour signs? Get out of the way of emergency vehicles? Understand a cop directing traffic?). Why is Google pretending otherwise? Why are they doing this?
Google's car can already read the hand signals from bicyclists[0]. They have logged literally millions of miles of safe driving. They really aren't "pretending".
However, there are different classifications[1] of autonomous vehicles. Google's car is currently somewhere around level 3. It's true nobody has a level 4 car yet.