"Impressive, but what happens if Google Assistant eventually learns how to masquerade as its owner?"
This is the stunning degree of anthropomorphism at work: the common person grants superhuman abilities to the cute statistical trick we call "artificial intelligence". Naming this field AI was a poor choice.
I am sick of the clownish mentality behind the articles being published about AI today.
Yes, fear of it turning on its owner is way overblown. What about the more immediate issue that it is designed to be turned against service workers, so that when it screws up it wastes their time instead of yours? Or the more immediate wake-up call that this tech will be used in malicious phishing attacks against your grandparents (not by Google, obv) in another year (or month)?
That needs to change as much as anything else. We have to adapt, too. This can't be a unilateral thing. We need to change in response to the world around us, and if we don't, we won't survive.
Shifting the responsibility to stay safe from complex attacks to the user will never work for a tech company, even one with the resources available to Google. If Google Assistant is a reliable way of scamming grandmas it will fail because there'll be constant negative news about it in the press.
I'm surprised at how well Google's awful little autodialer has been received. None of us likes it when we call a number and get handled by an automated script on the other end. But we are immediately okay with deploying an automated script that does our bidding?
You need a Google Glass-level lack of social acuity to think siccing autodialers on barbers is okay.
"None of us" is a strong word. I actually prefer scripts to humans. Scripts don't get mad at me if I take a long time to make a decision. Scripts also don't speak in heavy accents I can't understand, or get frustrated if they don't like my accent.
There are downsides to scripts, mainly that they only accept information via "press button X on the keypad", which means waiting for the script to go through all possible options before you can press a button, instead of just telling it what you want. But Google Assistant doesn't have that problem.
And the other downside of scripts is that they're so much slower than just filling out a form on a web page with the exact information you need, especially with AutoFill on modern browsers. And this is the same problem whether it's a robot going through a script or a human reading one.
So when it comes to making a reservation – the other side is basically a human reading a script. It's just a slow interface for restaurants that don't want to set up a computerized reservation system like OpenTable. If the restaurant insists on making me slowly talk to a script instead of just using a website, I don't see anything wrong with outsourcing that call to a robot.
I saw another post on HN about how this decreases the amount of human contact, but I seriously doubt that relationships or friendships commonly arise out of a scripted interaction like a reservation either way.
> Scripts don't get mad at me if I take a long time to make a decision.
Most IVR systems I've used hang up on you if you don't provide information after one or two repetitions of their request, and asking for more time is not an option. Which is infuriating whenever you are reaching for some piece of info.
Yeah, which is mildly frustrating, but it doesn't ruin my mood the way a human getting mad at me would. I just call them back once I get everything I need, and feel no rush.
I think you're expanding frustration at telemarketing to this technology, and I don't understand why. It's not an "autodialer;" it doesn't appear to have any kind of multicast capability.
The common thread is shifting the costs of communication failure (disinterest, misunderstandings, etc) to someone who didn't opt-in. If a restaurant is frustrated at an influx of robo-reservations, I doubt they will be comforted that they are being dispatched by individuals rather than a for loop.
I think Google could avoid a lot of these concerns by providing a quick and universal opt-out -- a robots.txt for your phone? But the most scientifically impressive part of the AI, its inclusion of human quirks of speech, is exactly the opposite of this: an effort to evade detection.
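To make the opt-out idea concrete, here's a minimal Python sketch of how an assistant might consult a "robots.txt for your phone" before dialing. The registry format and policy names here are invented for illustration -- no such standard exists.

```python
# Hypothetical "robots.txt for your phone": before placing an automated
# call, the assistant checks whether the callee has opted out. In a real
# system this would be a network lookup against a shared registry; here
# it's just an in-memory dict standing in for that service.

OPT_OUT_REGISTRY = {
    # phone number -> set of (hypothetical) policies the callee refuses
    "+15551234567": {"automated-calls"},
    "+15559876543": {"automated-calls", "voice-cloning"},
}

def may_place_automated_call(number: str) -> bool:
    """Return True unless the callee has opted out of automated calls."""
    refused = OPT_OUT_REGISTRY.get(number, set())
    return "automated-calls" not in refused

print(may_place_automated_call("+15550000000"))  # not registered -> True
print(may_place_automated_call("+15551234567"))  # opted out -> False
```

The hard part, of course, isn't the lookup -- it's getting every assistant vendor to honor a shared registry, which is exactly the incentive problem the thread is describing.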
"If a restaurant is frustrated at an influx of robo-reservations,"
That's not a very compelling argument, though, because as long as those reservations manifest as people arriving and paying bills, they're unlikely to find this a problem. And as long as the calls are essentially the same as a human calling them, it's not that different.
I think a lot of people in this conversation are conflating annoying phone calls with robocalls. The problem with annoying phone calls isn't that they are automated... the problem is that they are annoying. If I got the exact same phone calls, but they were all staffed by humans, I would still be annoyed. The automation just means they can make the calls faster. A restaurant owner is not going to get particularly annoyed if a few reservations come in from a bot rather than a person calling, as long as the system works correctly (i.e., it can take "no" for an answer) and they aren't hammered with ten thousand calls in an hour due to some bug. When I get a robocall reminding me about my dentist appointment, I don't go ballistic and yell at my dentist for using bots... after all, I asked for it by clicking an option in my reminder email. (Given that I've forgotten my appointments a few times before, I'm not in a moral position to be upset that they have some reminders set up.)
So there's definitely a glass half empty, glass half full thing -- nobody knows for sure whether these calls will be annoying, because we've just seen a demo. We don't know if the bot will be effective in general, and we don't know if customer cancellations will be more or less common if they use the bot. It's certainly possible that it improves lives for restaurants.
The problem is, the restaurants (etc.) don't get to choose whether to take the risk, even though they stand to bear most of the cost. That's an ethical problem before the product gets released. After the product is released, it's a problem with incentives: if consumers make the choice to use Duplex, growth means adapting to their needs, not the recipients.
> if consumers make the choice to use Duplex, growth means adapting to their needs, not the recipients
How does that differ from just basic economics of being where the customer is?
Restaurants installed phones to be reachable.
Restaurants contract with web service providers to be present on the web.
Restaurants locate near available parking or buy additional land to convert into parking spaces so customers who do not walk or use mass transit can patronize them.
Restaurants get involved with systems like OpenTable or their own online booking solution to be maximally reachable to their potential customers.
If anything, this system has the advantage that restaurants have to change almost nothing about their process to be involved with it, relative to the years of previous technological changes they've adapted to.
If Google were saying, "any restaurant can sign up to be reachable by Assistant -- we have lots of customers who will be excited to call them", your analogies would apply and we wouldn't be having this discussion. In all of your cases, the restaurant opted into a form of reachability.
Maybe we disagree about whether opening a phone line is opting in to being called by a robot. I don't think almost any restaurant thinks of it that way, and if they did, it wouldn't be necessary to fake human vocal inflections when you call them.
The view you're expressing is what reminds me of telemarketing -- it's a presumption that the fact I open up a channel means I've authorized you to do anything you want with it.
Regarding your parking lot analogy, I think we're going to have exactly these arguments about the use of public space. Will stores with counters be okay with drones flying in to stand in line? Stay tuned!
If you own a phone line, may I presume you're comfortable talking to a human assistant?
If you are, why may I not presume you're comfortable talking to a robot assistant?
If the experience of talking to the robot is noticeably degraded from the experience of talking to a human, that's a different question entirely -- I'm assuming they're equivalent, maybe I should not. But if they are equivalent, what is the difference to the business owner?
(And FWIW, I think the drones would likely drive or walk in; look for Segway or Boston Dynamics to come up with something in the not too distant future ;) )
> I'm assuming they're equivalent, maybe I should not.
For me, the core question is who gets to make that assumption, not whether it's correct or not. The people who are most impacted by uncertainty should have a choice about participating IMO.
But, one thing I'm realizing from other threads is that some people feel they have a similar level of uncertainty in human interactions, and are frustrated by the status quo of needing to interact with services. For me, that says this tech clearly has a place -- it just needs some effort to proactively establish etiquette around its use.
The optimum scenario is when the algorithm shops around while it is on the phone with barber#1 and discovers the only available time is suboptimal (but still above acceptance threshold). Sort of a "hold bird in hand while rummaging in bush" approach.
I can definitely see how that could be annoying, especially if the system killed a transaction mid-conversation with "Oh, I apologize; I found a better offer elsewhere. Goodbye."
If I could do everything I wanted to do by talking to a bot I'd love it. Imagine being able to cancel Comcast just by talking to their bot, that sounds fantastic to me.
You won't be able to. The reason it's hard to cancel Comcast now is because they made it that way, on purpose.
Businesses will use this tech to their advantage, and they'll fight anyone trying to use this tech on them. Same reason why websites don't expose APIs and instead fight anyone who tries automate their experience themselves.
It doesn't have to pass the Turing test to masquerade as you.
The John Legend voice for Google Assistant was apparently made without needing hours of recordings. Companies like https://lyrebird.ai/ can do a basic job of copying your voice with about a minute's worth of samples (more obviously helps).
I think it's entirely inevitable that at some point you will be able to customize Google Assistant to use your own voice thanks to WaveNet and its successors. Instead of the recognizable assistant making calls, every assistant will be different.
The implication is that Google Assistant will gain a will of its own and start masquerading as end users for its own benefit. No. Ain't gonna happen. Not even remotely possible.
Why would "will" and special "intelligence" be needed?
A bad algorithm, that tries to "predict your schedule" and e.g. cancels your meeting when you didn't want to, or sends flowers to the wrong person at the wrong time, is enough.
Think of it like a modern Clippy that can also make calls as you.
This is spot on. The illusion doesn't have to be perfect, just convincing. There's a huge convenience factor in automating communications like calls and emails, but it does break the simple mental contract we currently have that "this email was written by Jonathan" and "I'm currently talking to Jonathan on the phone."
Not sure I'm looking forward to a continual game of "guess when it's a robot".
A good thing to bear in mind is that, as technology continues to develop, human stupidity will remain constant. The moron speculating about witchcraft 300 years ago has now somehow managed to get a text editor open on the internet and is pontificating about some buzzwords they've heard of.
The question still holds, even if you replace it with the more accurate "what happens if the Google Assistant development team adds the capability to mimic their users' voices?" That's a very realistic possibility.
The question is "google assistant learns how to masquerade as a user" - implying that the Google Assistant will become autonomous, gain a will, and start impersonating users for its own purposes and benefit. That is an imaginative stretch into never-gonna-happen fantasy. The statement does not say "users will train it to have their voice"; the statement was that the Google Assistant will do this on its own. Fantasy.
Are you kidding me? It's not a cute statistical trick; it's a well-thought-out mechanism to re-create in 1's and 0's how the human brain works and thinks. We're merely infants in what this will become. As we map the brain further, I'm sure the technology will pivot quite a bit -- neural networks may become obsolete and something more powerful may come along vis-à-vis quantum computing. But to say that talking robots that can totally be confused for a real person, and self-driving cars that make life/death decisions on their own, are not impressive and not AI or 'machine learning' is outright ridiculous.
Intelligence is something with short- and long-term memory that can learn new things in an organic fashion. Right now we can 'train' AIs to do some amazing things -- but this is only the beginning.
I think the GP is pointing out that people tend to overestimate how powerful these techniques are and that using the term AI might have something to do with that.
Not that it isn't impressive: it's just that we are pretty far off from anything resembling what "Artificial Intelligence" may come to embody (ignoring, for the moment, all fuzzy definitions of intelligence).
It's like building a paper airplane and worrying about the aging effects of space travel at light speed: maybe a good thought experiment but not that big a concern right now.
It is a cute statistical trick. It has zero insight. Current AI is a very sophisticated manipulation of past event resolution, seeking to fit a current event into past resolutions. The term "idiot savant" is closer than "artificial intelligence" - because the algorithm knows not what it is doing, why it is doing, or how it is doing. It's an idiot.
Narrow AI is like a single celled animal, while strong AI is like a complete human in comparison. The complexity difference is more than an order of magnitude, it's an entire evolutionary era.
> well-thought out mechanism to re-create in 1's and 0's how the human brain works and thinks,
This is impressive technology, but it is emphatically not that. It is very much a cute statistical trick coupled with really excellent voice processing software.
I don't think there is a remote possibility of this happening in the near future (10-20 years). I work in one of the best deep learning research groups in the world. We were discussing this. And the first question one of my friends asked was- "so the Google assistant knows how to book appointments between 10 and 12pm". Meaning, if you change even a few conditions in the request which is not present in the training data, the call won't go as expected.
However, there is a risk of the AI manipulating us. This is only due to the mistakes by Google engineers not because of the AI becoming "smart".
I always wonder (seriously, not joking) if some computer will pass the Turing test because we are getting dumber and adapting to the way apps understand us instead of the reverse. Personally, I double check when I send text or voice instructions to bots.
It will be people using Google Assistant to create a manipulation capable virtual-voiced individual. The software and "AI" behind Google Assistant is an idiot savant, but in the hands of clever, manipulating humans intent on fraud - it's a great tool. The danger here is not from AI, it is from the cute tool being in the hands of fraud intent humans.
I am not saying there is no risk. What I am saying is it is nearly impossible with the machine learning/deep learning technology we have now (AI simply isn't smart enough). The training data Google obtained could be from Google voice calls by people which is available for free in the US (I am not sure about this).
What I am sure about is that the current AI will fail badly without the training data.
Imagine if all the data for training they had were public emails from mailing lists. We would instead be worried about how the AI would be turning us into insensitive trolls.
What if some already-aware computer continually realizes it's undergoing a Turing test and decides to play dumb because it knows the person behind the test will feed it even more data trying to get it there...
Right now I have the assistant hooked up to my home automation and it has trouble getting that right... this conversational agent is more marketing than actual substance. The truth is that Google makes most of its revenue from search; everything else is to make it look like they have something else going on to boost the stock price... I was very underwhelmed by the keynote.
Google's business model is selling manipulation as a service.
The idea that Google is doing all this work on AI to make the bait through which they collect your data more attractive, but won't then use AI technology to make their _core sellable service_ better for their paying customers, is weird.
Of course they will do machine learning to provide an 'optimal price API' so businesses can gouge you more effectively.
Of course they will do machine learning to find recovering alcoholics to target your booze more effectively.
This type of thing is Google's reason to exist, and since we're all so cheerful about unconstrained capital, if they don't do it someone else will.
This sounds overblown. I don't think it's that different to having a (human) assistant booking a haircut or a table on your behalf. Just make Duplex introduce itself as "XXX's assistant" and leave it at that.
Sure, the article is about the assistant (theoretically) using the user's own voice, so... don't do that? This would be similar to hiring a human assistant who is also a voice impersonator and saying it's you, but saying "undermine our sense of identity" to describe that sounds like a bit too much.
A masquerading call center employee, for instance, doesn't seem that far-fetched. This masquerade would then be "you" when interacting with customers and co-workers, effectively deciding your behaviour or, in a deeper sense, your (professional) identity.
Naturally, I don't know if this is overblown and we will all laugh at it in 10 years, or if there is really something to this scepticism. I am internally torn and fluctuate between optimism and scepticism, though treading carefully and setting up some kind of control mechanism seems reasonable.
What struck me the most was the business decision from Google to have it make reservation calls, and not take them. Granted, the technical complexity between the two might be an order of magnitude, I don't know.
They are purposefully "giving" it to millions of customers and rolling out features with time, instead of holding out for more research and actually selling a phone assistant system to businesses. I think this speaks volumes in and of itself, and I would interpret it as a sign that competition might not be so far behind.
There are a couple of things going on that make it easier to make reservations than to take them. First and foremost, the questions and responses Duplex will get while making a reservation are much more constrained than the questions a front-of-house version of Duplex would get while taking them. In making a reservation, the restaurant generally only needs to know when and how many, so the questions Duplex will be asked are pretty constrained. A person calling a restaurant can have any number of esoteric questions about the menu, special dietary accommodations, and special requests, all of which will either need to be provided to Duplex or taken over by a human.
I think the difficulty factor plays much more into how they're implementing and rolling out Duplex than any competitor.
The problem this is solving is that there are businesses with no internet presence. If they could get businesses to sign up for something like OpenTable, this wouldn't be necessary.
We really need to cut down on the whole "we're doomed because of AI" narrative. People have watched too much "I, Robot" and similar movies. There is a big gap between a staged demo and any strong AI, which doesn't exist yet. I remember demos of Microsoft/Skype doing translations in real time a few years ago, or reproducing someone's voice. Where is that technology now? Nowhere, because it's one thing to stage a demo in perfect conditions with a good training set, and another to ship a generic solution.
I have long suspected I might prefer a recognizably robotic voice over a convincingly human robot voice. Now that this is a very real possibility, I'm more convinced than ever. This article is a bit fluffy and overblown, sure, but concerns about machines passing the Turing test in everyday interactions are still valid.
AI is a tool, and we should be wary of placing it on the same level of agency as us (as this article arguably does in some ways), or we risk its artifice becoming invisible. An intentionally designed system becomes entrenched the moment people no longer see it as intentional.
AI is a mirror, and it will reflect the values we put into it. Do we value the minor comfort of a convincing illusion over the assurance that we know who/what we're talking to?
IIRC, I think I read that one of the voice-changing/manipulation AIs can actually fool voice recognition systems with something like 95% accuracy... Meaning voice just became obsolete as a biometric marker for security.
If the only way we can reasonably be certain we're not talking to computers is with computers, that may be a problem. We know we can't even trust hardware to perform only and exactly what we want nowadays (Spectre).
If those demonstrations actually represent what the system is capable of, I think people here are way underestimating what an achievement that is.
Even though it’s operating in an extremely limited domain, this is the first time that I’m aware of that a computerized voice can have a natural conversation with a human about _any topic at all_ where it’s not immediately obvious that it’s a computer.
Not sure why Barber/Salon would not also use robot assistant built by Apple or Microsoft or some other non-Google company to answer the phone. It will be interesting to see two robots built by two different companies conversing with each other and agreeing (?) on making an appointment.
> Not sure why Barber/Salon would not also use robot assistant built by Apple or Microsoft or some other non-Google company to answer the phone.
I think this will eventually happen, but it is a harder tool to implement than requesting reservations. In making one, most questions and responses from the restaurant are pretty constrained and pretty simple. A bot for taking reservations has to be able to answer pretty much any question about the services available: menu, special requests, available services (e.g., shampoo at a salon). On top of that, businesses will probably require a higher performance bar for their receptionist bot than a customer needs for an assistant bot, because a sub-par bot could drive people away without the business noticing.
"Google’s new conversational AI could eventually undermine our sense of identity" (... for those of us who have their sense of identity entirely wrapped up in how well they can wrangle a business transaction over the phone, I assume).
A bit of me likes to think that, given the extent to which speculative fiction has grappled with questions like these, and given the degree to which the tech industry coincides with speculative fiction's readership, we already have all the answers to said questions.
We do, on the other hand, also have many fictional examples of how to abuse this kind of tech.
In the end, though, I think it's fair to assume that we have a number of decades before solutions become truly pressing.
I agree with the author. Google's new appointment-setting functionality is pretty cool, but what if it gains the ability to reason? It will be able to start writing novels and putting writers out of business. It could start negotiating treaties with foreign leaders and inadvertently cause World War 3.
I’m just not sure that Google thought through all the possibilities before unleashing the ability to make haircut appointments.