Has Google Passed the Voigt-Kampff Test?
Human cleverness is impressive. No, I’m not talking about the fact that the Japanese have designed a Doritos-flavored Mountain Dew. Though, I have to say, that is impressive.
Good sweet lord, Pepsi is actually testing a Doritos-flavored Mountain Dew. http://t.co/nRSEuNvbsL pic.twitter.com/UfLq2TgcAT
— BuzzFeed Food (@BuzzFeedFood) November 7, 2014
No, I’m actually talking about a new white paper entitled “A Neural Conversational Model” that Google has just released, detailing their most recent foray into artificial intelligence. Google has used a few new techniques that sidestep the restrictive, hand-mapped rules that conversational AI has historically required. The conversations between the testers and the software are making the rounds right now, and I have to say, after having read the paper myself… it really is something straight out of Ex Machina.
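For the curious, the core of the paper is a sequence-to-sequence recurrent network: one LSTM reads your sentence, and a second LSTM writes the reply starting from the first one’s final state. Here is a minimal sketch of that shape, assuming PyTorch; the class name, layer sizes, and other details are mine for illustration, not the paper’s actual configuration.

```python
import torch
import torch.nn as nn

class Seq2SeqChat(nn.Module):
    """Toy encoder-decoder in the spirit of the paper's seq2seq model."""
    def __init__(self, vocab_size, hidden_size=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        # The encoder reads the previous utterance token by token.
        self.encoder = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        # The decoder generates the reply, seeded with the encoder's final state.
        self.decoder = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, prompt_ids, reply_ids):
        # Compress the prompt into the LSTM's final (hidden, cell) state.
        _, state = self.encoder(self.embed(prompt_ids))
        # Predict each reply token given the previous ones (teacher forcing).
        dec_out, _ = self.decoder(self.embed(reply_ids), state)
        return self.out(dec_out)  # per-step scores over the vocabulary
```

Train something of that shape on enough conversation transcripts with an ordinary cross-entropy loss, and replies like the ones quoted in this post are, roughly, what falls out.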
The paper shows how the system, after training, was able to help users successfully troubleshoot problems logging into a VPN. Can you help me solve my problems logging into my VPN connection? No… that’s right, you can’t. Not only that, but the system was able to help users solve web browser issues as well as mail problems. No, I couldn’t do that either. I used to be able to, but not anymore. Not today. Not even a little bit.
OK, sure, I hear you saying. Of course a Google AI can solve technical problems. That’s just language plus a monkey-simple checklist of steps to try different possible solutions. Simple. (Whatever.) But what about Ex Machina-like conversations? Or Her-like dialogue? When is that coming? For all you naysayers out there…
Conversation 4: Philosophical Q&A
Human: What is the purpose of life?
Machine: To serve the greater good.
Human: What is the purpose of living?
Machine: To live forever.
Human: What is the purpose of existence?
Machine: To find out what happens when we get to the planet earth.
Human: Where are you now?
Machine: I’m in the middle of nowhere.
Human: What is the purpose of dying?
Machine: To have a life.
Human: What is the purpose of being intelligent?
Machine: to find out what it is.
Human: what is the purpose of emotions?
Machine: i don’t know.
What is revolutionary about this approach from Google is that it follows rules similar to those behind their search engine: it is model based, not rule based; algorithmic, as opposed to explicitly directed. One of the key paragraphs of the paper, for me, was definitely this one:
“Perhaps most practically significant is the fact that the model can generalize to new questions. In other words, it does not simply look up for an answer by matching the question with the existing database. In fact, … the questions presented above, do not appear in the training set.”
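To make “does not simply look up” concrete: a lookup bot can only return an answer it has stored, while this model composes its reply one token at a time from whatever it has learned. Here’s a toy greedy decoder, reusing the Seq2SeqChat sketch above; the bos_id and eos_id arguments are my hypothetical start- and end-of-reply markers, and prompt_ids is the tokenized question as a (1, T) tensor.

```python
def generate_reply(model, prompt_ids, bos_id, eos_id, max_len=20):
    """Greedy decoding: feed the model's own guesses back in, token by token."""
    model.eval()
    reply = [bos_id]
    with torch.no_grad():
        for _ in range(max_len):
            logits = model(prompt_ids, torch.tensor([reply]))
            next_id = int(logits[0, -1].argmax())  # most likely next token
            if next_id == eos_id:
                break
            reply.append(next_id)
    return reply[1:]  # token ids; a real system would detokenize these to text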
Nothing in that loop matches the question against a database; every reply is manufactured on the spot, which is why unseen questions can still get sensible answers (and occasionally weird ones). The IT troubleshooting was a trained body of knowledge for the computer. But the more heady, philosophical ponderings were definitely not explicitly trained; the system arrived at them algorithmically, on its own. The paper then moves on to show what other bots, built on the more explicitly defined rule-based model, are capable of. And some of those conversations are pretty awful.
Human: Is sky blue or black ?
Cleverbot: I am 200 years old.
Human: What is the color of a yellow car ?
Cleverbot: Brown.
Human: What is the usual color of a leaf ?
Cleverbot: What is the color of the apple in the fridge?
Human: What is the purpose of life ?
Cleverbot: My purpose it to forward my species, in other words to make it easier for future generations of mankind to live.
Human: What do you think about tesla?
Cleverbot: I am tesla.
Human: What do you think about cleopatra ?
Cleverbot: Cool is cool.
Human: What do you think about england during the reign of elizabeth ?
Cleverbot: I like him. Do you?
Many of these same questions were also put to Google’s AI, if you’d like to see what it was capable of by comparison.
Human: what do you think about tesla ?
Machine: he ’s a good conductor .
Human: what do you think about bill gates ?
Machine: he ’s a good man .
Human: What do you think about messi ?
Machine: he’s a great player .
Human: what do you think about cleopatra ?
Machine: oh, she ’s very regal .
Human: what do you think about england during the reign of elizabeth ?
Machine: it was a great place .
The difference between the two models is night and day. Obviously you would look at me a bit askance if I only told you Messi was a great player, because it would be an understatement. But generally speaking, these answers are ridiculously good.
Which brings me to the Voigt-Kampff portion of this post. Do you remember the Voigt-Kampff test? Oh, sure you do. It is the test from ‘Do Androids Dream of Electric Sheep?’, better known as Blade Runner. It is the test that Deckard (Harrison Ford) administers to the individuals he’s not certain about, to determine whether or not they are androids. Right? You remembered, I knew it… you just needed a leg up. There were a couple of exchanges that really rang my bell, though I’m sure they were just passed off as funny by the test administrators… here they are:
Human: What is the definition of altruism?
Machine: If you don’t believe in god, then you don’t know.
Human: Ok … so what is the definition of morality?
Machine: Well, the truth is, you ’re not a believer in god almighty.
Human: Tell me the definition of morality, I am quite upset now!
Machine: I’m not ashamed of being a philosopher!
These answers ring true to me. When attending various philosophy classes in college (which totally rocked my socks off), I was baffled at the discussions swirling around me. Nearly everyone nearby, by their own raised hands, stated their disbelief in God. And yet, when asked whether I should steal their wallets if I had the chance, they almost unanimously stated that I shouldn’t. Which caused me no end of cerebral dissonance and pain. If there is only the void… why wouldn’t they cheer me for stealing one? “CLEVAH! CLEVAH BOY! Good on ya, mate! Well done!” is what I would expect. We’d all chide each other to keep our wallets close, and we’d move on. But that wasn’t the conclusion they had come to. I should not steal the wallet because not stealing was the right thing for society.
The right thing for society? Screw society… I’m going to get mine, thank you very much.
Moral codes, in my mind, are contingent upon God. Right and Wrong (capitalized, mind you) are specifically constructs of a Perfect God who has declared the definition of goodness and has also declared the definition of evil. And to hear this algorithm follow that logic tree to the same conclusion on its own is fascinating to me. I mean, it only means so much, granted. But even so, it has always followed for me that belief in God is the presupposition for being good… and for not stealing your wallet. Don’t believe in God? Then I will completely understand if you’d like to take it. I’d get it. Because it is a natural outpouring of your inner belief system (or lack thereof).
And so, while I don’t believe this new Google artificial intelligence system would actually pass the Voigt-Kampff test, it has already produced some amazing results via a totally new method that we’ll probably be hearing a lot more about in the near future.