Ex Machina Hidden Easter Egg and Interview with Murray Shanahan
I recently did a full-tilt explanation and walkthrough of the movie Ex Machina. While researching the ins and outs of the movie a month or two ago, I came across a post on Reddit that talked through a pretty cool Easter Egg the writers inserted into the movie.
If you’ve seen the movie, you’ll remember the scene where Caleb hacks into the security system and rewrites a few sections of code so that the doors unlock the next time Ava triggers an overload of the power system. Yes? Well, during that scene, the code is prominently displayed on the screen for all to see. Here is the Python code that shows on the screen:
#BlueBook code decryption

import sys

def sieve(n):
    x = [1] * n
    x[1] = 0
    for i in range(2, n/2):
        j = 2 * i
        while j < n:
            x[j] = 0
            j = j + i
    return x

def prime(n, x):
    i = 1
    j = 1
    while j <= n:
        if x[i] == 1:
            j = j + 1
        i = i + 1
    return i - 1

x = sieve(10000)
code = [1206, 301, 384, 5]
key = [1, 1, 2, 2]
sys.stdout.write("".join(chr(i) for i in [73, 83, 66, 78, 32, 61, 32]))
for i in range(0, 4):
    sys.stdout.write(str(prime(code[i], x) - key[i]))
print
So what, right? Except that when you run this Python you get the following:
ISBN = 9780199226559
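(A quick aside: the on-screen script is Python 2 — note the integer division in range(2, n/2) and the bare print statement — so it won’t run as-is under Python 3. The scheme itself is simple: sieve builds a Sieve of Eratosthenes, prime(n, x) looks up the n-th prime in it, and each chunk of the ISBN is the code[i]-th prime minus key[i]. If you want to poke at it yourself, here is a minimal Python 3 sketch of the same idea; the nth_prime helper is my own stand-in for the movie’s sieve-based lookup, not anything from the film.)

# A compact Python 3 sketch of the decryption scheme from the movie.
# nth_prime is a hypothetical stand-in for the on-screen sieve/prime pair.
def nth_prime(n):
    # Return the n-th prime, counting from nth_prime(1) == 2.
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

code = [1206, 301, 384, 5]
key = [1, 1, 2, 2]
# Each chunk of the ISBN is the code[i]-th prime minus key[i].
print("ISBN = " + "".join(str(nth_prime(c) - k) for c, k in zip(code, key)))
# prints: ISBN = 9780199226559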
Ok, ok – now we are getting somewhere. What, pray tell, is ISBN 9780199226559? Glad you asked. Here, let me show you. It’s a book entitled “Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds” by Murray Shanahan. Eh? What is this book about, you want to know? Hahah, me too. Ultimately, it’s about artificial intelligence and a robot’s capacity to learn, think, and become self-aware. So I thought, hrm, I wonder if Mr. Shanahan would be kind enough to explain how he got attached to such an awesome movie? So I contacted him, and he was kind enough to answer a few of my fairly ignorant questions about the movie, his book, and his work consulting on it.
Taylor – “Did you know in advance that Alex Garland was going to embed an Easter Egg in the movie that linked to your book?”
Murray – “First, a bit of background. Maybe you’re not aware that I was one of the movie’s scientific advisors. Alex emailed me back in 2013 to say he was working on a movie about consciousness and AI. He told me he had read my book “Embodiment and the Inner Life” and it had helped crystallize some of his ideas. He asked me to look at the script to see if I felt it hung together from the standpoint of someone working in the field, which it certainly did. (Alex has a brilliant, intuitive grasp of the relevant philosophical and technical issues.)”
Taylor – To you all reading this, I have to say, I had no idea he was involved with the making of the movie; I just followed the breadcrumbs that led me in his direction. What do I know!? Hahah. Since the movie had only just been released when I started chatting with him, I figured I would ask if he’d had the chance to see it yet.
Murray – “Much later, he [Alex] invited me along to a post-production facility in Soho to see some of the footage. That’s when he invited me to write some code to go into the window Caleb was typing in, suggesting I make it some hidden allusion to my book. So I wrote the fragment of Python code you see in the movie. That’s how the Easter Egg came about.”
Taylor – Hahah, no way, so Murray is the one who wrote the code himself. How cool is that?!? Murray also provided me with a link to an awesome interview with Alex Garland that touches on Murray’s involvement in the movie – you can find it here if you are interested: http://www.wired.com/2015/04/alex-garland-ex-machina/
Taylor – So that you guys are all aware, Murray Shanahan is a professor at Imperial College London in England, roughly the UK’s equivalent of MIT. His title there is Professor of Cognitive Robotics… and being utterly clueless as to what that even meant, I figured I’d ask.
Murray – “I want to understand how human brain function can be used in the field of AI. We are not really anywhere near the vision of Asimov’s robots that inspired me as a kid, so I decided we needed to understand the brain better and started working more and more on neural networks. At the same time, I became very much interested in the possibility of AI having consciousness. The latter is now a respectable and well-established area of scientific study.
“If you want a job in robotics and AI, a degree in a scientific discipline is essential. You should study computer science, maths, physics or even neuroscience. Everybody should also learn programming. If you are graduating soon, it is a really good time to be entering this industry. AI is a hot field at the moment in terms of career prospects. There is a lot of industrial interest, particularly in machine learning and computer vision. Those are two areas of AI which have really taken off. There are also lots of AI startups, filling a gap in the market by selling expertise in the field to large corporations.”
Taylor – “If you were invited to come investigate a potentially cognizant A.I., how would you go about conducting such a test?” In response, Murray pointed me to a longer, very interesting discussion he had taken part in over on Edge.org, and I thought it was worth including large swaths of it here.
Murray – “Just suppose we could endow a machine with human-level intelligence, that is to say with the capacity to match a typical human being in every (or almost every) sphere of intellectual endeavour, and perhaps to surpass every human being in a few. Would such a machine necessarily be conscious? This is an important question, because an affirmative answer would bring us up short. How would we treat such a thing if we built it? Would it be capable of suffering or joy? Would it deserve the same rights as a human being? Should we bring machine consciousness into the world at all?
“The question of whether a human-level AI would necessarily be conscious is also a difficult one. One source of difficulty is the fact that multiple attributes are associated with consciousness in humans and other animals. All animals exhibit a sense of purpose. All (awake) animals are, to a greater or lesser extent, aware of the world they inhabit and the objects it contains. All animals, to some degree or other, manifest cognitive integration, which is to say they can bring all their psychological resources to bear on the ongoing situation in pursuit of their goals—perceptions, memories, and skills. In this respect, every animal displays a kind of unity, a kind of selfhood. Some animals, including humans, are also aware of themselves, of their bodies and of the flow of their thoughts. Finally, most, if not all, animals are capable of suffering, and some are capable of empathy with the suffering of others.
“In (healthy) humans all these attributes come together, as a package. But in an AI they can potentially be separated. So our question must be refined. Which, if any, of the attributes we associate with consciousness in humans is a necessary accompaniment to human-level intelligence? Well, each of the attributes listed (and the list is surely not exhaustive) deserves a lengthy treatment of its own. So let me pick just two, namely awareness of the world and the capacity for suffering. Awareness of the world, I would argue, is indeed a necessary attribute of human-level intelligence.
“Surely nothing would count as having human-level intelligence unless it possessed language, and the chief use of human language is to talk about the world. In this sense, intelligence is bound up with what philosophers call intentionality. Moreover, language is a social phenomenon, and a primary use of language within a group of people is to talk about the things that they can all perceive (such as this tool or that piece of wood), or have perceived (yesterday’s piece of wood), or might perceive (tomorrow’s piece of wood, maybe). In short, language is grounded in awareness of the world. In an embodied creature or a robot, such an awareness would be evident from its interactions with the environment (avoiding obstacles, picking things up, and so on). But we might widen the conception to include a distributed, disembodied artificial intelligence if it was equipped with suitable sensors.
“To convincingly count as a facet of consciousness, this sort of worldly awareness would perhaps have to go hand-in-hand with a manifest sense of purpose, and a degree of cognitive integration. So perhaps this trio of attributes will come as a package even in an AI. But let’s put that question to one side for a moment and get back to the capacity for suffering and joy. Unlike worldly awareness, there is no obvious reason to suppose that human-level intelligence necessitates this attribute, even though it is intimately associated with consciousness in humans. It seems easy to imagine a machine cleverly carrying out the full range of tasks that require intellect in humans, coldly and without feeling. Such a machine would lack the attribute of consciousness that counts most when it comes to according rights. As Jeremy Bentham noted, when considering how to treat non-human animals, the question is not whether they can reason or talk, but whether they can suffer.
“There is no suggestion here that a “mere” machine could never have the capacity for suffering or joy, that there is something special about biology in this respect. The point, rather, is that the capacity for suffering and joy can be dissociated from other psychological attributes that are bundled together in human consciousness. But let’s examine this apparent dissociation more closely. I already mooted the idea that worldly awareness might go hand-in-hand with a manifest sense of purpose. An animal’s awareness of the world, of what it affords for good or ill (in J.J. Gibson’s terms), subserves its needs. An animal shows an awareness of a predator by moving away from it, and an awareness of a potential prey by moving towards it. Against the backdrop of a set of goals and needs, an animal’s behaviour makes sense. And against such a backdrop, an animal can be thwarted, its goals unattained and its needs unfulfilled. Surely this is the basis for one aspect of suffering.
“What of human-level artificial intelligence? Wouldn’t a human-level AI necessarily have a complex set of goals? Wouldn’t it be possible to frustrate its every attempt to achieve its goals, to thwart it at every turn? Under those harsh conditions, would it be proper to say that the AI was suffering, even though its constitution might make it immune from the sort of pain or physical discomfort humans can know?
“Here the combination of imagination and intuition runs up against its limits. I suspect we will not find out how to answer this question until confronted with the real thing. Only when more sophisticated AI is a familiar part of our lives will our language games adjust to such alien beings. But of course, by that time, it may be too late to change our minds about whether they should be brought into the world. For better or worse, they will already be here.”
Taylor – So I think we now all understand why Murray works at Imperial College and we do not. Hahaha. Thanks so much, Murray, for taking the time to chat with us a bit about Ex Machina and the concepts behind it all.
Edited by CY