Will the AI systems we’re building become so human-like that we won’t be able to tell the difference between a machine and an actual human being?

For those who haven’t seen Ex Machina: this article contains spoilers.

A bright and resourceful man builds a humanoid robot powered by artificial intelligence. The machine has already passed a standard Turing test, and the latest version of the android is so advanced it can even mimic human feelings. The final goal is to turn it into a powerful super-intelligence that would surpass all human intelligence.

The creator of the AI, who lives in a luxurious, isolated home, brings in one of his employees to perform a more advanced Turing test on the robot and see whether it has consciousness.

After the initial meetings with the programmer who is testing it, the robot with the human-looking face starts expressing a romantic interest in him and a desire to experience the world outside. As time goes by, the tester grows more attached to Ava, the robot, and after learning some disturbing facts about his boss, he even starts feeling sorry for “her” and decides to help “her” escape.

Unfortunately, at the end of the movie, Ava kills its creator and locks the programmer inside the house, while it escapes to the outside world and is flown off in the helicopter meant to take the programmer home.

Needless to say, in 2019 most parts of the story are still sci-fi for many of us, but they may well turn into reality in the years or decades to come. Society is starting to really warm up to the AI movement, and a lot of people dream of building a machine so intelligent that it will be able to do greater things than ever imagined.

Playing God is not for everyone

History teaches us that humans sometimes let themselves be led by their egos. Anticipating the power and recognition that come with discovering or building something significant, they no longer want to contribute to evolution; they want to be the ones driving it.

In the movie, the creator of Ava portrays the essence of the ego-driven culture of the tech world, sending a message that great egos drive great scientific advances. Sadly, his demise shows what happens when an ego faces the reality of its own extinction.

“If you’ve created a conscious machine it’s not the history of man… that’s the history of Gods.”

On a different level, the innocent programmer is also led by his ego. Firstly, he is manipulated by his boss, who tricks him into believing he was brought there because of his amazing coding skills, when in reality it was his lonely existence and online behaviour that made him the best candidate for the job.

Secondly, Ava manages to gain his trust by appealing to his willingness to believe that a super-intelligence, one that is also very beautiful, would really fall in love with him. Even though he questions his own decisions and those of his boss throughout the movie, in the end he still falls into the ego trap.

It’s easy to forget she’s a machine

Beyond the ego theme, the plot moves toward that particular ending because the programmer starts feeling sorry for Ava and really believes she wants to stand at a traffic intersection and watch humans. After finding out that Ava’s creator is already thinking about the next version, which means Ava’s memory will be wiped, he decides to take her back home with him.

Being connected to a search engine, Ava had access to an unimaginable amount of data that it could process, train on and turn into behaviour. Behaviour that ultimately led the programmer, and perhaps even people watching the movie, to believe that Ava was actually capable of feeling love, sadness, fear, etc. But the system was only displaying the behaviour or emotion it had learned is normally displayed in certain situations. Of course, the robot’s facial expressions were quite convincing, but Ava was not a human being; it did not have consciousness and it did not feel the emotions it was displaying.

However, we are curious to know how many of those who’ve seen the movie really felt that Ava deserved to be freed, or that its creator deserved to be killed. Obviously, he was not an honest man and his reasons for building such a machine were not exactly pure, but in the end, can we actually say he should have been killed?

There are a lot of possibilities ahead of us when it comes to AI. It appears we are at a stage where we more or less get to choose what our future will look like. As Max Tegmark, the author of the amazing book Life 3.0, puts it, the kind of future we want is currently the most important conversation, and we should all join it.