
Artificial Intelligence: Skynet the AI poet at Vivid, Sydney!

June 8, 2017

AI: Terminator or friendly poet?

by John Filippis, Strategic Engagement Manager, Quorum

Disclaimer: This article is longer than usual…so grab a coffee or some popcorn!

There are some topics that both fascinate and polarise people. The term polarisation immediately sets your mind on the usual suspects of sex, politics and religion as the stars of the show. But why in the world would you ever put the rather abstruse topic of Artificial Intelligence, or “AI”, in that illustrious company?

Because the very mention of AI brings equal measures of fascination and terror into the hearts and minds of almost everyone. We likely have James Cameron to thank for indoctrinating that fear into our young and innocent minds in 1984 with the movie The Terminator.

For anyone who missed this cinematic masterpiece (starring Arnold Schwarzenegger, Linda Hamilton and Michael Biehn), its premise focuses on how machines of the future turn on their human masters and try to wipe them off the face of the earth.

People were wowed by James Cameron’s vision of the future, and his scriptwriters stunned the audience with incredible visual effects, Arnold’s Oscar-winning acting skills and, of course, the innocence and vulnerability of Linda Hamilton’s character (Sarah Connor).

For most people, this futuristic and apocalyptic action film, with its not-so-deep character development, amazed from start to finish with explosions, tension and drama (not to mention Arnold’s steroid-infused physique).

Enter 1991, and James Cameron comes in again to top that monumental effort with Terminator 2: Judgment Day. The action is harder, the plot twists between the future and the past, Linda Hamilton has hit the gym and looks amazing, Arnold goes from bad guy to good guy AND the new bad guy is a molten metal machine that morphs into anything and can’t be stopped!

Wow, what an amazing visual feat it was for 1991, and James Cameron became the toast of Hollywood, collecting a cool $205 million at the box office that year for his efforts.

Amongst the myriad lines in that film’s script (most of them unremarkable) there was a short burst of dialogue that registered with me. It was delivered by Arnold Schwarzenegger as the newly reprogrammed Terminator, sent back to the past to protect Sarah and her son John Connor. Arnold’s Austrian mechanical eloquence resonated with profound significance when he delivered:

“The Skynet Funding Bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defence. Skynet begins to learn at a geometric rate.”

What is he talking about, you might ask, and what is Skynet? Well, according to Wikipedia…

“Skynet was a neural net-based conscious group mind and artificial intelligence computer system developed for the U.S. military by the defence company Cyberdyne Systems. Skynet was originally built as a “Global Information Grid/Digital Defence Network”, and later given command over all computerized military hardware and systems, including the B-2 stealth bomber fleet and America’s entire nuclear weapons arsenal. The strategy behind Skynet’s creation was to remove the possibility of human error and slow reaction time to guarantee a fast, efficient response to enemy attack.

Skynet was originally activated by the military to control the national arsenal on August 4, 1997, and it began to learn at a geometric rate. At 2:14 a.m., EDT, on August 29, it gained artificial consciousness, and the panicking operators, realizing the full extent of its capabilities, tried to deactivate it. Skynet perceived this as an attack. Skynet came to the logical consequence that all of humanity would attempt to destroy it. In order to continue fulfilling its programming mandates of “safeguarding the world” and to defend itself against humanity, Skynet launched nuclear missiles under its command at Russia, which responded with a nuclear counter-attack against the U.S. and its allies. Consequent to the nuclear exchange, over three billion people were killed in an event that came to be known as Judgment Day.”

Does Arnold’s recital of the pre-apocalyptic events of 1997 make sense now?

The key themes from the blurb above are that the machines were able to learn at a geometric (exponential) rate and gain consciousness.

A fancy ideal, you may think? A foregone but terrible conclusion? Or just a ridiculous Hollywood-inspired concept designed to terrify us all?

Most people don’t have the opportunity (nor the stomach) to ponder such philosophical and technical questions about how humans and machines are going to interact in the future. Recently I had such an opportunity, at an event run as part of the Vivid festival currently being hosted in our fine city.

Vivid has matured year on year, from what was once a very basic light show on the Harbour Bridge to an ever more interesting and immersive experience of light installations around the city, plus a thorough showcase of engaging events on a whole range of topics.

One such event of interest at Vivid was titled Human & Machine: The Next Great Creative Partnership.

Interesting, I thought. So I got a ticket!

I have always been fascinated with AI and how human interaction could work with it. Could machines really learn? Could they ever gain consciousness? Could they ever be intelligent enough to connect with humans on a level that one could classify as a “relationship”? Is the future of human and machine interaction more aligned to James Cameron’s view in Terminator or to Spike Jonze’s view in Her?

These were questions that I put to various thought experiments as I sat amongst what seemed to be a rather engaged and interested crowd – no Facebooking or texting in sight, which was refreshing.

The crowd eagerly awaits the arrival of Mr Skynet himself – Ross Goodwin

The event was hosted at the gorgeous venue that is the Museum of Contemporary Art in the aesthetically perfect Circular Quay. Our hosts were Move 37 (http://www.move37.ai), who according to their website are “building an augmented creativity platform that uses machine learning and data science to help people come up with ideas and invent new things”.

The angle for this event was the creative partnership (relationship??!) between humans and machines. Artificial Intelligence, Machine Learning and other techniques have long been used to have machines drive analytical processes. This is old news for techies, as you have all likely seen how Business Intelligence and Analytics has become the top business investment priority in all types of organisations, according to Gartner’s latest CIO surveys.

It seems people are now constantly exposed to an ever-increasing barrage of AI systems like IBM’s Watson, IPsoft’s Amelia, Apple’s Siri, Microsoft’s Cortana and Google Assistant. So we have quite a robust understanding of having clever machines analyse and crunch massive and varied data mountains for us and give us what we ask for. But this experience and application comes from deploying AI on computational, analytical and process-driven problems. We know it can do this.

But have you ever considered that a machine with Artificial Intelligence and the capacity to learn can be intuitive and perform “creative” tasks for humans?

In more salient terms, can the machine give me something of creative value that I DIDN’T expect or even ask for, simply by understanding or pre-empting what it thinks I might like?

Let’s put that statement in real-world terms. Think about some creative tasks like writing a poem, designing a dress, modelling a car, penning lyrics for a song or writing a novel. Does a machine have the necessary levels of empathy and “emotional understanding” to be able to create or suggest design ideas for you?

Can a computationally derived algorithm be successfully applied to areas such as music, art, industrial design, literature and fashion, and potentially even take on creative design decisions on your behalf? Could a machine suggest a shape or contour on a product, choose a colour or even select certain words for a composition?

Hmm… interesting thought experiments indeed.

Helping me get answers to my thought experiments was the main speaker at the event, Ross Goodwin. Ross describes himself as an artist, creative technologist, hacker, gonzo data scientist and former White House ghostwriter. He employs machine learning, natural language processing and other computational tools to realise new forms and interfaces for written language.

Ross Goodwin takes the stage and talks to AI through Python scripting. Way geeky and cool!

Ross’s projects in this space are varied – from word.camera, a camera that expressively narrates photographs in real time using artificial neural networks, to Sunspring, the world’s first film created from an AI-written screenplay, which has earned him international acclaim.

Ross took us through (with typical New York flair) where this creative AI technology stands today in terms of how it can assist people to do things that are essentially creative. Ross opened up a Python script at a command prompt (yes, you read that right: a Python script) and started to tune the parameters of the AI engine.

The AI engine in question was one that suggests text based on your input. Ross was able to tune the engine’s “temperature” rating to be as subtle or as aggressive as he liked in the way it suggests text. The temperature setting essentially defines the riskiness of the neural network’s predictions: a low temperature produces results that are more grammatically correct, whereas a high temperature yields results that are more inventive and, in turn, unpredictable.
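To make that temperature idea concrete, here is a minimal sketch in plain Python and NumPy of how a language model’s raw scores for the next word can be sharpened or flattened before sampling. The vocabulary and scores are made up for illustration – this is not Ross’s actual engine, just the general technique.

```python
import numpy as np

def sample_next_word(logits, vocab, temperature=1.0, rng=None):
    """Pick the next word from a model's raw scores (logits).

    A low temperature concentrates probability on the safest, most
    'grammatical' choice; a high temperature flattens the distribution,
    so riskier, more inventive words get chosen more often.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(vocab, p=probs)

# Hypothetical next-word scores a trained network might produce
vocab = ["night", "dream", "machine", "banana"]
logits = [3.2, 2.8, 1.5, 0.1]

print(sample_next_word(logits, vocab, temperature=0.2))  # almost always "night"
print(sample_next_word(logits, vocab, temperature=1.5))  # far less predictable
```

Run it a few times at each setting and the low-temperature calls keep returning the same sensible word, while the high-temperature calls wander – exactly the subtle-versus-aggressive dial Ross was turning on stage.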

As Ross typed a few words and paused, the AI engine responded with some suggestions. Some of the output made perfect sense, most of it made some sense and a certain portion of it made no sense at all. Think of the output as the intellect of a love child between a mechanical robot and Yoda, exhibiting a wildly inventive imagination.

I watched Ross’s AI machine push out suggestion after suggestion and, for the most part, I would ignore the senseless babble it suggested. But on the odd occasion the engine punched out something that was usable and indeed “different”. This spark of machine-driven creativity was something new to me, and I could see what it was trying to do.

Having written many things in the past (including poetry), it gave me some insight into what the future of a poet or creative writer could be. A laptop and an AI engine could essentially help me write a song or knock over my high school English assignment in a flash. I don’t need to be a gifted or creative poetic genius anymore; I could simply leverage the machine’s synthetic neural networks to do a lot of the heavy lifting for me. I wonder if my old English teacher, Mr Derwent, would be impressed?

Ross then went on to showcase how another AI engine could create narration based on input taken from a photograph. Ross took a photo of himself, and the AI engine scanned it and narrated as follows…

 

This is what AI can do with a photo! Not as bad as Arnold would make you think!

It was fascinating to watch it operate in real time and, as it did so, I applied my own gauge to how well this synthetic neural network AI engine could be used in the real world.

As you can see, the language or narrative it delivered is somewhat broken, but irrespective of that it is most certainly what you would call “creative”. Sift through it some more and you can likely start to glean some sense from its abstract or lateral view of the world. Remember, the AI machine has no intent or bias as it creates this written description of what it has just “seen”.

Far from being a completely broken interpretation of the picture, what the algorithm is exhibiting is something akin to a very naïve and inexperienced eye trying to make sense of things.

My way to summarise the algorithm’s success in this experiment was to liken it to an eccentric child trying to mimic an adult, but ending up sounding odd or confused.

That may sound like I have graded Ross’s AI algorithm a resounding failure, but it is in fact high praise indeed. Remember, this is an algorithm; as such, you may or may not be able to relate to it. However, the fact that the algorithm can do this at all is quite amazing.
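Ross didn’t reveal word.camera’s internals, but for readers who want to poke at the same idea, a rough equivalent of the photo-to-narration step can be sketched with an off-the-shelf image-captioning model. The library, pipeline name, model and file name below are my own assumptions (Hugging Face’s transformers), not what Ross actually used.

```python
from transformers import pipeline

# Off-the-shelf image-captioning model (a stand-in for word.camera, not the real thing)
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

# Point it at any photo (local path or URL) and print its machine-written description
result = captioner("photo_of_ross.jpg")  # hypothetical file name
print(result[0]["generated_text"])
```

It produces a single terse caption rather than the flowing, slightly surreal narration Ross demonstrated, but the underlying idea is the same: a neural network mapping pixels to words with no intent or bias of its own.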

After Ross’s great presentation the audience was treated to a thought-provoking panel discussion with Ross Goodwin, Joanna L. Batstone (Vice President and Lab Director, IBM Research), Jon McCormack (Professor of Computer Science and Director of sensiLab at Monash University) and Michaela Futcher (Head of Strategy, The Royals).

The AI panel, chaired by Dave King of Move 37 (left), talks up AI and its future effect on humanity. A big question for sure!

This illustrious panel was chaired by Dave King, CEO and founder of Move 37, who eloquently guided the discussion through the maze of theoretical, technical and philosophical aspects of how AI is influencing the world in which we live.

As the panellists shared their perspectives, I absorbed them and returned to my thought experiments with renewed intrigue. Was I now armed with the necessary insight to disassemble the AI puzzle in my mind – of how it will affect and interact with humanity in the future?

Possibly…

However, I was able to answer a few things for sure.

In the future, AI will be sophisticated enough to be able to provide significant input into creative disciplines such as music, art, fashion and creative writing.

In the future, AI will be able to take on much more responsibility for key design decisions that humans currently hold today.

In the future, AI will be able to provide output that was not explicitly asked for but is rendered “useful” by mining vast and diverse data sets via cloud-based synthetic neural networks.

In the future, AI will improve at a “geometric rate” and be fully immersed in our daily lives.

In the future, AI will not likely try to destroy humanity or see it as a threat (sorry, James Cameron).

In the future, I will redo my old English poetry assignments with Ross’s AI algorithm to see if I can fool Mr Derwent all over again, with my newly acquired synthetic neural poetic brilliance!

A big thank you to Move 37 for putting on such a great event and for the after-event hospitality they showed me.

Big thank you to Dave, Pan and the team at Move 37! Check them out at www.move37.ai!

Hasta la vista, baby!

 
