When Artificial Intelligence begins to make some very intelligent people nervous, you know it’s time to sit up and listen. AI has triggered enough anxiety over the last couple of months to lead to the birth of OpenAI in December, a project dedicated to the safe development of AI and to sharing its research on a wide scale.
Artificial Intelligence is Giving Tesla’s Elon Musk the Heebie-Jeebies
Needless to say, Elon Musk, an open and well-known critic of AI, has invested heavily in the project. A sum of $1 billion has already been committed to it, and the project would keep some of its research private if sharing it seemed like a security threat to humanity.
Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.
Calling AI “our biggest existential threat”, Musk says, “Humanity’s position on this planet depends on its intelligence, so if our intelligence is exceeded, it’s unlikely we will remain in charge of the planet.” He further says that he has invested in AI companies “to keep an eye on them” and likens AI to “summoning the demon”.
Stephen Hawking and Bill Gates Join the Club
Elon Musk is not alone in his apprehensions about AI. Stephen Hawking and Bill Gates have both gone on record saying that this field of tech could turn out to be as destructive as nuclear warfare if its development and growth are not regulated and monitored.
Bill Gates doesn’t look happy about AI taking over the world.
Stephen Hawking has famously commented on the topic, “…humans, limited by slow biological evolution, couldn’t compete and would be superseded by AI”. Bill Gates, on the other hand, wonders why others are not as concerned about bestowing the immeasurable power of independent thought on robots.
Other famous names to have recently become associated with AI are Mark Zuckerberg and Ashton Kutcher, who have, along with Musk, invested in Vicarious – a company working on a computer capable of imitating the parts of the brain that control visual functions, locomotion and speech. Zuckerberg is also working on an AI butler for his home, a la Jarvis.
Why is AI Making Everyone Freak Out?
All the fears and apprehensions stem from the understanding that something as powerful as AI, in the wrong hands, is capable of wreaking immense havoc on the world. What is worse is that once machines become capable of independent thought, common sense will only be the first step towards eventual super-intelligence. With the right exposure to data and algorithms, we wouldn’t even need humans to threaten mankind; the machines themselves would take care of that.
It may sound like a scene out of a dystopian sci-fi story, but it’s real enough to make thinkers like Hawking, Gates and Musk uneasy. To add more cheer to the party, companies like Google and Facebook are doing the exact opposite of OpenAI.
The Other Side of the Story
The intentions behind creating a common, shared platform for AI research might not be entirely altruistic either. Google and Tesla are both building self-driving cars, and thus both stand to benefit from AI.
In November last year, Google open sourced part of the software at the heart of its AI services. Facebook followed suit and open sourced its own set of AI contributions. OpenAI, however, took this to another level by promising to share all of its research.
Elon Musk, our very own real-life Tony Stark, is here to save the day. Or is he?
An open platform for AI research would mean that all companies benefit from each other’s research, while at the same time ensuring that no single name becomes the central, controlling authority over this tremendous source of power.
So, while Google and Facebook are working round the clock on their own AI-based projects, Elon Musk and his team of AI experts are trying to keep this tech from getting concentrated under a single source, which could spell doom for the world. Forward movement in this field is inevitable; the least that can be done is to aim for a path that jeopardizes humanity as little as possible, and this is perhaps what OpenAI hopes to achieve.
If we do end up with a scenario where mankind is pitted against AI, we know our sci-fi well enough to be clear about which side to place our bets on. While Hollywood can continue to make us feel good about the victories of a kind human race over sentient machines devoid of feelings, reality offers a far different picture. Just hold on tight for now and see where the ride lands us.
Apple recently bought a San Diego-based Artificial Intelligence start-up called Emotient. The firm was established in 2012 to analyse viewers’ facial expressions, and therefore their emotions, helping companies evaluate audience response. Needless to say, this move by Apple comes with its own set of doubts and apprehensions. Tech that detects facial expressions has been a topic of debate and dislike for a while now, with Australia being one of the latest places to receive flak for incorporating face-detection tech in its administration. A spokesperson of the company confirmed the acquisition, leaving us to wonder what Apple plans to do with it. Apple has been actively pursuing AI and virtual reality over the last couple of weeks, and this comes as the latest development in that sphere.
Facebook and Google are also big on AI and virtual reality and are pushing for them with great zeal. The future of tech, as unclear as it might be, looks quite interesting as of now. AI and virtual reality are definitely going to be the next big thing in 2016. While Zuckerberg works on his home butler, which he has likened to Jarvis, let’s see what consumers get out of this race for AI and VR breakthroughs.
Mark Zuckerberg revealed yesterday that he will be working on an AI butler for his home this year. He hopes to make it efficient enough to recognise his friends’ faces and let them in, keep an eye on his baby daughter’s room, and take care of other basic controls of the house like music, temperature and light.
Zuckerberg put up a status update on his Facebook page stating,
My personal challenge for 2016 is to build a simple AI to run my home and help me with my work. You can think of it kind of like Jarvis in Iron Man.
He intends to use his work on the AI to also help him with research in VR as he says, “On the work side, it’ll help me visualize data in VR to help me build better services and lead my organizations more effectively.”
This won’t be the first time someone has gone the Tony Stark way, however. Tesla’s Elon Musk already has a lab heavily inspired by the snazzy Avenger character.
Looks like things on the tech-front are going to get quite interesting as real-life tech geniuses borrow from popular culture to add that extra edge to their lifestyle.
The only right way to begin any discussion on Artificial Intelligence is with a reference to Isaac Asimov, the sci-fi writer who gave us the Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Robot and human hands almost touching – a modern take on Michelangelo’s famous Sistine Chapel fresco, “The Creation of Adam”.
Asimov’s robots often went on to disregard these laws and exercise free will and subjectivity, which, more often than not, didn’t end well for the human race. With technology taking over our lives, an apocalyptic vision of the world, so often represented in art and media, does not seem too far-fetched. TV shows and movies are full of examples of technology slipping out of our hands and ceasing to be a means to an end; instead, it ends up controlling our lives to the point of enslavement. This is precisely what this article is about, as we look at examples of AI in film, TV shows and, of course, good old real life.
Her:
The film is about a romantic relationship that the protagonist, Theodore Twombly, develops with ‘her’ – Samantha, the Artificial Intelligence of a networked computer operating system, OS One, brought to life using Scarlett Johansson’s voice. Twombly’s initial reaction to the AI is to call Samantha “weird,” since she seems “like a person but [she’s] just a voice in a computer.”
Theodore with Samantha at the beach.
The plot unfolds to reveal the development of a deep, romantic bond between the two, with the AI’s lack of a corporeal existence proving no hindrance at all. However, the movie concludes with Samantha ending her virtual romance with Theodore, having fallen in love with 641 of the 8,316 others she communicates with.
https://youtu.be/WzV6mXIOVl4
Robert Alpert comments on ‘Her’ and aptly says, “The romantic comedy, the melodrama, draws to a close, and it is the artificial intelligence of Samantha, not Theodore, the “unartificial mind,” who comprehends a state of being beyond perception, not “tethered to time and space,” and passes on to Theodore that not unhappy vision.”
One of the most over-arching questions the film raises about the relationship between humans and AI is this: would you be willing to invest yourself in a computer as deeply as Theodore or Amy do, because your own species is incapable of providing the sense of companionship you seek? We are given a lonely, melancholic protagonist at the start of the movie, unable to cope with leading a life all by himself. It doesn’t sound too unfamiliar a scenario if we take a look around. So the next time you feel distanced from the world, could your answer lie in AI?
When it comes to analysing the line between fact and fiction in Artificial Intelligence, Stephen Wolfram – of Wolfram Alpha fame, the engine behind the AI-like component of Siri on the iPhone – says,
“The mechanics of getting the AI to work—I don’t think that’s the most challenging part. The challenging part is, in a sense: Define the meaningful product…I used to think that there was some sort of magic to brain-like activity, [but] there is no bright line distinction between what is intelligent and what is merely computational.”
People are more than willing to anthropomorphize things around them, even if that requires finding a way to make sentient beings out of metal and steel with minuscule microchips to power all that consciousness within. ‘Her’ shows a man finding solace and peace, and an almost spiritual bond of sorts with a computer – providing the perfect plot for a romantic comedy/ drama. But it raises questions like how far is too far with AI? Ethical debates and discussions on notions of right and wrong have prevailed since time immemorial when it comes to technology. And there is no imminent end to them for now.
Black Mirror:
Black Mirror, created by Charlie Brooker, debuted in 2011 to immense critical acclaim, and with good reason. The show is set against the backdrop of a society deeply infested by technology and, therefore, its consequent pitfalls. Having said that, feel free to read ahead without fear of spoilers.
In terms of AI, two of the show’s episodes are of significance. The first episode of the second season, ‘Be Right Back’, offers an interesting but fairly disconcerting take on it. The episode is about Martha and Ash, a couple who have just moved to a new house in the countryside. Ash dies in a car crash right at the start of the episode, and the show follows Martha’s attempts to cope with loneliness, loss and grief after his death. She purchases a built-to-order AI that uses all of Ash’s social media activity to model itself after him.
Martha provides the computer-operated system with access to Ash’s pictures, videos and personal emails – Ash was a compulsive social media addict and therefore had a large amount of content online – and ends up with a fully automated, computerized version of him. The AI adopts his voice and mannerisms, and looks like a physically groomed version of Ash, since he only had his best pictures up on the internet. The new Ash, however, misses the small details that social media had failed to record – like a mole or a bodily flaw – reminding Martha that, at the end of the day, she is only play-romancing an anthropomorphic computer.
While you grapple with the ethical and moral implications and the inadvertent emotional entanglements of this situation, Black Mirror takes Artificial Intelligence closer to pure science, and away from the world of fiction, in its White Christmas episode, set in a futuristic society where blocking users is no longer confined to social media platforms. If a person is blocked by another, they are reduced to a blob of visually indistinct static, incapable of making themselves heard or seen by the person who has blocked them.
Jon Hamm’s character, Matt, works for an Artificial Intelligence company that allows its customers to get an AI modeled after the customers themselves to run and manage their smart-homes. The procedure for each client requires a “cookie” to be surgically implanted in their brain for a few weeks, closely recording their likes and dislikes. The cookie is then removed and set to work running the entire house.
https://youtu.be/2OIkZQJMMBk
The little glitch in this perfect procedure is that the AI, modeled so closely after its host, ends up developing a consciousness of its own. Matt’s job is to make sure the AI surrenders to an existence of enslavement by the host, and he achieves this by subjecting the AI to solitary confinement for as long as six months, if need be.
However, time for the AI can be controlled, altered and played around with – thus, six months for the AI are no more than a couple of seconds for Matt in real time. The AI eventually chooses mute surrender over torturous loneliness. The show also subverts the sci-fi trope of robots or technology enslaving humans in White Christmas. It is only apt to quote David Holmes in this context when he says,
“But perhaps humans of the future will become so enamored with the convenience offered by robots that we will jettison our sense of humanity in return for this convenience — just like we’ve jettisoned our privacy and security for the tools and platforms of today.”
Pepper – the Humanoid Robot:
Moving away from the realm of TV shows to real life, let’s talk about Pepper. First unveiled in June 2014, Pepper – billed as the first humanoid robot designed to read emotions – was made available at a price of JPY 198,000 (roughly $1,931, or around Rs. 1,20,000) at Softbank Mobile stores in February this year.
https://youtu.be/kCFYw8mIqcc
At the expense of your happiness, let it be clarified that Pepper doesn’t do household chores or possess superpowers. What he can do, however, is converse with you, recognize and respond to your emotions, and move around autonomously.
If you’re looking for a practical purpose for this robot, please skip this segment. All that Pepper provides is companionship and communication, through the most intuitive interface in robots so far – one that spans voice, touch and emotions. Pepper is already greeting and interacting with customers in Japan.
Artificial Creativity:
Here’s a look at what technology and Artificial Intelligence have to offer in terms of creativity. Let’s begin with music.
The above track is one of the many compositions by Emily Howell, a computer program. She is capable of producing numerous such pieces every day, and when a blind test was conducted, people couldn’t tell the difference between her work and that of a human composer.
Next comes writing. Take a look at the following paragraph:
“Friona fell 10-8 to Boys Ranch in five innings on Monday at Friona despite racking up seven hits and eight runs. Friona was led by a flawless day at the dish by Hunter Sundre, who went 2-2 against Boys Ranch pitching. Sundre singled in the third inning and tripled in the fourth inning […]”
Yes, that too was produced by a bot. But you have possibly already been initiated into the world of literature written by computers if you have used Wikipedia, a website with about 8.5 percent of its articles written by a bot. Kristian Hammond, CTO of Narrative Science, predicts that “more than 90 percent” of news will be written by computers in 15 years.
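To give a rough sense of how such data-to-text systems work in principle, here is a minimal, hypothetical sketch in Python. It is not Narrative Science’s actual pipeline or the Wikipedia bot’s code; the box-score fields and the template below are assumptions made up purely for illustration.

```python
# A toy data-to-text generator: turns a structured box score into a
# short game recap, in the spirit of automated sports reporting.
# Field names and templates are hypothetical, not any real system's schema.

def recap(game: dict) -> str:
    lines = []
    lines.append(
        "{loser} fell {winner_runs}-{loser_runs} to {winner} in {innings} innings "
        "on {day} at {venue} despite racking up {hits} hits.".format(**game)
    )
    if game.get("star"):
        lines.append(
            "{loser} was led by {star}, who went {star_line} at the plate.".format(**game)
        )
    return " ".join(lines)

if __name__ == "__main__":
    box_score = {
        "winner": "Boys Ranch", "loser": "Friona",
        "winner_runs": 10, "loser_runs": 8,
        "innings": 5, "day": "Monday", "venue": "Friona",
        "hits": 7, "star": "Hunter Sundre", "star_line": "2-2",
    }
    print(recap(box_score))
```

Real systems obviously go far beyond string templates, picking the angle and phrasing from the data itself, but the basic move – structured data in, readable prose out – is the same.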
What began as machines and bots replacing human beings in jobs requiring physical labour has snowballed into a threat of rendering us absolutely obsolete in our creative endeavours as well. The financial and ethical implications of such technology are endless. It seems facing competition from machines is no longer just a plot for Hollywood’s hack-writers, but an imminent reality. Either way, let’s sit back and watch the drama unfold in a world where, time and again, humans have gone the Frankenstein way, incapable of comprehending and controlling their own ‘monsters’.
A few months back, a Japanese robot successfully removed a wisdom tooth from a 55-year-old man in a local clinic in Tokyo. The robot, aptly nicknamed “Al Dente”, removed the tooth without any inconvenience to the patient.
The robot is part of a Japanese program that is looking to replace 30% of Japan’s dentists with robots by the year 2030. “We knew the software is perfect, we knew Al Dente has all the capabilities to maintain such a delicate mission but from this, to completing a successful wisdom tooth removal… I mean, there were moments I felt pity for that man on the dentist chair. It’s good to know that our experiment had a happy end, knowing from previous tests that much more blood could have been spilled there,” said Ishaki Morakuni, one of the developers in the program.
This program is drawing strong reactions from certain parts of the world, where people are concerned that robots such as “Al Dente” substituting for humans could lead to the disappearance of the profession itself. “This is our job of which we’re very proud,” said a spokesman of a small group in France. “We don’t like the idea of them [robots] coming and taking our place. People must understand that even if robots can pull out an aching tooth, they will not be there to tell us jokes or calm us down when blood is pouring down like a river.”
Artificial Intelligence (AI) has become a popular topic of public conversation, ever since SpaceX and Tesla chief Elon Musk and physicist Stephen Hawking warned against creating AI systems that can think on their own. Now, experts in the field of AI research are coming together to sign an open letter pledging to create AI that can be controlled by humans and is beneficial to our lives.
The letter was released by the Future of Life Institute (FLI), a volunteer-run research organization that focuses on potential risks from the development of human-level Artificial Intelligence. The founders of the institute include Jaan Tallinn, co-founder of Skype, and “Mad” Max Tegmark, an MIT professor known for his unorthodox ideas. The Scientific Advisory Board of the Institute includes Morgan Freeman, Alan Alda, Stephen Hawking, Elon Musk and other great minds. When a letter comes from people held in such high regard in science, it gains credibility and needs to be taken seriously.
The letter is signed by the top experts in the field including Prof. Hawking and Elon Musk
In essence, the letter asks researchers to put more emphasis on creating AI systems that help humanity but can also be controlled by humanity. With AI-based systems such as voice recognition and self-driving cars close to being accepted by the masses, it’s important at this juncture to understand the potential challenges they might pose.
The team at FLI has also attached a research document highlighting the priorities in AI research. The paper shows the multitude of uses of AI and how it can be made robust as well as beneficial to humanity. The document encourages interdisciplinary collaboration between AI research and fields such as law, philosophy, economics and computer security to set the priorities for AI research.
Several AI-based systems, such as autonomous cars, are reaching the masses, so it’s important to research the potential challenges of the technology.
The signatories of the letter include some of the most prominent folks in science. A number of professors from MIT, Oxford, Harvard and other premier institutions have signed it. Several researchers from industry, such as experts from IBM’s Watson supercomputer team and Microsoft Research, have also shown their support for the letter.
Artificial Intelligence is already involved in many aspects of our lives, such as speech recognition and translation, and is expected to become even more intertwined with them. It is important to put heavy focus on the topic and have a public conversation about it. There have been warnings about the potential fallout of these systems, which may end up taking charge over their human masters someday. Experts agree that a framework needs to be established to prevent a probable future that looks like something out of the Terminator or Matrix films.
A few months back we saw Elon Musk disturbed when he was asked about artificial intelligence (AI). Elon said that he believed getting AI online would be akin to the guy in the movies who summons the demon: sure, he thinks he can control it, but that rarely is the case. He also added that HAL 9000 would be a puppy dog next to what humans can create. Now the world’s foremost physicist, Stephen Hawking, is also pointing us in the same direction.
Artificial Intelligence development is in full swing. Many major research centers are working towards creating a sentient computer that can observe the world as an intelligent creature. We have also seen the inclusion of AI technologies in many mobile apps. But several scientists are warning that this might not be such a good idea.
Professor Hawking told BBC, “The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race.”
The concern about AI is valid, as we cannot comprehend the potential of a self-aware machine. The knowledge that took us humans thousands of years to attain could be processed by such an intelligent computer within moments of reaching a sentient stage. It could see humanity as an obstacle and might use its own resources and weapons to eliminate the human species from earth. There have been many blockbuster movies that delve into this subject.
The Professor’s new Intel-provided system uses AI-aided predictive text to help him communicate.
One of the major ironies of this statement by the Professor is that he himself recently got a technological upgrade that makes it easier for him to communicate. He suffers from amyotrophic lateral sclerosis (ALS), and previously used his hands to communicate at 15 words per minute. In 2008, after losing the ability to use his hands, he switched to a cheek switch: a low-power infrared sensor mounted on his glasses detects when he tenses his cheek muscles, letting him select characters and communicate. Now Intel has provided the world-renowned professor with the new ACAT (Assistive Context Aware Toolkit) system, which uses an advanced form of artificial intelligence. Ironic? We think so.
ACAT is based on predictive text that analyses the English language and the professor’s speech patterns to help him communicate. Hawking was also offered a more natural-sounding tone of voice, but he decided to stick with his robotic one. Guess we can be thankful for that, as it has been the voice of Stephen Hawking in our heads for decades now. ACAT will be opened up to developers in 2015 and is expected to be extremely beneficial for the disabled, especially quadriplegics.
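To give a rough sense of what word prediction involves, here is a minimal sketch of a bigram-based next-word suggester in Python. It is purely illustrative: ACAT’s actual model is far more sophisticated, and the toy corpus and function names below are assumptions made up for this example.

```python
from collections import Counter, defaultdict

# Minimal bigram next-word predictor, illustrating the idea behind
# predictive text; not ACAT's real model or data.

def train(text: str) -> dict:
    """Count which word tends to follow which in the training text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def suggest(model: dict, prev_word: str, k: int = 3) -> list:
    """Return the k most likely words to follow prev_word."""
    counts = model.get(prev_word.lower())
    if not counts:
        return []
    return [word for word, _ in counts.most_common(k)]

if __name__ == "__main__":
    # Tiny made-up corpus standing in for a user's past writing.
    corpus = (
        "the development of full artificial intelligence could spell the end "
        "of the human race the development of artificial intelligence is useful"
    )
    model = train(corpus)
    print(suggest(model, "artificial"))  # e.g. ['intelligence']
    print(suggest(model, "the"))         # e.g. ['development', 'end', 'human']
```

A real assistive system would learn from the user’s own documents and context, so the suggestions it surfaces sound like the person typing them.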
As we move closer to the era of Artificial Intelligence, you can expect more prominent voices raising their opposition to it. As Elon said, we need more oversight and a cultural dialogue over whether we should push the boundaries of AI, because this is a question for the whole of human civilization.
In the knowledge and information age we live in now, a great amount of research is targeted towards creating Artificial Intelligence (AI). The goal is to create a computing system that is as sentient as a human being, with the same level of intelligence. But a growing number of people have warned against the creation of such a machine. Adding to the list is Tesla and SpaceX CEO Elon Musk.
At MIT’s AeroAstro Centennial Symposium, Elon Musk said that developing artificial intelligence may be akin to summoning the demon. He explained, “You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like – Yeah, he’s sure he can control the demon? Doesn’t work out”.
He said that there is an immediate need for regulatory oversight of AI development efforts: “I’m increasingly inclined to think there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”
Musk also warned that HAL 9000 is a puppy dog in comparison to what humans can create
He went on to call it the biggest existential threat to humanity and added that HAL 9000 from 2001: A Space Odyssey would look like a puppy dog in comparison to what humans can build.
Coming from a trusted name in the field of scientific research, this warning is concerning for folks who were dreaming of incorporating a JARVIS-like system in their house someday. Movies like Terminator, The Matrix and many others have warned us about the consequences of AI taking over human civilization. A system that gets instant access to all of humanity’s knowledge in the first couple of seconds of its existence may indeed revolt against its imperfect human masters and overpower them. This is one cautionary tale that should be heeded not just by the stakeholders but by the entire population. We might inadvertently lose the alpha-predator tag to our own creation and become its prey.
Facebook is trying to get a better understanding of the 700 million people who share everything happening in their lives through the social networking giant every day, with the help of an AI approach it calls ‘deep learning’.
A new research group at Facebook (called the “AI Team”) is working on this artificial intelligence, which will use simulated networks of brain cells to process data. With the help of this strategy, the social networking site might be able to predict our actions online, show us content that is more relevant to our interests, and better target advertisements as well.
The new team hopes to use deep learning to determine which posts are genuinely important. The technology could also be used to sort users’ photos, and it might even select the best shots. However, the AI work has only just started; the company told MIT Technology Review that it should release some findings to the public.
Facebook is not the first company to embrace deep learning; Google and even IBM have used the approach in the past. Deep learning uses a multi-layered approach to data, parsing information to build up a body of knowledge that can be used to figure out concepts or even understand what objects sound and look like.
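For a flavour of what that multi-layered approach looks like in code, below is a minimal, self-contained sketch of a tiny two-layer neural network in Python (NumPy only), trained on the toy XOR problem. It is an illustration of the general technique, not Facebook’s actual ‘deep learning’ stack; every layer size, learning rate and variable name here is an assumption made for the example.

```python
import numpy as np

# A tiny two-layer ("deep-ish") neural network trained on XOR.
# Purely illustrative of layered learning; not Facebook's actual system.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # targets (XOR)

# Layer sizes: 2 inputs -> 4 hidden units -> 1 output
W1 = rng.normal(scale=1.0, size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=1.0, size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: each layer transforms the previous layer's output.
    h = sigmoid(X @ W1 + b1)        # hidden layer
    out = sigmoid(h @ W2 + b2)      # output layer

    # Backward pass: propagate the error back through the layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# Should approach [[0], [1], [1], [0]] (training can occasionally stall
# depending on the random initialization).
print(np.round(out, 2))
```

A real deep-learning system stacks the same idea many layers deep and trains it on enormous amounts of data, which is what lets it pick up higher-level concepts such as faces or objects in photos.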