MIT Researchers Use Reddit To Create An AI That Only Thinks About Murder
Artificial Intelligence is one of those terms that provokes polarised reactions. Some believe it will make our lives easier; others believe we will soon be living in an episode of Black Mirror thanks to AI. For tech companies, however, Artificial Intelligence is the new favourite phrase. Google, for example, may well have set a record for the number of times "AI" was uttered in a single hour at Google I/O 2018. Ever since the Pixel 2 devices launched, Google has not shied away from attaching the term to almost everything it does.
While AI assistants like Alexa and Google Assistant do make our lives easier to a certain extent, nobody wants a psychopath AI in their lives. Anyone who has watched Altered Carbon can imagine how creepy and intrusive AI could become if the field keeps advancing without checks. Researchers at the Massachusetts Institute of Technology (MIT) have unveiled their new creation: a disturbed AI named Norman. Yes, this AI is named after the "lovable" character from the 1960 film Psycho. The researchers write:
Norman is an AI that is trained to perform image captioning, a popular deep learning method of generating a textual description of an image. We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to document and observe the disturbing reality of death. Then, we compared Norman’s responses with a standard image captioning neural network (trained on MSCOCO dataset) on Rorschach inkblots; a test that is used to detect underlying thought disorders.
Essentially, this AI produces deeply disturbed responses compared to a standard one. While the Rorschach test has its doubters, who question whether it is a valid measure of a person's psychological state, Norman's responses don't need a test to be labelled creepy. The image below captures just how much that Reddit thread affected Norman's ability to perceive images.
The researchers say the aim of the experiment was to show how easy it is to bias any AI by training it on biased data. The experiment raises some telling points about AI and its rapid advancement. Google recently came under scrutiny after it demoed the Google Assistant fooling human beings in a phone conversation. Google has since conceded that when the feature rolls out, the Google Assistant will inform the person on the call that they are talking to the Google Assistant.
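The bias mechanism the researchers describe can be illustrated with a toy sketch. This is not MIT's actual model (a real image captioner pairs an image encoder with a text decoder); it is a deliberately minimal stand-in where the "captioner" simply learns word frequencies from its training captions, and the training sets are hypothetical examples invented here. Even at this scale, the same ambiguous input gets described in wholly different vocabularies depending on what the model was fed:

```python
from collections import Counter

def train_captioner(captions):
    """Toy 'captioner': learns word frequencies from training captions.
    A real captioning model learns far richer mappings, but the bias
    mechanism is the same: a model can only describe images using the
    vocabulary and associations present in its training data."""
    counts = Counter()
    for caption in captions:
        counts.update(caption.lower().split())
    return counts

def describe_ambiguous_input(model, top_n=3):
    """For an ambiguous input (like an inkblot), a frequency-driven
    decoder falls back on the words it saw most often in training."""
    return [word for word, _ in model.most_common(top_n)]

# Hypothetical training sets: one standing in for ordinary
# MSCOCO-style captions, the other for captions scraped from
# a forum fixated on death.
standard_captions = [
    "a bird sitting on a branch",
    "a bird flying over water",
    "a vase of flowers on a table",
]
biased_captions = [
    "a man shot dead in the street",
    "a man shot and killed",
    "a body pulled dead from the water",
]

standard_model = train_captioner(standard_captions)
biased_model = train_captioner(biased_captions)

print(describe_ambiguous_input(standard_model))
print(describe_ambiguous_input(biased_model))
```

Swapping the training corpus is the only change between the two models, yet their descriptions of the same blank input diverge completely, which is the researchers' point in miniature.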