Scientists at MIT have created the world's first "psychopath" AI with the aim of demonstrating the dark side of artificial intelligence. The scientists fed the AI system Reddit content showcasing the "disturbing reality of death" to see the impact of that content on what the AI learns and produces. Now, the AI sees death everywhere.
Norman - How It Became a Psychopath
A group of scientists at the Massachusetts Institute of Technology developed an AI system named Norman, fed it a violent world view, and then compared Norman's responses with those of a regular image-recognition network. Norman responded with such abnormalities that it is the most disturbed AI among artificial intelligence systems. Norman was not inherently a psychopath; it turned into one, as revealed when it was presented with inkblot images from the Rorschach test, a psychoanalytical tool.
The Rorschach test is one of the most famous psychological tests used to assess the mental and psychological health of the human mind. The inkblots are a projective (subjective) test with no particular right or wrong answers; patients are asked to interpret the patterns, and from those interpretations a psychoanalyst judges the psychological normality of the mind.
A standard AI saw "a black and white photo of a baseball glove" in one inkblot; shown the same inkblot, Norman responded "man is murdered by machine gun in broad daylight". Norman had been exposed to the darkest world view found in Reddit content. For another inkblot, which a standard AI described as "a group of birds sitting on a tree", Norman responded "a man electrocuted".
The scientists trained their AI system on biased data - image captions showing the disturbing reality of death - and successfully created a psychopath AI.
Where Elon Musk has stated that AI could be more dangerous than nuclear weapons, and Sundar Pichai has said that such powerful AI raises equally important questions about its future use and implications, this experiment by MIT shows that AI can be manipulated through the use of biased data in machine learning algorithms and neural networks.
The research shows that an AI system's responses are significantly influenced by the data fed into its machine learning algorithms. It serves as a case study on the future dangers of artificial intelligence. The scientists said the aim of the research was to raise awareness that AI can be biased by the data it is fed. So it's not about the AI algorithm itself being fair or unfair; it's a matter of data. Norman also raises ethical concerns regarding new AI systems.
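The point can be made concrete with a deliberately tiny sketch (this is not MIT's actual model or data, and all captions and "features" below are hypothetical): the very same captioning algorithm, trained on a neutral corpus versus a dark corpus, describes the same ambiguous input in completely different ways.

```python
# Toy illustration: identical algorithm, different training data.
# All corpora and feature words here are made up for demonstration.
from collections import Counter

def train(corpus):
    """'Train' a trivial captioner: map each word in the corpus to the
    captions that mention it, weighted by frequency."""
    model = {}
    for cap in corpus:
        for word in cap.split():
            model.setdefault(word, Counter())[cap] += 1
    return model

def caption(model, features):
    """Describe an ambiguous input (a bag of feature words) by letting
    each recognized word vote for the training captions it appeared in."""
    votes = Counter()
    for word in features:
        if word in model:
            votes.update(model[word])
    best = votes.most_common(1)
    return best[0][0] if best else "unknown"

# Same architecture, two different training sets.
neutral_corpus = [
    "a black and white photo of a baseball glove",
    "a group of birds sitting on a tree",
]
dark_corpus = [
    "a man is murdered in black and white",
    "a man electrocuted near a tree",
]
standard_ai = train(neutral_corpus)
norman = train(dark_corpus)

# The same ambiguous "inkblot", reduced to a few visual feature words.
inkblot = ["black", "white", "tree"]

print(caption(standard_ai, inkblot))
# -> a black and white photo of a baseball glove
print(caption(norman, inkblot))
# -> a man is murdered in black and white
```

The divergence comes entirely from the data: nothing in `train` or `caption` is "unfair", which mirrors the researchers' conclusion that the bias lives in the dataset, not the algorithm.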