Creating human-like AI is about more than mimicking human behavior: the technology must also be able to process information, or "think", like humans if it is to be fully trusted.
New research, published in the journal Patterns and led by the University of Glasgow's School of Psychology and Neuroscience, uses 3D modeling to analyze how Deep Neural Networks (part of the broader family of machine learning) process information, and to visualize how their information processing matches that of humans.
It is hoped that this new work will pave the way for the creation of more reliable AI technology that processes information as human beings do and makes mistakes that we can understand and predict.
One of the challenges still facing AI development is how to better understand the process of machine thinking, and whether it matches how humans process information, in order to ensure accuracy. Deep Neural Networks are often presented as the current best models of human decision-making behavior, achieving or even surpassing human performance in some tasks. However, even deceptively simple visual discrimination tasks can reveal clear inconsistencies and flaws in AI models when compared to humans.
Currently, Deep Neural Network technology is used in applications such as face recognition, and although it is very successful in these areas, researchers still do not fully understand how these networks process information, and therefore when errors might occur.
In this new study, the research team addressed this issue by modeling the visual stimuli presented to a Deep Neural Network and transforming them in multiple ways, allowing them to demonstrate whether the network recognized faces by processing similar information to humans.
Professor Philippe Schyns, senior author of the study and head of the University of Glasgow's Institute of Neuroscience and Technology, said: "When building AI models that behave like humans, for instance recognizing a person's face as a human would, we need to make sure that the AI model uses the same information from the face that another human would use to recognize it. If the AI does not do this, we could have the illusion that the system works just like humans do, only to find that it gets things wrong in some new or untested circumstances."
The researchers used a series of modifiable 3D faces and asked people to rate the similarity of these randomly generated faces to four familiar identities. They then used this information to test whether Deep Neural Networks made the same ratings for the same reasons, testing not only whether humans and the AI made the same decisions, but also whether those decisions were based on the same information. Importantly, the researchers' approach allowed them to visualize these results as the 3D faces that drive behavior in both humans and networks. For example, a network that correctly classified 2,000 identities was nonetheless driven by a heavily caricatured face, showing that it identified faces by processing very different facial information than humans do.
Researchers hope this work will pave the way for more reliable AI technology that behaves more like humans and makes fewer unpredictable mistakes.
The study, “Grounding Deep Neural Network Predictions on Human Categorization Behavior in Understandable Functional Features: The Case of Facial Identity,” is published in Patterns.
Christoph Daube et al., Grounding deep neural network predictions of human categorization behavior in understandable functional features: The case of facial identity, Patterns (2021). DOI: 10.1016/j.patter.2021.100348
Provided by the University of Glasgow
Citation: Development of an AI that ‘thinks’ like humans (2021, October 11) Retrieved October 11, 2021 from https://techxplore.com/news/2021-10-ai-humans.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.