"In terms of the brain, fictitious or real, all faces are processed the same way." Humans are champions at spotting spots, especially faces in inanimate objects — think of the famous "face on Mars" in images taken by the Viking 1 orbiter in 1976, a trick that is actually light and shadow. People always see what they think is the face of Jesus in toast and many other common foods. There was even a Twitter account that no longer exists dedicated to making photos of the "face in things" phenomenon. p>
The phenomenon of seeing illusory faces is called pareidolia. Scientists at the University of Sydney have found that not only do we see faces in everyday objects, but our brains also process those objects for emotional expression much as they do real faces, rather than discarding them as false detections. This shared mechanism may have evolved from the need to quickly judge whether an approaching person is friend or foe, the Sydney team told the Guardian; their findings were recently published in the Proceedings of the Royal Society B.
Lead author David Alais of the University of Sydney said:
"We are a social species, and face recognition is very important... You need to recognize who it is: family, friend, or foe? What are their intentions and emotions? Faces are detected incredibly fast. The brain seems to do this using a kind of template-matching procedure. So if it sees an object that appears to have two eyes above a nose above a mouth, it goes, 'Oh, I'm seeing a face.' It's a bit fast and loose, and sometimes it makes mistakes, so something that resembles a face will often trigger this template match."

Alais has been interested in this and related topics for many years. In a 2016 paper published in Scientific Reports, for example, Alais and his colleagues built on earlier research involving rapid sequences of faces, which showed that perception of face identity and attractiveness is biased toward recently seen faces. They designed a binary task mimicking the selection interface of online dating websites and apps (such as Tinder), in which users swipe left or right according to whether they find a potential partner's profile picture attractive or unattractive. Alais et al. found that several stimulus attributes — including orientation, facial expression, and the perceived attractiveness of online dating profiles — were systematically biased toward recent past experience.
That work inspired a 2019 paper in the Journal of Vision that extended the experimental approach to our appreciation of art. Alais and his colleagues found that we do not evaluate each painting we see in a museum or gallery purely on its own merits. Nor are we susceptible to a "contrast effect" — judging a painting as less attractive because the preceding one was more aesthetically pleasing. Instead, the study showed that our appreciation of art exhibits the same systematic bias of serial dependence: we judge a painting as more attractive if we view it after seeing another attractive painting, and as less attractive if the preceding painting was less aesthetically pleasing.
The next step was to examine the specific brain mechanisms behind how we "read" social information from other people's faces, and to tie that to the phenomenon of face pareidolia. "A striking feature of these objects is that they not only look like faces, but they can even convey a sense of social meaning," Alais said — like a sweet pepper that seems to scowl or a towel that seems to smile.
Face perception involves more than the features common to all human faces, such as the placement of the mouth, nose, and eyes. Our brains may be evolutionarily attuned to those universal patterns, but reading social information requires the ability to tell whether someone is happy, angry, or upset, or is paying attention to us. In a paper published last year in Psychological Science, the Alais team designed a sensory adaptation experiment and found that we actually process face pareidolia the same way we process real faces.
This latest study has an admittedly small sample size: 17 college students, all of whom completed practice trials with eight real faces and eight pareidolia images before the experiments proper. (Data from the practice trials were not recorded.) The real trials used 40 real faces and 40 pareidolia images, with expressions ranging from angry to happy across four groups: angry, less angry, less happy, and happy. During the trials, each image was shown briefly to subjects, who then rated its emotional expression on an angry/happy scale.
The first experiment was designed to test serial effects. Subjects completed a series of 320 trials, with each image shown eight times in random order. Half of the subjects completed the first block with real faces and the second block with pareidolia images; the other half did the reverse. The second experiment was similar, except that the real faces and pareidolia images were randomly intermixed within the trials. Each participant rated a given image eight times, and those ratings were used to compute a mean expression score for each image.
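The design described above — repeated ratings averaged per image, with each rating potentially nudged toward the previous trial — can be sketched in a small simulation. Everything here (the image labels, the `bias` and `noise` parameters, the analysis details) is hypothetical and illustrative, not taken from the paper; it simply shows how a serial dependence bias would surface in a sequence of expression ratings.

```python
import random
from collections import defaultdict
from statistics import mean

random.seed(0)

# Hypothetical stimuli: 8 images spanning angry (-1.0) to happy (+1.0),
# loosely mirroring the study's four expression groups.
true_expression = {
    f"img{i}": e
    for i, e in enumerate([-1.0, -0.75, -0.5, -0.25, 0.25, 0.5, 0.75, 1.0])
}

def run_block(n_repeats=8, bias=0.2, noise=0.15):
    """Simulate one block: each image shown n_repeats times in random order.
    Each rating is pulled slightly toward the PREVIOUS trial's rating,
    which is the serial dependence effect under test."""
    trials = list(true_expression) * n_repeats
    random.shuffle(trials)
    ratings, prev = [], 0.0
    for img in trials:
        r = (1 - bias) * true_expression[img] + bias * prev
        r += random.uniform(-noise, noise)  # rating noise
        ratings.append((img, r))
        prev = r
    return ratings

ratings = run_block()

# Mean expression score per image, as in the study's averaging step.
per_image = defaultdict(list)
for img, r in ratings:
    per_image[img].append(r)
mean_scores = {img: mean(rs) for img, rs in per_image.items()}

# Serial dependence check: does each rating's deviation from its image's
# mean correlate with the previous trial's rating?
def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

devs = [r - mean_scores[img] for img, r in ratings[1:]]
prevs = [r for _, r in ratings[:-1]]
print(f"serial dependence r = {pearson(devs, prevs):.2f}")  # positive => assimilative bias
```

A positive correlation means ratings drift toward the preceding trial's rating — the assimilative bias the researchers report — whereas a contrast effect would show up as a negative correlation.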
"What we found is that these images from Paridolia were actually processed by the same mechanism they are normally processed. Feelings are in real face," Alice told The Guardian. "You can't somehow completely stop this facial and emotional response and see it as an object. It remains both an object and a face at the same time."
Specifically, the results show that subjects could reliably rate the facial expressions of the pareidolia images. Subjects also showed the same serial dependence bias as Tinder users or art gallery visitors: a happy or angry illusory face in an object was rated as more similar in expression to the one seen just before it. And when real faces and pareidolia images were intermixed, as in the second experiment, the serial dependence was just as evident when subjects viewed a pareidolia image before a human face. Alais et al. wrote that this points to a shared underlying mechanism between the two, implying that "expression processing is not tightly bound to human facial features."
This "intersection" condition is important because it shows that the same basic process of facial expressions is involved regardless of the type of image involved. This means that seeing a face in the clouds is more than my imagination. When things convincingly resemble faces, it goes beyond interpretation: it really directs your brain's facial recognition network, which is to swipe or smile — it's your brain, your facial expression system in action. For the brain, imaginary or real, all faces are processed the same way.
DOI: Psychological Science, 2020. 10.1177/0956797620924814 (About DOIs).