Ueli Rutishauser, at Caltech (the California Institute of Technology), has arrived at an interesting finding.
Brain cells, or neurons, can recognise whole faces, not their parts, enabling us to tell friends from strangers, or a sad face from a happy one.
Sort of upsets the whole apple cart, doesn’t it?
Current Biology quotes Rutishauser: “The finding really surprised us. Here you have neurons that respond well to seeing pictures of whole faces, but when you show them only parts of faces, they actually respond less and less.”
“Our interpretation of this initially puzzling effect is that the brain cares about representing the entire face, and needs to be highly sensitive to anything wrong with the face, like a part missing,” explained Ralph Adolphs, senior study author and professor of neuroscience at Caltech.
“This is probably an important mechanism to ensure that we do not mistake one person for another and to help us keep track of many individuals.”
All of this poses a problem for the theories in vogue:
Exactly how the human brain works to record and remember an image is the subject of much debate and speculation. In previous decades, two extreme views have emerged. One says that millions of neurons work in concert, piecing together various bits of information into one coherent picture, whereas the other states that the brain contains a separate neuron to recognize each individual object and person. In the 1960s neurobiologist Jerome Lettvin named the latter idea the “grandmother cell” theory, meaning that the brain has a neuron devoted just for recognizing each family member. Lose that neuron, and you no longer recognize grandma.
At least Rutishauser, Adolphs and crew are nudging toward making sense. What they seem to ignore is perspective. If each individual neuron in vision carries the whole picture, what ‘picture’ does that neuron have?
Think of vision in the brain as hundreds of thousands of overlapping values, each a slightly different perspective from the others, owing to its position in the eye and its angle through the lens. Overlap them all and the result is a fully dimensional representation of the visual experience. Look at them one at a time, and each will appear to the observer to be the same image. There is little difference between the pathway of one rod and the pathway of another, yet when overlapped, when merged, they build a dimensional image, much as colour-separated overlapping images combine to create one printable photo.
These neurons are not ‘recognizing’ whole faces. They are parts of the greater whole. Concentrating on specific neurons without understanding the role they play ignores that greater whole, and neuroscience is back to its default condition of missing the forest for the trees.
If computer imaging worked the way the brain works for vision, the computer would serve the entire image many, many times over, instead of one pixel at a time. It would make for very impressive results, as long as the lens the photograph is made with distributes perspective and focus properly.
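As a toy illustration of this overlapping-perspectives idea (a minimal sketch, not anything from the study: the function name `merged_view`, the use of NumPy, and `np.roll` as a crude stand-in for a small change in viewing angle are all assumptions made here for demonstration):

```python
import numpy as np

def merged_view(image, offsets):
    """Average many full copies of an image, each shifted by a small
    (dy, dx) offset -- standing in for the slightly different
    perspective each pathway would contribute."""
    h, w = image.shape
    stack = np.zeros((h, w), dtype=float)
    for dy, dx in offsets:
        # np.roll is a crude stand-in for a tiny perspective shift
        stack += np.roll(np.roll(image, dy, axis=0), dx, axis=1)
    return stack / len(offsets)

# A toy 8x8 "scene" and a handful of one-pixel perspective shifts.
scene = np.zeros((8, 8))
scene[3:5, 3:5] = 1.0  # a bright square
shifts = [(0, 0), (0, 1), (1, 0), (-1, 0), (0, -1)]
blended = merged_view(scene, shifts)
```

Each copy carries the whole scene; merging them softens the edges into something with more depth of information than any single copy alone, which is the gist of the overlap argument above.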
An experimental project, referred to as ‘Egg TV’, was developed quite a while ago to explore this function.
If one were to consider ‘respond well to seeing pictures of whole faces’ as a full-strength signal value, and ‘they actually respond less and less’ as lower signal strength, one might begin to understand the memory storage and recall process of the brain. As long as the digital mindset stops interfering.