To wrap up the week, I'd like to pass along a fascinating article I read a few days ago in the latest print edition of Wired magazine. Titled 'What Caricatures Can Teach Us About Facial Recognition,' it opens with the valid observation that surveillance-based facial recognition rapidly gained importance after the events of September 11, 2001. However:
A decade has passed, and face-recognition systems still perform miserably in real-world conditions. It’s true that in our digital photo libraries, and now on Facebook, pictures of the same person can be automatically tagged and collated with some accuracy. Indeed, in a recent test of face-recognition software sponsored by the National Institute of Standards and Technology, the best algorithms could identify faces more accurately than humans do—at least in controlled settings, in which the subjects look directly at a high-resolution camera, with no big smiles or other displays of feature-altering emotion. To crack the problem of real-time recognition, however, computers would have to recognize faces as they actually appear on video: at varying distances, in bad lighting, and in an ever-changing array of expressions and perspectives. Human eyes can easily compensate for these conditions, but our algorithms remain flummoxed. Given current technology, the prospects for picking out future Mohamed Attas in a crowd are hardly brighter than they were on 9/11. In 2007, recognition programs tested by the German federal police couldn’t identify eight of 10 suspects. Just this February, a couple that accidentally swapped passports at the airport in Manchester, England, sailed through electronic gates that were supposed to match their faces to file photos.
So researchers are approaching the problem from a different perspective, after having obtained what they believe is new insight into how the human visual processing system works:
Human faces are all built pretty much the same: two eyes above a nose that’s above a mouth, the features varying from person to person generally by mere millimeters. So what our brains look for, according to vision scientists, are the outlying features—those characteristics that deviate most from the ideal face we carry around in our heads, the running average of every visage we’ve ever seen. We code each new face we encounter not in absolute terms but in the several ways it differs markedly from the mean. In other words, to beat what vision scientists call the homogeneity problem, we accentuate what’s most important for recognition and largely ignore what isn’t. Our perception fixates on the upturned nose, rendering it more porcine, the sunken eyes or the fleshy cheeks, making them loom larger. To better identify and remember people, we turn them into caricatures.
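The "caricature coding" idea in that passage lends itself to a simple numerical sketch: represent each face as a feature vector, average the vectors to get the "ideal face," and then push each face away from that mean by an exaggeration factor. A minimal illustration (the feature measurements and the factor `k` are invented for the example, not from the article):

```python
import numpy as np

# Hypothetical face "feature vectors": measurements in millimeters,
# e.g. [eye spacing, nose length, mouth width, cheek fullness].
faces = np.array([
    [62.0, 48.0, 50.0, 30.0],
    [60.0, 55.0, 47.0, 34.0],
    [64.0, 50.0, 52.0, 28.0],
])

# The "ideal face": the running average of every face seen so far.
mean_face = faces.mean(axis=0)

def caricature(face, k=1.5):
    """Exaggerate a face's deviation from the mean by factor k.

    k = 1.0 reproduces the face unchanged; k > 1.0 caricatures it,
    amplifying exactly the features that deviate most from the mean.
    """
    return mean_face + k * (face - mean_face)

# Doubling the deviations yields a caricature of the second face.
exaggerated = caricature(faces[1], k=2.0)
```

Note that a feature sitting exactly at the mean is left untouched at any `k`, while an outlying feature grows more extreme, which is precisely the "accentuate what's most important, ignore what isn't" behavior the vision scientists describe.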
To better mimic how our eyes and brains master facial recognition, then, researchers are closely studying how caricature artists have mastered their craft, along with the artists' output. Take Pawan Sinha, director of MIT's Sinha Laboratory for Vision Research. As the article states, "his lab at MIT is preparing to computationally analyze hundreds of caricatures this year, from dozens of different artists, with the hope of tapping their intuitive knowledge of what is and isn’t crucial for recognition. He has named this endeavor the Hirschfeld Project, after the famous New York Times caricaturist Al Hirschfeld." The group is even developing methods to analyze the caricature artists' brains: an electroencephalogram, for example, would let them see which facial features are required to elicit an N170 (the neural response exhibited when people view a face), and an fMRI could detect whether an artist’s brain exhibits atypical neural activity while distinguishing between familiar and unfamiliar faces.
Or take Charlie Frowd, a senior lecturer in psychology at the University of Central Lancashire in England. Again quoting from the article, he:
has used insights from caricature to develop a better police-composite generator. His system, called EvoFIT, produces animated caricatures, with each successive frame showing facial features that are more exaggerated than the last. Frowd’s research supports the idea that we all store memories as caricatures, but with our own personal degree of amplification. So as an animated composite depicts faces at varying stages of caricature, viewers respond to the stage that is most recognizable to them. In tests, Frowd’s technique has increased identification rates from as low as 3 percent to upwards of 30 percent.
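Frowd's animated-composite idea, presenting the same face at successive degrees of caricature so that each viewer can respond to the stage matching their own mental amplification, can be sketched in the same face-space terms as above. This is only an illustration of the concept, not EvoFIT itself; the feature vector and the exaggeration schedule are invented:

```python
import numpy as np

mean_face = np.array([62.0, 51.0, 49.7, 30.7])  # population average (hypothetical units)
suspect = np.array([60.0, 55.0, 47.0, 34.0])    # composite built from a witness description

def caricature_frames(face, levels):
    """Yield one animation frame per exaggeration level.

    Each successive frame pushes the face further from the mean,
    mimicking a progressively stronger caricature.
    """
    for k in levels:
        yield k, mean_face + k * (face - mean_face)

# Animate from the veridical face (k = 1.0) to a strong caricature
# (k = 3.0); the viewer picks whichever stage looks most familiar.
for k, frame in caricature_frames(suspect, np.linspace(1.0, 3.0, 5)):
    print(f"k={k:.1f}: {np.round(frame, 1)}")
```

The first frame (k = 1.0) is the unmodified composite; every later frame amplifies the same deviations further, which is why different viewers, with different personal degrees of caricature in memory, recognize different stages.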
Impressive early results. And the article's descriptions of the various artists the author met while doing his research—how they honed their talents and how they analyze a subject when creating a particular piece—are equally fascinating.
I commend the article to your attention, along with another recent writeup, 'What Surprises Do Identical Twins Have For Identity Science?,' which unfortunately only seems to be available to IEEE Computer Society subscribers (it's in the July 2011 issue). In summary, researchers from the University of Notre Dame (in conjunction with the FBI) collected facial images from identical twins who attended both the 2009 and 2010 editions of the Twin Days Festival, held each year in Twinsburg, Ohio, and are using the resultant dataset in an attempt to improve their facial recognition algorithms' accuracy.
Studies involving identical twins are one of the most intriguing areas of biometrics. Beyond the exotic allure associated with identical twins, distinguishing between them is among the most challenging technical problems researchers face…Identical twins represent less than one half of 1 percent of the global population. About 25 percent of identical twins are "mirror identical twins," which means that the left-to-right asymmetry of one mirror twin matches the right-to-left asymmetry of the other.
Yet despite leveraging subject-unique markings such as moles and freckles, the researchers still have plenty of work ahead of them. According to a figure caption in the article, "Experiments using this data show that current state-of-the-art face recognition technology performs better than random guessing but is far below what is needed for successful application in most practical scenarios."
Back to the drawing board…or should that be the cartoonist's pad?