Eye-scanning technology is becoming increasingly pervasive as a biometric technique for individual identification, both in Hollywood ("Minority Report" and other films) and in real life (an India case study that I wrote about last September, for example, or its use by NATO troops in Afghanistan). The generic term actually encompasses two different pattern-recognition implementations: iris recognition, which matches the pattern of the colored ring surrounding the pupil, and retinal scanning, which matches the pattern of blood vessels at the back of the eye.

Recent news, however, raises fundamental questions about the techniques' effectiveness. First came the results of a recent study at Notre Dame, which I came across at both DailyTech and Slashdot, concluding that iris patterns do indeed change over time. A 153% increase in the failure rate over three years sounds dramatic, until you realize that in absolute terms this translates to a rise from 1 failure per 2 million comparisons to roughly 2.5 per 2 million. Still, when you consider that roughly 200 million individuals have already been iris-scanned in India alone, the failure rate becomes more notable.
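To make the relative-versus-absolute distinction concrete, here's a quick back-of-the-envelope calculation using the figures above (the 2.5-per-2-million and India enrollment numbers come from the article; everything else is just arithmetic):

```python
# Relating the reported 153% relative increase in iris-match
# failures to absolute figures (numbers from the article).
baseline_rate = 1 / 2_000_000   # ~1 failure per 2 million comparisons
relative_increase = 1.53        # a 153% increase over three years

new_rate = baseline_rate * (1 + relative_increase)
new_rate_per_2m = new_rate * 2_000_000
print(f"{new_rate_per_2m:.2f} failures per 2 million")  # ~2.53

# Scaled to India's roughly 200 million enrolled individuals:
expected_failures = new_rate * 200_000_000
print(f"~{expected_failures:.0f} expected failures")  # ~253
```

A tiny rate, but multiplied across hundreds of millions of enrollees it still yields hundreds of false non-matches.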

More recently, thanks again to Slashdot (along with The Verge and Wired), I learned that (akin to photographs that fool facial recognition systems) it's possible to use a finely detailed image of the eye to spoof biometric scanners. Researchers at the Universidad Autonoma de Madrid (UAM), Spain, created synthetic iris images that were able to fool an eye-scan security system more than 80 percent of the time. Interestingly, at least to me, the coverage at The Verge notes:

For security purposes, iris-scanning machines don't keep actual images of a verified person's eye. Instead, they use nearly 5,000 points of data on the unique aspects of a person's iris to check against later.
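In other words, what's stored is a compact binary template rather than a photograph, and verification compares templates rather than images. A common approach in the iris-recognition literature is to compare such templates with a normalized Hamming distance. The following is an illustrative sketch only, not the UAM system or any commercial product: the 5,000-bit template size mirrors the "nearly 5,000 points of data" The Verge mentions, and the 0.32 match threshold is a commonly cited value from the literature, not from the article.

```python
import numpy as np

TEMPLATE_BITS = 5_000     # mirrors the ~5,000 data points cited above (assumption)
MATCH_THRESHOLD = 0.32    # commonly cited decision threshold (assumption)

def hamming_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of bits that differ between two binary templates."""
    return np.count_nonzero(a != b) / a.size

def is_match(stored: np.ndarray, probe: np.ndarray) -> bool:
    """Accept if the templates differ in fewer than THRESHOLD of their bits."""
    return hamming_distance(stored, probe) < MATCH_THRESHOLD

rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, TEMPLATE_BITS, dtype=np.uint8)

# A genuine re-scan: same eye, with ~10% of bits flipped by sensor noise.
noise = (rng.random(TEMPLATE_BITS) < 0.10).astype(np.uint8)
probe_same = enrolled ^ noise

# An impostor: an unrelated random template (~50% of bits will differ).
probe_other = rng.integers(0, 2, TEMPLATE_BITS, dtype=np.uint8)

print(is_match(enrolled, probe_same))   # True
print(is_match(enrolled, probe_other))  # False
```

This also illustrates why a sufficiently detailed synthetic iris image can spoof such a system: if the printed pattern encodes to a template within the threshold distance of the enrolled one, the matcher cannot tell it from a live eye.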
