
Embedded Vision In Medicine: Let Smartphone Apps Inspire Your Design Decisions


By Brian Dipert
Editor-In-Chief
Embedded Vision Alliance
Senior Analyst
BDTI

This article was originally published by Medical Design Technology. It is reprinted here with the permission of the original publisher.

Embedded vision refers to the ability of equipment to extract meaning from (and appropriately respond to) visual inputs. It's quickly becoming one of the hottest trends in electronics technology, fueled by the emergence of increasingly capable, energy-efficient and affordable processors, image sensors, memories and other semiconductor building blocks, along with optics, illumination LEDs and other subsystems. Advancements in these areas, along with software and algorithms, have enabled engineers to implement robust image analysis and understanding capabilities in systems that fit in the palm of your hand, versus the traditional approach of using high-end workstations.

The tantalizing potential of embedded vision technology is being tapped by diverse applications: security setups of various types, automotive driver assistance systems supporting multiple functions, manufacturing line automation and inspection equipment, consumer electronics devices and other gesture-interface-supportive gear, various facial detection and recognition implementations, and more. Medical equipment is another key embedded vision early adopter. After all, in this era of ever-increasing pressure to reduce health care costs, any robust technology that assists human medical caregivers by speeding diagnoses and improving their accuracy is welcome.

How can you harness embedded vision capabilities in your next-generation medical equipment designs? For some clues, take a look at what clever software developers are doing with smartphones and tablets. These platforms are particularly compelling case studies, for multiple reasons. They're evolving at a torrid pace. Their subsidized low prices encourage broad market adoption. And they're outfitted with formidable hardware:

  • High-performance single- and multi-core CPUs,
  • Robust graphics and imaging processors,
  • Front- and rear-facing image sensors (sometimes even in "stereo" configurations) originally intended for videoconferencing and photography functions, but equally amenable to other vision-related uses, and
  • An abundance of both volatile and nonvolatile memory.

Take Philips' Vital Signs Camera application, for example. Available for the iPhone 4S, iPad 2 and "new iPad", it uses the front-facing camera to observe the user's rising-and-falling chest cadence, thereby determining the respiration rate. And by discerning the slight periodic variations in facial skin tone that accompany the ebbs and surges of blood flow, it measures pulse rate. In the latter respect, it's akin to finger-focused apps such as Azumio's Instant Heart Rate. And although the techniques might sound like science fiction, thereby provoking skepticism on your part, user ratings and personal observations suggest otherwise: these programs are remarkably accurate.
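To make the pulse-measurement concept concrete, here's a minimal sketch of the general technique (remote photoplethysmography) using OpenCV and NumPy. The frame rate, analysis window and Haar-cascade face detector are illustrative assumptions, and this is one plausible approach rather than Philips' actual algorithm:

```python
# A minimal sketch of pulse estimation from periodic skin-tone variation
# (remote photoplethysmography). Assumes OpenCV and NumPy, a well-lit and
# mostly stationary face, and a camera running near the assumed frame rate.
# One plausible approach, not Philips' actual algorithm.
import cv2
import numpy as np

FPS = 30.0               # assumed camera frame rate
WINDOW_SECONDS = 10      # length of the analysis window

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def green_channel_mean(frame):
    """Mean green-channel intensity over the first detected face region."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return float(frame[y:y + h, x:x + w, 1].mean())  # channel 1 = green (BGR)

def pulse_bpm(samples, fps=FPS):
    """Dominant frequency of the detrended signal, in beats per minute."""
    signal = np.asarray(samples) - np.mean(samples)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.5)  # plausible pulse: 42-210 BPM
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

cap = cv2.VideoCapture(0)
samples = []
while len(samples) < int(FPS * WINDOW_SECONDS):
    ok, frame = cap.read()
    if not ok:
        break
    value = green_channel_mean(frame)
    if value is not None:
        samples.append(value)
cap.release()

if len(samples) >= int(FPS * 2):  # need at least a couple seconds of signal
    print(f"Estimated pulse: {pulse_bpm(samples):.0f} BPM")
```

The core insight is that blood flow modulates the skin's green-channel intensity most strongly, so the dominant frequency of that signal within the plausible heart-rate band serves as a usable pulse estimate.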

If food safety is your focus area, a prototype camera phone add-on developed by UCLA researchers can be your trailblazer, discerning the concentration of E. coli in a liquid sample. And another camera phone accessory called Netra, this time originally developed at MIT, diagnoses various ocular abnormalities such as nearsightedness, farsightedness and astigmatism. What about skin abnormalities? Smartphones again come to the rescue. A $5 app from a Romanian company called Skin Scan analyzes and monitors the progression of potential melanoma spots, measuring their size, shape and color and logging their growth over time.
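The underlying measurements in the skin-analysis case are straightforward image processing. Below is a minimal sketch assuming OpenCV and NumPy, an evenly lit close-up saved as "lesion.jpg" (a hypothetical filename), and a known millimeters-per-pixel scale; it illustrates size, shape and color measurement in general, not Skin Scan's actual (or a clinically validated) method:

```python
# A minimal sketch of lesion measurement in the spirit of Skin Scan:
# segment the darkest blob in a close-up photo and report size, shape and
# color metrics. Assumes OpenCV/NumPy, an evenly lit image in "lesion.jpg"
# (hypothetical filename) and a known MM_PER_PIXEL scale factor (e.g. from
# a ruler in the frame). Illustrative only.
import cv2
import numpy as np

MM_PER_PIXEL = 0.1  # hypothetical calibration value

image = cv2.imread("lesion.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Otsu threshold, inverted so the dark lesion becomes the foreground.
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Keep the largest connected region as the lesion candidate.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
lesion = max(contours, key=cv2.contourArea)

area_px = cv2.contourArea(lesion)
perimeter_px = cv2.arcLength(lesion, True)
# Circularity is 1.0 for a perfect circle; ragged borders score lower.
circularity = 4 * np.pi * area_px / (perimeter_px ** 2)

# Average color inside the lesion outline only.
lesion_mask = np.zeros_like(mask)
cv2.drawContours(lesion_mask, [lesion], -1, 255, cv2.FILLED)
mean_bgr = cv2.mean(image, mask=lesion_mask)[:3]

print(f"Area: {area_px * MM_PER_PIXEL ** 2:.1f} mm^2")
print(f"Circularity: {circularity:.2f}")
print(f"Mean BGR color: {tuple(round(c) for c in mean_bgr)}")
```

Logging these few numbers per photo session is all a growth-over-time record requires.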

If alcohol abuse is your bailiwick, feel free to follow in the footsteps of BreathalEyes, yet another Apple iOS app. BreathalEyes calculates the degree of horizontal gaze nystagmus (HGN) in the user's eyes to estimate blood alcohol content (BAC), and is claimed accurate to within +/- 0.02% for BACs ranging from 0.02% to 0.17%. And if poor posture is provoking your product opportunity, leverage the lessons learned by another Philips implementation, this one not smartphone- or tablet-based but definitely still in the consumer electronics product space. The webcam-inclusive ErgoSensor computer display critiques users' distance from (and head orientation relative to) the screen, along with neck position and in-seat duration between breaks.
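The distance measurement in an ErgoSensor-style product can be as simple as the pinhole-camera model: a face of known physical width that appears narrower in pixels must be farther away. Here's a minimal sketch; the focal length, average face width and 50 cm comfort threshold are all illustrative assumptions, not Philips' values:

```python
# A minimal sketch of webcam-based viewing-distance estimation, the kind
# of measurement an ErgoSensor-like display could make. The focal length
# (in pixels), the 150 mm average face width and the 50 cm comfort
# threshold are all illustrative assumptions, not Philips' values.
import cv2

FOCAL_LENGTH_PX = 900.0  # assumed, from a one-time camera calibration
FACE_WIDTH_MM = 150.0    # assumed average adult face width

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def viewing_distance_mm(frame):
    """Pinhole-camera estimate: distance = focal * real_width / pixel_width."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None
    _, _, w, _ = faces[0]
    return FOCAL_LENGTH_PX * FACE_WIDTH_MM / w

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    distance = viewing_distance_mm(frame)
    if distance is not None:
        status = "OK" if distance >= 500 else "too close to the screen"
        print(f"Viewing distance: {distance / 10:.0f} cm ({status})")
```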

Consider how much these developers have accomplished within the cost, power consumption, form factor, weight and other constraints of a consumer electronics device. Now consider how much more robust your implementations can be, given the more capable hardware and software resources available in dedicated medical equipment. For additional technology and product ideas, along with embedded vision educational and development assistance, regularly visit the Embedded Vision Alliance website at www.Embedded-Vision.com.

Brian Dipert is Editor-in-Chief of the Embedded Vision Alliance, a worldwide organization of technology developers and providers, founded by BDTI (Berkeley Design Technology, Inc.), where he is also employed as a senior analyst. The mission of the Alliance is to provide engineers with practical education, information, and insights to help them incorporate embedded vision capabilities into products, thereby transforming embedded vision's compelling potential into reality rapidly and efficiently.
