
Plenoptic camera technology, most commonly known nowadays by virtue of Lytro's ongoing promotion of the concept (and sales of its first-generation implementation), has to date received mainstream attention primarily because the light-field-based approach allows for post-capture selective focus on particular depth regions of an image. Embedded vision advocates, however, are likely more intrigued by the technology's ability to dynamically ascertain a particular object's distance from the camera, as an alternative to today's more common stereo sensor, structured light, and time-of-flight approaches. As BDTI senior engineer Shehrzad Qureshi commented in a recent email conversation:

Personally I find the 3D aspect to all of this more interesting than the "ex post facto refocusing" eye candy.

Embedded vision applications, of course, are the primary reason why the Alliance covers plenoptic (light-field) technology with the degree of regularity that it does. As such, a recent announcement from Toshiba caught my eye. As initially reported in Japan's Asahi Shimbun, the company has developed a prototype image sensor measuring 5 mm x 7 mm (raw resolution unreported). In front of the sensor is a 500,000-element micro-lens array, each lens 0.03 mm in diameter, which in aggregate implements the plenoptic function. And of particular interest to embedded vision folks:

The new camera accurately measures the distance to an object based on the differences among the small images, as do cameras with two lenses that are used to create 3-D images.
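The distance measurement described above is essentially two-camera stereo triangulation applied between micro-lens sub-images. A minimal sketch, assuming a pinhole model; the focal length and disparity values are made up for illustration (Toshiba has not published the module's optics), and the 0.03 mm lens pitch from the article stands in as the baseline between adjacent lenses:

```python
# Depth from disparity between two adjacent sub-images, pinhole model:
#   z = f * B / d
# All optical parameters here are illustrative assumptions.

def depth_from_disparity(focal_length_mm, baseline_mm, disparity_mm):
    """focal_length_mm: micro-lens focal length (assumed)
    baseline_mm:     spacing between two micro-lens centers
                     (0.03 mm lens pitch from the article)
    disparity_mm:    measured shift of a feature between the sub-images
    """
    if disparity_mm <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_mm * baseline_mm / disparity_mm

# Hypothetical numbers purely to show the relationship:
z = depth_from_disparity(focal_length_mm=0.5, baseline_mm=0.03,
                         disparity_mm=0.001)
print(z)  # 15.0 (mm)
```

The key property for embedded vision is that disparity shrinks as distance grows, so depth precision falls off with range, exactly as it does for a conventional stereo pair.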

It can set the focus on objects both far and near by magnifying and superimposing only well-captured parts of the small images. Unlike traditional cameras, the new camera can create pictures that are focused on every single part of the image.
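The "magnifying and superimposing" step just described is commonly implemented in light-field processing as shift-and-sum refocusing: each sub-image is shifted in proportion to its lens position, then the stack is averaged, so objects at the chosen depth align and stay sharp while everything else blurs. A rough sketch with illustrative array shapes and shift rule, not Toshiba's actual pipeline:

```python
import numpy as np

def refocus(sub_images, lens_coords, alpha):
    """Shift-and-sum refocusing sketch.

    sub_images:  dict mapping (u, v) lens index -> 2-D image array
    lens_coords: iterable of (u, v) lens indices
    alpha:       shift in pixels per unit of lens coordinate;
                 choosing alpha selects the focal plane
    """
    acc = None
    for (u, v) in lens_coords:
        img = sub_images[(u, v)]
        # Integer shift for simplicity; a real pipeline would
        # interpolate for sub-pixel shifts.
        shifted = np.roll(np.roll(img, int(round(alpha * u)), axis=0),
                          int(round(alpha * v)), axis=1)
        acc = shifted if acc is None else acc + shifted
    return acc / len(lens_coords)
```

Sweeping alpha and keeping, per pixel, the sharpest result across the sweep is one way to build the "focused on every single part of the image" output the article describes.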

The module-equipped camera can also be used to take videos, and allows the users to retain the image of a figure in the foreground while replacing the background.
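The background-replacement effect follows naturally once the camera has a per-pixel depth map: threshold the depth to get a foreground mask, then composite a new background behind it. A hypothetical sketch (the threshold and arrays are made up):

```python
import numpy as np

def replace_background(image, depth_map, background, max_depth_mm):
    """Keep pixels whose depth is within max_depth_mm (the foreground
    figure); take the remaining pixels from `background`.

    image, background: (H, W, 3) arrays; depth_map: (H, W) array.
    """
    foreground_mask = depth_map <= max_depth_mm
    # Broadcast the (H, W) mask across the color channels.
    return np.where(foreground_mask[..., None], image, background)
```

Because the mask comes from measured depth rather than color keying, no green screen is needed, which is presumably the point of the video feature Toshiba describes.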

Toshiba aspires to commercialize the technology by the end of next year. However, as with Lytro's product (and as the images above showcase), effective resolution remains a limitation, although perhaps less so for embedded vision than for consumer photography. Underneath each micro-lens is a cluster of pixels, each capturing the scene from a slightly different perspective; any given refocused image draws only one pixel per lens, so the aggregate image resolution at each focus point is only 0.5 Mpixels.
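The arithmetic behind that 0.5 Mpixel figure, with an assumed raw sensor resolution since Toshiba did not report one:

```python
# Effective resolution of each refocused image equals the micro-lens
# count, since each lens contributes one output pixel per focal plane.
microlenses = 500_000            # from the article
effective_mpix = microlenses / 1e6
print(effective_mpix)            # 0.5 Mpixel per refocused image

# If the raw sensor were, say, 8 Mpixels (an assumption -- the actual
# figure is unreported), each 0.03 mm lens would sit over a cluster of:
raw_pixels = 8_000_000
pixels_per_lens = raw_pixels // microlenses
print(pixels_per_lens)           # 16 raw pixels per micro-lens
```

The cluster size governs how many distinct perspectives (and therefore how much depth information) the module captures, so raw resolution trades off directly against depth sampling.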

For more information, check out the following additional coverage:
