Post-Capture Selective Focus: A Video-Capable DSLR Lets You Put The Concept To The Test

Interested in trying out plenoptic light field camera technology, but don't have access to a Lytro camera (or for that matter, a Toshiba sensor prototype)? The Chaos Collective has developed a free online tool that enables you to approximate the approach using a recent-model DSLR that's capable of capturing not only still images but also video. Quoting from the developers' website:

Instead of capturing a single image through a single lens, Lytro uses a micro-lens array to capture lots of images at the same time. A light field engine then makes sense of all the different rays of light entering the camera and can use that information to allow you to refocus the image after it's been taken.

But since we only had a digital SLR hanging around the studio, we started looking at ways to achieve the same effect without needing micro-lens arrays and light field engines. The idea is simple: take lots of pictures back to back at various focal distances (collecting the same information, but over time). Then later, we can sweep through those images to pick out the exact focal distance we want to use.

But wait… A sequence of images is just a video! And since most digital SLRs these days make it super easy to capture video and manually adjust focus, that's all you need. Just hold the camera very still (a tripod is nice, but not necessary), shoot some video, and adjust the focus from near to far. That's it… All you need is a couple of seconds of video (since video usually captures at 30 frames per second, that's easily 60+ levels of focal distance). And since you may want to embed the video for sharing, being short and sweet makes it smaller to move around the net…

…Once we had the video, the next step was to figure out how to make a simple tool that could process each frame of video and compute the clarity of focus for various points in the frame. We ended up using a 20×20 grid, giving us 400 selectable regions to play with. Making the grid finer is simple, but we noticed that making the cells too small actually made it harder to calculate focal clarity. The reason: we're looking at the difference between rough and smooth transitions in the image. If the grid is too small, smooth surfaces become difficult to detect accurately. Tighter grids also produce larger embed code, so we stuck with 20×20 as a grid that's dense enough without introducing extra overhead.
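To make that frame-scoring step concrete, here is a minimal sketch in Python with OpenCV and NumPy. It reads a focus-sweep clip, scores each cell of a 20×20 grid in every frame using the variance of the Laplacian (a common focus measure that stands in for the "rough vs. smooth transitions" idea the developers describe; their actual metric isn't published here), and records which frame is sharpest for each cell. The function names and the focus_sweep.mp4 filename are illustrative assumptions, not part of the Chaos Collective's browser tool.

```python
# Illustrative sketch only (not the Chaos Collective's code): score per-region
# focus for each frame of a focus-sweep video, then look up which frame is
# sharpest in any given grid cell. Requires OpenCV (cv2) and NumPy.
import cv2
import numpy as np

GRID = 20  # 20x20 grid of selectable regions, as described above


def focus_map(frame_gray, grid=GRID):
    """Return a grid x grid array of focus scores (variance of Laplacian)."""
    h, w = frame_gray.shape
    lap = cv2.Laplacian(frame_gray, cv2.CV_64F)
    scores = np.empty((grid, grid))
    for r in range(grid):
        for c in range(grid):
            cell = lap[r * h // grid:(r + 1) * h // grid,
                       c * w // grid:(c + 1) * w // grid]
            scores[r, c] = cell.var()  # sharper regions have higher variance
    return scores


def sharpest_frame_per_cell(video_path):
    """For every grid cell, find the frame index where that cell is sharpest."""
    cap = cv2.VideoCapture(video_path)
    best_score = np.full((GRID, GRID), -1.0)
    best_frame = np.zeros((GRID, GRID), dtype=int)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        scores = focus_map(gray)
        improved = scores > best_score
        best_score[improved] = scores[improved]
        best_frame[improved] = idx
        idx += 1
    cap.release()
    return best_frame


# Usage: "click to refocus" then reduces to a table lookup; best_frame[row, col]
# tells you which frame of the sweep to display for the region that was clicked.
# best_frame = sharpest_frame_per_cell("focus_sweep.mp4")  # hypothetical clip
```

With the lookup table in hand, refocusing after the fact is just a matter of jumping the video to the frame whose index is stored for the selected cell, which is why a short clip at 30 frames per second already gives dozens of usable focal planes.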

For more information, check out the following additional coverage:
