This blog post was originally published at Intel’s website. It is reprinted here with the permission of Intel.
We’re coming up on the second anniversary of Intel Distribution of OpenVINO toolkit, which makes this the perfect time to look at the past, present, and future of the toolkit that’s helping companies worldwide accelerate the performance of their AI and computer vision applications.
First, a little about the past: Intel Distribution of OpenVINO toolkit started as a computer vision and deep learning software development kit (SDK). It was an open source product from the start, initially used by printer vendors to analyze static images and in IoT edge solutions to analyze camera video streams. Over the past two years, the deep learning (DL) portion of the toolkit’s capabilities has grown by leaps and bounds with new features like integrated GStreamer for video pipeline development and support for DL Workbench, our GUI model optimization tool. Today, DL represents by far the largest area of the toolkit’s innovation.
It’s important to recognize that history, but what excites me most is the present and future of Intel Distribution of OpenVINO toolkit. Before my Intel career, I spent years as a developer evangelist at Apple, where I learned that there is no better way to show your commitment to developers than by listening to their needs and integrating their requests into the product. That’s what we’re seeing right now with the recent announcement of the toolkit’s first Long-Term Support (LTS) release.
LTS release gives developers what they want
As more and more companies adopt the toolkit to optimize their mission-critical IoT edge projects, like accelerating disease detection on X-rays, they’ve asked for software that will last longer, since equipment in edge deployments can last for years. That’s where the LTS release comes in.
The LTS release includes security patches for two years and critical bug fixes for one year. It’s also backward compatible. It’s a perfect fit if you’re near the end of your development cycle or partway through and don’t want to change code when the next standard release comes out. If you’re deploying a lot of edge units and want standardization with the toolkit’s existing features and functionality, this release is also for you.
Our standard releases continue to be a great option for early-stage development projects, with continual product improvements—including performance enhancements and new hardware support— that push the boundaries of AI at the edge. So, take your pick—both LTS and standard releases are available now.
The LTS release is just the latest example of the exciting advances we’ve been introducing in recent months, which together highlight the increasing momentum behind Intel Distribution of OpenVINO toolkit. Let’s take a look at a few others.
Online training for one million developers
In late 2019, we announced the Intel IoT Edge AI Nanodegree program with Udacity to train 1 million developers in AI development. The online training program familiarizes students with Intel Distribution of OpenVINO toolkit so they can deploy AI at the edge.
Within a couple of months of the announcement, nearly 30,000 people had applied for scholarships to this one-of-a-kind program—and no wonder. We’re giving developers a unique opportunity to quickly build a critical skillset they’ll need to take advantage of the opportunity for AI at the edge. Get started with the free Fundamentals course today.
Post-training optimization tools
If a mature software product can boost performance by maybe 5 or 10 percent, that’s great news—remarkable, even. But given the dramatic advancements in state-of-the-art deep learning—and features like the post-training optimization tools we’re building into Intel Distribution of OpenVINO toolkit—we’re seeing performance double or even triple.
Many of our customers use these post-training optimizations, which make it possible to convert FP32 full-precision models into low-precision formats like int8. That means you can reduce latency, memory, and on-disk footprint without having to retrain your models. And that’s just one example of the deep learning optimization tools you can use to dramatically increase performance.
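To make the idea concrete, here is a minimal sketch of the affine quantization arithmetic that underlies FP32-to-int8 conversion. This is illustrative only—it is not the toolkit’s API, and real post-training optimization also calibrates activation ranges over a dataset and fuses operations—but it shows why the memory footprint shrinks 4x while values stay close to the originals:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Affine quantization of an FP32 tensor to int8.

    Illustrative sketch only: production post-training tools
    calibrate ranges per layer using a representative dataset.
    """
    lo, hi = float(weights.min()), float(weights.max())
    # Map the observed [lo, hi] range onto the 256 int8 levels.
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = int(round(-lo / scale)) - 128
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127)
    return q.astype(np.int8), scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int):
    """Recover approximate FP32 values from the int8 tensor."""
    return (q.astype(np.float32) - zero_point) * scale

# Example: a tiny FP32 "weight" tensor.
w = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale, zp = quantize_int8(w)
w_hat = dequantize(q, scale, zp)

# int8 storage is 4x smaller than FP32.
assert q.nbytes * 4 == w.nbytes
```

The round trip loses a little precision (bounded by half the scale step), which is why quantized models typically see only a small accuracy drop in exchange for lower latency and memory use.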
Try before you buy with Intel DevCloud for the Edge
Intel DevCloud for the Edge is another recent addition to the toolkit that’s drawing a lot of interest, because it lets you prototype and test your AI models using Intel Distribution of OpenVINO toolkit on whichever Intel CPUs, iGPUs, VPUs, and FPGAs you want, with no setup or upfront hardware investment.
Results of our commitment
All these examples of the current state of Intel Distribution of OpenVINO toolkit demonstrate our commitment to helping developers bring more powerful solutions to market faster. Here are a few specific examples of how our partners are using the toolkit to help materialize the enormous promise of AI at the edge:
- ADLINK and LEDA Technology increased accuracy to over 90 percent on quality checks they perform on 4,000 contact lenses a day.
- mRobot made their robot, which carries medical supplies in hospitals, nearly 6x faster while improving its power efficiency.
- Vispera ShelfSight, which identifies out-of-stock retail products, improved time to analysis by 10x.
The future looks even better
I’m excited to be spearheading the OpenVINO program at Intel because of all of this—because of the momentum and maturity we’re seeing with LTS, as well as in our partnership with Udacity and features like the post-training optimization tools and DevCloud for the Edge. I believe AI at the edge is the next major transformation in technology. And I believe that Intel is driving that transformation forward by advancing the state of the art in deep learning and inference. Download the toolkit today to see for yourself.
Of course, it’s not just about what we’ve done so far. It’s also about what’s next, and you can bet the future will be filled with even more innovations and optimizations to further enhance performance. Intel has developed, and will continue to develop, hardware that’s purpose-built for the edge, including the current low-power Gen 2 Intel Movidius VPU, available now in form factors like the Intel Neural Compute Stick 2.
The next step up, Gen 3 Intel Movidius VPU, is coming soon, along with additional improvements to Intel’s CPU product roadmap—and those are just a few of the many innovations we can look forward to seeing as we head into year three of Intel Distribution of OpenVINO toolkit.