Zeblok Computational Demonstration of Autonomous Multi-cloud to Edge ML DevOps

Mouli Narayanan, Founder and CEO of Zeblok Computational, demonstrates the company’s latest edge AI and vision technologies and products in Intel’s booth at the 2022 Embedded Vision Summit. Specifically, Narayanan demonstrates a series of AI-MicroCloud workflows geared toward enterprise architects, ML modelers, and MLOps engineers to improve automation in curating AI assets and disseminating optimized AI inference engines, including developer-ready workstations integrated with Intel tools such as OpenVINO and oneAPI.

Zeblok has solved the problem of scaling AI/ML at the edge, automating deployment of computer vision apps and other inference engines as AI-APIs to thousands of geographically dispersed edge locations. Command-line interface and Python SDK-based capabilities provide deeper integration with enterprise business processes for packaging and delivering AI models developed from various AI frameworks and third-party AI ISVs. Performance measurement, monitoring, and lifecycle management capabilities further showcase a productive environment for MLOps teams to manage edge AI APIs and edge AI applications at scale.
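To make the packaging-and-rollout idea concrete, the sketch below shows what an SDK-driven deployment flow of this shape can look like. This is a minimal illustration only: the class and method names are hypothetical and do not reflect Zeblok's actual Python SDK, which is not documented here.

```python
# Hypothetical sketch of an SDK-style edge rollout; the AIAPIPackage and
# EdgeDeployer names are illustrative, not Zeblok's actual SDK surface.
from dataclasses import dataclass, field

@dataclass
class AIAPIPackage:
    """Bundle a trained model with the metadata an edge rollout needs."""
    model_path: str
    framework: str   # e.g. "tensorflow", "pytorch", "openvino"
    version: str

@dataclass
class EdgeDeployer:
    """Tracks which edge locations have received a given AI-API package."""
    deployed: dict = field(default_factory=dict)

    def deploy(self, package: AIAPIPackage, locations: list) -> dict:
        # A real platform would push containers or API endpoints to each
        # site; here we simply record the rollout per location.
        for loc in locations:
            self.deployed[loc] = f"{package.framework}-{package.version}"
        return self.deployed

pkg = AIAPIPackage("models/detector.onnx", "openvino", "1.2.0")
deployer = EdgeDeployer()
status = deployer.deploy(pkg, ["store-001", "store-002", "factory-edge-07"])
print(status["store-001"])  # openvino-1.2.0
```

The point of the sketch is the shape of the workflow: package once, then fan the same versioned artifact out to many named edge locations from a single call.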

Most developers are accustomed to developing and training AI/ML models in public cloud environments. But none of the typical conveniences are available at the edge – no high-speed file systems, load balancers or orchestration – it’s just Intel at the edge. Whether on Advantech, Supermicro or other servers at the edge, your inferences will be running on Intel architecture. Since OpenVINO is integrated into Zeblok’s AI-WorkStation, it is easy to optimize your completed, trained AI/ML model for whatever chipset (Xeon, Movidius, etc.) is in use at a specific edge location. Zeblok’s AI-API Engine then enables creation of a chipset-specific AI API and automated deployment to thousands of edge servers. Zeblok’s AI-MicroCloud runs anywhere, but it is the integration of Intel tools and our automated AI API deployment that makes your AI apps sing.
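The chipset-specific optimization step above boils down to picking the right OpenVINO target device for whatever Intel silicon a given edge node carries. The lookup helper below is a hypothetical illustration (not Zeblok's AI-API Engine), but the device identifiers ("CPU", "GPU", "MYRIAD") are OpenVINO's standard plugin names circa 2022.

```python
# Illustrative only: map an edge node's Intel chipset to the OpenVINO
# device plugin used when compiling the model for that node. The helper
# is hypothetical; the device strings are OpenVINO's own plugin names.
OPENVINO_DEVICE_BY_CHIPSET = {
    "xeon": "CPU",         # Xeon CPUs use the CPU plugin
    "core": "CPU",         # Core CPUs likewise
    "iris-xe": "GPU",      # integrated Intel graphics
    "movidius": "MYRIAD",  # Movidius Myriad X VPU
}

def device_for(chipset: str) -> str:
    """Return the OpenVINO target device for a given edge chipset."""
    try:
        return OPENVINO_DEVICE_BY_CHIPSET[chipset.lower()]
    except KeyError:
        raise ValueError(f"no OpenVINO plugin mapped for chipset {chipset!r}")

# With OpenVINO installed, the device string feeds straight into
# compilation, e.g. core.compile_model(model, device_for(chipset)).
print(device_for("Movidius"))  # MYRIAD
```

Keeping the chipset-to-device mapping in one place is what lets the same trained model be compiled per location and then pushed out as a chipset-specific AI API.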

One trillion edge devices deployed over the next decade will need low-latency AI inferencing. But edge deployments are a challenge. Zeblok Computational built the AI-MicroCloud – a multi-cloud to edge ML DevOps and production platform – to enable our customers to mix and match AI ISVs and hardware vendors at scale to deliver edge AI applications while supporting the entire deployment lifecycle.

