
Edge AI and Vision Insights: August 5, 2020 Edition

LETTER FROM THE EDITOR
Dear Colleague,

Intel, a Premier Sponsor of the upcoming Embedded Vision Summit, will be delivering three workshops during the event.

Check out the technology workshops page on the Summit website for more information on, and online registration for, these three workshops, as well as the workshop “Beyond 2020—Vision SoCs for the Edge” from Synopsys. And while you’re there, be sure to also register for the Embedded Vision Summit, taking place online September 15-25!

On August 20, 2020, Jeff Bier, founder of the Edge AI and Vision Alliance, will present two sessions of a free one-hour webinar, “Key Trends in the Deployment of Edge AI and Computer Vision”. The first session will take place at 9 am PT (noon ET), timed for attendees in Europe and the Americas. The second session, at 6 pm PT (9 am China Standard Time on August 21), is timed for attendees in Asia. With so much happening in edge AI and computer vision applications and technology, and happening so fast, it can be difficult to see the big picture. This webinar from the Alliance will examine the four most important trends that are fueling the proliferation of edge AI and vision applications and influencing the future of the industry:

  • Deep learning – including a focus on the key challenges of obtaining sufficient training data and managing workflows.
  • Streamlining edge development – thanks to cloud computing and higher levels of abstraction in both hardware and software, it is now easier than ever for developers to implement AI and vision capabilities in edge devices.
  • Fast, cheap, energy-efficient processors – massive investment in specialized processors is paying off, delivering 1000x improvements in performance and efficiency, enabling AI and vision to be deployed even in very cost- and energy-constrained applications at the edge.
  • New sensors – the introduction of new 3D optical, thermal, neuromorphic and other advanced sensor technologies into high-volume applications like mobile phones and automobiles has catalyzed a dramatic acceleration in innovation, collapsing the cost and complexity of implementing visual perception.

Bier will explain what’s fueling each of these key trends, and will highlight key implications for technology suppliers, solution developers and end-users. He will also provide technology and application examples illustrating each of these trends, including spotlighting the winners of the Alliance’s 2020 Vision Product of the Year Awards. A question-and-answer session will follow the presentation. See here for more information, including online registration.

Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance

COVID-19 (THE NOVEL CORONAVIRUS) AND COMPUTER VISION

Face Mask Detection in Street Camera Video Streams Using AI: Behind the Curtain
In the new world of the coronavirus, according to Braulio Ríos and Marcos Toscano, two machine learning engineers at Tryolabs, numerous multidisciplinary efforts have been organized to slow the pandemic’s spread. The AI community has been a key part of many of these endeavors. Developments for monitoring social distancing and identifying face masks have particularly made headlines.

The pressure to show results as quickly as possible, along with inevitable AI “overpromising” by some developers and implementers, unfortunately often sends a misguided message: that solving these use cases is trivial thanks to AI’s “mighty powers”. To paint a more accurate and complete picture, in this technical article the company details the creative process behind a computer vision-based solution for one example application (a simplified pipeline sketch follows the list below):

  • Detecting people who pass in front of a security camera
  • Identifying face mask usage, and
  • Collecting reliable statistics (the percentage of people wearing masks)
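As a rough illustration only, and not Tryolabs’ actual design, the sketch below shows how such a pipeline might be wired together in Python with OpenCV: a person detector runs on each frame, every detected person is cropped and passed to a mask classifier, and a running tally yields the usage percentage. The model files, output layout and class indices are hypothetical placeholders.

```python
# Hypothetical sketch: per-frame person detection + mask classification + running statistics.
# Model files, output layouts and class indices are placeholders, not from the Tryolabs article.
import cv2

# Placeholder ONNX models: any person detector and binary mask classifier could stand in here.
person_net = cv2.dnn.readNetFromONNX("person_detector.onnx")
mask_net = cv2.dnn.readNetFromONNX("mask_classifier.onnx")

def detect_people(frame, conf_threshold=0.5):
    """Return person bounding boxes; assumes rows of [x1, y1, x2, y2, score] in frame pixels."""
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0, size=(640, 640), swapRB=True)
    person_net.setInput(blob)
    detections = person_net.forward().reshape(-1, 5)
    return [d[:4].astype(int) for d in detections if d[4] >= conf_threshold]

def wears_mask(crop):
    """Classify a person crop; assumes a two-class output where index 1 means 'mask'."""
    blob = cv2.dnn.blobFromImage(crop, scalefactor=1 / 255.0, size=(224, 224), swapRB=True)
    mask_net.setInput(blob)
    scores = mask_net.forward().flatten()
    return scores[1] > scores[0]

cap = cv2.VideoCapture("street_camera.mp4")  # placeholder video source
total, masked = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for (x1, y1, x2, y2) in detect_people(frame):
        crop = frame[y1:y2, x1:x2]
        if crop.size == 0:
            continue
        total += 1
        masked += int(wears_mask(crop))
cap.release()

if total:
    print(f"Mask usage: {100.0 * masked / total:.1f}% of {total} detections")
```

A real deployment would also need to track detections across frames so that the same person is not counted once per frame; that is the kind of practical complication the article explores.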

Deep Learning for Medical Imaging: COVID-19 Detection
In this guest blog post published by MathWorks, Dr. Barath Narayanan from the University of Dayton Research Institute and his colleague Dr. Russell C. Hardie from the University of Dayton provide a detailed description of how to implement a deep learning-based technique for detecting COVID-19 on chest radiographs. Their approach leverages MathWorks’ MATLAB along with an image dataset curated by Dr. Joseph Cohen, a postdoctoral fellow at the University of Montreal.
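The post itself works in MATLAB; purely as a hedged illustration of the general recipe it describes (transfer learning with a pretrained CNN whose classification head is replaced), a minimal sketch in PyTorch might look like the following. The dataset folder layout, class count and training schedule are placeholders, not taken from the article.

```python
# Illustrative transfer-learning sketch (the authors' implementation uses MATLAB, not PyTorch).
# Folder layout, hyperparameters and the three-epoch schedule are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Placeholder layout: chest_xrays/train/<class_name>/*.png
train_set = datasets.ImageFolder("chest_xrays/train", transform=transform)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

# Pretrained backbone with a new classification head sized to the dataset's classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```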

Huiying Medical: Helping Combat COVID-19 with AI Technology
This blog post from Intel discusses how Huiying Medical’s AI-based medical imaging diagnostic algorithms analyze chest CT scans to help combat the novel coronavirus. Huiying Medical’s AI-powered CT imaging solution can be deployed in the cloud or on-premises and achieves up to 96% accuracy in classifying Novel Coronavirus Pneumonia (NCP). Combined with the computing power of Intel processors running its AI neural network models, the solution takes only 2-3 seconds to process a CT study of 500 images.

UPCOMING INDUSTRY EVENTS

Edge AI and Vision Alliance Webinar – Key Trends in the Deployment of Edge AI and Computer Vision: August 20, 2020, 9:00 am PT and 6:00 pm PT

Embedded Vision Summit: September 15-25, 2020

More Events

FEATURED NEWS

OpenCV.org Launches an Affordable Smart Camera with a Kickstarter Campaign

STMicroelectronics Enables Innovative Social-Distancing Applications with FlightSense Time-of-Flight Proximity Sensors

Upcoming Online Presentations from Codeplay Software Help You Develop SYCL Code

Multiple Shipping Khronos OpenXR-conformant Systems Deliver XR Application Portability

Allied Vision’s 12.2 Mpixel Alvium 1800 USB Camera Integrates a Rolling-shutter Small-pixel Sensor

More News

VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Morpho Semantic Filtering (Best AI Software or Algorithm)
Morpho’s Semantic Filtering is the 2020 Vision Product of the Year Award Winner in the AI Software and Algorithms category. Semantic Filtering improves camera image quality by combining the best of AI-based segmentation and pixel-processing filters. In conventional imaging, computational photography algorithms are typically applied to the entire image, which can cause unwanted side effects such as loss of detail and textures, as well as the appearance of noise in certain areas. Morpho’s Semantic Filtering is trained to identify the meaning of each pixel, allowing the right algorithm, at the most effective strength, to be applied to each category to achieve the best image quality for still-image capture.
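Morpho’s pipeline is proprietary, but the per-class idea can be sketched roughly as follows: given a semantic label map from a segmentation model, apply a differently tuned filter to each class region and composite the results. The class IDs and filter choices below are invented for illustration and are not Morpho’s.

```python
# Rough sketch of per-class filtering (not Morpho's implementation): each semantic class
# gets its own filter, and the filtered regions are composited back into one image.
import cv2
import numpy as np

SKY, SKIN, TEXTURE = 0, 1, 2  # hypothetical class IDs produced by a segmentation model

def per_class_filter(image, label_map):
    """Apply a differently tuned filter per class and stitch the regions back together."""
    filtered = {
        SKY: cv2.GaussianBlur(image, (9, 9), 0),                      # strong denoise on flat sky
        SKIN: cv2.bilateralFilter(image, 9, 50, 50),                  # edge-preserving smoothing
        TEXTURE: cv2.detailEnhance(image, sigma_s=10, sigma_r=0.15),  # preserve/boost fine detail
    }
    out = image.copy()
    for class_id, result in filtered.items():
        mask = label_map == class_id
        out[mask] = result[mask]
    return out

# Toy usage: a random image and a label map that splits it into three horizontal bands.
img = np.random.randint(0, 256, (300, 400, 3), dtype=np.uint8)
labels = np.zeros((300, 400), dtype=np.uint8)
labels[100:200] = SKIN
labels[200:] = TEXTURE
result = per_class_filter(img, labels)
```

Hard masks are used here for brevity; a production pipeline would presumably blend with soft (feathered) masks to avoid visible seams at class boundaries.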

Please see here for more information on Morpho and its Semantic Filtering. The Vision Product of the Year Awards are open to Member companies of the Edge AI and Vision Alliance and celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of computer vision products. Winning a Vision Product of the Year award recognizes leadership in computer vision as evaluated by independent industry experts.

EMBEDDED VISION SUMMIT SPONSOR SHOWCASE

Attend the Embedded Vision Summit to meet this and other leading computer vision and edge AI technology suppliers!

Microsoft M12
M12 is Microsoft’s venture fund that empowers B2B entrepreneurs through investments, insight, and unparalleled access to Microsoft. M12 invests in early-stage startups transforming the enterprise with a focus on AI and ML, big data and analytics, business SaaS, cloud infrastructure, productivity and communication, security, and frontier technologies. M12 will be sponsoring the Start-up Zone area of the Technology Exhibits during the Embedded Vision Summit.

