Synopsys

www.synopsys.com

Synopsys, Inc. (Nasdaq: SNPS) is a world leader in electronic design automation (EDA), supplying the global electronics market with the software, intellectual property (IP) and services used in semiconductor design, verification and manufacturing. Manufacturers of embedded vision products require the highest-performance, energy-optimized processors, well integrated with their host microcontrollers. Synopsys Processor Designer and DesignWare ARC provide the optimized embedded vision processing subsystem, while Synopsys Platform Architect, Synopsys Virtualizer™ and HAPS® deliver the prototyping solutions for successful optimization and integration into the complete system on chip (SoC). These technology-leading solutions help give Synopsys customers a competitive edge in bringing the best products to market quickly while reducing costs and schedule risk. Synopsys is headquartered in Mountain View, California, and has approximately 80 offices located throughout North America, Europe, Japan, Asia and India.

Recent Posts by Company

Synaptics Announces Industry-First Edge Computing Video SoCs with Secure AI Framework at CES 2020

New Multimodal Platform Purpose-Built with Perceptive Intelligence for Applications Including Smart Displays, Smart Cameras, Video Soundbars and Media Streamers. CES 2020, LAS VEGAS, and SAN JOSE, Calif., Jan. 6, 2020 – Synaptics® Incorporated (NASDAQ: SYNA), the leading developer of human interface solutions, today announced a new Smart Edge AI™ platform, the VideoSmart™ VS600 family of …

Synaptics Announces Industry-First Edge Computing Video SoCs with Secure AI Framework at CES 2020 Read More +

“Five+ Techniques for Efficient Implementation of Neural Networks,” a Presentation from Synopsys

Bert Moons, Hardware Design Architect at Synopsys, presents the "Five+ Techniques for Efficient Implementation of Neural Networks" tutorial at the May 2019 Embedded Vision Summit. Embedding real-time, large-scale deep learning vision applications at the edge is challenging due to their huge computational, memory and bandwidth requirements. System architects can mitigate these demands by modifying deep …

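Weight quantization is one widely used technique of this kind (not necessarily one of the five-plus covered in the talk). A minimal sketch of symmetric 8-bit quantization in pure Python, with made-up weight values:

```python
def quantize_int8(weights):
    """Symmetric linear quantization of float weights to int8.

    Returns (quantized_values, scale); dequantize with q * scale.
    Storing int8 instead of float32 cuts weight memory by 4x.
    """
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    quantized = [max(-128, min(127, round(w / scale))) for w in weights]
    return quantized, scale


def dequantize(quantized, scale):
    """Recover approximate float weights from the int8 representation."""
    return [q * scale for q in quantized]


# Hypothetical weight values for illustration.
weights = [0.31, -0.74, 0.05, 1.27]
q, s = quantize_int8(weights)
approx = dequantize(q, s)
```

The reconstruction error per weight is bounded by half the scale step, which is why quantization often costs little accuracy while saving substantial memory and bandwidth.
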
“Fundamental Security Challenges of Embedded Vision,” a Presentation from Synopsys

Mike Borza, Principal Security Technologist at Synopsys, presents the "Fundamental Security Challenges of Embedded Vision" tutorial at the May 2019 Embedded Vision Summit. As facial recognition, surveillance and smart vehicles become an accepted part of our daily lives, product and chip designers are coming to grips with the business need to secure the data that …

“Making Cars That See — Failure is Not an Option,” a Presentation from Synopsys

Burkhard Huhnke, Vice President of Automotive Strategy for Synopsys, presents the "Making Cars That See—Failure is Not an Option" tutorial at the May 2019 Embedded Vision Summit. Drivers are the biggest source of uncertainty in the operation of cars. Computer vision is helping to eliminate human error and make the roads safer. But 14 years …

Combining an ISP and Vision Processor to Implement Computer Vision

An ISP (image signal processor), in combination with one or several vision processors, can deliver more robust computer vision capabilities than a vision processor can provide on its own. However, an ISP operating in a computer vision-optimized configuration may differ from one functioning under the historical assumption that its outputs would be intended for …

Multi-sensor Fusion for Robust Device Autonomy

While visible light image sensors may be the baseline "one sensor to rule them all" included in all autonomous system designs, they are not a panacea on their own. By combining them with other sensor technologies, such as "situational awareness" sensors (standard and high-resolution radar, LiDAR, infrared and UV, ultrasound and sonar, etc.) and "positional awareness" sensors such as …

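One common way to combine independent sensor readings (a toy illustration, not drawn from the article itself) is inverse-variance weighting: each estimate contributes in proportion to its confidence, and the fused result is always at least as certain as the best single sensor. The camera and radar numbers below are hypothetical:

```python
def fuse(estimates):
    """Fuse independent (value, variance) estimates by inverse-variance weighting.

    The fused variance is always <= the smallest input variance,
    which is one reason combining sensors improves robustness.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(v * w for (v, _), w in zip(estimates, weights)) / total
    return value, 1.0 / total


# Hypothetical range readings to the same object, in meters (value, variance).
camera = (24.8, 4.0)   # camera depth estimates degrade at long range
radar = (25.4, 0.25)   # radar excels at direct range measurement
fused_value, fused_var = fuse([camera, radar])
```

Here the fused estimate lands close to the radar reading, since the radar's variance is far smaller, while still incorporating the camera's information.
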
Computer Vision for Augmented Reality in Embedded Designs

Augmented reality (AR) and related technologies and products are becoming increasingly popular and prevalent, led by their adoption in smartphones, tablets and other mobile computing and communications devices. While developers of more deeply embedded platforms are also motivated to incorporate AR capabilities in their products, the comparative scarcity of processing, memory, storage, and networking resources …

“Designing Smarter, Safer Cars with Embedded Vision Using EV Processor Cores,” a Presentation from Synopsys

Fergus Casey, R&D Director for ARC Processors at Synopsys, presents the "Designing Smarter, Safer Cars with Embedded Vision Using Synopsys EV Processor Cores" tutorial at the May 2018 Embedded Vision Summit. Consumers, the automotive industry and government regulators are requiring greater levels of automotive functional safety with each new generation of cars. Embedded vision, using …

“New Deep Learning Techniques for Embedded Systems,” a Presentation from Synopsys

Tom Michiels, System Architect for Embedded Vision at Synopsys, presents the "New Deep Learning Techniques for Embedded Systems" tutorial at the May 2018 Embedded Vision Summit. In the past few years, the application domain of deep learning has rapidly expanded. Constant innovation has improved the accuracy and speed of learning and inference. Many techniques are …

Implementing Vision with Deep Learning in Resource-constrained Designs

DNNs (deep neural networks) have transformed the field of computer vision, delivering superior results on functions such as recognizing objects, localizing objects within a frame, and determining which pixels belong to which object. Even problems like optical flow and stereo correspondence, which had been solved quite well with conventional techniques, are now finding even better …

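A first step in fitting a DNN into a resource-constrained design is budgeting compute and memory per layer. The formulas below are the standard ones for a 2D convolution; the layer dimensions are made up for illustration:

```python
def conv2d_cost(h, w, c_in, c_out, k, stride=1):
    """MACs and parameter count for a 2D convolution with 'same' padding.

    h, w: input spatial size; c_in, c_out: channel counts; k: kernel size.
    Each output element needs c_in * k * k multiply-accumulates.
    """
    out_h, out_w = h // stride, w // stride
    macs = out_h * out_w * c_out * c_in * k * k
    params = c_out * (c_in * k * k + 1)  # +1 for a per-channel bias
    return macs, params


# Hypothetical first layer of a small detector on a 224x224 RGB frame.
macs, params = conv2d_cost(h=224, w=224, c_in=3, c_out=32, k=3, stride=2)
```

Summing these figures over all layers, and comparing against the target processor's MACs/second and on-chip memory, shows early whether a candidate network fits the design's budget.
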
Software Frameworks and Toolsets for Deep Learning-based Vision Processing

This article provides both background and implementation-detailed information on software frameworks and toolsets for deep learning-based vision processing, an increasingly popular and robust alternative to classical computer vision algorithms. It covers the leading available software framework options, the root reasons for their abundance, and guidelines for selecting an optimal approach among the candidates for a …

“Designing Scalable Embedded Vision SoCs from Day 1,” a Presentation from Synopsys

Pierre Paulin, Director of R&D for Embedded Vision at Synopsys, presents the "Designing Scalable Embedded Vision SoCs from Day 1" tutorial at the May 2017 Embedded Vision Summit. Some of the most critical embedded vision design decisions are made early on and affect the design’s ultimate scalability. Will the processor architecture support the needed vision …

“Moving CNNs from Academic Theory to Embedded Reality,” a Presentation from Synopsys

Tom Michiels, System Architect for Embedded Vision Processors at Synopsys, presents the "Moving CNNs from Academic Theory to Embedded Reality" tutorial at the May 2017 Embedded Vision Summit. In this presentation, you will learn to recognize and avoid the pitfalls of moving from an academic CNN/deep learning graph to a commercial embedded vision design. You …

Facial Analysis Delivers Diverse Vision Processing Capabilities

Computers can learn a lot about a person from their face – even if they don’t uniquely identify that person. Assessments of age range, gender, ethnicity, gaze direction, attention span, emotional state and other attributes are all now possible at real-time speeds, via advanced algorithms running on cost-effective hardware. This article provides an overview of …

“Using the OpenCL C Kernel Language for Embedded Vision Processors,” a Presentation from Synopsys

Seema Mirchandaney, Engineering Manager for Software Tools at Synopsys, presents the "Using the OpenCL C Kernel Language for Embedded Vision Processors" tutorial at the May 2016 Embedded Vision Summit. OpenCL C is a programming language that is used to write computation kernels. It is based on C99 and extended to support features such as multiple …

Deep Learning for Object Recognition: DSP and Specialized Processor Optimizations

Neural networks enable the identification of objects in still and video images with impressive speed and accuracy after an initial training phase. This so-called "deep learning" has been enabled by the combination of the evolution of traditional neural network techniques, with one latest-incarnation example known as a CNN (convolutional neural network), by the steadily increasing …

“Programming Embedded Vision Processors Using OpenVX,” a Presentation from Synopsys

Pierre Paulin, Senior R&D Director for Embedded Vision at Synopsys, presents the "Programming Embedded Vision Processors Using OpenVX" tutorial at the May 2016 Embedded Vision Summit. OpenVX, a new Khronos standard for embedded computer vision processing, defines a higher level of abstraction for algorithm specification, with the goal of enabling platform and tool innovation in …

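The graph-based execution model that OpenVX defines can be illustrated loosely (this toy Python class is not the real OpenVX C API): processing nodes are declared up front, the graph is verified once, and only then is it executed, which gives the runtime room to fuse nodes or retarget them to accelerators before any frame is processed.

```python
class Graph:
    """Toy illustration of OpenVX-style deferred execution (not the real API)."""

    def __init__(self):
        self.nodes = []       # (name, function) pairs, declared before execution
        self.verified = False

    def add_node(self, name, fn):
        self.nodes.append((name, fn))

    def verify(self):
        # A real runtime would validate formats here, and could fuse
        # or retarget nodes, because the whole pipeline is known up front.
        self.verified = True

    def process(self, image):
        assert self.verified, "verify() must succeed before processing"
        for _, fn in self.nodes:
            image = fn(image)
        return image


# Build a two-stage pipeline over a tiny 1-D "image" of pixel values.
graph = Graph()
graph.add_node("brighten", lambda img: [min(255, p + 10) for p in img])
graph.add_node("threshold", lambda img: [255 if p > 128 else 0 for p in img])
graph.verify()
result = graph.process([100, 120, 200])
```

The separation of graph construction from execution is the key idea: per-frame work touches only `process`, while all analysis happens once in `verify`.
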
May 2015 Embedded Vision Summit Technical Presentation: “Low-power Embedded Vision: A Face Tracker Case Study,” Pierre Paulin, Synopsys

Pierre Paulin, R&D Director for Embedded Vision at Synopsys, presents the "Low-power Embedded Vision: A Face Tracker Case Study" tutorial at the May 2015 Embedded Vision Summit. The ability to reliably detect and track individual objects or people has numerous applications, for example in the video-surveillance and home entertainment fields. While this has proven to …

“Tailoring Convolutional Neural Networks for Low-Cost, Low-Power Implementation,” a Presentation From Synopsys

Bruno Lavigueur, Project Leader for Embedded Vision at Synopsys, presents the "Tailoring Convolutional Neural Networks for Low-Cost, Low-Power Implementation" tutorial at the May 2015 Embedded Vision Summit. Deep learning-based object detection using convolutional neural networks (CNN) has recently emerged as one of the leading approaches for achieving state-of-the-art detection accuracy for a wide range of …

May 2014 Embedded Vision Summit Technical Presentation: “Combining Flexibility and Low-Power in Embedded Vision Subsystems: An Application to Pedestrian Detection,” Bruno Lavigueur, Synopsys

Bruno Lavigueur, Embedded Vision Subsystem Project Leader at Synopsys, presents the "Combining Flexibility and Low-Power in Embedded Vision Subsystems: An Application to Pedestrian Detection" tutorial at the May 2014 Embedded Vision Summit. Lavigueur presents an embedded-mapping and refinement case study of a pedestrian detection application. Starting from a high-level functional description in OpenCV, he decomposes …

Improved Vision Processors, Sensors Enable Proliferation of New and Enhanced ADAS Functions

This article was originally published at John Day's Automotive Electronics News. It is reprinted here with the permission of JHDay Communications. Thanks to the emergence of increasingly capable and cost-effective processors, image sensors, memories and other semiconductor devices, along with robust algorithms, it's now practical to incorporate computer vision into a wide range of embedded …

October 2013 Embedded Vision Summit Technical Presentation: “Designing a Multi-Core Architecture Tailored for Pedestrian Detection Algorithms,” Tom Michiels, Synopsys

Tom Michiels, R&D Manager at Synopsys, presents the "Designing a Multi-Core Architecture Tailored for Pedestrian Detection Algorithms" tutorial within the "Algorithms and Implementations" technical session at the October 2013 Embedded Vision Summit East. Pedestrian detection is an important function in a wide range of applications, including automotive safety systems, mobile applications, and industrial automation. A …

Another Upcoming Synopsys Embedded Vision Seminar: Application-Specific Processor Design and Prototyping

Following up on a recent news posting regarding an upcoming Japan-based event, Embedded Vision Alliance member company Synopsys also has a U.S.-based seminar coming up in the near future.

April 2013 Embedded Vision Summit Technical Presentation: “Lessons Learned: FPGA Prototyping of a Processor-Based Embedded Vision Application,” Markus Wloka, Synopsys

Markus Wloka, R&D Director for System-Level Solutions at Synopsys, presents the "Lessons Learned: FPGA Prototyping of a Processor-Based Embedded Vision Application" tutorial within the "Developing Vision Software, Accelerators and Systems" technical session at the April 2013 Embedded Vision Summit. This presentation covers the steps of building a programmable vision system and highlights the importance of …

Upcoming Synopsys Seminar Showcases DSP Development for Embedded Vision Processing

On Friday, May 31, from 10:00am to 6:30pm (Tokyo, Japan), Embedded Vision Alliance member Synopsys will present a seminar exploring application-specific instruction-set processor (ASIP) design. According to Synopsys, ASIPs are ideal for embedded vision applications where real-time performance, low power and programmability are required.

The Synopsys Vision Processor Starter Kit: Audition An Online Webinar To Learn More About It

Bo Wu, the Technical Marketing Manager of Synopsys' Systems Group, is a name that is hopefully already familiar to many of you.

September 2012 Embedded Vision Summit Presentation: “Optimization and Acceleration for OpenCV-Based Embedded Vision Applications,” Bo Wu, Synopsys

Bo Wu, Technical Marketing Manager at Synopsys, presents the "Optimization and Acceleration for OpenCV-Based Embedded Vision Applications" tutorial within the "Using Tools, APIs and Design Techniques for Embedded Vision" technical session at the September 2012 Embedded Vision Summit.

September 2012 Embedded Vision Summit Panel: Avnet, CogniVue, and Synopsys

Jeff Bier, Embedded Vision Alliance Founder, moderates a discussion panel comprised of the presenters in the "Embedded Vision Applications and Algorithms" technical session at the September 2012 Embedded Vision Summit; Mario Bergeron from Avnet, Simon Morris from CogniVue, and Bo Wu from Synopsys. The panelists discuss topics such as the ability to derive depth approximation …

Synopsys And Embedded Vision: A Multi-Faceted Product Line

If you've looked closely at the Embedded Vision Alliance member page beginning earlier today, you might have noticed two new entries; Synopsys and VanGogh Imaging. Welcome to both companies!

May 18 - 21, Santa Clara, California

The preeminent event for practical, deployable computer vision and visual AI, for product creators who want to bring visual intelligence to products.

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

1646 North California Blvd.,
Suite 360
Walnut Creek, CA 94596 USA

Phone: +1 (925) 954-1411