APPLICATIONS

“Shifts in the Automated Driving Industry,” a Presentation from AImotive

László Kishonti, CEO of AImotive, presents the "Shifts in the Automated Driving Industry" tutorial at the May 2019 Embedded Vision Summit. 2018 will have a lasting effect on the self-driving industry, as key stakeholders have turned from the unattainable goal of full autonomy by 2021 to more realistic development and productization roadmaps. This will in […]

May 2019 Embedded Vision Summit Slides

The Embedded Vision Summit was held on May 20-23, 2019 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in…

Intel Neural Compute Stick 2 for Medical Imaging

This blog post was originally published at Intel's website. It is reprinted here with the permission of Intel. Intel has been an integral part of hospital technology for almost 50 years. From desktop computers to MRI scanners, diagnostic monitors, and even portable X-Ray machines, we have been at the forefront of healthcare transformation. So it…

Intel Demonstration of RealSense Cameras for Visual Navigation of Wheeled Autonomous Robots

Roy Karunakaran, Director of Marketing for RealSense at Intel, delivers a product demonstration at the January 2019 Vision Industry and Technology Forum. Specifically, Karunakaran demonstrates the use of the Intel® RealSense™ Cameras T265 and D435 in wheeled autonomous robots. The RealSense Camera T265 is used to enable Visual SLAM, while the D435 enables obstacle avoidance.
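The demonstration pairs the T265 for Visual SLAM with the D435 for depth-based obstacle avoidance. As an illustrative sketch of the obstacle-avoidance half only, here is a simple proximity check on a synthetic depth frame (values in millimeters); a real system would pull live frames from the D435 via pyrealsense2, and the region of interest and thresholds below are made-up assumptions:

```python
import numpy as np

def obstacle_ahead(depth_mm: np.ndarray, stop_distance_mm: float = 500.0,
                   min_pixels: int = 50) -> bool:
    """Return True if enough valid depth pixels in the central region
    of the frame fall closer than stop_distance_mm."""
    h, w = depth_mm.shape
    # Region of interest: lower half of the frame, middle third horizontally
    roi = depth_mm[h // 2:, w // 3: 2 * w // 3]
    valid = roi[roi > 0]                 # a reading of 0 means "no depth data"
    close = valid < stop_distance_mm
    return int(close.sum()) >= min_pixels

# Synthetic 480x640 depth frame: everything 2 m away -> path is clear
frame = np.full((480, 640), 2000, dtype=np.uint16)
print(obstacle_ahead(frame))             # False

# Place a 100x100-pixel obstacle 0.3 m ahead inside the ROI -> stop
frame[300:400, 270:370] = 300
print(obstacle_ahead(frame))             # True
```

Requiring a minimum pixel count (rather than reacting to a single close pixel) is a cheap way to reject the speckle noise typical of stereo depth sensors.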

Upcoming Silicon Valley Meetup Presentations Discuss Visual Navigation in Robots

Intel and the Bay Area Computer Vision and Deep Learning Meetup Group would like to invite you to a pair of talks taking place on the evening of Wednesday, February 13, 2019. Intel and one of its customers, Marble, will each share perspectives on the topic of navigation and robots, both from the company who…

Multi-sensor Fusion for Robust Device Autonomy

While visible light image sensors may be the baseline “one sensor to rule them all” included in all autonomous system designs, they’re not a panacea on their own. By combining them with other sensor technologies, such as “situational awareness” sensors (standard and high-resolution radar, LiDAR, infrared and UV, ultrasound and sonar, etc.) and “positional awareness” sensors such as…
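At the estimate level, the simplest meaning of “fusion” is combining two independent noisy measurements of the same quantity by inverse-variance weighting (the static form of a Kalman update). The camera and radar range readings and their variances below are invented purely for illustration:

```python
def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Fuse two independent noisy estimates of the same quantity by
    inverse-variance weighting; returns (fused estimate, fused variance)."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Hypothetical readings: camera estimates a range of 10.0 m (variance 4.0),
# radar estimates 11.0 m (variance 1.0)
pos, var = fuse(10.0, 4.0, 11.0, 1.0)
print(pos, var)   # 10.8 0.8
```

The fused estimate is pulled toward the more certain radar reading, and the fused variance (0.8) is lower than either input’s, which is the basic payoff of combining complementary sensors.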

Mercedes-Benz, NVIDIA to Create New AI Architecture for Mercedes Vehicles

NVIDIA’s Jensen Huang and Mercedes’ Sajjad Khan unveil a vision for software-defined AI cars integrating self-driving capability and intelligent cockpits. Mercedes-Benz announced today that it has selected NVIDIA to help realize its vision for next-generation vehicles. Speaking to a packed crowd at the Mercedes-Benz booth on the first day of CES 2019, Mercedes-Benz Executive Vice President Sajjad Khan and NVIDIA…

Using Calibration to Translate Video Data to the Real World

This article was originally published at NVIDIA's website. It is reprinted here with the permission of NVIDIA. DeepStream SDK 3.0 is about seeing beyond pixels. DeepStream exists to make it easier for you to go from raw video data to metadata that can be analyzed for actionable insights. Calibration is a key step in this…
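One common form of the calibration the article refers to is a ground-plane homography: four hand-measured pixel/world correspondences are enough to map any image pixel on that plane to real-world coordinates. This is a minimal sketch of that idea, not DeepStream's actual workflow, and the point values below are made up:

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from four point
    correspondences, via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null-space vector of the 8x9 system
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    return vt[-1].reshape(3, 3)

def pixel_to_world(H, px, py):
    """Map an image pixel to ground-plane coordinates."""
    u, v, w = H @ np.array([px, py, 1.0])
    return u / w, v / w

# Hypothetical calibration: the four image corners of a floor marking
# whose real-world extent is a 2 m x 3 m rectangle.
image_pts = [(100, 400), (540, 400), (500, 120), (140, 120)]
world_pts = [(0, 0), (2, 0), (2, 3), (0, 3)]
H = homography(image_pts, world_pts)

print(pixel_to_world(H, 100, 400))   # ~(0.0, 0.0): back to the first corner
```

With the homography in hand, object detections reported in pixels can be re-expressed in meters on the ground plane, which is what turns raw video metadata into measurements you can act on.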

“Harnessing Cloud Computer Vision In a Real-time Consumer Product,” a Presentation from Cocoon Cam

Pavan Kumar, Co-founder and CTO at Cocoon Cam, delivers the presentation "Harnessing Cloud Computer Vision In a Real-time Consumer Product" at the Embedded Vision Alliance's September 2018 Vision Industry and Technology Forum. Kumar explains how his successful start-up is using edge and cloud vision computing to bring amazing new capabilities to the previously stagnant product…

“Embedded AI for Smart Cities and Retail in China,” a Presentation from Horizon Robotics

Yufeng Zhang, VP of Global Business at Horizon Robotics, presents the “Embedded AI for Smart Cities and Retail in China” tutorial at the May 2018 Embedded Vision Summit. Over the past ten years, online shopping has changed the way we do business. Now, with the development of AI technology, we are seeing the beginning of…

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411