Multimodal

Ambarella’s Next-Gen AI SoCs for Fleet Dash Cams and Vehicle Gateways Enable Vision Language Models and Transformer Networks Without Fan Cooling

Two New 5nm SoCs Provide Industry-Leading AI Performance Per Watt, Uniquely Enabling Small-Form-Factor, Single-Box Designs With Vision Transformers and VLM Visual Analysis

SANTA CLARA, Calif., May 21, 2024 — Ambarella, Inc. (NASDAQ: AMBA), an edge AI semiconductor company, today announced at AutoSens USA the latest generation of its AI systems-on-chip (SoCs) for in-vehicle […]


AiM Future Brings GenAI Applications to Mainstream Consumer Devices

Seoul, Korea, and San Jose, CA – May 15, 2024 – AiM Future, a leading provider of concurrent multimodal inference accelerators for edge and endpoint devices, today announced the launch of its next-generation generative AI architecture, “GAIA,” and its Synabro software development kit. These GAIA-based accelerators are designed to enable energy-efficient transformers and large language models […]


“Generative AI: How Will It Impact Edge Applications and Machine Perception?,” An Embedded Vision Summit Expert Panel Discussion

Sally Ward-Foxton, Senior Reporter at EE Times, moderates the “Generative AI: How Will It Impact Edge Applications and Machine Perception?” expert panel at the May 2023 Embedded Vision Summit. Other panelists include Greg Kostello, CTO and Co-Founder of Huma.AI; Vivek Pradeep, Partner Research Manager at Microsoft; Steve Teig, CEO of Perceive; and Roland Memisevic, Senior […]


“Frontiers in Perceptual AI: First-person Video and Multimodal Perception,” a Keynote Presentation from Kristen Grauman

Kristen Grauman, Professor at the University of Texas at Austin and Research Director at Facebook AI Research, presents the “Frontiers in Perceptual AI: First-person Video and Multimodal Perception” keynote at the May 2023 Embedded Vision Summit. First-person or “egocentric” perception requires understanding the video and multimodal data that streams from wearable cameras and other sensors.


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411