
ADAS in 2024: Don’t Expect Clarity on Autonomy and Safety

  • Will OEMs continue to treat the world’s most valuable car maker, Tesla, as a role model – beyond electrification and over-the-air (OTA) updates?

  • What’s at stake: If 2023 marked the public’s disillusionment with robotaxis, 2024 augurs a big shift toward advanced driver assistance systems (ADAS) crammed with automated features. Expect the auto industry to play high-stakes games on the safety of highly automated driving, the accelerated use of embedded artificial intelligence, and a fresh emphasis on in-vehicle comfort and convenience.

The $64,000 question in 2024 boils down to this: what sort of future – vehicle platforms and applications – is envisioned by carmakers not named Tesla? Are they following Tesla, or are they prepared to chart their own destiny?

Big questions include: will they continue to treat the world’s most valuable car maker as a role model – beyond electrification and over-the-air (OTA) updates? Will they follow Tesla’s lead in minimizing sensor modalities and settling for the least effective driver monitoring systems? Will they pursue more aggressive automation features, with no qualms about blaming human drivers for crashes caused by autonomy failures?

The jury is out, but technology suppliers’ agenda is clear. In their efforts to help OEMs integrate more automated features into new vehicles – ranging from L2, L2+ and L2++ to L3 – they are singing the same tune:

  • a lot more computing power than car OEMs had ever thought necessary just a few years ago
  • a stronger push for artificial intelligence via chips capable of processing more advanced neural networks
  • a single platform – software stack and hardware – enabling various levels of autonomy
  • a growing need for various sensor modalities – not just vision and radar, but also lidars and thermal imaging

Ambarella’s recently announced “Central AI Domain Controller” hits many of these points.

Nonetheless, the auto industry’s choice for an ADAS vehicle computing architecture is hardly settled. There are still many options open for exploration in 2024.

“We’re seeing multiple different approaches for ADAS domain controllers,” observed Ian Riches, vice president of the automotive practice at TechInsights. “Some are using a single ‘large’ SoC, such as one from Nvidia, and others are using multiple ‘smaller’ chips – e.g., Mobileye’s lower-tier EyeQ processors, NXP’s S32V, Texas Instruments’ TDA2, etc.” The competitive position, he noted, is “complex and evolving.”

In-cabin technologies

Given that ADAS vehicles (L2, L2+, L2++) are driven by a human collaborating with a computer driver, it is paramount that these vehicles monitor what the human driver is doing.

There is hope that Tesla’s recent “voluntary recall” of two million cars is sending a message to OEMs to install:

  • an in-vehicle camera with infrared illumination, to watch the driver’s eye movements even at night
  • a processor running face- and eye-tracking algorithms that measure the driver’s head position, gaze and eye closure to detect fatigue or distraction (a minimal sketch of such a check follows this list).
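
To make concrete what those algorithms compute, here is a minimal, illustrative sketch – not any vendor’s actual code – of a PERCLOS check, a widely used fatigue proxy: the fraction of recent video frames in which the driver’s eyes are mostly closed. The window length and thresholds below are assumptions chosen only for illustration.

```python
from collections import deque

class PerclosMonitor:
    """Tracks PERCLOS: the share of recent frames in which the eyes are mostly closed."""

    def __init__(self, window_frames=900, closed_threshold=0.2, alarm_level=0.3):
        self.window = deque(maxlen=window_frames)   # ~30 seconds of video at 30 fps
        self.closed_threshold = closed_threshold    # eye openness below this counts as "closed"
        self.alarm_level = alarm_level              # flag fatigue if PERCLOS exceeds this

    def update(self, eye_openness: float) -> bool:
        """eye_openness: 0.0 (fully closed) to 1.0 (fully open), one value per video frame."""
        self.window.append(eye_openness < self.closed_threshold)
        perclos = sum(self.window) / len(self.window)
        return perclos > self.alarm_level           # True means: warn the driver

# Toy per-frame eye-openness values, as a face/eye-tracking model might emit them.
monitor = PerclosMonitor()
for openness in (0.9, 0.85, 0.15, 0.1, 0.12, 0.1):
    alert = monitor.update(openness)
print("fatigue alert:", alert)   # True: eyes were closed in 4 of the last 6 frames
```

A production DMS combines several such cues – gaze direction, head pose, blink rate – but the principle is the same: turn per-frame tracking signals into a decision about whether the driver is paying attention.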

Despite Tesla’s reluctance to add hardware, in-cabin cameras with IR illumination are widely available. Driver monitoring system (DMS) algorithms by Seeing Machines, Smart Eye and others, designed to track eye gaze, keep advancing. Canadian tier one Magna will launch a new rearview mirror integrated with a driver monitoring system in 2024. Ford’s BlueCruise and GM’s Super Cruise already implement well-designed DMS, and their automated driving features work only on approved roads.

But here’s the rub.

If Tesla believes that over-the-air software updates that nag drivers more frequently to keep their eyes on the road are all it takes to fix Autopilot’s defects – and gets away with it – what could stop other carmakers from cutting corners and emulating Tesla?

The year 2024 will thus test which carmakers understand the persistent quagmire of machine-to-human handovers.

Meanwhile, watch out for more L2+++ models – masquerading as L3 vehicles – to emerge in 2024. Carmakers might call them L3, but they remain reluctant to state who is responsible when a hands-off, eyes-off, “self-driving” vehicle makes a mistake.

‘Digital living room on wheels’

Safety is the reason in-cabin technology is needed. But get ready for a chorus of automakers heralding the advent of the “digital living room on wheels” in 2024.

In an interview with The Ojo-Yoshida Report, Bart Plackle, vice president for automotive at Imec, explained that a car “offers you probably the best intuitive and immersive digital experience” you could ever have.

With the right capabilities, connectivity and performance, Plackle foresees a digital vehicle that serves as “your digital office on wheels, or home.”

Companies like DTS, Harman and Garmin are all eager to provide each passenger with a personalized digital experience. When passengers bring their devices into a car, the car can resume a streaming video from where it left off at home – seamlessly, of course. It sounds easy, but the software complexity this entails is huge, given that OEMs don’t necessarily know in advance which devices (running on what software/hardware platforms) passengers will carry into the car.

Nonetheless, making vehicles smarter – enhancing both safety and the “immersive” experience – demands that OEMs equip vehicles with more processing performance, as Plackle explained.

Qualcomm is already promoting its Snapdragon Cockpit Platform to allow carmakers to “create highly immersive and intuitive experiences that can be scaled across vehicle tiers and personalized for every occupant.”

Similarly, Nvidia is pitching “an intelligent experience for every passenger” via the Nvidia Drive Concierge platform. Nvidia says this “delivers an advanced infotainment system as well as acts as every passenger’s digital assistant.”

Expect these products to flourish.

Brittleness and huge variability in AI

In its agenda for 2024, first and foremost, the automotive industry must balance a growing need for safety (which is hard to measure) with accelerated demand for personalized, in-vehicle digital entertainment (more easily understood — like cup holders — by consumers).

The biggest ticket item, however, is AI.

Expect tech suppliers to churn out more press releases citing “deep learning,” or even “Large Language Models (LLMs).” Such AI and autonomy narratives are the vernacular of Tesla wannabes.

General Motors CEO Mary Barra said in late October that she wants people to see GM as a “tech company.”

Her statement came a month before GM decided to “right-size” its accident-prone robotaxi company Cruise, whose driving permit was revoked by California’s Department of Motor Vehicles in November. It’s unclear what Barra meant by “tech company,” and whether GM will ever become a tech company like Tesla.

It’s important to note that the use of AI is common not just in fully autonomous vehicles, but also in ADAS vehicles that can steer and/or accelerate automatically. They embed artificial intelligence in the form of machine learning.

Missy Cummings wrote in a paper that several high-profile Tesla crashes “highlighted both the brittleness of machine learning-enabled ADAS systems and the debate about if and how much testing such systems should undergo before widespread deployment.”

In 2024, everyone should come clean, by acknowledging that AI, as of now, isn’t necessarily making cars drive more safely.

As Cummings pointed out, in AI-driven cars, “the underlying computer vision systems struggled to accurately capture a dynamic world model, and the driver monitoring systems also failed to detect driver disengagements, leading to several fatalities.”

As described in that paper, Cummings conducted a set of increasingly complex tests, by using three Tesla Model 3s on a highway and a closed track to test road departure and construction zone detection capabilities. The results showed “extreme variability” – in-car and between cars – in “performance of the underlying artificial intelligence and computer vision systems.” She added, “In some cases, the cars seemed to perform the best in the most challenging driving scenarios (navigating a construction zone) but performed worse on seemingly simpler scenarios like detecting a road departure.”

The villain here is not necessarily AI. The bigger issue is the tech industry’s reluctance to discuss safeguards that must be there when autonomy fails.

The case of Ambarella

Ambarella is among many semiconductor companies – along with Nvidia and Qualcomm – doubling down on AI. Its recently unveiled full AV software stack, able to process neural networks for perception, sensor fusion and path planning, is a marked departure from the company’s own, previously launched, perception-only processors.

Ambarella’s integrated approach – a new family of processors co-developed and tightly integrated with its own AI stack – places the company on a similar footing to Mobileye. Ambarella could potentially compete with Qualcomm and even Nvidia, “depending on the OEM approach and the level of performance being sought,” observed TechInsights’ Riches.

Ambarella tries to separate itself from Mobileye, though. Senya Pertsel, senior director of automotive, said Ambarella “provides a flexible platform, not a black box.” It fully supports AI algorithms from OEMs, tier ones and third parties, in addition to its own.

Acknowledging “a high desire for differentiation among the OEMs,” Pertsel explained that “not every OEM is willing to tackle every component of the automated driving (AD) stack.”

What’s baffling about Ambarella, however, is its claim that its hardware can run transformer-based neural networks.

Asked how exactly transformer models – the architecture behind large language models (LLMs) – can help automated driving, Pier Paolo Porta, marketing director at VisLabs, part of Ambarella, told us that a transformer can add “global context” to the “various fields of view” captured by different cameras crossing multiple planes.

For example, a transformer helps offer a “bird’s eye view,” he said, when an overtaking car encroaches from the rear.
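
Porta did not detail the architecture, but the general pattern he describes – a transformer pooling features from several cameras into one top-down, bird’s-eye-view (BEV) representation – can be sketched briefly. The PyTorch code below is our illustrative assumption of that pattern, not Ambarella’s implementation; all dimensions and names are invented for the example.

```python
import torch
import torch.nn as nn

class BEVCrossAttention(nn.Module):
    """Toy cross-attention: learnable BEV grid queries attend to features from all cameras."""

    def __init__(self, dim=256, num_heads=8, bev_size=(50, 50)):
        super().__init__()
        self.bev_h, self.bev_w = bev_size
        # One learnable query per BEV cell; each cell can draw "global context" from any camera.
        self.bev_queries = nn.Parameter(torch.randn(self.bev_h * self.bev_w, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, cam_feats: torch.Tensor) -> torch.Tensor:
        # cam_feats: (batch, num_cameras, tokens_per_camera, dim) image features
        b, n_cams, n_tok, dim = cam_feats.shape
        kv = cam_feats.reshape(b, n_cams * n_tok, dim)        # flatten all cameras into one key/value set
        q = self.bev_queries.unsqueeze(0).expand(b, -1, -1)   # same BEV queries for every sample
        bev, _ = self.attn(q, kv, kv)                         # each BEV cell aggregates multi-camera context
        return bev.reshape(b, self.bev_h, self.bev_w, dim)    # a single top-down feature map

# Example: fuse features from six surround cameras (300 tokens each) into a 50x50 BEV grid.
feats = torch.randn(2, 6, 300, 256)
print(BEVCrossAttention()(feats).shape)   # torch.Size([2, 50, 50, 256])
```

Because every BEV cell attends to tokens from every camera, an object that straddles two fields of view – such as a car overtaking from the rear – can be represented once, in a single coordinate frame.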

Asked about LLMs, Riches said, “There has been a lot of research in the last year or so in this field, and to my non-expert’s eye, it does look promising, although far from proven.”

He added, “Companies like Ghost Autonomy see LLMs as being suitable for addressing the ‘long tail’ of driving scenarios and providing human-like reasoning and understanding of rare and unusual events.”

Riches cited an academic paper entitled “Language MPC (Model Predictive Control): Large Language Models as Decision Makers for Autonomous Driving.”

While noting that the paper discusses the “commonsense reasoning capabilities of LLMs,” he cautioned that “a lot of the research is still in academia.” Ambarella, Riches added, “is supporting this type of AI model because they do have a claim to be prepared for what the future may bring.”
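
For readers wondering what “LLMs as decision makers” looks like in practice, the pattern that line of research explores can be reduced to a toy sketch: a language model turns a textual scene description into a discrete high-level maneuver, which then sets the targets that a conventional low-level controller (such as an MPC) actually tracks. Everything below is an illustration of the idea, not the paper’s implementation; query_llm is a hypothetical placeholder, not a real API.

```python
def query_llm(scene_description: str) -> str:
    """Hypothetical stand-in for an LLM call; returns a discrete high-level maneuver."""
    # A real system would prompt a language model; here we hard-code two toy cases.
    if "construction zone" in scene_description:
        return "slow_and_keep_lane"
    return "maintain_speed"

def decision_to_targets(decision: str, current_speed_mps: float) -> dict:
    """Map the high-level decision to reference targets a conventional controller would track."""
    if decision == "slow_and_keep_lane":
        return {"target_speed_mps": min(current_speed_mps, 7.0), "lane_change": False}
    return {"target_speed_mps": current_speed_mps, "lane_change": False}

scene = "cones ahead, construction zone, workers present, traffic merging left"
decision = query_llm(scene)
print(decision, decision_to_targets(decision, current_speed_mps=25.0))
# slow_and_keep_lane {'target_speed_mps': 7.0, 'lane_change': False}
```

The appeal is that the language model supplies commonsense reasoning about rare scenes, while a deterministic controller keeps the vehicle’s actual motion bounded and verifiable.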

The reality of today’s tech market is that invoking AI, especially new neural networks such as LLMs, sells chips, thrills the press and boosts stock prices. While it’s not fair to fault chip companies for getting ahead of themselves, never forget that overselling an AI utopia could lead to disillusionment with the practical value of ADAS.

Bottom line

Automakers will promote a lot of automation features in ADAS models in 2024. Expect both technology suppliers and OEMs to cloud issues of safety and convenience by conflating what automation can do today with capabilities that are still under study.

Junko Yoshida
Editor in Chief, The Ojo-Yoshida Report


This article was published by The Ojo-Yoshida Report. For more in-depth analysis, register today and get a free two-month all-access subscription.

