
CES 2019: Searching for AI and Robotics Innovation


This market research report was originally published at Tractica's website. It is reprinted here with the permission of Tractica.

The Consumer Electronics Show (CES) in Las Vegas is always a good barometer of how the latest technologies are being infused into real consumer products, and of the innovations coming down the pipeline. The focus of CES has shifted in the past few years, with themes like artificial intelligence (AI), the Internet of Things (IoT), virtual reality (VR)/augmented reality (AR), wearables, and autonomous cars proliferating. This year, themes like AI, IoT, and the smart home were much more pronounced, while others, like VR and wearables, were muted by comparison.

CES 2019 Was More Show and Less Substance

Yes, CES is meant to be a publicity-generating event, with the world’s technology press, buyers, and retailers in attendance, but this year the messaging and the exhibits felt excessive and over the top. This was especially true for AI, which was plastered almost everywhere—on TVs, smart toilets, and smart razors—with the marketing hype going into overdrive.

In previous years, we have seen some genuine innovative shifts occur, especially in relation to AI and robotics. One had the distinct sense that we were moving into a new era of intelligent computing, with the emergence of smart sensors, smart cameras, robot assistants, and autonomous cars.

This year felt like more of the same, with bigger and grander exhibits and many “me-too” products, but there was nothing substantial to be excited about from a product innovation standpoint. A good example of show over substance was the 78-foot luxury yacht at the Furrion booth in the North Hall, which featured a helipad, drone surveillance, and smart mirrors, in addition to an oversized on-board virtual concierge called Angel.

AI and robotics had a combined marketplace at the show, which was much bigger this year but offered little in terms of real innovation. For example, many companies were offering underwater submersible robots, which are essentially remote-controlled robots with a camera attached to the front, but none were using AI for autonomy or object recognition (e.g., recognizing fish or marine life). One company offered a robotic self-cleaning cat litter solution, which one could argue hardly qualifies as a robot, because it is simply a box with an automated rake at the bottom. But CES is all about showcasing the weird, and this year had plenty to offer.

Although one could berate CES for falling short on innovation, there are still important takeaways about the state of the industry. We are now in the deployment and optimization phase for AI and robotics, as the technologies mature from the component and technology vendor perspective and original equipment manufacturers (OEMs) start to deploy them in limited numbers to consumers. Genuinely new experiences, such as highly immersive AR and VR, Level 5 autonomy, or holographic displays, might take a while. Until then, we are counting on innovative designers and product teams to find compelling use cases for AI within consumer electronics that go beyond voice assistants.

AI for Consumers Is Stuck at Voice Assistants

Last year, Amazon and Google were battling it out at CES with their marketing dollars, but this year, Google seemed to have won the battle. Voice assistants, including Google Assistant and Amazon Alexa, were scattered across the show floor, with product integrations for cars, TVs, ovens, lightbulbs, and toilets, among many others. The battle of the voice assistants was much more prominent this year: Amazon had a dedicated booth for Alexa Auto, along with Alexa on display in other smaller booths, while the bigger and brasher Google had a theme park ride exploring the various use cases for Google Assistant. Amazon announced that it has 100 million Echo devices deployed worldwide, while Google announced that it has 10 times that number, thanks to its Android smartphone deployments, on which Google Assistant is a default app.

Just a few years ago, both Google and Amazon were nowhere to be seen at CES, but the rise of AI and voice assistants has led the internet giants to invest marketing dollars in the show. The goal is to dominate the voice interface and be integrated into almost every conceivable device and object within the home. But do we really need a voice assistant in every corner of our home and office? Does every device need to be connected to Amazon Alexa or Google Assistant? As someone who has been using Amazon Alexa in the home for the past few years, I really do not see the value in having each and every device become a conversational or control interface. This is especially true in a house with kids, where having everything connected to a voice interface can become a nightmare to deal with, not to mention the privacy implications! We are clearly in the midst of the hype cycle, and we are only beginning to find meaningful and practical uses for voice assistants.

Yes, the technical capabilities of voice assistants will improve: they will become more contextual and more accurate, differentiate between speakers, expand their interpreter and translation capabilities, and, at some point, offer differentiated personalities and voice profiles, thanks to technologies like mobile-integrated Google Duplex. But translating those impressive AI capabilities into mature consumer products will take some time.

Automotive AI Shifts toward the In-Car Experience

The North Hall at the Las Vegas Convention Center (LVCC) has become a mini auto show in the last few years. Car audio and speaker companies are slowly being pushed out of the North Hall, as the future of autonomous mobility starts to become reality. All the big car manufacturers, including Mercedes, Audi, Nissan, Kia, Ford, and Hyundai, have been showing their concepts for autonomous mobility over the last few years. This year was no different, with the exception of Bell Helicopter showing its Bell Nexus vertical take-off and landing (VTOL) air taxi. While the current version being developed by Bell is not autonomous, future versions are likely to have autonomous capabilities like the Ehang 184 drone shown at CES a few years ago.

The big shift this year was auto and component manufacturers focusing on Society of Automotive Engineers (SAE) Level 2 (L2), Level 4 (L4), and Level 5 (L5) autonomous capabilities, with L3 no longer seen as a viable option due to the issues with driver handoff. L2 is where advanced driver assistance systems (ADAS) and safety become paramount, a trend we noticed Toyota pushing last year. Simulation companies and product information management (PIM) tool companies that work with auto manufacturers on developing autonomous cars see increased activity around L2, as it looks like a more viable near-term proposition for generating revenue.

However, the bigger shift was away from autonomy overall and toward the in-car experience. Autonomy is a hard engineering problem and, to some extent, a regulatory issue that should be ironed out over the next few years; in the meantime, there is significant focus on passenger safety, passenger comfort, and taking a slice of the data pipeline that is emerging within the automotive industry. Automotive manufacturers and mobility and fleet providers see a big opportunity in becoming data monopolies, especially when it comes to in-car telematics and car performance data (braking, tire pressure, engine revolutions, gasoline consumption, etc.) and passenger control and preference data (seat adjustment, temperature control, etc.). As full-screen dashboards and curved screens become possible in cars, it is all about how you can engage passengers and let them enjoy content as they move about autonomously or semi-autonomously. Qualcomm showed some amusing demos of how it envisions the future in-car experience in partnership with Amazon Alexa, leveraging the newly announced Snapdragon Automotive Cockpit Platform. It is clear that auto manufacturers and the automotive ecosystem will have to work with internet hyperscalers like Google and Amazon, especially on integrating consumer services, but also with financial institutions and banks to make commercial services available through the car.

Let us not forget the startups innovating in this space, such as Dashero.ai, which is partnering with Ford to embed its application into the car dashboard. The application is essentially an in-car e-commerce platform that uses AI and machine learning (ML) to place grocery orders during your commute, so you can pick them up en route rather than parking and going into the store. Another interesting startup is Guardian, which uses an inexpensive 2D camera and AI algorithms inside the car to offer features like driver distraction detection, baby seat detection and safety, and intruder alerts, among others. These are both great examples of using AI in novel and innovative ways to solve real problems.
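To make the in-cabin camera idea concrete, the sketch below shows one primitive building block of camera-based driver monitoring: flagging when no face is visible to a 2D cabin camera for roughly a second. This is a generic, hypothetical illustration using OpenCV's stock face detector, not Guardian's actual method; the camera index and frame-count threshold are arbitrary assumptions.

```python
import cv2

# Hypothetical sketch of a driver-attention check: if no face is detected for
# about a second, emit a warning. Not a real product's algorithm; thresholds
# and the camera index are arbitrary assumptions for illustration only.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # assumed inexpensive 2D cabin camera
missing_frames = 0

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    missing_frames = 0 if len(faces) > 0 else missing_frames + 1
    if missing_frames > 30:  # roughly one second at 30 fps
        print("possible driver distraction: no face detected")
        missing_frames = 0

cap.release()
```

A production system would of course use a trained deep learning model for gaze, pose, and occupancy rather than a Haar cascade, but the overall loop of per-frame inference on an in-car processor is the same.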

The Memo on Privacy Has Not Yet Reached CES

The elephant in the room for consumer AI is privacy. The issue came to the forefront in 2018, as Facebook and Google were both questioned by the U.S. Congress about how they collect and use consumer data. The power and influence of internet hyperscalers like Google and Facebook are unprecedented, as they have built AI-driven business models that rely on collecting user data to provide personalized services, whether restaurant recommendations or targeted ads. Their hyperscale infrastructure of large data centers and specialized AI hardware, combined with Big Data and top AI talent, gives these companies a level of power that many governments crave. We are now in an era of data monopolies the likes of which we have never seen, and in the AI age, these monopolies will face multiple questions about data privacy and the way data is shared and exchanged with other companies. While consumers are hooked on services and applications like Google, Facebook, WhatsApp, and Instagram, they are entering into a Faustian bargain, exchanging their data for convenience. We have yet to scratch the surface of the privacy issues surrounding voice assistants, an area where companies like Amazon and Google have not revealed how much information is being collected or passed on to third parties.

Despite privacy being at the forefront throughout 2018, it was surprising how rarely it came up at CES 2019, whether in the context of voice assistants, facial recognition, or smart cameras in the home. It is clear that General Data Protection Regulation (GDPR)-like regulations are being put on the table in the United States, and the internet companies are on board, which makes one wonder why there is not a rush of new, innovative solutions that put AI privacy first. As mentioned in Tractica’s 2019 AI predictions, we expect to see “AI first” companies giving way to “AI privacy first.” It would be surprising if privacy is not at the forefront at CES 2020, perhaps with Google and Amazon announcing their own “AI privacy first” agendas. In other news, Apple made a subtle gibe at Google with a massive billboard at CES touting its privacy-first stance. Watch this space!

Momentum Building for AI at the Edge

On a note related to AI and privacy, AI at the edge is gaining momentum. Although AI at the edge was not plastered across marketing billboards, one could spot examples of AI at the edge at several vendors’ booths, including Qualcomm, Intel, Mentor Graphics, STMicroelectronics, NXP, Baidu, and John Deere, among others. Google had its newly released Edge tensor processing unit (TPU) development kit at the NXP stand, where it was showcasing some basic applications, with plans to start shipping to select developers this year.

While there are varying definitions of the edge, for the most part it refers to the edge device rather than the edge cloud. The edge cloud, a concept that has yet to take shape, involves routers or boxes placed at the network edge, close to the device, where AI processing could be performed. AI on the edge device, by contrast, is already being implemented in mobile phones, cameras, drones, and robots. Tractica’s AI for Edge Devices report looks in detail at the device categories for edge AI, the drivers and challenges, and the key players in this market, estimating the edge AI chipset opportunity at more than $50 billion by 2025.

There were several interesting examples of AI at the edge, including anomaly detection, image classification, face recognition, smart city cameras, and smart agriculture, to name a few. While privacy is one of the main drivers for AI at the edge in consumer devices, especially smart speakers, latency and bandwidth connectivity are the main issues for smart tractors, which use object classification to identify weeds, and for autonomous cars, where object classification must happen in split seconds, as discussed in the sketch below.
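As a rough illustration of why latency pushes inference onto the device, the sketch below times a single on-device inference with the TensorFlow Lite interpreter. The model file name and dummy input are placeholder assumptions; a real pipeline would also account for camera capture and pre-processing time, but even this simple measurement shows why a round trip to the cloud is often not an option.

```python
import time
import numpy as np
import tensorflow as tf  # the lighter tflite_runtime package is typical on-device

# "classifier.tflite" is a placeholder; any compiled image classifier would do.
interpreter = tf.lite.Interpreter(model_path="classifier.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Dummy frame matching the model's expected input shape and dtype.
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)

start = time.perf_counter()
interpreter.invoke()  # runs entirely on the local processor, no network round trip
latency_ms = (time.perf_counter() - start) * 1000

print(f"Single-frame inference latency: {latency_ms:.1f} ms")
print("Top class index:", int(np.argmax(interpreter.get_tensor(out["index"]))))
```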

On one hand, chipset vendors are pushing for improved performance from lower-power processors to run AI at the edge; at the same time, we are seeing software and embedded tools that help developers compress large AI models to fit into resource-constrained hardware.
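A minimal sketch of one such compression step, post-training quantization with the TensorFlow Lite converter, is shown below. The SavedModel directory and output path are placeholder assumptions, and other toolchains (pruning, distillation, vendor-specific SDKs) pursue the same goal of shrinking models for constrained hardware.

```python
import tensorflow as tf

# Minimal sketch: shrink a trained model for edge deployment via post-training
# quantization. "saved_model_dir" is a placeholder for a real trained model.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables weight quantization

tflite_model = converter.convert()
with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)

print(f"Quantized model size: {len(tflite_model) / 1024:.0f} KB")
```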

Retail AI Moving from Autonomous Stores to Autonomous Stores on Wheels

Expect to see a lot of activity and noise around AI in retail in 2019, as laid out in Tractica’s white paper on 2019 AI Predictions. The theme of autonomous stores was very evident at CES, with mainly Chinese and Japanese companies like Suning, Panasonic, and JD.com offering smart retail solutions such as autonomous self-checkout stores, emotion recognition for retailers, and customer traffic analytics, all using AI-based computer vision and deep learning.

One interesting trend was taking the autonomous store to the next level by putting it on wheels. Panasonic showed its SPACe_C modular electric mobility concept, which would let retailers place an autonomous self-checkout store on an autonomous electric mobility platform. Robomart showcased its autonomous convenience store on wheels, which is expected to launch commercial pilots in Boston in 2019 in collaboration with Stop & Shop. The SnackBot, a collaboration between Robby Technologies and PepsiCo, is aimed at delivering healthy snacks; think of it as a healthy snack vending machine on wheels. The SnackBot, which was making the rounds of CES, can be stopped by waving at it or summoned to a particular location using an app, which is also used to pay for the snack. It is already in a commercial pilot on the University of the Pacific campus in Stockton, California.

B2B or Enterprise AI Could Be a Dedicated Marketplace at Future CES Shows

On an optimistic note, there were a few genuine examples of innovation, but rather than coming from consumer products, they came from the business-to-business (B2B) or enterprise sector. John Deere had a massive driverless tractor at its stand showcasing interesting AI-based applications, such as precision agriculture with smart weed-spraying nozzles using deep learning and edge AI, along with deep learning-based cameras that sort the husk from the wheat in real time as the harvester rolls through the field. John Deere’s acquisition of Blue River Technology is starting to bear fruit, and its AI-focused stand at CES was a breath of fresh air.

I hope that in the years ahead we start to see many more B2B and enterprise AI-focused companies at CES, possibly in a separate dedicated marketplace. CES has gone beyond consumer electronics and gadgets, so it might need to reposition itself as a “technology show,” with companies showcasing their latest innovations as they embed technology in physical robots, machines, and products in general. AI is the next big technology revolution, and we could end up seeing a lot more innovation in this B2B-focused marketplace. Tractica’s AI market forecasts reflect this trend, as we see 90% of the market opportunity coming from the enterprise sector rather than the consumer sector.

Aditya Kaul
Research Director, Tractica
