
How Intel is Advancing Human and AI Collaboration

This blog post was originally published at Intel’s website. It is reprinted here with the permission of Intel.

In writing about the world-renowned guests on the Intel on AI podcast, I’ve mostly covered innovators outside of our company. In this blog, however, I’m pleased to be highlighting the work happening right here at Intel. In a recent episode of the Intel on AI podcast with New York Times best-selling author Abigail Hing Wen, you can listen to Lama Nachman, Intel Fellow and Director of the Anticipatory Computing Lab, and Hanlin Tang, former Sr. Director of the Intel AI Lab, talk about some of the life-changing projects their teams are working on today.

Lama runs a multi-disciplinary research lab that spans ethnography, design, and technology, focusing on bringing artificial intelligence (AI) into different aspects of human lives. Hanlin’s lab is focused on deep learning from a research and engineering perspective, including building popular open source libraries such as Distiller and NLP Architect.

In the podcast, both Lama and Hanlin echo some of the ideas that UC Berkeley roboticist Pieter Abbeel and MIT professor Bernhardt Trout have talked about previously, specifically that general machine intelligence requires more than simply being able to solve carefully prepared problems within a narrow domain. Current AI models are rather like the power tools in the little workshop below my office: tools like my electric-motor powered circular saw, drill, and orbital sander are each very powerful and make me much more productive, but they are very specialized to particular tasks. In recent years, my kids have gotten big enough to join me at the workbench, displaying a versatility that my tools lack. A single child can saw, hammer, and sand quite independently or help me set up clamps for a complicated gluing job. Although a little of their knowledge comes from instruction, they’ve mostly learned just by watching me and trying small tasks themselves without any special effort on my part to create a training curriculum. This is something that my power tools do not do!

This general-purpose learning ability is so commonplace that if you aren’t working in AI, it is completely unremarkable. If you are working in AI, it is a miracle of intelligence beyond the grasp of any current machine. Those carpentry projects with the kids are a model for where we want AI to get to in another sense: AI isn’t about replacing people, any more than my power tools are there to replace me; it’s about augmenting and assisting.

“We’re not trying to automate these people out of the loop. We’re actually trying to enable them with tools that enable their voice to come out.”
– Lama Nachman

Working with Stephen Hawking and Peter Scott-Morgan

One of the most beloved projects here at Intel was working with world-famous theoretical physicist Stephen Hawking. As Lama describes in the podcast, a person who can control their communication machine is able to shape their world, something many of us take for granted. Helping someone communicate is not a problem that can be solved with a one-size-fits-all approach. In the case of Stephen Hawking, a simple speech recognition system couldn’t work, since a tracheotomy had permanently removed his ability to speak. Lama and her team needed to build a system driven entirely by the one muscle over which Hawking retained some control: a small muscle in his cheek.

Driving every interaction on a machine from a single signal, letting a person do everything from writing documents to surfing the web to giving lectures with the equivalent of pushing one button, is no small task, and the limited bandwidth made speech and typing painfully slow. When Lama and her team first met Stephen, the prototype system built by his graduate assistant allowed him to write about one word per minute. Lama introduced predictive language into the system, giving Hawking the ability to write and speak much faster.
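To make the bandwidth problem concrete, here is a minimal sketch, assuming a toy bigram language model, of how predictive text turns letter-by-letter entry into picking from ranked suggestions. It is not Intel’s system; the corpus, function names, and ranking are purely illustrative.

```python
# Minimal sketch of predictive text: rank likely next words so the user
# selects one instead of spelling it out. The toy corpus and bigram counts
# are illustrative assumptions, not the actual system.
from collections import Counter, defaultdict

corpus = "the black hole emits radiation the black hole evaporates slowly".split()

# Count which word tends to follow each word in the corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def suggest(prev_word, k=3):
    """Return up to k likely next words, ranked by how often they followed prev_word."""
    return [word for word, _ in bigrams[prev_word].most_common(k)]

print(suggest("black"))  # the single continuation seen in the corpus
print(suggest("hole"))   # both continuations seen in the corpus
```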

This predictive text feature evolved in the system created for roboticist Peter Scott-Morgan so that instead of typing out an entire reply, the AI-based program provides a series of responses that can be selected to communicate quickly. The hope is that someday the AI system, using models like GPT-3, can listen to the entire conversation in real time and provide its user with personalized responses. This is, as you might guess, easier to imagine than implement; subtle problems emerge from the conjunction of interaction design and active learning. For example, if a user selects an imperfect response for the sake of expediency, the AI will give itself a virtual reward and make the same slightly-wrong suggestion the next time it encounters a similar context, rather like a host who, mistaking politeness for enthusiasm, serves their unfortunate guest the same “treat” on each successive visit.
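The “polite guest” pitfall above can be made concrete with a tiny, hypothetical sketch of a naive suggester that rewards whatever the user picks; the scoring rule, function names, and example strings are all assumptions rather than the actual system.

```python
# Hedged sketch of the feedback-loop pitfall: treating every selection as a
# positive reward keeps promoting a merely acceptable response.
from collections import defaultdict

scores = defaultdict(float)  # preference score per (context, response) pair

def rank_responses(context, candidates):
    # Higher-scoring responses are suggested first.
    return sorted(candidates, key=lambda r: scores[(context, r)], reverse=True)

def record_selection(context, chosen):
    # Naive active learning: every selection counts as a reward,
    # even if the user only picked it for expediency.
    scores[(context, chosen)] += 1.0

context = "How are you feeling today?"
candidates = ["Great, thanks!", "A bit tired.", "Could be better."]

# The user settles for the top suggestion to save effort...
choice = rank_responses(context, candidates)[0]
record_selection(context, choice)

# ...so the same slightly-wrong reply is ranked even higher next time.
print(rank_responses(context, candidates)[0])
```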

“How do we take natural language, the most popular and expressive communication medium that humans excel at, and format it into something that’s digestible to AI systems and that AI systems can reason over?”
– Hanlin Tang

Collaborating with DARPA, Brown University, and Rhode Island Hospital

Brain-machine interfaces have the ring of science fiction, but this is just what researchers are working on to restore the ability of patients with spinal cord injuries to walk again. As Hanlin explains, surgeons from Rhode Island Hospital implant electrodes around a spinal cord injury site and use machine learning to decode the intent coming from the brain’s signal. The system then learns how to stimulate the post-injury site in order to activate the correct muscles. As in the systems used by Hawking and Scott-Morgan, there are substantial obstacles to solving this problem: at the sensor level, in terms of compute (walking is a real-time activity and milliseconds matter), and in terms of reliability and algorithmic verification. How do you test comprehensively when homes, footpaths, and workplaces present such a variety of textures, gradients, and discontinuities?
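As a loose illustration of the decode step, and emphatically not the Brown or Intel implementation, the sketch below maps a synthetic neural feature vector to an intended movement with a simple nearest-centroid rule and times the decision, since milliseconds matter; the data, channel count, and class names are all invented.

```python
# Hypothetical sketch: decode movement intent from neural features, then
# (in a real system) drive stimulation. The data and decoder are synthetic
# stand-ins; real BMI decoders are far more sophisticated.
import time
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: 64-channel feature vectors for two intents.
X_step = rng.normal(loc=1.0, size=(200, 64))  # "take a step"
X_rest = rng.normal(loc=0.0, size=(200, 64))  # "rest"
centroids = {"step": X_step.mean(axis=0), "rest": X_rest.mean(axis=0)}

def decode_intent(features):
    """Nearest-centroid decoder: pick the intent whose centroid is closest."""
    return min(centroids, key=lambda c: np.linalg.norm(features - centroids[c]))

# Walking is a real-time activity, so the decode loop has a tight time budget.
sample = rng.normal(loc=1.0, size=64)
start = time.perf_counter()
intent = decode_intent(sample)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"decoded intent: {intent} ({elapsed_ms:.2f} ms)")
```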

As the team at Intel focuses on how to solve the challenges of compute, the team in Thomas Serre’s lab at Brown University is working on the personalized algorithmic challenges. Every patient has unique injuries at different parts of the spinal cord, each with its own specialized neural structure for sending out a command. This research is still very much in the early stages and won’t be implemented in human beings until a long list of potential safety problems is resolved. For example, researchers are working with the Partnership on AI on areas such as anomaly detection. Having watched neurological conditions steal the ability to walk from both my parents, it is extremely heartwarming to see the energy and determination of these researchers.

The Future of Human and AI Collaboration

Both Lama and Hanlin are optimistic about the future of AI being able to tackle issues around massive data sets, speed, and accuracy, gradually taking the routine drudgery out of our lives. (I will cheerfully surrender my ironing board to the robots.) However, both also caution that simply throwing very large deep neural networks at every problem is not the answer. Instead, they see areas like interoperability (machines working smoothly with humans) and explainability (giving humans clear insight into machine decisions) as key to the future of the field. In traditional software, although unforeseen interactions between modules or systems are common, tools like log analyzers, profilers, and debuggers mean defects are usually extremely transparent, especially as high automated test coverage has become more common in recent years. AI is exactly the reverse: it begins with automated tests, but we are still in the infancy of building the equivalent of debuggers and profilers.
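To give one concrete flavor of explainability, here is a hedged sketch of a standard, generic technique, permutation importance, which asks how much a model’s accuracy drops when each input feature is shuffled. This is an illustration of the idea only, not a method attributed to Intel, and the data and tiny model are synthetic.

```python
# Hedged illustration of a simple explainability technique (permutation
# importance): shuffle one feature at a time and measure the accuracy drop.
# The synthetic data and toy "model" are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: only feature 0 actually carries the label signal.
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)

def model(X):
    # A fixed stand-in model: predict 1 when feature 0 is positive.
    return (X[:, 0] > 0).astype(int)

def accuracy(X, y):
    return float((model(X) == y).mean())

baseline = accuracy(X, y)
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # destroy feature j's information
    drop = baseline - accuracy(X_perm, y)
    print(f"feature {j}: accuracy drop {drop:.2f}")  # big drop = important feature
```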

My blog is a mere shadow of this episode, and you should really listen to the whole thing: the episodes run about half an hour each, and I’ve found Abigail and her guests to be great company during solitary lockdown runs and cycles. You can hear this episode and over a dozen others featuring world-renowned guests at: intel.com/aipodcast

For more about Intel’s research in a variety of fields, visit: https://intel.com/labs

Edward Dixon
Data Scientist, Intel
