
Opteran’s Journey to Complete Natural Intelligence for Machines

This blog post was originally published by Opteran Technologies. It is reprinted here with the permission of Opteran Technologies.

The start-up years – how Opteran is laying the right technology foundations for success

It’s been several months now since I joined Opteran as Chairman, and it has been exciting to get to grips with the technology and meet the team. It has reminded me of the start-up years when we founded Movidius (since acquired by Intel) and some of the key lessons we had to take on board rapidly as we scaled the company. At this early stage a start-up must have ambition, but it must also lay the right foundations, combining versatility with enough structure to ensure focus while still giving the team the autonomy to innovate.

Getting beyond the limits of neural networks

What attracted me to Opteran’s proposition, technology and team is that I can see all the right foundations being put in place. There is clearly the ambition to develop a radically new solution to the limitations of today’s deep learning technologies. It is a big task, but the reward for getting it right is a huge addressable market. I spent ten years working with neural networks and saw their drawbacks first hand: they are brittle, opaque, and difficult to interpret.

Opteran Natural Intelligence is more than a compelling narrative about mimicking nature to deliver more efficient autonomy. Digging deeper into the technology, it departs fundamentally from how today’s autonomous systems are controlled. Natural Intelligence promises the versatility to deal with unstructured and unpredictable situations in a more effective, elegant way.

It is also being built in a structured way, with a toolchain and underlying validation framework that are critical if the company is to maintain velocity moving forward. Opteran Natural Intelligence is made up of four main ‘insect bio-inspired’ components – Opteran See, Sense, Direct and Decide.

Opteran See provides machines with electronically stabilised 360-degree panoramic vision using only two low-cost 2D CMOS cameras. This contrasts with other systems, which require multiple depth-sensor modules to be placed around both large and small form-factor robot platforms. Opteran is currently developing functionality to exploit a range of visual cues, including depth from motion using honeybee-inspired optic flow and praying mantis-inspired stereopsis. The result is a fully stabilised 360-degree field of view that replaces four or five stereo depth modules with a single omnidirectional Opteran module, at a fraction of the weight and power.
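To make the depth-from-motion idea concrete, here is a minimal, hypothetical sketch (not Opteran’s implementation) of how depth can be recovered from optic flow when a camera translates sideways at a known speed. The focal length, speed, and use of OpenCV’s Farneback flow are assumptions chosen purely for illustration.

```python
# Hypothetical depth-from-motion sketch (not Opteran's implementation).
# Assumes a camera translating sideways at a known speed, so horizontal pixel
# flow is inversely proportional to depth: Z ~ f * Vx / u.
import cv2
import numpy as np

FOCAL_PX = 600.0       # assumed focal length in pixels
LATERAL_SPEED = 0.5    # assumed sideways translation per frame interval, metres

def depth_from_motion(prev_gray, curr_gray):
    """Estimate a per-pixel depth map (metres) from two consecutive grey frames."""
    # Dense optic flow: each pixel gets a (dx, dy) displacement in pixels/frame.
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    u = np.abs(flow[..., 0])
    u = np.maximum(u, 1e-3)        # avoid dividing by zero in textureless regions
    return FOCAL_PX * LATERAL_SPEED / u
```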

Opteran Sense offers state estimation (orientation, velocity and depth) plus collision detection and avoidance, all derived from optic-flow motion detection. There is also a clear roadmap for future developments.
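As a rough illustration of how collision detection can fall out of optic flow alone, the sketch below estimates time-to-contact from the expansion (divergence) of the flow field; the thresholds and the frontal-approach assumption are mine, not Opteran’s.

```python
# Hypothetical sketch (not Opteran's code): collision warning from optic-flow
# expansion. For an approaching surface the flow field diverges, and the
# time-to-contact is roughly tau ~ 2 / div(flow).
import cv2
import numpy as np

FRAME_DT = 1.0 / 30     # assumed frame interval, seconds
TAU_WARNING = 1.5       # assumed warning threshold, seconds

def collision_imminent(prev_gray, curr_gray):
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    # Divergence of the flow field: du/dx + dv/dy, converted to per-second units.
    div = (np.gradient(flow[..., 0], axis=1) +
           np.gradient(flow[..., 1], axis=0)) / FRAME_DT
    h, w = div.shape
    expansion = div[h // 4:3 * h // 4, w // 4:3 * w // 4].mean()  # central region
    if expansion <= 0:
        return False     # scene contracting or static: nothing approaching
    return (2.0 / expansion) < TAU_WARNING
```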

In Q4 the company is releasing Opteran Direct, a topological Simultaneous Localisation and Mapping (SLAM) capability that works the way brains do: it is robust to the problems that plague today’s feature-extraction approaches, such as aliasing, while being far more computationally efficient.
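For readers unfamiliar with topological SLAM, the toy sketch below shows the core idea: the map is a graph of “places”, each stored as a compact visual signature, and localisation is a nearest-signature lookup plus a graph update rather than metric feature matching. The signature encoding and thresholds here are placeholders and say nothing about how Opteran Direct actually works.

```python
# Toy topological map (hypothetical, for illustration only).
import numpy as np

class TopologicalMap:
    """Nodes are places; edges are traversals observed between them."""

    def __init__(self, new_place_threshold=0.35):
        self.signatures = []        # one compact visual signature per place node
        self.edges = set()          # (node_a, node_b) links observed while moving
        self.threshold = new_place_threshold

    @staticmethod
    def signature(gray_image):
        # Placeholder encoding: a tiny normalised thumbnail of the whole image.
        step_y = max(1, gray_image.shape[0] // 8)
        step_x = max(1, gray_image.shape[1] // 8)
        v = gray_image[::step_y, ::step_x].astype(np.float32).ravel()
        return v / (np.linalg.norm(v) + 1e-9)

    def localise(self, gray_image, prev_node=None):
        """Return the index of the matching place, adding a new node if needed."""
        sig = self.signature(gray_image)
        if self.signatures:
            sims = [float(sig @ s) for s in self.signatures]
            best = int(np.argmax(sims))
            if sims[best] > 1.0 - self.threshold:   # close enough: place recognised
                if prev_node is not None and prev_node != best:
                    self.edges.add((prev_node, best))
                return best
        self.signatures.append(sig)                 # otherwise: add a new place node
        node = len(self.signatures) - 1
        if prev_node is not None:
            self.edges.add((prev_node, node))
        return node
```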

Opteran Decide will follow in 2022 and will provide decision making for autonomous machines – decision making that functions as brains do, without the need for rule-based systems or deep reinforcement learning.

Achieving Complete Natural Intelligence

Looking ahead, as the company works with major customers, it is clear there is an enormous opportunity for solutions that fit within the tough size, weight, power and performance constraints of markets such as industrial and commercial drones and ground-based robots. Opteran has a whole roadmap of technology focused on full machine autonomy beyond sensing, moving and navigating, enabling machines to understand and interact with the world more naturally. For those companies prepared to go on this journey with Opteran, the technology has the versatility to open up this new approach and achieve the “holy grail” of true machine autonomy – what Opteran calls complete Natural Intelligence. I’m very excited to be on board, so watch this space: we have a number of events and announcements coming up in the next few months!
