Mentium’s co-processor technology enables OEMs to seamlessly add the largest DNN models to their systems with fast, low-cost development cycles, accelerating growth across new AI-enabled products. End users enjoy highly accurate inference and power-efficient operation, while avoiding the recurring cost of cloud services. Our hybrid analog-digital co-processor delivers cloud-quality inference at the edge: under 0.5 W power consumption, 10x higher speed and lower latency, and support for neural network models 10x larger than other edge AI semiconductor technologies. For the first time, edge devices can run large neural networks fast and efficiently enough for rapid reactions in mission-critical applications.
Mentium Technologies
