With a vision inference chip that can perform one quadrillion operations per second, upstart Recogni is going after chip-making heavyweights such as NVIDIA. Headquartered in San Jose, Calif., the company makes chips used in autonomous vehicles for visual perception.
Recogni said its Vision Cognition Module (VCM) system enables object detection accuracy up to 1,000 m (3,280 ft.) in real time. The onboard chip can also analyze and process multiple streams of data from the autonomous vehicle's high-resolution cameras at high frame rates.
The company said the peta-op-class system runs on just 25 W of power. It added it was able to do this by “use of novel logarithmic computation that significantly reduces the compute power consumption, highly optimized acceleration engine particularly for 3×3 convolutions, novel compression scheme to reduce memory data transfers, [and] minimized DRAM accesses.”
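Recogni has not published the details of its number format, but the general idea behind logarithmic computation is well established: if values are stored as logarithms, a multiplication becomes a simple addition, which takes far less silicon area and energy than a hardware multiplier. The sketch below illustrates that principle only; the function names are illustrative and do not describe Recogni's implementation.

```python
import math

def to_log(x: float) -> float:
    """Encode a positive value as its base-2 logarithm.
    (Real log-number systems also carry a sign bit and handle zero specially.)"""
    return math.log2(x)

def log_mul(a_log: float, b_log: float) -> float:
    """In the log domain, multiplication is just addition:
    log2(a * b) = log2(a) + log2(b)."""
    return a_log + b_log

def from_log(x_log: float) -> float:
    """Decode back to the linear domain."""
    return 2.0 ** x_log

# Multiply 3 * 5 without ever invoking a multiplier.
product = from_log(log_mul(to_log(3.0), to_log(5.0)))
# product ≈ 15.0
```

Additions in the log domain map to cheap adder circuits, which is one plausible route to the low power draw Recogni claims; the trade-off in such systems is that accumulation (addition of linear values) becomes harder and typically needs approximation.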
It argued that most of its competitors offer systems at sub-50% efficiency.
Recogni is backed by major automotive companies, including BMW, Toyota Ventures, and Bosch. To date, it has raised $74 million.
RK Anand founded Recogni in 2017 and ran it as CEO until October of last year, when he stepped down to become chief product officer. Anand said he planned to help the company focus on taking its technology to market.
Recogni’s new CEO, Marc Bolitho, just started last month and previously worked at ZF Group, another technology company working on self-driving vehicle systems.
Anand recently sat down with Robotics 24/7 to discuss the company’s products and the underlying technologies powering them.
Before starting Recogni, RK Anand founded OttoQ, a now-defunct parking service technology company. Source: Recogni
Camera technology gives autonomous systems depth
Cameras are some of the biggest enabling technologies behind the company’s vision system, Anand explained.
“We live in a three-dimensional world,” he said. “Anything that is autonomous in nature, whether it’s cars or factory automation, they have to navigate a three-dimensional world.”
These systems need sensory input to help them understand what’s around them, noted Anand. Cameras provide that data at relatively low cost.
Bryon Moyer, an analyst at TechInsights, observed in Microprocessor Report that Recogni’s philosophy is to “maximize the information available from cameras, since that approach has the biggest impact on inference accuracy.”
To get the most out of its chips, the company also prioritizes supporting high frame rates and red/clear/clear/blue (RCCB) image sensors, Moyer noted, and eliminates image signal processing from the cameras.
Anand said that over the past few months, customers have been trialing its chip inference system. Throughout the process, the system’s AI has been going through validation set testing. Once the chip has passed testing, the next step is to take it into production.
But the process is a long one, he noted. He compared driving on the road to flying an airplane, noting that the challenges are much different.
“On an airplane, we don’t question the safety. We don’t question its reliability. Airplanes obviously operate in a much more open space,” Anand said. “They barely run into each other. But if you’re on the ground, and you have to traverse, now you have a far more complicated environment.”
About the Author
Cesareo Contreras is associate editor at Robotics 24/7. Prior to working at Peerless Media, he was an award-winning reporter at the Metrowest Daily News and Milford Daily News in Massachusetts. Contreras is a graduate of Framingham State University and has a keen interest in the human side of emerging technologies.