Ambarella Reveals 4D Imaging Radar Architecture for Autonomous Driving
Ambarella will showcase the technology at CES 2023 in Las Vegas.
Ambarella said the new technology will help AVs traverse rainy conditions.
Ambarella has revealed a new architecture that uses 4D imaging technology to help autonomous vehicles navigate.
Robotics 24/7 Staff
· December 6, 2022
Ambarella Inc. today announced the launch of its 4D imaging radar architecture, which allows both central processing of raw radar data and deep, low-level fusion with other sensor inputs—including cameras, lidar, and ultrasonics.
This architecture provides greater environmental perception and safer path planning in AI-based advanced driver-assistance systems (ADAS) and L2+ to L5 autonomous driving systems, as well as autonomous robots, according to the Santa Clara, Calif.-based company.
This new centralized architecture will be demonstrated at Ambarella’s invitation-only event taking place during CES.
Built with Oculii radar technology
It features Ambarella’s Oculii radar technology, including AI software algorithms that dynamically adapt radar waveforms to the surrounding environment—providing high angular resolution of 0.5 degrees, an ultra-dense point cloud of up to tens of thousands of points per frame, and a long detection range of more than 500 meters.
All of this is achieved with an order of magnitude fewer antenna MIMO channels, which reduces the data bandwidth and achieves significantly lower power consumption than competing 4D imaging radars, the company claimed.
Ambarella’s centralized 4D imaging radar with Oculii technology provides a flexible, high-performance perception architecture that enables system integrators to future-proof their radar designs.
“There were roughly 100 million radar units manufactured in 2021 for automotive ADAS,” said Cédric Malaquin, team lead analyst of RF activity at Yole Intelligence, part of Yole Group. “We expect this volume to grow 2.5-fold by 2027, given the more demanding regulations on safety and more advanced driving automation systems hitting the road. Indeed, from the current one to three radar sensors per car, OEMs will move to five radar sensors per car as a baseline.
“Besides, there is an exciting debate on the radar processing partitioning and many developments associated,” Malaquin added. “One approach is centralized radar computing that will enable OEMs to offer significantly higher performance imaging radar systems and new ADAS/AD features while simultaneously optimizing the cost of radar sensing.”
To create this new architecture, Ambarella optimized the Oculii algorithms for its CV3 AI domain controller SoC family and added dedicated radar signal processing acceleration. The CV3’s performance per watt delivers the compute and memory capacity needed to achieve high radar density, range, and sensitivity.
Additionally, a single CV3 can efficiently provide high-performance, real-time processing for perception, low-level sensor fusion, and path planning, centrally and simultaneously, within autonomous vehicles and robots.
“No other semiconductor and software company has advanced in-house capabilities for both radar and camera technologies, as well as AI processing,” said Fermi Wang, president and CEO of Ambarella.
“This expertise allowed us to create an unprecedented, centralized architecture that combines our unique Oculii radar algorithms with the CV3’s industry-leading domain control performance per watt to efficiently enable new levels of AI perception, sensor fusion and path planning that will help realize the full potential of ADAS, autonomous driving and robotics,” he added.
New architecture reduces complexity and increases power efficiency
The data sets of competing 4D imaging radar technologies are too large to transport and process centrally. Each module generates multiple terabits per second of data while consuming more than 20 watts of power, due to the thousands of MIMO antenna channels each module uses to provide the high angular resolution required for 4D imaging radar.
That is multiplied across the six or more radar modules required to cover a vehicle, making central processing impractical for other radar technologies, which must process radar data across thousands of antennas.
By applying AI software to dynamically adapt the radar waveforms generated with existing monolithic microwave integrated circuit (MMIC) devices, and using AI sparsification to create virtual antennas, Oculii technology reduces the antenna array for each processor-less MMIC radar head in this new architecture to 6 transmit x 8 receive.
Overall, the number of MMICs is drastically reduced, while achieving an extremely high 0.5 degrees of joint azimuth and elevation angular resolution.
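As a rough illustration of the MIMO arithmetic above (a sketch, not Ambarella's algorithm), a 6-transmit by 8-receive head yields 48 virtual channels, while the cited 0.5-degree resolution implies a far larger effective aperture, which is where AI sparsification's synthesized virtual antennas come in. A minimal Python sketch using the standard Rayleigh resolution approximation for a half-wavelength-spaced uniform linear array:

```python
import math

def virtual_channels(n_tx: int, n_rx: int) -> int:
    """A MIMO radar forms one virtual array element per Tx/Rx pair."""
    return n_tx * n_rx

def angular_resolution_deg(n_elements: int, spacing_wl: float = 0.5) -> float:
    """Approximate Rayleigh resolution of a uniform linear array:
    theta ~ lambda / (N * d), with element spacing d in wavelengths."""
    return math.degrees(1.0 / (n_elements * spacing_wl))

phys = virtual_channels(6, 8)                        # 48 channels per radar head
print(phys, round(angular_resolution_deg(phys), 2))  # 48 channels alone give only ~2.4 deg

# Effective elements needed for 0.5-degree resolution at half-wavelength spacing:
needed = math.ceil(1.0 / (0.5 * math.radians(0.5)))
print(needed)
```

The gap between the 48 physical virtual channels and the roughly 230 effective elements this back-of-the-envelope estimate requires suggests why waveform adaptation and sparsification, rather than more antennas, are used to reach the quoted resolution.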
Additionally, Ambarella’s centralized architecture consumes significantly less power at maximum duty cycle and reduces the bandwidth for data transport by 6x, while eliminating the need for pre-filtered edge processing and its resulting loss of sensor information.
Architecture accounts for dynamic conditions
This cost-effective, software-defined centralized architecture also enables dynamic allocation of the CV3’s processing resources, based on real-time conditions, both between sensor types and among sensors of the same type.
For example, in extreme rainy conditions that diminish long-range camera data, the CV3 can shift some of its resources to improve radar inputs.
Likewise, if it is raining while driving on a highway, the CV3 can focus on data coming from front-facing radar sensors to further extend the vehicle’s detection range while providing faster reaction times. This can’t be done with an edge-based architecture, where the radar data is being processed at each module, and where processing capacity is specified for worst-case scenarios and often goes underutilized.
CV3 marks the debut of Ambarella’s next-generation CVflow architecture, with a neural vector processor and a general vector processor, which were both designed by Ambarella from the ground up to include radar-specific signal processing enhancements.
Faster than ever
These processors work in tandem to run the Oculii advanced radar perception software at far higher performance, up to 100x faster than traditional edge radar processors can achieve.
Additional benefits of this new centralized architecture include easier over-the-air (OTA) software updates for continuous improvement and future-proofing. A single OTA update can be pushed to the CV3 SoC and applied across all of the system’s radar heads, whereas each edge radar module’s processor must be updated individually, after determining which processor and OS it runs.
These radar heads eliminate the need for a processor, which reduces costs for both the upfront bill of materials and in the event of damage from an accident (most radars are located behind the vehicle’s bumper).
Additionally, many of the edge-processor radar modules deployed today never receive software updates because of this software complexity.
Target applications for the new centralized radar architecture include ADAS and level 2+ to level 5 autonomous vehicles, as well as autonomous mobile robots (AMRs) and automated guided vehicles (AGVs).
These designs are streamlined by Ambarella’s unified and flexible software development environment, which provides automotive and robotics designers with a software-upgradable platform for scaling performance from ADAS and L2+ to L5.