NVIDIA has launched NVIDIA DRIVE AutoPilot, a Level 2+ automated driving system that integrates multiple AI technologies to enable supervised self-driving vehicles. DRIVE AutoPilot is the world’s first commercially available Level 2+ automated driving system, offering intelligent cockpit assistance and visualisation capabilities that vehicle manufacturers can use to develop advanced automated driving features, the company noted.
DRIVE AutoPilot integrates NVIDIA Xavier™ system-on-a-chip (SoC) processors and NVIDIA DRIVE software to process many deep neural networks (DNNs) at high performance. This combination enables full self-driving autopilot capabilities, including highway merge, lane change, lane splits and personal mapping. Inside the cabin, features include driver monitoring, AI co-pilot capabilities and advanced in-cabin visualisation of the vehicle’s computer vision system, the company noted.
DRIVE AutoPilot addresses the limitations of existing Level 2 ADAS systems, which a recent Insurance Institute for Highway Safety study showed offer inconsistent vehicle detection and poor lane-keeping on curvy or hilly roads, resulting in frequent system disengagements in which the driver abruptly had to take control.

Central to NVIDIA DRIVE AutoPilot is the Xavier SoC, which delivers 30 trillion operations per second of processing capability. Xavier has been designed with six types of processors and 9 billion transistors, enabling it to process vast amounts of data in real time. It is the world’s first automotive-grade processor for autonomous driving and is in production today. DRIVE AutoPilot is part of the NVIDIA DRIVE platform, which companies worldwide are using to build autonomous vehicle solutions. The new Level 2+ system complements the NVIDIA DRIVE AGX Pegasus system, which provides Level 5 capabilities for robotaxis.
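To give a rough sense of why 30 trillion operations per second (TOPS) matters for surround perception, the back-of-envelope sketch below estimates the compute demand of running a DNN across several cameras in real time. Only the 30 TOPS figure comes from the article; the per-frame DNN cost, camera count and frame rate are illustrative assumptions, not NVIDIA data.

```python
# Back-of-envelope compute budget. Only XAVIER_TOPS comes from the
# article; everything else is an illustrative assumption.
XAVIER_TOPS = 30        # trillion operations per second (from the article)

gops_per_frame = 50     # assumed DNN cost per camera frame, in GOPs
cameras = 8             # assumed surround-camera count
fps = 30                # assumed frame rate per camera

# Total demand: GOPs/frame x cameras x frames/sec, converted to TOPS.
demand_tops = gops_per_frame * cameras * fps / 1000
headroom = XAVIER_TOPS - demand_tops

print(f"Perception demand: {demand_tops:.1f} TOPS")  # 12.0 TOPS
print(f"Headroom on Xavier: {headroom:.1f} TOPS")    # 18.0 TOPS
```

Even under these modest assumptions, one DNN over eight cameras consumes a double-digit share of the budget, which is why redundant multi-network stacks need this class of processor.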
The DRIVE AutoPilot software stack also integrates DRIVE AV software for handling challenges outside the vehicle, as well as DRIVE IX software for tasks inside the car. DRIVE AV uses surround sensors for full, 360-degree perception and features highly accurate localisation and path-planning capabilities. These enable supervised self-driving on the highway, from on-ramp to off-ramp. Going beyond basic adaptive cruise control, lane keeping and automatic emergency braking, its surround perception capabilities handle situations where lanes split or merge, and can safely perform lane changes. DRIVE AV also includes a diverse and redundant set of advanced DNN technologies that enable the vehicle to perceive a wide range of objects and driving situations, including DriveNet, SignNet, LaneNet, OpenRoadNet and WaitNet. This sophisticated AI software understands where other vehicles are, reads lane markings, detects pedestrians and cyclists, distinguishes different types of lights and their colours, recognises traffic signs and understands complex scenes.
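The division of labour described above — several specialised DNNs each interpreting the same camera data, with their outputs fused into one world model — can be sketched as follows. This is a hypothetical illustration only: the stub functions named after DriveNet, LaneNet and SignNet do not reflect NVIDIA’s actual DriveWorks API, and their outputs are made up.

```python
# Hypothetical sketch of a redundant multi-DNN perception step, loosely
# modelled on the article's description. These stubs stand in for real
# networks; they are NOT NVIDIA's API.
from dataclasses import dataclass, field

@dataclass
class Perception:
    """Fused world model handed to localisation and path planning."""
    objects: list = field(default_factory=list)
    lanes: list = field(default_factory=list)
    signs: list = field(default_factory=list)

def drivenet(frame):
    # Stands in for DriveNet: detects vehicles, pedestrians, cyclists.
    return [("vehicle", 0.97), ("cyclist", 0.91)]

def lanenet(frame):
    # Stands in for LaneNet: reads lane markings.
    return ["ego_lane", "left_lane"]

def signnet(frame):
    # Stands in for SignNet: recognises traffic signs.
    return ["speed_limit_65"]

def perceive(frame):
    """Run each specialised DNN on the same camera frame and fuse the
    results into a single Perception object."""
    return Perception(
        objects=drivenet(frame),
        lanes=lanenet(frame),
        signs=signnet(frame),
    )

world = perceive(frame=None)
print(len(world.objects), len(world.lanes), len(world.signs))  # 2 2 1
```

The point of the structure is redundancy: each network answers a narrow question, so a planner can cross-check them (for example, an object detection against drivable free space) rather than trust a single monolithic model.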
Rob Csongor, vice president of Autonomous Machines at NVIDIA, said a full-featured Level 2+ system requires significantly more computational horsepower and sophisticated software than what is on the road today. He added that NVIDIA DRIVE AutoPilot makes it possible for carmakers to quickly deploy advanced autonomous solutions by 2020.