NVIDIA DRIVE Map
NVIDIA DRIVE® Map is a multi-modal mapping platform designed to enable the highest levels of autonomy while improving safety. It combines the accuracy of ground-truth mapping with the freshness and scale of AI-based, fleet-sourced mapping. With four localization layers—camera, lidar, radar, and GNSS—DRIVE Map provides the redundancy and versatility required by the most advanced AI drivers. For the highest level of accuracy, the ground-truth map engine creates DRIVE Maps using rich sensors (cameras, radars, lidars, and differential GNSS/IMU) on NVIDIA DRIVE Hyperion data-collection vehicles. It achieves better than 5 cm accuracy for higher levels of autonomy (L3/L4) in selected environments, such as highways and urban areas. DRIVE Map is also designed for near-real-time operation and global scalability: built on both ground-truth and fleet-sourced data, it represents the collective memory of millions of vehicles.
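The redundancy of the four localization layers can be illustrated with a small sketch: each layer produces an independent pose estimate with a confidence score, and layers that drop out (for example, GNSS in a tunnel) are simply excluded from the fused result. The `PoseEstimate` class, the confidence weighting, and the numbers below are illustrative assumptions, not part of the DRIVE Map API.

```python
from dataclasses import dataclass


@dataclass
class PoseEstimate:
    """A 2-D pose estimate from one localization layer (hypothetical)."""
    x: float           # metres, map frame
    y: float           # metres, map frame
    confidence: float  # 0.0 (unusable) .. 1.0 (fully trusted)


def fuse_poses(layers: dict) -> tuple:
    """Confidence-weighted average of the available layer estimates.

    Layers whose confidence is 0 (e.g. camera in darkness, GNSS in a
    tunnel) drop out, so any surviving layer keeps the vehicle localized.
    """
    usable = {name: p for name, p in layers.items() if p.confidence > 0.0}
    if not usable:
        raise RuntimeError("no localization layer available")
    total = sum(p.confidence for p in usable.values())
    x = sum(p.x * p.confidence for p in usable.values()) / total
    y = sum(p.y * p.confidence for p in usable.values()) / total
    return x, y


pose = fuse_poses({
    "camera": PoseEstimate(10.02, 5.01, 0.8),
    "lidar":  PoseEstimate(10.00, 5.00, 0.9),
    "radar":  PoseEstimate(10.10, 4.95, 0.4),
    "gnss":   PoseEstimate(10.05, 5.05, 0.0),  # GNSS lost in a tunnel
})
print(pose)
```

Even with GNSS unavailable, the three remaining layers still yield a pose, which is the kind of graceful degradation the four-layer design is meant to provide.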
Learn more
Applied Intuition Vehicle OS
Applied Intuition Vehicle OS is a scalable, modular platform that enables automakers, commercial fleets, and defense integrators to develop, deploy, and update comprehensive vehicle software, hardware, and AI applications across all domains, from ADAS and infotainment to autonomy and digital services. The on-board SDK provides an embedded real-time OS, drivers, middleware, and a reference compute architecture for safety-critical and consumer-facing functions, while the off-board platform supports cloud-based data logging, remote diagnostics, OTA vehicle updates, and digital twin management. Developers work within a unified Workbench environment featuring integrated build and testing tools, CI pipelines, and automated validation workflows. It bridges vehicle intelligence across ecosystems by combining autonomy stacks, simulation suites (including vehicle dynamics and sensor simulation), and a vibrant developer toolchain.
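One task the off-board platform handles is deciding which in-vehicle software domains need an over-the-air update. A minimal sketch of that comparison is shown below; the manifest format, domain names, and version scheme are hypothetical illustrations, not Applied Intuition's actual API.

```python
# Hypothetical sketch: compare installed per-domain software versions
# against a cloud-side manifest to find pending OTA updates.
INSTALLED = {"adas": "2.3.0", "infotainment": "1.9.2", "autonomy": "0.8.1"}
CLOUD_MANIFEST = {"adas": "2.4.0", "infotainment": "1.9.2", "autonomy": "0.9.0"}


def parse(version: str) -> tuple:
    """Turn '2.3.0' into (2, 3, 0) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))


def pending_updates(installed: dict, manifest: dict) -> dict:
    """Return domain -> target version for every domain that is behind."""
    return {
        domain: target
        for domain, target in manifest.items()
        if parse(target) > parse(installed.get(domain, "0.0.0"))
    }


updates = pending_updates(INSTALLED, CLOUD_MANIFEST)
print(updates)  # the adas and autonomy domains are behind the manifest
```

Keeping updates per-domain rather than monolithic is what lets ADAS, infotainment, and autonomy evolve on independent release cadences.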
Learn more
Oxbotica Selenium
Selenium is our flagship product: a full-stack autonomy system representing over 500 person-years of effort. It is an on-vehicle software suite that, given a drive-by-wire interface and very modest compute hardware, brings full autonomy to a land-based vehicle. Selenium can transform any suitable vehicle platform into an autonomous vehicle, both at prototype volume and at scale. It is a collection of interoperable software modules that allow the vehicle to answer three key questions: Where am I? What's around me? What do I do next? Selenium spans the technological spectrum, from low-level device drivers through calibration, four-modal localization, mapping, perception, machine learning, and planning; its remarkable vertical integration even covers user interfaces and data export systems. It does not need GPS or HD maps, although both can still be utilized when available.
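The three questions above map naturally onto a localization → perception → planning loop. The sketch below shows that loop with made-up module interfaces and a deliberately trivial planning rule; it is an illustration of the described architecture, not Oxbotica's actual SDK.

```python
# Hypothetical "three questions" autonomy loop. Module names, the
# sensor-frame layout, and the stopping rule are all illustrative.


class Localizer:
    def where_am_i(self, sensor_frame):
        # A real system would fuse multiple localization modes;
        # here we just echo a pose from the (fake) sensor frame.
        return sensor_frame["odometry"]


class Perception:
    def whats_around_me(self, sensor_frame):
        return sensor_frame["detections"]


class Planner:
    def what_do_i_do_next(self, pose, obstacles):
        # Trivial rule: stop if anything is within 5 m, else continue.
        if any(obj["distance_m"] < 5.0 for obj in obstacles):
            return "stop"
        return "continue"


frame = {
    "odometry": (12.0, 3.4),
    "detections": [{"type": "pedestrian", "distance_m": 3.2}],
}
pose = Localizer().where_am_i(frame)
obstacles = Perception().whats_around_me(frame)
decision = Planner().what_do_i_do_next(pose, obstacles)
print(decision)
```

Structuring the stack as interoperable modules with narrow interfaces like these is what allows individual stages (for example, swapping a localization mode) to be replaced without touching the rest of the pipeline.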
Learn more
Apollo Autonomous Vehicle Platform
Various sensors, such as LiDAR, cameras, and radar, collect environmental data surrounding the vehicle. Using sensor fusion technology, perception algorithms can determine in real time the type, location, velocity, and orientation of objects on the road. This autonomous perception system is backed by Baidu's big data and deep-learning technologies, a vast collection of real-world labeled driving data, and a large-scale deep-learning platform running on GPU clusters. Simulation provides the ability to virtually drive millions of kilometers daily using an array of real-world traffic and autonomous driving data. Through the simulation service, partners gain access to a large number of autonomous driving scenes to quickly test, validate, and optimize models with comprehensive coverage in a way that is safe and efficient.
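A common way to derive an object's type, location, velocity, and orientation from multiple sensors is complementary fusion: each sensor contributes the attribute it measures best. The sketch below assumes the per-sensor tracks have already been associated to the same object; the track fields and values are illustrative, not Apollo's actual message types.

```python
# Hypothetical complementary sensor fusion: merge per-sensor tracks of
# one object into a single description. Field names are illustrative.
camera_track = {"id": 7, "type": "vehicle", "type_score": 0.94}
lidar_track = {"id": 7, "position": (34.2, -1.1), "heading_deg": 92.0}
radar_track = {"id": 7, "velocity_mps": 13.6}


def fuse(camera, lidar, radar):
    """Merge associated per-sensor tracks into one fused object."""
    assert camera["id"] == lidar["id"] == radar["id"], "tracks must match"
    return {
        "id": camera["id"],
        "type": camera["type"],                 # cameras classify best
        "position": lidar["position"],          # lidar gives precise geometry
        "heading_deg": lidar["heading_deg"],
        "velocity_mps": radar["velocity_mps"],  # radar measures speed directly
    }


obj = fuse(camera_track, lidar_track, radar_track)
print(obj["type"], obj["position"], obj["velocity_mps"])
```

Production stacks typically replace this attribute hand-off with probabilistic filtering (e.g. a Kalman filter over all measurements), but the division of labor among sensors is the same.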
Learn more