Helm.ai
We license AI software across the L2-L4 autonomous driving stack: perception, intent modeling, path planning, and vehicle control. High-accuracy perception and intent prediction lead to safer autonomous driving systems. Unsupervised learning and mathematical modeling, rather than supervised learning, allow training on very large unlabeled datasets, making our technologies up to several orders of magnitude more capital-efficient and substantially lowering development cost. Demonstrations include full-scene vision-based semantic segmentation fused with Lidar SLAM output from Ouster; L2+ autonomous driving across highways 280, 92, and 101 with lane-keeping, ACC, and lane changes; pedestrian segmentation with keypoint prediction; lane detection in rain and other corner cases with Lidar-vision fusion; and full-scene semantic segmentation of Botts' dots and faded lane markings.
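Fusing camera segmentation with lidar typically means projecting each lidar point into the image and reading off the per-pixel class. The sketch below illustrates that projection step under simple assumptions (a pinhole camera model, a known lidar-to-camera transform); it is a generic illustration of the technique, not Helm.ai's actual pipeline, and all function and variable names are hypothetical.

```python
import numpy as np

def label_lidar_points(points_xyz, seg_mask, intrinsics, extrinsics):
    """Assign a semantic class to each lidar point by projecting it into a
    per-pixel segmentation mask. Generic camera-lidar fusion sketch."""
    # Transform lidar points into the camera frame (homogeneous coordinates).
    ones = np.ones((points_xyz.shape[0], 1))
    pts_cam = (extrinsics @ np.hstack([points_xyz, ones]).T).T[:, :3]

    # Keep only points in front of the camera.
    in_front = pts_cam[:, 2] > 0.1
    pts_front = pts_cam[in_front]

    # Pinhole projection into pixel coordinates.
    uv = (intrinsics @ pts_front.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    h, w = seg_mask.shape
    u = np.clip(uv[:, 0].astype(int), 0, w - 1)
    v = np.clip(uv[:, 1].astype(int), 0, h - 1)

    # -1 marks points that fall outside the camera's field of view.
    labels = np.full(points_xyz.shape[0], -1, dtype=int)
    labels[np.flatnonzero(in_front)] = seg_mask[v, u]
    return labels
```

Points behind the camera keep the sentinel label -1, so a downstream consumer can fall back to lidar-only classification for them.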
Learn more
MORAI
MORAI offers a digital twin simulation platform that accelerates the development and testing of autonomous vehicles, urban air mobility, and maritime autonomous surface ships. Built on high-definition maps and a powerful physics engine, it bridges the gap between real-world and simulated test environments and provides the key elements for verifying autonomous driving, unmanned aerial vehicle, and unmanned ship systems. The platform includes a variety of sensor models, including cameras, LiDAR, GPS, radar, and inertial measurement units (IMUs). Users can generate complex and diverse test scenarios from real-world data, including log-based and edge-case scenarios. MORAI's cloud simulation enables safe, cost-effective, and scalable testing, running multiple simulations concurrently to evaluate different scenarios in parallel.
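Generating edge-case scenarios from logged data often amounts to taking a log-derived base scenario and sweeping its parameters. A minimal sketch of that pattern, using a hypothetical cut-in scenario whose fields are illustrative and not MORAI's actual API:

```python
import itertools
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class CutInScenario:
    """Hypothetical parameterized scenario derived from a driving log."""
    ego_speed_kph: float   # ego vehicle speed
    cutin_gap_m: float     # gap at which the other car cuts in
    road_friction: float   # surface friction coefficient

def expand_edge_cases(base, gaps, frictions):
    """Grid-expand a base scenario into edge-case variants by sweeping
    the cut-in gap and road friction while keeping other fields fixed."""
    return [replace(base, cutin_gap_m=g, road_friction=f)
            for g, f in itertools.product(gaps, frictions)]

# One logged scenario becomes a batch of harder variants: tighter gaps,
# slicker roads. 2 gaps x 2 friction values -> 4 scenarios to simulate.
base = CutInScenario(ego_speed_kph=100.0, cutin_gap_m=30.0, road_friction=0.9)
variants = expand_edge_cases(base, gaps=[10.0, 20.0], frictions=[0.4, 0.9])
```

Each variant is independent, which is what makes the parallel cloud execution described above straightforward: every scenario in the batch can run in its own simulator instance.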
Learn more
Apollo Autonomous Vehicle Platform
Various sensors, such as LiDAR, cameras, and radar, collect environmental data surrounding the vehicle. Using sensor fusion, perception algorithms determine in real time the type, location, velocity, and orientation of objects on the road. This perception system is backed by Baidu's big data and deep learning technologies, a vast collection of labeled real-world driving data, a large-scale deep-learning platform, and GPU clusters. Simulation provides the ability to virtually drive millions of kilometers daily using an array of real-world traffic and autonomous driving data. Through the simulation service, partners gain access to a large number of autonomous driving scenes to quickly test, validate, and optimize models with comprehensive coverage in a way that is safe and efficient.
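One textbook way to combine independent sensor readings of the same quantity, as in the fusion step described above, is inverse-variance weighting: the more certain sensor gets the larger weight, and the fused estimate is more certain than either input. This is a generic illustration of the principle, not Apollo's actual fusion code; the measurement values and variances are made up.

```python
def fuse_measurements(estimates):
    """Inverse-variance weighted fusion of independent measurements of the
    same quantity. Each estimate is a (value, variance) pair."""
    # Lower variance -> higher weight.
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused = sum(w * val for w, (val, _) in zip(weights, estimates)) / total
    fused_var = 1.0 / total  # fused variance is below every input variance
    return fused, fused_var

# Example: lidar puts the lead vehicle 40.0 m ahead (variance 0.04 m^2),
# radar says 40.6 m (variance 0.36 m^2); the result leans toward the lidar.
pos, var = fuse_measurements([(40.0, 0.04), (40.6, 0.36)])
```

The same formula extends to any number of sensors, and it is the measurement-update core of a Kalman filter when the state is one-dimensional.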
Learn more
Kodiak Driver
Kodiak AI’s technology centers on the Kodiak Driver, a unified autonomous driving platform that combines AI-powered software with modular, vehicle-agnostic hardware to enable scalable, real-world autonomy for trucks and other ground vehicles. Designed to integrate across different vehicle types and operating conditions, the system pairs a suite of sensors housed in field-swappable SensorPods, which provide full 360° perception, with deep-learning-based perception models that interpret complex environments and forward planning that anticipates changes in the road ahead. Redundant compute, power, steering, and braking systems are engineered for safety and reliability in demanding use cases. The platform supports deployment in commercial long-haul trucking, industrial logistics, and defense ground vehicles, with connectivity and telematics enabling over-the-air updates, remote fleet management, and Assisted Autonomy capabilities that allow human oversight.
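Redundant actuation systems like those described above are typically coordinated by a watchdog that monitors heartbeats from each channel and fails over when the active one goes stale. The sketch below shows that pattern in miniature; it is a generic illustration under assumed channel names and timeouts, not Kodiak's implementation.

```python
import time

class ChannelWatchdog:
    """Track heartbeats from redundant channels and fail over when the
    active channel stops reporting. Generic redundancy-pattern sketch."""

    def __init__(self, timeout_s=0.2):
        self.timeout_s = timeout_s
        self.last_beat = {}

    def heartbeat(self, channel, now=None):
        """Record that `channel` reported healthy at time `now`."""
        self.last_beat[channel] = time.monotonic() if now is None else now

    def healthy(self, channel, now=None):
        now = time.monotonic() if now is None else now
        beat = self.last_beat.get(channel)
        return beat is not None and (now - beat) <= self.timeout_s

    def active_channel(self, now=None):
        # Prefer primary, fall back to secondary; if neither channel is
        # healthy, command a minimal-risk maneuver (e.g., controlled stop).
        for channel in ("primary", "secondary"):
            if self.healthy(channel, now):
                return channel
        return "minimal_risk_maneuver"
```

The explicit `now` parameter exists so the failover logic can be unit-tested deterministically rather than depending on wall-clock timing.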
Learn more