A computer-implemented method for determining scene-consistent motion forecasts from sensor data can include obtaining scene data including one or more actor features. The computer-implemented method can include providing the scene data to a latent prior model, the latent prior model configured to generate scene latent data in response to receipt of scene data, the scene latent data including one or more latent variables. The computer-implemented method can include obtaining the scene latent data from the latent prior model. The computer-implemented method can include sampling latent sample data from the scene latent data. The computer-implemented method can include providing the latent sample data to a decoder model, the decoder model configured to decode the latent sample data into a motion forecast including one or more predicted trajectories of the one or more actor features. The computer-implemented method can include receiving the motion forecast including one or more predicted trajectories of the one or more actor features from the decoder model.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G05D 1/02 - Control of position or course in two dimensions
G06F 18/2137 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on criteria of topology preservation, e.g. multidimensional scaling or self-organising maps
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
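The latent-variable pipeline in the abstract above can be sketched as follows. This is a minimal illustration, not the patented implementation: `latent_prior_model`, `sample_latent`, and `decoder_model` are hypothetical stand-ins (a fixed linear projection and a toy decoder) for the learned networks, with NumPy in place of a deep-learning framework. The property it demonstrates is that every actor's trajectory is decoded from one shared scene latent, which is what makes the forecasts scene-consistent.

```python
import numpy as np

rng = np.random.default_rng(0)

def latent_prior_model(actor_features):
    """Map per-actor scene features to a diagonal Gaussian over scene latents.

    Hypothetical stand-in for the learned prior: a fixed linear projection
    of the pooled actor features yields the mean and log-variance.
    """
    W_mu = np.full((actor_features.shape[1], 4), 0.1)
    W_lv = np.full((actor_features.shape[1], 4), -0.05)
    pooled = actor_features.mean(axis=0)          # pool over actors
    return pooled @ W_mu, pooled @ W_lv           # (mean, log_var)

def sample_latent(mean, log_var):
    """Draw one latent sample via the reparameterisation trick."""
    return mean + np.exp(0.5 * log_var) * rng.standard_normal(mean.shape)

def decoder_model(latent, n_actors, horizon=5):
    """Decode one shared scene latent into a trajectory per actor."""
    base = np.tanh(latent).sum()                  # shared scene context
    steps = np.arange(1, horizon + 1)[:, None]    # (horizon, 1)
    return np.stack([base * steps * np.ones((horizon, 2)) * (i + 1)
                     for i in range(n_actors)])   # (n_actors, horizon, 2)

# Two actors, eight-dimensional features each.
features = rng.standard_normal((2, 8))
mean, log_var = latent_prior_model(features)
z = sample_latent(mean, log_var)                  # one sample per scene
forecast = decoder_model(z, n_actors=2)
print(forecast.shape)                             # one trajectory per actor
```

Because the decoder conditions every actor on the same sample `z`, drawing a different sample changes all trajectories jointly rather than independently per actor.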
The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
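The skip-sparse-regions idea can be illustrated with a toy tiling scheme. This is a sketch under assumed conventions, not the disclosed system: the image is cut into fixed-size tiles, all-zero tiles are treated as sparse and never convolved, and a plain dense convolution runs only inside the remaining tiles.

```python
import numpy as np

def find_relevant_patches(image, patch=4):
    """Return (row, col) offsets of patch-sized tiles that contain any data.

    Tiles that are entirely zero are treated as sparse and skipped.
    """
    coords = []
    for r in range(0, image.shape[0], patch):
        for c in range(0, image.shape[1], patch):
            if image[r:r + patch, c:c + patch].any():
                coords.append((r, c))
    return coords

def convolve_patches(image, kernel, coords, patch=4):
    """Apply a 'same'-padded convolution only inside the listed tiles."""
    out = np.zeros_like(image, dtype=float)
    k = kernel.shape[0] // 2
    padded = np.pad(image, k)
    for r0, c0 in coords:
        for r in range(r0, min(r0 + patch, image.shape[0])):
            for c in range(c0, min(c0 + patch, image.shape[1])):
                window = padded[r:r + kernel.shape[0], c:c + kernel.shape[1]]
                out[r, c] = (window * kernel).sum()
    return out

image = np.zeros((8, 8))
image[1, 1] = 1.0                      # a single populated tile
kernel = np.ones((3, 3))
tiles = find_relevant_patches(image)
result = convolve_patches(image, kernel, tiles)
print(len(tiles))                      # only 1 of 4 tiles is convolved
```

On this 8x8 example, three of the four tiles are skipped entirely, which is the source of the savings the abstract describes when imagery is sparse.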
Systems and methods for providing a vehicle service are provided. In one example embodiment, a computer-implemented method includes receiving data indicative of a service request to provide a vehicle service for an entity with respect to one or more cargo items designated for autonomous transport. The method includes obtaining a first cargo item among the one or more cargo items, from a representative of the entity at a dedicated first transfer hub proximate to a first location associated with the first cargo item. The method includes controlling a first autonomous vehicle to transport the first cargo item from the first transfer hub to a dedicated second transfer hub proximate to a second location associated with the first cargo item. The method includes providing the first cargo item to a representative of the entity at the second transfer hub, to provide the vehicle service.
Systems, methods, and vehicles for taking a vehicle out-of-service are provided. In one example embodiment, a method includes obtaining, by one or more computing devices on-board an autonomous vehicle, data indicative of one or more parameters associated with the autonomous vehicle. The autonomous vehicle is configured to provide a vehicle service to one or more users of the vehicle service. The method includes determining, by the computing devices, an existence of a fault associated with the autonomous vehicle based at least in part on the one or more parameters associated with the autonomous vehicle. The method includes determining, by the computing devices, one or more actions to be performed by the autonomous vehicle based at least in part on the existence of the fault. The method includes performing, by the computing devices, one or more of the actions to take the autonomous vehicle out-of-service based at least in part on the fault.
G07C 5/08 - Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle, or waiting time
G06Q 10/047 - Optimisation of routes or paths, e.g. travelling salesman problem
G06Q 10/0631 - Resource planning, allocation, distributing or scheduling for enterprises or organisations
G06Q 10/20 - Administration of product repair or maintenance
G07C 5/00 - Registering or indicating the working of vehicles
An autonomous robot is provided. In one example embodiment, an autonomous robot can include a main body including one or more compartments. The one or more compartments can be configured to provide support for transporting an item. The autonomous robot can include a mobility assembly affixed to the main body and a sensor configured to obtain sensor data associated with a surrounding environment of the autonomous robot. The autonomous robot can include a computing system configured to plan a motion of the autonomous robot based at least in part on the sensor data. The computing system can be operably connected to the mobility assembly for controlling a motion of the autonomous robot. The autonomous robot can include a coupling assembly configured to temporarily secure the autonomous robot to an autonomous vehicle. The autonomous robot can include a power system and a ventilation system that can interface with the autonomous vehicle.
Systems and methods for power and thermal management of autonomous vehicles are provided. In one example embodiment, a computing system includes processor(s) and one or more tangible, non-transitory, computer readable media that collectively store instructions that when executed by the processor(s) cause the computing system to perform operations. The operations include obtaining data associated with an autonomous vehicle. The operations include identifying one or more vehicle parameters associated with the autonomous vehicle based at least in part on the data associated with the autonomous vehicle. The operations include determining a modification to one or more operating characteristics of one or more systems onboard the autonomous vehicle based at least in part on the one or more vehicle parameters. The operations include controlling a temperature of at least a portion of the autonomous vehicle via implementation of the modification of the operating characteristic(s) of the system(s) onboard the autonomous vehicle.
The present disclosure provides systems and methods for training probabilistic object motion prediction models using non-differentiable representations of prior knowledge. As one example, object motion prediction models can be used by autonomous vehicles to probabilistically predict the future location(s) of observed objects (e.g., other vehicles, bicyclists, pedestrians, etc.). For example, such models can output a probability distribution that provides a distribution of probabilities for the future location(s) of each object at one or more future times. Aspects of the present disclosure enable these models to be trained using non-differentiable prior knowledge about motion of objects within the autonomous vehicle's environment such as, for example, prior knowledge about lane or road geometry or topology and/or traffic information such as current traffic control states (e.g., traffic light status).
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G05B 13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
G06N 7/01 - Probabilistic graphical models, e.g. probabilistic networks
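One standard way to train a probabilistic model against a non-differentiable penalty is a score-function (REINFORCE) estimator; whether the disclosure uses this exact estimator is an assumption, and `on_road` below is a hypothetical stand-in for prior knowledge about lane geometry. Only the Gaussian log-density is differentiated; the off-road indicator itself never needs a gradient.

```python
import numpy as np

rng = np.random.default_rng(3)

def on_road(sample, lane_center=0.0, half_width=2.0):
    """Non-differentiable prior knowledge: is the sampled position in-lane?"""
    return abs(sample - lane_center) <= half_width

def reinforce_gradient(mu, sigma, n_samples=1000):
    """Score-function estimate of d E[penalty] / d mu.

    The off-road penalty is a hard 0/1 indicator; the estimator multiplies
    it by the gradient of the Gaussian log-density at each sample.
    """
    samples = rng.normal(mu, sigma, n_samples)
    penalty = np.array([0.0 if on_road(s) else 1.0 for s in samples])
    grad_logp = (samples - mu) / sigma**2   # d log N(s; mu, sigma) / d mu
    return float((penalty * grad_logp).mean())

g = reinforce_gradient(mu=1.5, sigma=1.0)
print(g > 0)   # gradient descent on mu moves it back toward the lane centre
```

With the predicted mean near the lane edge, the estimated gradient is positive, so a descent step pulls the distribution back on-road without ever differentiating the lane-geometry check.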
Systems, methods, tangible non-transitory computer-readable media, and devices for operating an autonomous vehicle are provided. For example, the disclosed technology can include receiving state data that includes information associated with states of an autonomous vehicle and an environment external to the autonomous vehicle. Responsive to the state data satisfying vehicle stoppage criteria, vehicle stoppage conditions can be determined to have occurred. A severity level of the vehicle stoppage conditions can be selected from a plurality of available severity levels respectively associated with a plurality of different sets of constraints. A motion plan can be generated based on the state data. The motion plan can include information associated with locations for the autonomous vehicle to traverse at time intervals corresponding to the locations. Further, the locations can include a current location of the autonomous vehicle and a destination location at which the autonomous vehicle stops traveling.
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
B60T 7/22 - Brake-action initiating means for initiation not subject to will of driver or passenger initiated by contact of vehicle, e.g. bumper, with an external object, e.g. another vehicle
B60W 30/09 - Taking automatic action to avoid collision, e.g. braking and steering
B60W 30/095 - Predicting travel path or likelihood of collision
Systems and methods for controlling a failover response of an autonomous vehicle are provided. In one example embodiment, a method includes determining, by one or more computing devices on-board an autonomous vehicle, an operational mode of the autonomous vehicle. The autonomous vehicle is configured to operate in at least a first operational mode in which a human driver is present in the autonomous vehicle and a second operational mode in which the human driver is not present in the autonomous vehicle. The method includes detecting a triggering event associated with the autonomous vehicle. The method includes determining one or more actions to be performed by the autonomous vehicle in response to the triggering event based at least in part on the operational mode. The method includes providing one or more control signals to one or more systems on-board the autonomous vehicle to perform the one or more actions in response to the triggering event.

In one example embodiment, a computer-implemented method includes receiving data representing a motion plan of an autonomous vehicle via a plurality of control lanes configured to implement the motion plan to control a motion of the autonomous vehicle, the plurality of control lanes including at least a first control lane and a second control lane, and controlling the first control lane to implement the motion plan. The method includes detecting one or more faults associated with implementation of the motion plan by the first control lane or the second control lane, or in generation of the motion plan, and, in response to the one or more faults, controlling the first control lane or the second control lane to adjust the motion of the autonomous vehicle based at least in part on one or more fault reaction parameters associated with the one or more faults.
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
B60W 50/029 - Adapting to failures or work around with other constraints, e.g. circumvention by avoiding use of failed parts
B60W 50/023 - Avoiding failures by using redundant parts
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
G01C 21/16 - Navigation; Navigational instruments not provided for in groups G01C 1/00-G01C 19/00 by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
A computing system can input first relative location embedding data into an interaction transformer model and receive, as an output of the interaction transformer model, motion forecast data for actors relative to a vehicle. The computing system can input the motion forecast data into a prediction model to receive respective trajectories for the actors for a current time step and respective projected trajectories for the actors for a subsequent time step. The computing system can generate second relative location embedding data based on the respective projected trajectories for the subsequent time step. The computing system can produce second motion forecast data using the interaction transformer model based on the second relative location embedding data. The computing system can determine second respective trajectories for the actors using the prediction model based on the second motion forecast data.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60W 50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
G06N 3/044 - Recurrent networks, e.g. Hopfield networks
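The alternating forecast-then-project loop described above can be sketched as follows. This is a toy stand-in, not the interaction transformer itself: attention weights are replaced by a softmax over negative pairwise distances, and the "prediction model" is a constant-velocity projection one step ahead, after which the relative location embedding is recomputed and the loop repeats.

```python
import numpy as np

def relative_location_embedding(positions):
    """Pairwise displacement vectors between actors: (n, n, 2)."""
    return positions[None, :, :] - positions[:, None, :]

def interaction_step(positions, velocities):
    """One attention-like pass: each actor attends to nearby actors.

    Toy stand-in for the interaction transformer: weights decay with
    pairwise distance, smoothing each actor's velocity toward its
    neighbours' velocities.
    """
    rel = relative_location_embedding(positions)
    dist = np.linalg.norm(rel, axis=-1)
    w = np.exp(-dist)
    w /= w.sum(axis=1, keepdims=True)
    return w @ velocities                    # interaction-aware velocities

def rollout(positions, velocities, steps=2, dt=1.0):
    """Alternate interaction reasoning and trajectory projection per step."""
    traj = [positions]
    for _ in range(steps):
        velocities = interaction_step(positions, velocities)
        positions = positions + velocities * dt   # project one step ahead
        traj.append(positions)
    return np.stack(traj)                    # (steps + 1, n_actors, 2)

pos = np.array([[0.0, 0.0], [10.0, 0.0]])
vel = np.array([[1.0, 0.0], [-1.0, 0.0]])
trajectories = rollout(pos, vel)
print(trajectories.shape)
```

Each iteration consumes the positions projected by the previous one, mirroring the first/second embedding data and first/second forecast data in the abstract.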
The present disclosure provides a sensor cleaning system that cleans one or more sensors of an autonomous vehicle. Each sensor can have one or more corresponding sensor cleaning units that are configured to clean such sensor using a fluid (e.g., a gas or a liquid). Thus, the sensor cleaning system can include both a gas cleaning system and a liquid cleaning system. According to one aspect, the sensor cleaning system can provide individualized cleaning of the autonomous vehicle sensors. According to another aspect, a liquid cleaning system can be pressurized or otherwise powered by the gas cleaning system or other gas system.
Systems and methods for automatically adjusting the interior cabin of an autonomous vehicle are provided. In one example embodiment, an autonomous vehicle can include a main body including a floor and a ceiling that at least partially define an interior cabin of the autonomous vehicle. The autonomous vehicle can include a partition wall that is movable within the interior cabin of the autonomous vehicle. The partition wall can extend from the floor to the ceiling of the main body. The autonomous vehicle can include a computing system configured to receive data indicative of one or more service assignments associated with the autonomous vehicle and to adjust a position of the partition wall within the interior cabin based at least in part on the one or more service assignments.
G01G 19/12 - Weighing apparatus or methods adapted for special purposes not provided for in groups for incorporation in vehicles having electrical weight-sensitive devices
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G05D 1/02 - Control of position or course in two dimensions
G06Q 10/0631 - Resource planning, allocation, distributing or scheduling for enterprises or organisations
Example aspects of the present disclosure describe determining, using a machine-learned model framework, a motion trajectory for an autonomous platform. The motion trajectory can be determined based at least in part on a plurality of costs, which in turn can be based at least in part on a distribution of probabilities conditioned on the motion trajectory.
Systems and methods for controlling an autonomous vehicle are provided. In one example embodiment, a computer-implemented method includes obtaining, from an autonomy system, data indicative of a planned trajectory of the autonomous vehicle through a surrounding environment. The method includes determining a region of interest in the surrounding environment based at least in part on the planned trajectory. The method includes controlling one or more first sensors to obtain data indicative of the region of interest. The method includes identifying one or more objects in the region of interest, based at least in part on the data obtained by the one or more first sensors. The method includes controlling the autonomous vehicle based at least in part on the one or more objects identified in the region of interest.
Generally, the disclosed systems and methods utilize multi-task machine-learned models for object intention determination in autonomous driving applications. For example, a computing system can receive sensor data obtained relative to an autonomous vehicle and map data associated with a surrounding geographic environment of the autonomous vehicle. The sensor data and map data can be provided as input to a machine-learned intent model. The computing system can receive a jointly determined prediction from the machine-learned intent model for multiple outputs including at least one detection output indicative of one or more objects detected within the surrounding environment of the autonomous vehicle, a first corresponding forecasting output descriptive of a trajectory indicative of an expected path of the one or more objects towards a goal location, and/or a second corresponding forecasting output descriptive of a discrete behavior intention determined from a predefined group of possible behavior intentions.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
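The joint multi-output prediction can be illustrated with a toy three-head model. Everything here is a hypothetical stand-in for the machine-learned intent model: one shared feature vector feeds a detection head, a trajectory head, and a discrete-intention head, so the three outputs are determined jointly rather than by separate models.

```python
import numpy as np

def intent_model(sensor_features, map_features):
    """Jointly predict a detection score, a trajectory, and an intention.

    Toy stand-in: a single shared representation feeds all three heads,
    which is the joint-determination property the abstract describes.
    """
    shared = np.tanh(sensor_features + map_features)        # shared backbone
    detection_score = float(1 / (1 + np.exp(-shared.sum())))  # sigmoid head
    # Trajectory head: integrate a constant step derived from the features.
    trajectory = np.cumsum(np.outer(np.ones(5), shared[:2]), axis=0)
    # Intention head: pick from a predefined group of behaviours.
    intentions = ("keep_lane", "turn_left", "turn_right", "stop")
    intention = intentions[int(np.abs(shared).argmax()) % len(intentions)]
    return detection_score, trajectory, intention

sens = np.array([0.2, -0.1, 0.4, 0.05])   # hypothetical sensor features
mp = np.array([0.1, 0.3, -0.2, 0.0])      # hypothetical map features
score, traj, intent = intent_model(sens, mp)
print(traj.shape, intent)
```

Because the heads share `shared`, gradients from all three tasks would flow into the same backbone during training, which is the usual motivation for such multi-task setups.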
18.
Systems and Methods for Generating Synthetic Sensor Data via Machine Learning
The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned model can predict one or more dropout probabilities for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
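The two-stage pipeline (physics-based ray casting, then learned dropout) can be sketched as follows. Both functions are hypothetical stand-ins: `raycast_points` fakes a physics stage with a fixed analytic scene, and `dropout_probability` replaces the learned model with a simple range-based heuristic. The structure, though, matches the abstract: per-point dropout probabilities are sampled to thin the ideal point cloud into a more realistic one.

```python
import numpy as np

rng = np.random.default_rng(7)

def raycast_points(n_rays):
    """Toy physics stage: one simulated return per ray from a fake scene."""
    angles = np.linspace(0, np.pi, n_rays)
    ranges = 10.0 + 5.0 * np.sin(3 * angles)       # stand-in geometry
    return np.stack([ranges * np.cos(angles),
                     ranges * np.sin(angles)], axis=1)

def dropout_probability(points, max_range=50.0):
    """Stand-in for the learned model: farther points drop out more often."""
    r = np.linalg.norm(points, axis=1)
    return np.clip(r / max_range, 0.0, 0.95)

points = raycast_points(100)                       # ideal point cloud
p_drop = dropout_probability(points)
keep = rng.random(len(points)) >= p_drop           # Bernoulli per point
adjusted = points[keep]                            # realistic point cloud
print(adjusted.shape)
```

A real system would learn `dropout_probability` from paired simulated and recorded LiDAR sweeps; the sampling step at the end is the same either way.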
Systems and methods for vehicle message signing are provided. A method includes obtaining, by a vehicle computing system of an autonomous vehicle, a computing system state associated with the vehicle computing system and a message from at least one remote process running on a computing device remote from the vehicle computing system. The message is associated with an intended recipient process running on the vehicle computing system. The method includes determining an originating sender for the message. The originating sender is indicative of a remote process that generated the message. The method includes determining a routing action for the message based on a comparison of the originating sender and the computing system state. The routing action includes at least one of a discarding action or a forwarding action to the intended recipient process. The method includes performing the routing action for the message.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04W 4/44 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]
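The routing decision can be sketched as a small policy function. The trusted-sender table and state names are hypothetical, invented for illustration; the abstract only specifies that the action (forward or discard) follows from comparing the originating sender against the computing system state.

```python
def route_message(message, system_state, trusted_senders):
    """Forward or discard a message based on its originating sender.

    Hypothetical policy: forward only when the originating sender is
    trusted for the vehicle's current state; otherwise discard.
    """
    sender = message["originating_sender"]
    allowed = trusted_senders.get(system_state, set())
    return "forward" if sender in allowed else "discard"

# Hypothetical per-state allow-lists of remote processes.
trusted = {
    "autonomous": {"fleet_dispatch", "remote_assist"},
    "manual": {"fleet_dispatch"},
}
msg = {"originating_sender": "remote_assist", "recipient": "planner"}
print(route_message(msg, "autonomous", trusted))   # forward
print(route_message(msg, "manual", trusted))       # discard
```

The same message is forwarded in one state and discarded in another, which is the state-dependent behaviour the method claims.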
20.
Multi-Model Switching on a Collision Mitigation System
Systems and methods for controlling an autonomous vehicle are provided. In one example embodiment, a computer-implemented method includes receiving data indicative of an operating mode of the vehicle, wherein the vehicle is configured to operate in a plurality of operating modes. The method includes determining one or more response characteristics of the vehicle based at least in part on the operating mode of the vehicle, each response characteristic indicating how the vehicle responds to a potential collision. The method includes controlling the vehicle based at least in part on the one or more response characteristics.
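The mode-dependent response characteristics can be sketched as a lookup plus a trigger check. The mode names and numeric values below are hypothetical, chosen only to show the shape of the idea: the same time-to-collision can trigger mitigation in one operating mode but not another.

```python
def response_characteristics(operating_mode):
    """Hypothetical per-mode collision-mitigation settings.

    A mode with no human driver gets an earlier, firmer intervention.
    """
    table = {
        "autonomous": {"brake_time_s": 2.5, "max_decel_mps2": 6.0},
        "manual":     {"brake_time_s": 1.2, "max_decel_mps2": 4.0},
    }
    return table[operating_mode]

def should_brake(time_to_collision_s, operating_mode):
    """Trigger mitigation when TTC falls below the mode's threshold."""
    threshold = response_characteristics(operating_mode)["brake_time_s"]
    return time_to_collision_s <= threshold

print(should_brake(2.0, "autonomous"))   # True: early intervention
print(should_brake(2.0, "manual"))       # False: driver expected to react
```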
Systems and methods for generating motion plans including target trajectories for autonomous vehicles are provided. An autonomous vehicle may include or access a machine-learned motion planning model including a backbone network configured to generate a cost volume including data indicative of a cost associated with future locations of the autonomous vehicle. The cost volume can be generated from raw sensor data as part of motion planning for the autonomous vehicle. The backbone network can generate intermediate representations associated with object detections and object predictions. The motion planning model can include a trajectory generator configured to evaluate one or more potential trajectories for the autonomous vehicle and to select a target trajectory based at least in part on the cost volume generated by the backbone network.
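Once a cost volume exists, the trajectory generator's job reduces to scoring candidates against it. The sketch below assumes a hypothetical discretisation where `cost_volume[t, y, x]` holds the learned cost of occupying cell (x, y) at future step t; candidate trajectories are lists of cells, and the lowest total cost wins.

```python
import numpy as np

def trajectory_cost(cost_volume, trajectory):
    """Sum the cost-volume entries along a discretised trajectory."""
    return sum(cost_volume[t, y, x] for t, (x, y) in enumerate(trajectory))

def select_target_trajectory(cost_volume, candidates):
    """Pick the candidate trajectory with the minimum total cost."""
    costs = [trajectory_cost(cost_volume, c) for c in candidates]
    return candidates[int(np.argmin(costs))], min(costs)

T, H, W = 3, 5, 5
volume = np.ones((T, H, W))
volume[:, 2, :] = 0.0                  # a low-cost corridor along row y=2
straight = [(x, 2) for x in range(3)]  # stays in the corridor
swerve = [(0, 0), (1, 1), (2, 3)]      # leaves it
best, cost = select_target_trajectory(volume, [straight, swerve])
print(cost)                            # corridor trajectory costs 0.0
```

In the patented setting the candidates would be physically feasible trajectory samples and the volume a backbone-network output; the evaluate-and-select step is the same.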
A LIDAR sensor for an autonomous vehicle (AV) can include one or more lasers outputting one or more laser beams, one or more non-mechanical optical components to (i) receive the one or more laser beams, (ii) configure a field of view of the LIDAR sensor, and (iii) output modulated frequencies from the one or more laser beams, and one or more photodetectors to detect return signals based on the outputted modulated frequencies from the one or more laser beams.
G01S 17/42 - Simultaneous measurement of distance and other coordinates
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G01S 7/48 - Details of systems according to groups G01S 13/00, G01S 15/00 or G01S 17/00, of systems according to group G01S 17/00
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G02B 26/12 - Scanning systems using multifaceted mirrors
23.
Jointly Learnable Behavior and Trajectory Planning for Autonomous Vehicles
Systems and methods for generating motion plans for autonomous vehicles are provided. An autonomous vehicle can include a machine-learned motion planning system including one or more machine-learned models configured to generate target trajectories for the autonomous vehicle. The model(s) include a behavioral planning stage configured to receive situational data based at least in part on one or more outputs of a set of sensors and to generate behavioral planning data based at least in part on the situational data and a unified cost function. The model(s) include a trajectory planning stage configured to receive the behavioral planning data from the behavioral planning stage and to generate target trajectory data for the autonomous vehicle based at least in part on the behavioral planning data and the unified cost function.
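The two-stage structure sharing one cost function can be sketched as follows. The cost terms, candidate maneuvers, and obstacle are all hypothetical; the point is that the behavioral stage and the trajectory stage both minimise the same `unified_cost`, so the coarse behaviour choice and the fine trajectory choice cannot disagree about what "good" means.

```python
import numpy as np

def unified_cost(trajectory, obstacle=np.array([5.0, 0.0])):
    """One cost shared by both stages: progress reward vs. obstacle proximity."""
    progress = -trajectory[-1, 0]                       # reward forward motion
    clearance = np.linalg.norm(trajectory - obstacle, axis=1).min()
    return progress + 10.0 * max(0.0, 1.0 - clearance)  # penalise near-misses

def behavioral_stage(candidates):
    """Pick the coarse behaviour whose prototype has the lowest unified cost."""
    return min(candidates, key=lambda kv: unified_cost(kv[1]))[0]

def trajectory_stage(behavior, refinements):
    """Refine within the chosen behaviour using the same cost function."""
    return min(refinements[behavior], key=unified_cost)

steps = np.arange(1, 6, dtype=float)[:, None]
keep = steps * np.array([1.0, 0.0])     # drives straight at the obstacle
nudge = steps * np.array([1.0, 0.5])    # nudges around it
behaviors = [("keep_lane", keep), ("nudge", nudge)]
refinements = {"keep_lane": [keep],
               "nudge": [nudge, steps * np.array([1.0, 0.4])]}
best_behavior = behavioral_stage(behaviors)
best_traj = trajectory_stage(best_behavior, refinements)
print(best_behavior)
```

With a single learnable cost, gradients from trajectory-level supervision can also shape the behavioral stage, which is the "jointly learnable" aspect in the title.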
Systems and methods are provided for forecasting the motion of actors within a surrounding environment of an autonomous platform. For example, a computing system of an autonomous platform can use machine-learned model(s) to generate actor-specific graphs with past motions of actors and the local map topology. The computing system can project the actor-specific graphs of all actors to a global graph. The global graph can allow the computing system to determine which actors may interact with one another by propagating information over the global graph. The computing system can distribute the interactions determined using the global graph to the individual actor-specific graphs. The computing system can then predict a motion trajectory for an actor based on the associated actor-specific graph, which captures the actor-to-actor interactions and actor-to-map relations.
The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned geometry model can predict one or more adjusted depths for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
Systems and methods for controlling the motion of an autonomous vehicle are provided. In one example embodiment, a computer-implemented method includes obtaining data associated with an object within a surrounding environment of an autonomous vehicle. The data associated with the object is indicative of a predicted motion trajectory of the object. The method includes determining a vehicle action sequence based at least in part on the predicted motion trajectory of the object. The vehicle action sequence is indicative of a plurality of vehicle actions for the autonomous vehicle at a plurality of respective time steps associated with the predicted motion trajectory. The method includes determining a motion plan for the autonomous vehicle based at least in part on the vehicle action sequence. The method includes causing the autonomous vehicle to initiate motion control in accordance with at least a portion of the motion plan.
B60W 50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
27.
Multi-Channel Light Detection and Ranging (LIDAR) Unit Having a Telecentric Lens Assembly and Single Circuit Board for Emitters and Detectors
A LIDAR unit includes a housing defining a cavity. The LIDAR unit further includes a plurality of emitters disposed on a circuit board within the cavity. Each of the emitters emits a laser beam along a transmit path. The LIDAR unit further includes a first telecentric lens assembly positioned within the cavity and along the transmit path such that the laser beam emitted from each of the plurality of emitters passes through the first telecentric lens assembly. The LIDAR unit further includes a second telecentric lens assembly positioned within the cavity and along a receive path such that a plurality of reflected laser beams entering the cavity pass through the second telecentric lens assembly. The first telecentric lens assembly and the second telecentric lens assembly each include a field flattening lens and at least one other lens.
Systems and methods for facilitating communication with autonomous vehicles are provided. In one example embodiment, a computing system can obtain a first type of sensor data (e.g., camera image data) associated with a surrounding environment of an autonomous vehicle and/or a second type of sensor data (e.g., LIDAR data) associated with the surrounding environment of the autonomous vehicle. The computing system can generate overhead image data indicative of at least a portion of the surrounding environment of the autonomous vehicle based at least in part on the first and/or second types of sensor data. The computing system can determine one or more lane boundaries within the surrounding environment of the autonomous vehicle based at least in part on the overhead image data indicative of at least the portion of the surrounding environment of the autonomous vehicle and a machine-learned lane boundary detection model.
G06N 3/04 - Architecture, e.g. interconnection topology
G01S 7/48 - Details of systems according to groups G01S 13/00, G01S 15/00 or G01S 17/00, of systems according to group G01S 17/00
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G01S 17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
Systems, methods, tangible non-transitory computer-readable media, and devices associated with the operation of a vehicle are provided. For example, a vehicle computing system can receive occupancy data that includes information associated with occupancy of a vehicle that includes seats. One or more states of the vehicle can be determined. The states of the vehicle can include a disposition of any object that is within the vehicle. Further, a configuration of the seats in the vehicle can be determined based on the occupancy data and the states of the vehicle. The configuration can include a disposition of the seats inside the vehicle. Furthermore, at least one of the seats can be adjusted based on the configuration that was determined.
B60N 2/01 - Arrangement of seats relative to one another
B60N 2/02 - Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles the seat or part thereof being movable, e.g. adjustable
B60N 3/00 - Arrangements or adaptations of other passenger fittings, not otherwise provided for
B60N 2/06 - Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles the seat or part thereof being movable, e.g. adjustable the whole seat being movable slidable
B60N 2/00 - Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles
B60N 2/14 - Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles the seat or part thereof being movable, e.g. adjustable the whole seat being movable rotatable, e.g. to permit easy access
Systems, methods, tangible non-transitory computer-readable media, and devices for associating objects are provided. For example, the disclosed technology can receive sensor data associated with the detection of objects over time. An association dataset can be generated and can include information associated with object detections of the objects at a most recent time interval and object tracks of the objects at time intervals in the past. A subset of the association dataset including the object detections that satisfy one or more association subset criteria can be determined. Association scores for the object detections in the subset of the association dataset can be determined. Further, the object detections can be associated with the object tracks based on the association scores for each of the object detections in the subset of the association dataset that satisfy one or more association criteria.
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 20/30 - Scenes; Scene-specific elements in albums, collections or shared content, e.g. social network photos or video
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
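The detection-to-track association described in the abstract above can be sketched as score-based matching. The following is a minimal illustration, assuming a greedy best-score-first assignment and an inverse-distance score as stand-ins for the unspecified association scores and criteria (all names and thresholds are illustrative):

```python
import math

def association_scores(detections, tracks):
    """Score each (detection, track) pair; an inverse-distance score
    stands in for whatever learned or geometric score is used."""
    scores = {}
    for i, d in enumerate(detections):
        for j, t in enumerate(tracks):
            dist = math.hypot(d[0] - t[0], d[1] - t[1])
            scores[(i, j)] = 1.0 / (1.0 + dist)
    return scores

def associate(detections, tracks, min_score=0.2):
    """Greedily assign detections to tracks, best score first, skipping
    pairs whose score fails the (illustrative) association criterion."""
    scores = association_scores(detections, tracks)
    pairs = sorted(scores, key=scores.get, reverse=True)
    used_d, used_t, matches = set(), set(), {}
    for i, j in pairs:
        if i in used_d or j in used_t or scores[(i, j)] < min_score:
            continue
        matches[i] = j
        used_d.add(i)
        used_t.add(j)
    return matches
```

A globally optimal assignment (e.g. the Hungarian algorithm) could replace the greedy loop without changing the interface.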
31.
Light Detection and Ranging (LIDAR) System Having a Polarizing Beam Splitter
A LIDAR system includes a plurality of LIDAR units. Each of the LIDAR units includes a housing defining a cavity. Each of the LIDAR units further includes a plurality of emitters disposed within the cavity. Each of the plurality of emitters is configured to emit a laser beam. The LIDAR system includes a rotating mirror and a retarder. The retarder is configurable in at least a first mode and a second mode to control a polarization state of a plurality of laser beams emitted from each of the plurality of LIDAR units. The LIDAR system includes a polarizing beam splitter positioned relative to the retarder such that the polarizing beam splitter receives a plurality of laser beams exiting the retarder. The polarizing beam splitter is configured to transmit or reflect the plurality of laser beams exiting the retarder based on the polarization state of the laser beams exiting the retarder.
G01S 7/499 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups , , of systems according to group using polarisation effects
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
B60W 30/095 - Predicting travel path or likelihood of collision
B60W 30/09 - Taking automatic action to avoid collision, e.g. braking and steering
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G02B 6/27 - Optical coupling means with polarisation selective and adjusting means
32.
Systems and Methods for Seat Reconfiguration for Autonomous Vehicles
Systems and methods for reconfiguring seats of an autonomous vehicle are provided. The method includes obtaining service request data that includes a service selection and request characteristics. The method includes obtaining data describing an initial seat configuration for each of a plurality of seats of an autonomous vehicle assigned to the service request. The initial seat configuration can include a seat position and a seat orientation for each of the plurality of seats. The method includes generating, based on the initial seat configuration and the service request data, seat adjustment instructions configured to adjust the initial seat configuration of at least one of the seats. The method includes providing the seat adjustment instructions to the autonomous vehicle assigned to the service request.
Systems and methods for performing semantic segmentation of three-dimensional data are provided. In one example embodiment, a computing system can be configured to obtain sensor data including three-dimensional data associated with an environment. The three-dimensional data can include a plurality of points and can be associated with one or more times. The computing system can be configured to determine data indicative of a two-dimensional voxel representation associated with the environment based at least in part on the three-dimensional data. The computing system can be configured to determine a classification for each point of the plurality of points within the three-dimensional data based at least in part on the two-dimensional voxel representation associated with the environment and a machine-learned semantic segmentation model. The computing system can be configured to initiate one or more actions based at least in part on the per-point classifications.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06T 3/00 - Geometric image transformation in the plane of the image
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups , , of systems according to group
G06V 20/40 - Scenes; Scene-specific elements in video content
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
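The two-dimensional voxel representation described in the semantic-segmentation abstract above can be illustrated as a bird's-eye-view grid whose channels are occupancy counts per height slice. This is one plausible reading of "2D voxel representation"; the ranges, cell size, and bin count below are illustrative assumptions:

```python
import numpy as np

def bev_voxelize(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0),
                 z_range=(-2.0, 2.0), cell=0.5, z_bins=8):
    """Project 3D points into a 2D bird's-eye-view grid; each (x, y)
    cell holds per-height-slice occupancy counts, so a 2D network can
    consume the 3D scene. All extents here are illustrative."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((nx, ny, z_bins), dtype=np.float32)
    dz = (z_range[1] - z_range[0]) / z_bins
    for x, y, z in points:
        if not (x_range[0] <= x < x_range[1] and
                y_range[0] <= y < y_range[1] and
                z_range[0] <= z < z_range[1]):
            continue  # drop points outside the region of interest
        ix = int((x - x_range[0]) / cell)
        iy = int((y - y_range[0]) / cell)
        iz = int((z - z_range[0]) / dz)
        grid[ix, iy, iz] += 1.0
    return grid
```

Per-point classification can then be recovered by looking up each point's cell in the segmentation output for the grid.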
An autonomous vehicle computing system can include a primary perception system configured to receive a plurality of sensor data points as input and generate primary perception data representing a plurality of classifiable objects and a plurality of paths representing tracked motion of the plurality of classifiable objects. The autonomous vehicle computing system can include a secondary perception system configured to receive the plurality of sensor data points as input, cluster a subset of the plurality of sensor data points of the sensor data to generate one or more sensor data point clusters representing one or more unclassifiable objects that are not classifiable by the primary perception system, and generate secondary path data representing tracked motion of the one or more unclassifiable objects. The autonomous vehicle computing system can generate fused perception data based on the primary perception data and the one or more unclassifiable objects.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups , , of systems according to group
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G01S 17/66 - Tracking systems using electromagnetic waves other than radio waves
G01S 17/58 - Velocity or trajectory determination systems; Sense-of-movement determination systems
G05D 1/02 - Control of position or course in two dimensions
Systems and methods for controlling an autonomous vehicle are provided. In one example embodiment, a computer-implemented method includes obtaining sensor data indicative of a surrounding environment of the autonomous vehicle, the surrounding environment including one or more occluded sensor zones. The method includes determining that a first occluded sensor zone of the occluded sensor zone(s) is occupied based at least in part on the sensor data. The method includes, in response to determining that the first occluded sensor zone is occupied, controlling the autonomous vehicle to travel clear of the first occluded sensor zone.
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
36.
Systems and Methods for Generating Motion Forecast Data for Actors with Respect to an Autonomous Vehicle and Training a Machine Learned Model for the Same
Systems and methods for generating motion forecast data for actors with respect to an autonomous vehicle and training a machine learned model for the same are disclosed. A computing system can include an object detection model and a graph neural network including a plurality of nodes and a plurality of edges. The computing system can be configured to input sensor data into the object detection model; receive object detection data describing the locations of the plurality of actors relative to the autonomous vehicle as an output of the object detection model; input the object detection data into the graph neural network; iteratively update a plurality of node states respectively associated with the plurality of nodes; and receive, as an output of the graph neural network, the motion forecast data with respect to the plurality of actors.
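The iterative node-state update in the graph neural network above can be sketched as message passing between actor nodes. The averaging update below is a bare-bones stand-in for the learned update function; node features, edge construction, and step count are illustrative:

```python
import numpy as np

def message_passing(node_states, edges, steps=3):
    """Iteratively update per-actor node states by mixing each state
    with the mean of its neighbors' states; a learned network would
    replace the fixed 50/50 mix with trainable update functions."""
    states = np.asarray(node_states, dtype=np.float64)
    neighbors = {i: [] for i in range(len(states))}
    for a, b in edges:  # undirected edges between interacting actors
        neighbors[a].append(b)
        neighbors[b].append(a)
    for _ in range(steps):
        new = states.copy()
        for i, nbrs in neighbors.items():
            if nbrs:
                msg = states[nbrs].mean(axis=0)
                new[i] = 0.5 * states[i] + 0.5 * msg
        states = new  # synchronous update across all nodes
    return states
```

After the final step, a decoder head would map each node state to that actor's motion forecast.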
Aspects of the present disclosure involve systems, methods, and devices for mitigating Lidar cross-talk. Consistent with some embodiments, a Lidar system is configured to include one or more noise source detectors that detect noise signals that may produce noise in return signals received at the Lidar system. A noise source detector comprises a light sensor to receive a noise signal produced by a noise source and a timing circuit to provide a timing signal indicative of a direction of the noise source relative to an autonomous vehicle on which the Lidar system is mounted. A noise source may be an external Lidar system or a surface in the surrounding environment that is reflecting light signals such as those emitted by an external Lidar system.
G01S 7/4865 - Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
G01S 7/4863 - Detector arrays, e.g. charge-transfer gates
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups , , of systems according to group
G01S 7/495 - Counter-measures or counter-counter-measures
38.
Passenger Seats and Doors for an Autonomous Vehicle
An autonomous vehicle can include one or more configurable passenger seats to accommodate a plurality of different seating configurations. For instance, the one or more passenger seats can include a passenger seat defining a seating orientation. The passenger seat can be configurable in a first configuration in which the seating orientation is directed towards a forward end of the autonomous vehicle and a second configuration in which the seating orientation is directed towards a rear end of the autonomous vehicle. The passenger seat can include a seatback rotatable about a pivot point on a base of the passenger seat to switch between the first configuration and the second configuration. Alternatively, or additionally, the autonomous vehicle can include a door assembly pivotably fixed to a vehicle body of the autonomous vehicle such that a swept path of the door assembly when moving between an open position and a closed position is reduced.
A planar-beam, light detection and ranging (PLADAR) system can include a laser scanner that emits a planar-beam, and a detector array that detects reflected light from the planar beam.
Systems and methods are provided for remotely detecting a status associated with an autonomous vehicle and generating control actions in response to such detections. In one example, a computing system can access a third-party communication associated with an autonomous vehicle. The computing system can determine, based at least in part on the third-party communication, a predetermined identifier associated with the autonomous vehicle. The computing system can determine, based at least in part on the third-party communication, a status associated with the autonomous vehicle, and transmit one or more control messages to the autonomous vehicle based at least in part on the predetermined identifier and the status associated with the autonomous vehicle.
Example aspects of the present disclosure describe a scene generator for simulating scenes in an environment. For example, snapshots of simulated traffic scenes can be generated by sampling a joint probability distribution trained on real-world traffic scenes. In some implementations, samples of the joint probability distribution can be obtained by sampling a plurality of factorized probability distributions for a plurality of objects for sequential insertion into the scene.
G08G 1/01 - Detecting movement of traffic to be counted or controlled
G06V 20/54 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
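The sequential insertion described in the scene-generator abstract above can be illustrated with a toy sampler. Rejection on overlap stands in for the learned conditional (factorized) distributions, and the lane layout, spacing, and attempt budget are illustrative assumptions:

```python
import random

def sample_scene(n_objects, lane_centers, rng=None):
    """Sequentially insert objects into a simulated scene, conditioning
    each new sample on the objects placed so far. Here 'conditioning'
    is a simple no-overlap rejection; a trained model would sample a
    learned conditional distribution instead."""
    rng = rng or random.Random(0)
    placed = []
    for _ in range(n_objects):
        for _attempt in range(100):
            lane = rng.choice(lane_centers)
            s = rng.uniform(0.0, 100.0)  # position along the lane
            if all(abs(s - p[1]) > 8.0 or lane != p[0] for p in placed):
                placed.append((lane, s))
                break
    return placed
```

Each accepted `(lane, s)` pair is one object of the snapshot; a full generator would also sample heading, velocity, and extent per object.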
42.
Systems and Methods for Onboard Vehicle Certificate Distribution
Systems and methods for onboard vehicle certificate distribution are provided. A system can include a plurality of devices including a master device for authenticating processes and one or more requesting devices. The master device can include a master host security service configured to authenticate the one or more processes of the system. The master host security service can run a certificate authority to generate a root certificate and a private root key corresponding to the root certificate. A respective host security service can receive a request for a process manifest for a requesting process of a respective device from a respective orchestration service. The respective host security service can generate the process manifest for the requesting process and provide the process manifest to the requesting process. The requesting process can use the process manifest to communicate with the certificate authority to obtain an operational certificate based on the root certificate.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06F 3/06 - Digital input from, or digital output to, record carriers
A light detection and ranging (LIDAR) sensor assembly can comprise an optics assembly that includes a LIDAR sensor and a set of dovetail joint inserts. The LIDAR sensor assembly can further include a frame comprising a set of dovetail joint septums coupled to the set of dovetail joint inserts of the optics assembly.
An on-board computing system for a vehicle is configured to generate and selectively present a set of autonomy-switching directions within a navigation user interface for the operator of the vehicle. The autonomy-switching directions can inform the operator regarding changes to the vehicle's mode of autonomous operation. The on-board computing system can generate the set of autonomy-switching directions based on the vehicle's route and other information associated with the route, such as autonomous operation permissions (AOPs) for route segments that comprise the route. The on-board computing device can selectively present the autonomy-switching directions based on locations associated with anticipated changes in autonomous operations determined for the route of the vehicle, the vehicle's location, and the vehicle's speed. In addition, the on-board computing device is further configured to present audio alerts associated with the autonomy-switching directions to the operator of the vehicle.
G01C 21/36 - Input/output arrangements for on-board computers
B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
45.
Autonomous vehicle with independent auxiliary control units
An autonomous vehicle that includes multiple independent control systems that provide redundancy for specific and critical safety situations that may be encountered when the autonomous vehicle is in operation.
B60W 30/08 - Predicting or avoiding probable or impending collision
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
B60T 8/1755 - Brake regulation specially adapted to control the stability of the vehicle, e.g. taking into account yaw rate or transverse acceleration in a curve
G05D 1/02 - Control of position or course in two dimensions
B60W 50/02 - Ensuring safety in case of control system failures, e.g. by diagnosing, circumventing or fixing failures
46.
Systems and methods for a moveable cover panel of an autonomous vehicle
Systems and methods for a moveable cover panel of an autonomous vehicle are provided. A vehicle can include a front panel disposed proximate to the front end of the passenger compartment, a vehicle motion control device located at the front panel, and a cover panel located at the front panel. The cover panel can be moveable relative to the front panel between an isolating position and an exposing position. The cover panel can isolate the vehicle motion control device from the passenger compartment when in the isolating position and expose the vehicle motion control device to the passenger compartment when in the exposing position. A method can include obtaining vehicle data identifying an operational mode, state, and/or status of the vehicle, determining a first position of the cover panel, and initiating a positional change for the cover panel based on the vehicle data and the first position.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60R 21/205 - Arrangements for storing inflatable members in their non-use or deflated condition; Arrangement or mounting of air bag modules or components in dashboards
B62D 1/183 - Steering columns yieldable or adjustable, e.g. tiltable adjustable between in-use and out-of-use positions, e.g. to improve access
47.
Continuous convolution and fusion in neural networks
Systems and methods are provided for machine-learned models including convolutional neural networks that generate predictions using continuous convolution techniques. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can perform, with a machine-learned convolutional neural network, one or more convolutions over input data using a continuous filter relative to a support domain associated with the input data, and receive a prediction from the machine-learned convolutional neural network. A machine-learned convolutional neural network in some examples includes at least one continuous convolution layer configured to perform convolutions over input data with a parametric continuous kernel.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
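The parametric continuous kernel described in the continuous-convolution abstract above can be illustrated on an unstructured point set: a small function of the continuous offset between points plays the role of the learned kernel. The one-layer `tiny_mlp`, the radius, and the weights are illustrative stand-ins:

```python
import numpy as np

def continuous_conv(points, features, kernel_mlp, radius=1.0):
    """For each point, weight neighbor features by a kernel evaluated
    on the continuous offset between points, then sum: a minimal
    parametric continuous convolution over a point set."""
    points = np.asarray(points, dtype=np.float64)
    feats = np.asarray(features, dtype=np.float64)
    out = np.zeros_like(feats)
    for i, p in enumerate(points):
        for j, q in enumerate(points):
            offset = q - p
            if np.linalg.norm(offset) <= radius:  # support domain
                out[i] += kernel_mlp(offset) * feats[j]
    return out

def tiny_mlp(offset, w=np.array([0.5, -0.5]), b=0.1):
    """One-layer stand-in for the learned kernel MLP."""
    return float(np.tanh(w @ offset + b))
```

Because the kernel is a function rather than a fixed grid of weights, the same operation applies to irregularly spaced inputs such as LIDAR points.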
48.
Automatic Robotically Steered Sensor for Targeted High Performance Perception and Vehicle Control
Disclosed are methods, systems, and non-transitory computer readable media that control an autonomous vehicle via at least two sensors. One aspect includes capturing an image of a scene ahead of the vehicle with a first sensor, identifying an object in the scene at a confidence level based on the image, determining that the confidence level of the identifying is below a threshold, in response to the confidence level being below the threshold, directing a second sensor having a field of view smaller than the first sensor to generate a second image including a location of the identified object, further identifying the object in the scene based on the second image, and controlling the vehicle based on the further identification of the object.
G05D 1/02 - Control of position or course in two dimensions
G06T 3/40 - Scaling of a whole image or part thereof
G08G 1/04 - Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
G08G 1/015 - Detecting movement of traffic to be counted or controlled with provision for distinguishing between motor cars and cycles
G08G 1/01 - Detecting movement of traffic to be counted or controlled
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
H04N 23/695 - Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
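The confidence-gated two-sensor flow above can be sketched as a small control function. The detector callables below are placeholders for the real wide- and narrow-field sensor pipelines, and the threshold is an illustrative assumption:

```python
def perceive(wide_detector, narrow_detector, scene, threshold=0.8):
    """Run the wide field-of-view sensor first; if its confidence on an
    object falls below threshold, point the narrow sensor at the
    object's location and take a second, higher-resolution look."""
    label, confidence, location = wide_detector(scene)
    if confidence >= threshold:
        return label, confidence
    label2, confidence2 = narrow_detector(scene, location)
    # keep whichever look is more certain
    if confidence2 > confidence:
        return label2, confidence2
    return label, confidence
```

In the disclosed system the vehicle would then be controlled based on the returned identification rather than the low-confidence first pass.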
49.
Discrete Decision Architecture for Motion Planning System of an Autonomous Vehicle
The present disclosure provides autonomous vehicle systems and methods that include or otherwise leverage a motion planning system that generates constraints as part of determining a motion plan for an autonomous vehicle (AV). In particular, a scenario generator within a motion planning system can generate constraints based on where objects of interest are predicted to be relative to an autonomous vehicle. A constraint solver can identify navigation decisions for each of the constraints that provide a consistent solution across all constraints. The solution provided by the constraint solver can be in the form of a trajectory path determined relative to constraint areas for all objects of interest. The trajectory path represents a set of navigation decisions such that a navigation decision relative to one constraint does not sacrifice an ability to satisfy a different navigation decision relative to one or more other constraints.
B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
B60W 30/16 - Control of distance between vehicles, e.g. keeping a distance to preceding vehicle
G05D 1/02 - Control of position or course in two dimensions
Systems and methods for basis path generation are provided. In particular, a computing system can obtain a target nominal path. The computing system can determine a current pose for an autonomous vehicle. The computing system can determine, based at least in part on the current pose of the autonomous vehicle and the target nominal path, a lane change region. The computing system can determine one or more merge points on the target nominal path. The computing system can, for each respective merge point in the one or more merge points, generate a candidate basis path from the current pose of the autonomous vehicle to the respective merge point. The computing system can generate a suitability classification for each candidate basis path. The computing system can select one or more candidate basis paths based on the suitability classification for each respective candidate basis path in the plurality of candidate basis paths.
The present disclosure provides systems and methods to obtain feedback descriptive of autonomous vehicle failures. In particular, the systems and methods of the present disclosure can detect that a vehicle failure event occurred at an autonomous vehicle and, in response, provide an interactive user interface that enables a human located within the autonomous vehicle to enter feedback that describes the vehicle failure event. Thus, the systems and methods of the present disclosure can actively prompt and/or enable entry of feedback in response to a particular instance of a vehicle failure event, thereby enabling improved and streamlined collection of information about autonomous vehicle failures.
G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
G07C 5/08 - Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle, or waiting time
B60W 50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
B60K 35/00 - Arrangement or adaptations of instruments
G01C 21/36 - Input/output arrangements for on-board computers
G05D 1/02 - Control of position or course in two dimensions
Generally, the disclosed systems and methods implement improved detection of objects in three-dimensional (3D) space. More particularly, an improved 3D object detection system can exploit continuous fusion of multiple sensors and/or integrated geographic prior map data to enhance effectiveness and robustness of object detection in applications such as autonomous driving. In some implementations, geographic prior data (e.g., geometric ground and/or semantic road features) can be exploited to enhance three-dimensional object detection for autonomous vehicle applications. In some implementations, object detection systems and methods can be improved based on dynamic utilization of multiple sensor modalities. More particularly, an improved 3D object detection system can exploit both LIDAR systems and cameras to perform very accurate localization of objects within three-dimensional space relative to an autonomous vehicle. For example, multi-sensor fusion can be implemented via continuous convolutions to fuse image data samples and LIDAR feature maps at different levels of resolution.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
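The multi-sensor fusion step described above, gathering camera features for bird's-eye-view locations, can be reduced to a nearest-pixel lookup for illustration. The `project` callable (3D point to pixel coordinates) and the nearest-neighbor gather are simplifying assumptions; the disclosed continuous-fusion approach would interpolate learned features at multiple resolutions:

```python
import numpy as np

def fuse_image_into_bev(bev_points, image_feats, project):
    """Gather an image feature for each bird's-eye-view location by
    projecting the corresponding 3D point into the camera. A
    much-reduced sketch: real continuous fusion interpolates features
    from several neighbors via a learned (MLP) weighting."""
    fused = []
    h, w = image_feats.shape[:2]
    for p in bev_points:
        u, v = project(p)
        u = int(np.clip(u, 0, w - 1))  # clamp to the image bounds
        v = int(np.clip(v, 0, h - 1))
        fused.append(image_feats[v, u])
    return np.stack(fused)
```

The fused per-location features can then be concatenated with the LIDAR feature map before the detection head.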
53.
Systems and Methods for Vehicle Spatial Path Sampling
Systems and methods for vehicle spatial path sampling are provided. The method includes obtaining an initial travel path for an autonomous vehicle from a first location to a second location and vehicle configuration data indicative of one or more physical constraints of the autonomous vehicle. The method includes determining one or more secondary travel paths for the autonomous vehicle from the first location to the second location based on the initial travel path and the vehicle configuration data. The method includes generating a spatial envelope based on the one or more secondary travel paths that indicates a plurality of lateral offsets from the initial travel path. And, the method includes generating a plurality of trajectories for the autonomous vehicle to travel from the first location to the second location such that each of the plurality of trajectories include one or more lateral offsets identified by the spatial envelope.
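The lateral-offset sampling in the spatial-path abstract above can be illustrated geometrically: offset each point of the nominal polyline along its local left-normal. The offset set and the central-difference heading estimate are illustrative choices:

```python
import math

def sample_offset_paths(nominal_path, offsets=(-1.0, 0.0, 1.0)):
    """Offset each point of a polyline along the local left-normal to
    build a family of candidate paths inside a spatial envelope; the
    offsets would come from the envelope's lateral bounds."""
    paths = []
    for off in offsets:
        shifted = []
        for k, (x, y) in enumerate(nominal_path):
            # heading from neighboring points (clamped at the ends)
            x2, y2 = nominal_path[min(k + 1, len(nominal_path) - 1)]
            x1, y1 = nominal_path[max(k - 1, 0)]
            theta = math.atan2(y2 - y1, x2 - x1)
            shifted.append((x - off * math.sin(theta),
                            y + off * math.cos(theta)))
        paths.append(shifted)
    return paths
```

Each shifted polyline is one spatial candidate; trajectory generation would then assign speed profiles subject to the vehicle's physical constraints.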
Aspects of the present disclosure involve systems, methods, and devices for fault detection in a Lidar system. A fault detection system obtains incoming Lidar data output by a Lidar system during operation of an AV system. The incoming Lidar data includes one or more data points corresponding to a fault detection target on an exterior of a vehicle of the AV system. The fault detection system accesses historical Lidar data that is based on data previously output by the Lidar system. The historical Lidar data corresponds to the fault detection target. The fault detection system performs a comparison of the incoming Lidar data with the historical Lidar data to identify any differences between the two sets of data. The fault detection system detects a fault condition occurring at the Lidar system based on the comparison.
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
G05D 1/02 - Control of position or course in two dimensions
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups , , of systems according to group
55.
Systems and methods for remote status detection of autonomous vehicles
Systems and methods are provided for remotely detecting a status associated with an autonomous vehicle and generating control actions in response to such detections. In one example, a computing system can access a third-party communication associated with an autonomous vehicle. The computing system can determine, based at least in part on the third-party communication, a predetermined identifier associated with the autonomous vehicle. The computing system can determine, based at least in part on the third-party communication, a status associated with the autonomous vehicle, and transmit one or more control messages to the autonomous vehicle based at least in part on the predetermined identifier and the status associated with the autonomous vehicle.
The present disclosure is directed to controlling state transitions in an autonomous vehicle. In particular, a computing system can initiate the autonomous vehicle into a no-authorization state upon startup. The computing system can receive an authorization request. The computing system can determine whether the authorization request includes a request to enter the first or second mode of operations, wherein the first mode of operations is associated with the autonomous vehicle being operated without a human operator and the second mode of operations is associated with the autonomous vehicle being operable by a human operator. The computing system can transition the autonomous vehicle from the no-authorization state into a standby state in response to determining the authorization request includes a request to enter the first mode of operations or into a manual-controlled state in response to determining the authorization request is a request to enter the second mode of operations.
Systems and methods are provided for generating data indicative of a friction associated with a driving surface, and for using the friction data in association with one or more vehicles. In one example, a computing system can detect a stop associated with a vehicle and initiate a steering action of the vehicle during the stop. The steering action is associated with movement of at least one tire of the vehicle relative to a driving surface. The computing system can obtain operational data associated with the steering action during the stop of the vehicle. The computing system can determine a friction associated with the driving surface based at least in part on the operational data associated with the steering action. The computing system can generate data indicative of the friction associated with the driving surface.
B60T 8/1763 - Brake regulation specially adapted to prevent excessive wheel slip during vehicle deceleration, e.g. ABS responsive to the coefficient of friction between the wheels and the ground surface
B60T 8/171 - Detecting parameters used in the regulation; Measuring values used in the regulation
B60T 8/1755 - Brake regulation specially adapted to control the stability of the vehicle, e.g. taking into account yaw rate or transverse acceleration in a curve
B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G05D 1/02 - Control of position or course in two dimensions
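The friction determination from a steering action during a stop can be illustrated with a toy torque-balance model. The moment-arm model, parameter values, and names below are assumptions for illustration; the disclosure does not specify this computation:

```python
def estimate_surface_friction(steering_torque_nm, normal_force_n,
                              effective_arm_m=0.05):
    """Toy estimate: the torque needed to pivot a stationary tire scales
    with the friction force at the contact patch times an effective
    moment arm, so mu ~ torque / (arm * normal_force). All parameters
    here are hypothetical."""
    if normal_force_n <= 0:
        raise ValueError("normal force must be positive")
    return steering_torque_nm / (effective_arm_m * normal_force_n)
```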
Systems and methods of the present disclosure provide an improved approach for open-set instance segmentation by identifying both known and unknown instances in an environment. For example, a method can include receiving sensor point cloud input data including a plurality of three-dimensional points. The method can include determining a feature embedding and at least one of an instance embedding, class embedding, and/or background embedding for each of the plurality of three-dimensional points. The method can include determining a first subset of points associated with one or more known instances within the environment based on the class embedding and the background embedding associated with each point in the plurality of points. The method can include determining a second subset of points associated with one or more unknown instances within the environment based on the first subset of points. The method can include segmenting the input data into known and unknown instances.
G06K 9/62 - Methods or arrangements for recognition using electronic means
G06N 3/084 - Backpropagation, e.g. using gradient descent
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
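The known/unknown split over embedded points can be illustrated with a toy scoring rule. The thresholds, score layout, and decision order here are hypothetical stand-ins for the learned embeddings the method actually uses:

```python
def split_known_unknown(points):
    """points: list of dicts with 'class_scores' (one score per known
    class) and 'background_score'. A point is background if that score
    dominates, known if some class score is confident, otherwise it is
    foreground with no confident known class, i.e. unknown."""
    known, unknown, background = [], [], []
    for i, p in enumerate(points):
        best_class = max(p["class_scores"])
        if p["background_score"] >= best_class:
            background.append(i)
        elif best_class >= 0.5:          # illustrative confidence gate
            known.append(i)
        else:
            unknown.append(i)
    return known, unknown, background
```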
59.
Systems and methods for generating synthetic light detection and ranging data via machine learning
The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned geometry model can predict one or more adjusted depths for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
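The two-step pipeline (physics-based ray casting, then a learned depth adjustment) can be sketched as follows. The flat-ground scene, the fixed 2% residual standing in for the geometry model, and all names are illustrative assumptions:

```python
import math

def simulate_lidar_depths(ray_angles, flat_ground_height=2.0,
                          learned_residual=lambda d: 0.02 * d):
    """Minimal stand-in for the described pipeline: a physics step casts
    rays against a trivially flat ground plane, then a 'learned' residual
    (here a fixed correction, a placeholder for the geometry model)
    adjusts each depth."""
    depths = []
    for angle in ray_angles:  # angle below horizontal, radians
        if angle <= 0:        # ray level or pointing up: no ground hit
            depths.append(float("inf"))
            continue
        d = flat_ground_height / math.sin(angle)   # ray-plane intersection
        depths.append(d - learned_residual(d))     # learned adjustment
    return depths
```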
Systems, methods, tangible non-transitory computer-readable media, and devices associated with motion flow estimation are provided. For example, scene data including representations of an environment over a first set of time intervals can be accessed. Extracted visual cues can be generated based on the representations and machine-learned feature extraction models. At least one of the machine-learned feature extraction models can be configured to generate a portion of the extracted visual cues based on a first set of the representations of the environment from a first perspective and a second set of the representations of the environment from a second perspective. The extracted visual cues can be encoded using energy functions. Three-dimensional motion estimates of object instances at time intervals subsequent to the first set of time intervals can be determined based on the energy functions and machine-learned inference models.
Systems and methods for detecting a surprise or unexpected movement of an actor with respect to an autonomous vehicle are provided. An example computer-implemented method can include, for a first compute cycle, obtaining motion forecast data based on first sensor data collected with respect to an actor relative to an autonomous vehicle; and determining, based on the motion forecast data, failsafe region data representing an unexpected path or area where a likelihood of the actor following the unexpected path or entering the unexpected area is below a threshold. For a second compute cycle after the first compute cycle, the method can include obtaining second sensor data; determining, based on the second sensor data and the failsafe region data, that the actor has followed the unexpected path or entered the unexpected area; and in response to such determination, determining a deviation for controlling a movement of the autonomous vehicle.
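The failsafe-region logic across the two compute cycles can be sketched as a thresholding step plus a membership check. The threshold value and region representation are assumptions for illustration:

```python
def failsafe_regions(forecast, threshold=0.05):
    """forecast: dict mapping a region id to the forecast probability
    that the actor enters it. Regions below the threshold are the
    'unexpected' failsafe regions."""
    return {region for region, p in forecast.items() if p < threshold}

def needs_deviation(observed_region, failsafe):
    """On a later cycle, observing the actor in a failsafe region
    triggers a deviation for controlling the vehicle's movement."""
    return observed_region in failsafe
```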
Provided are systems and methods that perform multi-task and/or multi-sensor fusion for three-dimensional object detection in furtherance of, for example, autonomous vehicle perception and control. In particular, according to one aspect of the present disclosure, example systems and methods described herein exploit simultaneous training of a machine-learned model ensemble relative to multiple related tasks to learn to perform more accurate multi-sensor 3D object detection. For example, the present disclosure provides an end-to-end learnable architecture with multiple machine-learned models that interoperate to reason about 2D and/or 3D object detection as well as one or more auxiliary tasks. According to another aspect of the present disclosure, example systems and methods described herein can perform multi-sensor fusion (e.g., fusing features derived from image data, light detection and ranging (LIDAR) data, and/or other sensor modalities) at both the point-wise and region of interest (ROI)-wise level, resulting in fully fused feature representations.
Systems and methods are provided for detecting objects of interest. A computing system can input sensor data to one or more first machine-learned models associated with detecting objects external to an autonomous vehicle. The computing system can obtain, as an output of the first machine-learned models, data indicative of one or more detected objects. The computing system can determine data indicative of at least one uncertainty associated with the one or more detected objects and input the data indicative of the one or more detected objects and the data indicative of the at least one uncertainty to one or more second machine-learned models. The computing system can obtain, as an output of the second machine-learned models, data indicative of at least one prediction associated with the one or more detected objects. The at least one prediction can be based at least in part on the detected objects and the uncertainty.
G05D 1/02 - Control of position or course in two dimensions
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
Systems and methods for training machine-learned models are provided. A method can include receiving a rasterized image associated with a training object and generating a predicted trajectory of the training object by inputting the rasterized image into a first machine-learned model. The method can include converting the predicted trajectory into a rasterized trajectory that spatially corresponds to the rasterized image. The method can include utilizing a second machine-learned model to determine an accuracy of the predicted trajectory based on the rasterized trajectory. The method can include determining an overall loss for the first machine-learned model based on the accuracy of the predicted trajectory as determined by the second machine-learned model. The method can include training the first machine-learned model by minimizing the overall loss for the first machine-learned model.
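The conversion of a predicted trajectory into a rasterized trajectory that spatially corresponds to the input image can be sketched as a simple grid-painting step. The grid size, cell resolution, and occupancy encoding are illustrative assumptions:

```python
def rasterize_trajectory(waypoints, grid_size=8, cell_m=1.0):
    """Rasterize (x, y) waypoints onto a grid_size x grid_size grid that
    spatially corresponds to the rasterized scene image (origin at cell
    (0, 0), one cell per cell_m metres). Cells hit by a waypoint get 1;
    the second model would then score this raster against the image."""
    grid = [[0] * grid_size for _ in range(grid_size)]
    for x, y in waypoints:
        col, row = int(x // cell_m), int(y // cell_m)
        if 0 <= row < grid_size and 0 <= col < grid_size:
            grid[row][col] = 1
    return grid
```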
Systems and methods are directed to automated delivery systems. In one example, a vehicle is provided including a drive system, a passenger cabin, and a delivery service pod provided relative to the passenger cabin. The delivery service pod includes an access unit configured to allow for loading and unloading of a plurality of delivery crates into the delivery service pod. The delivery service pod further includes a conveyor unit comprising multiple delivery crate holding positions, the delivery crate holding positions being defined by neighboring sidewalls spaced apart within the delivery service pod such that a respective delivery crate of the plurality of delivery crates can be positioned between neighboring sidewalls, wherein the conveyor unit is configured to be rotated to align each of the delivery crate holding positions with the access unit.
B60P 3/00 - Vehicles adapted to transport, to carry or to comprise special loads or objects
A47L 7/00 - Suction cleaners adapted for additional purposes; Tables with suction openings for cleaning purposes; Containers for cleaning articles by suction; Suction cleaners adapted to cleaning of brushes; Suction cleaners adapted to taking-up liquids
B25J 11/00 - Manipulators not otherwise provided for
B60S 1/64 - Other vehicle fittings for cleaning for cleaning vehicle interiors, e.g. built-in vacuum cleaners
B60P 1/36 - Vehicles predominantly for transporting loads and modified to facilitate loading, consolidating the load, or unloading using endless chains or belts thereon
66.
Multiple stage image based object detection and recognition
Systems, methods, tangible non-transitory computer-readable media, and devices for autonomous vehicle operation are provided. For example, a computing system can receive object data that includes portions of sensor data. The computing system can determine, in a first stage of a multiple stage classification using hardware components, one or more first stage characteristics of the portions of sensor data based on a first machine-learned model. In a second stage of the multiple stage classification, the computing system can determine second stage characteristics of the portions of sensor data based on a second machine-learned model. The computing system can generate an object output based on the first stage characteristics and the second stage characteristics. The object output can include indications associated with detection of objects in the portions of sensor data.
G06V 10/28 - Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
G06V 10/50 - Extraction of image or video features by summing image-intensity values; Projection analysis
G06V 10/56 - Extraction of image or video features relating to colour
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
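The two-stage cascade described above can be sketched as a cheap first-stage gate in front of a heavier second-stage model. The gating threshold, tuple layout, and callable interfaces are illustrative assumptions:

```python
def multistage_classify(segments, stage_one, stage_two, gate=0.5):
    """Two-stage cascade: a cheap first-stage model scores every sensor
    data segment; only segments passing its gate reach the heavier
    second stage. Both models are caller-supplied callables. Returns
    (segment, first-stage score, second-stage result or None)."""
    outputs = []
    for seg in segments:
        s1 = stage_one(seg)
        if s1 < gate:
            outputs.append((seg, s1, None))       # rejected early
        else:
            outputs.append((seg, s1, stage_two(seg)))
    return outputs
```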
Aspects of the present disclosure involve a vehicle computer system comprising a computer-readable storage medium storing a set of instructions, and a method for online light detection and ranging (Lidar) intensity normalization. Consistent with some embodiments, the method may include accumulating point data output by a channel of a Lidar unit during operation of an autonomous or semi-autonomous vehicle. The accumulated point data includes raw intensity values that correspond to a particular surface type. The method further includes calculating a median intensity value based on the raw intensity values and generating an intensity normalization multiplier for the channel based on the median intensity value. The intensity normalization multiplier, when applied to the median intensity value, results in a reflectivity value that corresponds to the particular surface type. The method further includes applying the intensity normalization multiplier to the point data output by the channel to produce normalized intensity values.
G01S 17/87 - Combinations of systems using electromagnetic waves other than radio waves
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups G01S 13/00, G01S 15/00 or G01S 17/00 of systems according to group G01S 17/00
G01S 17/06 - Systems determining position data of a target
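The calibration step described above follows directly from the abstract: take the median raw intensity for a known surface type and choose the per-channel multiplier that maps that median onto the surface's expected reflectivity. The target reflectivity value below is an illustrative assumption:

```python
def intensity_normalization_multiplier(raw_intensities,
                                       target_reflectivity=0.5):
    """Per-channel calibration: the multiplier that, applied to the
    median raw intensity, yields the reflectivity expected for the
    particular surface type."""
    vals = sorted(raw_intensities)
    n = len(vals)
    median = vals[n // 2] if n % 2 else 0.5 * (vals[n // 2 - 1] + vals[n // 2])
    return target_reflectivity / median

def normalize(raw_intensities, multiplier):
    """Apply the multiplier to channel output to get normalized values."""
    return [v * multiplier for v in raw_intensities]
```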
Systems and methods for power and thermal management of autonomous vehicles are provided. In one example embodiment, a computing system includes processor(s) and one or more tangible, non-transitory, computer readable media that collectively store instructions that when executed by the processor(s) cause the computing system to perform operations. The operations include obtaining data associated with an autonomous vehicle. The operations include identifying one or more vehicle parameters associated with the autonomous vehicle based at least in part on the data associated with the autonomous vehicle. The operations include determining a modification to one or more operating characteristics of one or more systems onboard the autonomous vehicle based at least in part on the one or more vehicle parameters. The operations include controlling a heat generation of at least a portion of the autonomous vehicle via implementation of the modification of the operating characteristic(s) of the system(s) onboard the autonomous vehicle.
Systems, methods, tangible non-transitory computer-readable media, and devices associated with trajectory prediction are provided. For example, trajectory data and goal path data can be accessed. The trajectory data can be associated with an object's predicted trajectory. The predicted trajectory can include waypoints associated with waypoint position uncertainty distributions that can be based on an expectation maximization technique. The goal path data can be associated with a goal path and include locations the object is predicted to travel. Solution waypoints for the object can be determined based on application of optimization techniques to the waypoints and waypoint position uncertainty distributions. The optimization techniques can include operations to maximize the probability of each of the solution waypoints. Stitched trajectory data can be generated based on the solution waypoints. The stitched trajectory data can be associated with portions of the solution waypoints and the goal path.
Systems and methods are described that probabilistically predict dynamic object behavior. In particular, in contrast to existing systems which attempt to predict object trajectories directly (e.g., directly predict a specific sequence of well-defined states), a probabilistic approach is instead leveraged that predicts discrete probability distributions over object state at each of a plurality of time steps. In one example, systems and methods predict future states of dynamic objects (e.g., pedestrians) such that an autonomous vehicle can plan safer actions/movement.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06K 9/62 - Methods or arrangements for recognition using electronic means
B60W 30/09 - Taking automatic action to avoid collision, e.g. braking and steering
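The contrast drawn above (discrete per-step probability distributions instead of a single trajectory) can be illustrated with a toy diffusion over a grid. The uniform 4-neighbour spreading rule below is a stand-in for a learned per-step distribution:

```python
def predict_occupancy_grids(current_cell, n_steps, grid_size=5):
    """Toy probabilistic forecast: instead of one trajectory, emit a
    discrete distribution over grid cells per time step. Mass diffuses
    uniformly to in-bounds 4-neighbours each step (a placeholder for a
    learned distribution). Returns one dict {cell: prob} per step."""
    dist = {current_cell: 1.0}
    grids = []
    for _ in range(n_steps):
        nxt = {}
        for (r, c), p in dist.items():
            nbrs = [(r + dr, c + dc)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= r + dr < grid_size and 0 <= c + dc < grid_size]
            for cell in nbrs:
                nxt[cell] = nxt.get(cell, 0.0) + p / len(nbrs)
        dist = nxt
        grids.append(dict(dist))
    return grids
```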
Disclosed herein are methods and systems for performing instance segmentation that can provide improved estimation of object boundaries. Implementations can include a machine-learned segmentation model trained to estimate an initial object boundary based on a truncated signed distance function (TSDF) generated by the model. The model can also generate outputs for optimizing the TSDF over a series of iterations to produce a final TSDF that can be used to determine the segmentation mask.
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06V 10/77 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06F 18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
Systems, methods, tangible non-transitory computer-readable media, and devices associated with radar validation and calibration are provided. For example, target positions for targets can be determined based on imaging devices. The targets can be located at respective predetermined positions relative to the imaging devices. Radar detections of the targets can be generated based on radar devices. The radar devices can be located at a predetermined position relative to the imaging devices. Filtered radar detections can be generated based on performance of filtering operations on the radar detections. A detection error can be determined for the radar devices based on calibration operations performed using the filtered radar detections and the target positions determined based on the imaging devices. Furthermore, the radar devices can be calibrated based on the detection error.
Systems and methods for determining a location based on image data are provided. A method can include receiving, by a computing system, a query image depicting a surrounding environment of a vehicle. The query image can be input into a machine-learned image embedding model and a machine-learned feature extraction model to obtain a query embedding and a query feature representation, respectively. The method can include identifying a subset of candidate embeddings that have embeddings similar to the query embedding. The method can include obtaining a respective feature representation for each image associated with the subset of candidate embeddings. The method can include determining a set of relative displacements between each image associated with the subset of candidate embeddings and the query image and determining a localized state of a vehicle based at least in part on the set of relative displacements.
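The retrieval step of the described localization pipeline can be sketched as a nearest-neighbour search over stored embeddings. The candidate record layout, distance metric, and k value are illustrative assumptions:

```python
def localize_by_retrieval(query_embedding, candidates, k=2):
    """Rank stored map images by embedding distance to the query and
    return the k nearest; their feature representations would then seed
    the relative-displacement estimation described in the method."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(candidates,
                    key=lambda c: dist(query_embedding, c["embedding"]))
    return ranked[:k]
```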
Systems, methods, tangible non-transitory computer-readable media, and devices associated with the operation of a vehicle are provided. For example, a vehicle computing system can receive occupancy data that includes information associated with occupancy of a vehicle that includes seats. One or more states of the vehicle can be determined. The states of the vehicle can include a disposition of any object that is within the vehicle. Further, a configuration of the seats in the vehicle can be determined based on the occupancy data and the states of the vehicle. The configuration can include a disposition of the seats inside the vehicle. Furthermore, at least one of the seats can be adjusted based on the configuration that was determined.
B60N 2/01 - Arrangement of seats relative to one another
B60N 2/02 - Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles the seat or part thereof being movable, e.g. adjustable
B60N 2/06 - Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles the seat or part thereof being movable, e.g. adjustable the whole seat being movable slidable
B60N 3/00 - Arrangements or adaptations of other passenger fittings, not otherwise provided for
B60N 2/00 - Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles
B60N 2/14 - Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles the seat or part thereof being movable, e.g. adjustable the whole seat being movable rotatable, e.g. to permit easy access
75.
Systems and Methods For Deploying Warning Devices From an Autonomous Vehicle
Systems and methods are directed to deploying warning devices by an autonomous vehicle. In one example, a system includes one or more processors and memory including instructions that, when executed by the one or more processors, cause the one or more processors to perform operations. The operations include obtaining data indicating that a vehicle stop maneuver is to be implemented for an autonomous vehicle. The operations further include determining whether one or more warning devices should be dispensed from the autonomous vehicle during the vehicle stop maneuver based in part on the obtained data. The operations further include, in response to determining one or more warning devices should be dispensed from the autonomous vehicle, determining a dispensing maneuver for the one or more warning devices, and providing one or more command signals to one or more vehicle systems to perform the dispensing maneuver for the one or more warning devices.
A control system of a self-driving tractor can access sensor data to determine a set of trailer configuration parameters of a cargo trailer coupled to the self-driving tractor. Based on the set of trailer configuration parameters, the control system can configure a motion planning model for autonomously controlling the acceleration, braking, and steering systems of the tractor.
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
B62D 13/00 - Steering specially adapted for trailers
G05D 1/02 - Control of position or course in two dimensions
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
77.
System and method for determining object intention through visual attributes
Systems and methods for determining object intentions through visual attributes are provided. A method can include determining, by a computing system, one or more regions of interest. The regions of interest can be associated with the surrounding environment of a first vehicle. The method can include determining, by the computing system, spatial features and temporal features associated with the regions of interest. The spatial features can be indicative of a vehicle orientation associated with a vehicle of interest. The temporal features can be indicative of a semantic state associated with signal lights of the vehicle of interest. The method can include determining, by the computing system, a vehicle intention. The vehicle intention can be based on the spatial and temporal features. The method can include initiating, by the computing system, an action. The action can be based on the vehicle intention.
B60W 50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G05D 1/02 - Control of position or course in two dimensions
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
78.
Multi-task machine-learned models for object intention determination in autonomous driving
Generally, the disclosed systems and methods utilize multi-task machine-learned models for object intention determination in autonomous driving applications. For example, a computing system can receive sensor data obtained relative to an autonomous vehicle and map data associated with a surrounding geographic environment of the autonomous vehicle. The sensor data and map data can be provided as input to a machine-learned intent model. The computing system can receive a jointly determined prediction from the machine-learned intent model for multiple outputs including at least one detection output indicative of one or more objects detected within the surrounding environment of the autonomous vehicle, a first corresponding forecasting output descriptive of a trajectory indicative of an expected path of the one or more objects towards a goal location, and/or a second corresponding forecasting output descriptive of a discrete behavior intention determined from a predefined group of possible behavior intentions.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
The present disclosure provides a sensor cleaning system that cleans one or more sensors of an autonomous vehicle. Each sensor can have one or more corresponding sensor cleaning units that are configured to clean such sensor using a fluid (e.g., a gas or a liquid). Thus, the sensor cleaning system can include both a gas cleaning system and a liquid cleaning system. According to one aspect, the sensor cleaning system can provide individualized cleaning of the autonomous vehicle sensors. According to another aspect, a liquid cleaning system can be pressurized or otherwise powered by the gas cleaning system or other gas system.
Systems and methods for providing an autonomous vehicle service are provided. A method can include obtaining data indicative of a service associated with a user, and obtaining data indicative of a transportation of an autonomous robot. The method can include determining one or more service configurations for the service. The method can include obtaining data indicative of a selected service configuration from among the one or more service configurations, and determining a service assignment for an autonomous vehicle based at least in part on the selected service configuration. The service assignment can indicate that the autonomous vehicle is to transport the user from a service-start location to a service-end location. The method can include communicating data indicative of the service assignment to the autonomous vehicle to perform the service.
Systems, methods, tangible non-transitory computer-readable media, and devices for autonomous vehicle operation are provided. For example, a computing system can receive object data that includes portions of sensor data. The computing system can determine, in a first stage of a multiple stage classification using hardware components, one or more first stage characteristics of the portions of sensor data based on a first machine-learned model. In a second stage of the multiple stage classification, the computing system can determine second stage characteristics of the portions of sensor data based on a second machine-learned model. The computing system can generate an object output based on the first stage characteristics and the second stage characteristics. The object output can include indications associated with detection of objects in the portions of sensor data.
G06V 10/28 - Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
G06V 10/50 - Extraction of image or video features by summing image-intensity values; Projection analysis
G06V 10/56 - Extraction of image or video features relating to colour
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned model can predict one or more dropout probabilities for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
In one example embodiment, a computer-implemented method for autonomous vehicle control includes determining whether a cargo container is attached to a vehicle base associated with an autonomous vehicle. The method includes controlling a front shield associated with the autonomous vehicle to move from a closed position to an opened position when the cargo container is determined to be attached to the vehicle base. The method includes controlling the front shield to move from the opened position to the closed position when the cargo container is not attached to the vehicle base.
B62D 33/08 - Superstructures for load-carrying vehicles characterised by the connection of the superstructure to the vehicle frame comprising adjustable means
G05D 1/02 - Control of position or course in two dimensions
B62D 35/00 - Vehicle bodies characterised by streamlining
B60Q 1/28 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic for indicating front of vehicle
Systems, methods, tangible non-transitory computer-readable media, and devices associated with object association and tracking are provided. Input data can be obtained. The input data can be indicative of a detected object within a surrounding environment of an autonomous vehicle and an initial object classification of the detected object at an initial time interval and object tracks at time intervals preceding the initial time interval. Association data can be generated based on the input data and a machine-learned model. The association data can indicate whether the detected object is associated with at least one of the object tracks. An object classification probability distribution can be determined based on the association data. The object classification probability distribution can indicate a probability that the detected object is associated with each respective object classification. The association data and the object classification probability distribution for the detected object can be outputted.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
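A minimal sketch of the association-plus-classification step described above, assuming the learned model has already produced per-track association scores and per-class logits for a detection (thresholds and names are illustrative, not from the patent):

```python
import math

def associate_and_classify(track_scores, class_scores, threshold=0.5):
    """Associate a detection with the best-matching prior object track
    and produce a probability distribution over object classifications.

    track_scores: per-track association scores from a learned model.
    class_scores: raw per-class logits for the detection.
    Returns (matched_track_index_or_None, class_probabilities).
    """
    # Pick the track with the highest association score, if confident enough.
    match = None
    if track_scores:
        best = max(range(len(track_scores)), key=lambda i: track_scores[i])
        if track_scores[best] >= threshold:
            match = best
    # Softmax the class logits into a probability distribution.
    exps = [math.exp(s) for s in class_scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    return match, probs
```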
Aspects of the present disclosure involve systems, methods, and devices for determining specular reflectivity characteristics of objects using a Lidar system of an autonomous vehicle (AV) system. A method includes transmitting at least two light signals directed at a target object utilizing the Lidar system of the AV system. The method further includes determining at least two reflectivity values for the target object based on return signals corresponding to the at least two light signals. The method further includes classifying specular reflectivity characteristics of the target object based on a comparison of the at least two reflectivity values. The method further includes updating a motion plan for the AV system based on the specular reflectivity characteristics of the target object.
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 17/00
G01S 17/93 - Lidar systems, specially adapted for specific applications for anti-collision purposes
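The comparison described in this abstract can be illustrated with a simple ratio test. This is a sketch under an assumption the abstract only implies: a diffuse (Lambertian) surface returns similar reflectivity values across measurements, while a specular surface shows strong angular dependence. The threshold is hypothetical:

```python
def classify_reflectivity(r1: float, r2: float, ratio_threshold: float = 2.0) -> str:
    """Classify a target's reflectivity from two Lidar return measurements.

    Similar reflectivity values suggest a diffuse surface; a large
    disparity between the two values suggests a specular surface whose
    return strength depends strongly on the viewing angle.
    """
    lo, hi = sorted((r1, r2))
    if lo <= 0:
        return "specular"  # one return nearly vanished: strong angular dependence
    return "specular" if hi / lo >= ratio_threshold else "diffuse"
```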
An autonomous robot is provided. In one example embodiment, an autonomous robot can include a main body including one or more compartments. The one or more compartments can be configured to provide support for transporting an item. The autonomous robot can include a mobility assembly affixed to the main body and a sensor configured to obtain sensor data associated with a surrounding environment of the autonomous robot. The autonomous robot can include a computing system configured to plan a motion of the autonomous robot based at least in part on the sensor data. The computing system can be operably connected to the mobility assembly for controlling a motion of the autonomous robot. The autonomous robot can include a coupling assembly configured to temporarily secure the autonomous robot to an autonomous vehicle. The autonomous robot can include a power system and a ventilation system that can interface with the autonomous vehicle.
In one example embodiment, a computer-implemented method for transporting cargo using smart pallets includes determining receipt of a first cargo onto a platform of a first smart pallet at a first distribution hub. The method includes generating one or more signals that control a loading of the first smart pallet and the first cargo onto a trailer located at the first distribution hub. The method includes coordinating with one or more second smart pallets associated with the trailer to determine a first position inside the trailer for the first smart pallet and the first cargo. The method includes generating one or more signals that position the first smart pallet and the first cargo at the first position inside the trailer.
Generally, the disclosed systems and methods implement improved detection of objects in three-dimensional (3D) space. More particularly, an improved 3D object detection system can exploit continuous fusion of multiple sensors and/or integrated geographic prior map data to enhance effectiveness and robustness of object detection in applications such as autonomous driving. In some implementations, geographic prior data (e.g., geometric ground and/or semantic road features) can be exploited to enhance three-dimensional object detection for autonomous vehicle applications. In some implementations, object detection systems and methods can be improved based on dynamic utilization of multiple sensor modalities. More particularly, an improved 3D object detection system can exploit both LIDAR systems and cameras to perform very accurate localization of objects within three-dimensional space relative to an autonomous vehicle. For example, multi-sensor fusion can be implemented via continuous convolutions to fuse image data samples and LIDAR feature maps at different levels of resolution.
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 17/00
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning
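The camera/LIDAR fusion described above can be illustrated in a heavily simplified form: project each LIDAR point into the image with a pinhole camera model and attach the nearest image feature to it. The abstract's continuous convolutions operate on learned feature maps at multiple resolutions; this sketch substitutes a plain nearest-pixel gather, and all names and intrinsics are hypothetical:

```python
def fuse_lidar_with_image(points, feature_map, fx, fy, cx, cy):
    """Attach image features to LIDAR points via pinhole projection.

    points: iterable of (x, y, z) in the camera frame (z forward).
    feature_map: 2D grid of per-pixel feature values (rows x cols).
    fx, fy, cx, cy: pinhole camera intrinsics.
    Returns one fused (x, y, z, image_feature) tuple per visible point;
    points behind the camera or outside the image are skipped.
    """
    rows, cols = len(feature_map), len(feature_map[0])
    fused = []
    for x, y, z in points:
        if z <= 0:
            continue  # behind the camera
        u = int(fx * x / z + cx)  # column index in the feature map
        v = int(fy * y / z + cy)  # row index in the feature map
        if 0 <= v < rows and 0 <= u < cols:
            fused.append((x, y, z, feature_map[v][u]))
    return fused
```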
89.
Systems and Methods for Autonomous Vehicle Controls
Systems and methods for controlling an autonomous vehicle are provided. A method can include obtaining, by a computing system, data indicative of a plurality of objects in a surrounding environment of the autonomous vehicle. The method can further include determining, by the computing system, one or more clusters of the objects based at least in part on the data indicative of the plurality of objects. The method can further include determining, by the computing system, whether to enter an operation mode having one or more limited operational capabilities based at least in part on one or more properties of the one or more clusters. In response to determining that the operation mode is to be entered by the autonomous vehicle, the method can include controlling, by the computing system, the operation of the autonomous vehicle based at least in part on the one or more limited operational capabilities.
G05D 1/02 - Control of position or course in two dimensions
B60W 30/085 - Taking automatic action to adjust vehicle attitude in preparation for collision, e.g. braking for nose dropping
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
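The cluster-then-decide flow above can be sketched with a greedy single-link clustering over object positions and a size-based mode check. Both the clustering rule and the threshold are stand-ins for whatever the deployed system uses (a greedy pass can mis-merge clusters that a point bridges, which is acceptable for illustration):

```python
def cluster_objects(positions, radius=5.0):
    """Greedy single-link clustering of object positions (x, y).

    An object joins an existing cluster when it lies within `radius`
    of any member already in that cluster; otherwise it starts a new one.
    """
    clusters = []
    for p in positions:
        placed = False
        for cluster in clusters:
            if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= radius ** 2
                   for q in cluster):
                cluster.append(p)
                placed = True
                break
        if not placed:
            clusters.append([p])
    return clusters

def should_limit_operation(clusters, max_cluster_size=3):
    """Enter a limited operation mode when any cluster is large, e.g. a
    dense group of pedestrians (the size threshold is illustrative)."""
    return any(len(c) >= max_cluster_size for c in clusters)
```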
90.
Autonomous vehicle control based on risk-based interactions
Systems, methods, tangible non-transitory computer-readable media, and devices associated with vehicle control based on risk-based interactions are provided. For example, vehicle data and perception data can be accessed. The vehicle data can include the speed of an autonomous vehicle in an environment. The perception data can include location information and classification information associated with an object in the environment. A scenario exposure can be determined based on the vehicle data and perception data. Prediction data including predicted trajectories of the object can be accessed. Expected speed data can be determined based on hypothetical speeds and hypothetical distances between the vehicle and the object. A speed profile that satisfies a threshold criterion can be determined based on the scenario exposure, the prediction data, and the expected speed data, over a distance. A motion plan to control the autonomous vehicle can be generated based on the speed profile.
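One way to picture the speed-profile selection described above: scale each candidate profile's base risk by the scenario exposure, keep only the candidates that satisfy the risk threshold, and choose the fastest of those. The data layout and numbers are hypothetical:

```python
def select_speed_profile(profiles, exposure, risk_threshold=1.0):
    """Pick the fastest candidate speed profile whose exposure-weighted
    risk satisfies a threshold criterion.

    profiles: list of (average_speed, base_risk) candidates.
    exposure: scenario exposure multiplier derived from vehicle and
    perception data. Returns the chosen (speed, risk) pair, or None
    when no candidate qualifies.
    """
    feasible = [(s, r) for s, r in profiles if r * exposure <= risk_threshold]
    return max(feasible, default=None)  # tuples compare by speed first
```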
In one example embodiment, a computer-implemented method includes receiving data representing a motion plan of an autonomous vehicle via a plurality of control lanes configured to implement the motion plan to control a motion of the autonomous vehicle, the plurality of control lanes including at least a first control lane and a second control lane, and controlling the first control lane to implement the motion plan. The method includes detecting one or more faults associated with implementation of the motion plan by the first control lane or the second control lane, or with generation of the motion plan, and, in response to the one or more faults, controlling the first control lane or the second control lane to adjust the motion of the autonomous vehicle based at least in part on one or more fault reaction parameters associated with the one or more faults.
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
B60W 50/029 - Adapting to failures or work around with other constraints, e.g. circumvention by avoiding use of failed parts
B60W 50/023 - Avoiding failures by using redundant parts
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
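The fault-reaction step above can be sketched as a lookup from detected faults to reaction parameters, with the most severe fault deciding the action. The table contents, severity ordering, and lane names are all hypothetical:

```python
def fault_reaction(active_lane, faults, reaction_table):
    """Pick a control-lane action from the detected faults.

    reaction_table maps a fault name to a (severity, action) pair; the
    most severe detected fault wins. With no faults, keep executing the
    motion plan on the active lane.
    """
    if not faults:
        return active_lane, "execute_motion_plan"
    severity, action = max(reaction_table[f] for f in faults)
    if action == "switch_lane":
        # Fail over to the redundant control lane.
        active_lane = "secondary" if active_lane == "primary" else "primary"
    return active_lane, action
```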
92.
Systems and Methods for Error Sourcing in Autonomous Vehicle Simulation
Systems and methods of the present disclosure are directed to a computer-implemented method. The method can include obtaining a first plurality of testing parameters for an autonomous vehicle testing scenario associated with a plurality of performance metrics based at least in part on a first sampling rule. The method can include simulating the autonomous vehicle testing scenario using the first plurality of testing parameters to obtain a first scenario output. The method can include evaluating an optimization function over the first scenario output to obtain simulation error data that corresponds to a performance metric. The method can include determining a second sampling rule associated with the performance metric. The method can include obtaining a second plurality of testing parameters for the autonomous vehicle testing scenario based at least in part on the second sampling rule.
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G06F 30/20 - Design optimisation, verification or simulation
G07C 5/08 - Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle, or waiting time
B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
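One simple instance of a "second sampling rule" as described above: concentrate the second round of testing parameters near the first-round parameter vector that produced the worst simulation error. This is an illustrative choice, not the patent's rule:

```python
import random

def resample_around_worst(params, errors, jitter=0.1, n=4, seed=0):
    """Draw a second plurality of testing parameters near the parameter
    vector whose simulated performance metric was worst.

    params: list of parameter vectors from the first sampling rule.
    errors: simulation error per vector (higher is worse).
    Returns n jittered copies of the worst vector.
    """
    rng = random.Random(seed)
    worst = params[errors.index(max(errors))]
    return [[v + rng.uniform(-jitter, jitter) for v in worst] for _ in range(n)]
```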
93.
Systems and Methods for Generation and Utilization of Vehicle Testing Knowledge Structures for Autonomous Vehicle Simulation
Systems and methods of the present disclosure are directed to a computer-implemented method. The method can include obtaining a first vehicle testing tuple comprising a plurality of first testing parameters and a second vehicle testing tuple comprising a plurality of second testing parameters. The method can include determining that the plurality of first testing parameters are associated with an evaluated operating condition. The method can include appending the first tuple to a first portion of a plurality of portions of a vehicle testing knowledge structure. The method can include determining that a second testing parameter is associated with an unevaluated operating condition. The method can include evaluating the unevaluated operating condition. The method can include generating a second portion comprising the second vehicle testing tuple for the vehicle testing knowledge structure.
B60W 50/02 - Ensuring safety in case of control system failures, e.g. by diagnosing, circumventing or fixing failures
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G07C 5/08 - Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle, or waiting time
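The tuple-routing logic above can be sketched with a dictionary keyed by operating condition standing in for the knowledge structure's portions. The field names and the evaluation callback are hypothetical:

```python
def file_testing_tuple(knowledge, tuple_params, evaluated_conditions, evaluate):
    """Append a vehicle testing tuple to the knowledge structure.

    A tuple whose operating condition has already been evaluated joins
    the existing portion for that condition; otherwise the condition is
    evaluated first and a new portion is created for it.
    """
    condition = tuple_params["condition"]
    if condition not in evaluated_conditions:
        evaluate(condition)  # evaluate the previously unevaluated condition
        evaluated_conditions.add(condition)
    knowledge.setdefault(condition, []).append(tuple_params)
    return knowledge
```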
94.
Autonomous Vehicle Interface System With Multiple Interface Devices Featuring Redundant Vehicle Commands
The present disclosure provides an autonomous vehicle and associated interface system that includes multiple vehicle interface computing devices that provide redundant vehicle commands. As one example, an autonomous vehicle interface system can include a first vehicle interface computing device located within the autonomous vehicle and physically coupled to the autonomous vehicle. The first vehicle interface computing device can provide a first plurality of selectable vehicle commands to a human passenger of the autonomous vehicle. The autonomous vehicle interface system can further include a second vehicle interface computing device that provides a second plurality of selectable vehicle commands to the human passenger. For example, the second vehicle interface computing device can be the passenger's own device (e.g., smartphone). The second plurality of selectable vehicle commands can include at least some of the same vehicle commands as the first plurality of selectable vehicle commands.
Systems and methods for autonomous vehicle motion planning are provided. Sensor data describing an environment of an autonomous vehicle and an initial travel path for the autonomous vehicle through the environment can be obtained. A number of trajectories for the autonomous vehicle can be generated based on the sensor data and the initial travel path. The trajectories can be evaluated by generating a number of costs for each trajectory. The costs can include a safety cost and a total cost. Each cost is generated by a cost function created in accordance with a number of relational propositions defining desired relationships between the number of costs. A subset of trajectories can be determined from the trajectories based on the safety cost and an optimal trajectory can be determined from the subset of trajectories based on the total cost. The autonomous vehicle can control its motion in accordance with the optimal trajectory.
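The two-stage selection described above (filter by safety cost, then minimize total cost) can be sketched directly; the tuple layout and limit are illustrative:

```python
def select_trajectory(trajectories, safety_limit):
    """Two-stage trajectory selection.

    trajectories: list of (safety_cost, total_cost, label) tuples.
    First keep the subset whose safety cost is acceptable, then pick
    the member of that subset with the lowest total cost.
    """
    safe = [t for t in trajectories if t[0] <= safety_limit]
    if not safe:
        return None  # no acceptable trajectory; the caller must handle this
    return min(safe, key=lambda t: t[1])
```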
An autonomous vehicle is provided which includes multiple independent control systems that provide redundancy as to specific and critical safety situations that may be encountered when the autonomous vehicle is in operation.
B60W 30/08 - Predicting or avoiding probable or impending collision
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
B60T 8/1755 - Brake regulation specially adapted to control the stability of the vehicle, e.g. taking into account yaw rate or transverse acceleration in a curve
G05D 1/02 - Control of position or course in two dimensions
B60W 50/02 - Ensuring safety in case of control system failures, e.g. by diagnosing, circumventing or fixing failures
97.
Light Detection and Ranging (LIDAR) Assembly Having a Switchable Mirror
A LIDAR assembly is provided. The LIDAR assembly includes a LIDAR unit. The LIDAR unit includes a housing defining a cavity. The LIDAR unit further includes a plurality of emitters disposed within the cavity. Each of the plurality of emitters is configured to emit a laser beam. The LIDAR assembly further includes a switchable mirror. The switchable mirror is positioned relative to the LIDAR unit such that the switchable mirror receives a plurality of laser beams exiting the housing of the LIDAR unit. The switchable mirror is configurable in at least a reflective state to direct the plurality of laser beams along a first path and a transmissive state to direct the plurality of laser beams along a second path that is different than the first path to widen a field of view of the LIDAR unit along a first axis.
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
G01S 17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
98.
Discrete Decision Architecture for Motion Planning System of an Autonomous Vehicle
The present disclosure provides autonomous vehicle systems and methods that include or otherwise leverage a motion planning system that generates constraints as part of determining a motion plan for an autonomous vehicle (AV). In particular, a scenario generator within a motion planning system can generate constraints based on where objects of interest are predicted to be relative to an autonomous vehicle. A constraint solver can identify navigation decisions for each of the constraints that provide a consistent solution across all constraints. The solution provided by the constraint solver can be in the form of a trajectory path determined relative to constraint areas for all objects of interest. The trajectory path represents a set of navigation decisions such that a navigation decision relative to one constraint does not sacrifice an ability to satisfy a different navigation decision relative to one or more other constraints.
B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
B60W 30/16 - Control of distance between vehicles, e.g. keeping a distance to preceding vehicle
G05D 1/02 - Control of position or course in two dimensions
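A reduced illustration of checking one candidate trajectory against a set of object constraints: model each constraint as a blocked time window during which the AV must not occupy the shared region, so each constraint resolves to a "proceed" (arrive before) or "yield" (arrive after) decision. The encoding is an assumption made for this sketch:

```python
def consistent_decisions(arrival_time, constraints):
    """Check one candidate trajectory against all object constraints.

    Each constraint blocks a time window (enter, clear); arriving before
    `enter` is a "proceed" decision and arriving after `clear` is a
    "yield" decision. Returns the list of decisions when every
    constraint resolves consistently, or None when one is violated.
    """
    decisions = []
    for enter, clear in constraints:
        if arrival_time < enter:
            decisions.append("proceed")
        elif arrival_time > clear:
            decisions.append("yield")
        else:
            return None  # arrival falls inside a blocked window
    return decisions
```

A solver in this style would search candidate trajectories until one yields a non-None decision set across all constraints.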
An on-demand transportation management system can collect vehicle fleet utilization data corresponding to human-driven vehicles (HDVs) and autonomous vehicles (AVs) operating within a given region in connection with an on-demand transportation service. The on-demand transportation management system can then establish a set of selection priorities for respective areas of the given region based on the vehicle fleet utilization data, each selection priority indicating whether a respective area of the given region is to favor HDVs or AVs for servicing transport requests.
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
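A toy version of the per-area selection priority described above, assuming utilization is summarized as a busy fraction per vehicle class (favoring the less-utilized class is an illustrative rule, not the patent's):

```python
def selection_priorities(utilization):
    """Derive per-area vehicle-class priorities from fleet utilization.

    utilization: {area: {"hdv": busy_fraction, "av": busy_fraction}}.
    Each area favors whichever class currently has spare capacity.
    """
    return {area: ("av" if u["av"] <= u["hdv"] else "hdv")
            for area, u in utilization.items()}
```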
100.
Systems and Methods for Automated Testing of Autonomous Vehicles
Systems and methods for controlling autonomous vehicle test trips are provided. In one example embodiment, a computer-implemented method includes obtaining, by a computing system, data indicative of a test trip index associated with an autonomous vehicle. The test trip index includes a plurality of test trips for the autonomous vehicle and each test trip is associated with one or more test trip parameters. The method includes obtaining, by the computing system, data indicating that the autonomous vehicle is available to travel in accordance with at least one of the test trips of the test trip index. The method includes selecting, by the computing system and from the test trip index, at least one selected test trip for the autonomous vehicle. The method includes causing, by the computing system, the autonomous vehicle to travel in accordance with the test trip parameters associated with the at least one selected test trip.
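The selection step above can be sketched as matching an available vehicle's capabilities against each trip's parameters in the index. Representing trip parameters as capability sets is an assumption made purely for illustration:

```python
def select_test_trip(test_trip_index, vehicle_capabilities):
    """Select a test trip whose parameters the available vehicle can satisfy.

    test_trip_index: list of (trip_id, required_capabilities) entries,
    where required_capabilities is a set. Returns the first compatible
    trip id, or None when no trip in the index matches.
    """
    for trip_id, required in test_trip_index:
        if required <= vehicle_capabilities:  # set containment check
            return trip_id
    return None
```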