A computer-implemented method for determining scene-consistent motion forecasts from sensor data can include obtaining scene data including one or more actor features. The computer-implemented method can include providing the scene data to a latent prior model, the latent prior model configured to generate scene latent data in response to receipt of scene data, the scene latent data including one or more latent variables. The computer-implemented method can include obtaining the scene latent data from the latent prior model. The computer-implemented method can include sampling latent sample data from the scene latent data. The computer-implemented method can include providing the latent sample data to a decoder model, the decoder model configured to decode the latent sample data into a motion forecast including one or more predicted trajectories of the one or more actor features. The computer-implemented method can include receiving the motion forecast including one or more predicted trajectories of the one or more actor features from the decoder model.
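The latent-prior/decoder pipeline described above resembles a conditional variational autoencoder: encode the scene into a latent distribution, sample, and decode one jointly consistent set of trajectories per sample. The sketch below is a minimal illustration of that flow; `latent_prior` and `decoder` are toy stand-ins for the learned models, not the disclosed implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def latent_prior(actor_features):
    """Hypothetical latent prior: maps per-actor features to the mean and
    log-variance of a diagonal-Gaussian scene latent (stand-in for a learned network)."""
    pooled = actor_features.mean(axis=0)            # aggregate actor features into a scene summary
    mean = 0.1 * pooled
    log_var = np.zeros_like(pooled)
    return mean, log_var

def sample_latent(mean, log_var):
    """Draw one latent sample with the reparameterization trick."""
    return mean + np.exp(0.5 * log_var) * rng.standard_normal(mean.shape)

def decoder(latent, n_actors, horizon=5):
    """Hypothetical decoder: one (x, y) waypoint per actor per future step.
    A single latent sample yields one jointly consistent set of trajectories."""
    steps = np.arange(1, horizon + 1)[None, :, None]            # (1, horizon, 1)
    direction = np.tile(latent[:2], (n_actors, 1))[:, None, :]  # shared latent motion mode
    return direction * steps                                    # (n_actors, horizon, 2)

actor_features = rng.standard_normal((3, 8))        # 3 actors, 8 features each
mean, log_var = latent_prior(actor_features)
z = sample_latent(mean, log_var)
forecast = decoder(z, n_actors=3)
print(forecast.shape)                               # (3, 5, 2)
```

Because each forecast is decoded from a single scene-level latent sample, sampling repeatedly yields a set of scene-consistent futures rather than independent per-actor predictions.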
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G05D 1/02 - Control of position or course in two dimensions
G06F 18/2137 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on criteria of topology preservation, e.g. multidimensional scaling or self-organising maps
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 30/19 - Recognition using electronic means
The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
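The skipping idea above can be illustrated by partitioning an image into patches and convolving only those that contain data. The patch partition, the all-zero "sparse" test, and the `conv2d_valid` helper below are illustrative stand-ins, not the disclosed implementation.

```python
import numpy as np

def conv2d_valid(patch, kernel):
    """Minimal 2-D valid convolution (stand-in for a learned conv layer)."""
    kh, kw = kernel.shape
    out = np.zeros((patch.shape[0] - kh + 1, patch.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(patch[i:i + kh, j:j + kw] * kernel)
    return out

def relevant_regions(image, patch=4):
    """Yield only non-empty patches: convolution over all-zero regions is skipped."""
    for i in range(0, image.shape[0], patch):
        for j in range(0, image.shape[1], patch):
            block = image[i:i + patch, j:j + patch]
            if np.any(block):                       # sparse-region test
                yield (i, j), block

image = np.zeros((16, 16))
image[4:8, 4:8] = 1.0                               # a single dense region
kernel = np.ones((3, 3)) / 9.0

outputs = {pos: conv2d_valid(block, kernel) for pos, block in relevant_regions(image)}
print(len(outputs))                                 # only 1 of 16 patches was convolved
```

On genuinely sparse imagery (e.g. rasterized LiDAR), the fraction of patches skipped translates directly into compute saved.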
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
G01S 17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
G01S 17/89 - Lidar systems specially adapted for specific applications for mapping or imaging
G01S 17/931 - Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
3.
Systems and Methods for Providing a Vehicle Service Via a Transportation Network for Autonomous Vehicles
Systems and methods for providing a vehicle service are provided. In one example embodiment, a computer-implemented method includes receiving data indicative of a service request to provide a vehicle service for an entity with respect to one or more cargo items designated for autonomous transport. The method includes obtaining a first cargo item among the one or more cargo items from a representative of the entity at a dedicated first transfer hub proximate to a first location associated with the first cargo item. The method includes controlling a first autonomous vehicle to transport the first cargo item from the first transfer hub to a dedicated second transfer hub proximate to a second location associated with the first cargo item. The method includes providing the first cargo item to a representative of the entity at the second transfer hub to provide the vehicle service.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
G01C 21/34 - Route searching; Route guidance
G05D 1/02 - Control of position or course in two dimensions
G06Q 10/047 - Optimisation of routes or paths, e.g. travelling salesman problem
G06Q 10/08 - Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
G06Q 50/28 - Logistics, e.g. warehousing, loading, distribution or shipping
Systems, methods, and vehicles for taking a vehicle out-of-service are provided. In one example embodiment, a method includes obtaining, by one or more computing devices on-board an autonomous vehicle, data indicative of one or more parameters associated with the autonomous vehicle. The autonomous vehicle is configured to provide a vehicle service to one or more users of the vehicle service. The method includes determining, by the computing devices, an existence of a fault associated with the autonomous vehicle based at least in part on the one or more parameters associated with the autonomous vehicle. The method includes determining, by the computing devices, one or more actions to be performed by the autonomous vehicle based at least in part on the existence of the fault. The method includes performing, by the computing devices, one or more of the actions to take the autonomous vehicle out-of-service based at least in part on the fault.
G07C 5/08 - Registering or indicating performance data other than driving, working, idle or waiting time, with or without registering driving, working, idle or waiting times
G06Q 10/047 - Optimisation of routes or paths, e.g. travelling salesman problem
G06Q 10/0631 - Resource planning, allocation, distributing or scheduling for enterprises or organisations
G06Q 10/20 - Administration of product repair or maintenance
G07C 5/00 - Registering or indicating the working of vehicles
An autonomous robot is provided. In one example embodiment, an autonomous robot can include a main body including one or more compartments. The one or more compartments can be configured to provide support for transporting an item. The autonomous robot can include a mobility assembly affixed to the main body and a sensor configured to obtain sensor data associated with a surrounding environment of the autonomous robot. The autonomous robot can include a computing system configured to plan a motion of the autonomous robot based at least in part on the sensor data. The computing system can be operably connected to the mobility assembly for controlling a motion of the autonomous robot. The autonomous robot can include a coupling assembly configured to temporarily secure the autonomous robot to an autonomous vehicle. The autonomous robot can include a power system and a ventilation system that can interface with the autonomous vehicle.
G05D 1/02 - Control of position or course in two dimensions
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
6.
Power and Thermal Management Systems and Methods for Autonomous Vehicles
Systems and methods for power and thermal management of autonomous vehicles are provided. In one example embodiment, a computing system includes processor(s) and one or more tangible, non-transitory, computer readable media that collectively store instructions that when executed by the processor(s) cause the computing system to perform operations. The operations include obtaining data associated with an autonomous vehicle. The operations include identifying one or more vehicle parameters associated with the autonomous vehicle based at least in part on the data associated with the autonomous vehicle. The operations include determining a modification to one or more operating characteristics of one or more systems onboard the autonomous vehicle based at least in part on the one or more vehicle parameters. The operations include controlling a temperature of at least a portion of the autonomous vehicle via implementation of the modification of the operating characteristic(s) of the system(s) onboard the autonomous vehicle.
G08G 1/0967 - Systems involving transmission of highway information, e.g. weather conditions, speed limits
B60H 1/00 - Heating, cooling or ventilating devices
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
7.
Systems and Methods for Training Probabilistic Object Motion Prediction Models Using Non-Differentiable Prior Knowledge
The present disclosure provides systems and methods for training probabilistic object motion prediction models using non-differentiable representations of prior knowledge. As one example, object motion prediction models can be used by autonomous vehicles to probabilistically predict the future location(s) of observed objects (e.g., other vehicles, bicyclists, pedestrians, etc.). For example, such models can output a probability distribution that provides a distribution of probabilities for the future location(s) of each object at one or more future times. Aspects of the present disclosure enable these models to be trained using non-differentiable prior knowledge about motion of objects within the autonomous vehicle's environment such as, for example, prior knowledge about lane or road geometry or topology and/or traffic information such as current traffic control states (e.g., traffic light status).
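One common way to obtain gradients through a non-differentiable penalty on a predicted probability distribution is the score-function (REINFORCE) estimator. The sketch below applies it to a toy categorical distribution over candidate future locations, with a hypothetical `offroad_penalty` black box standing in for non-differentiable lane-geometry knowledge; this is only one plausible reading of the training approach, not the disclosed method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Categorical distribution over 4 candidate future locations (softmax of logits).
logits = np.array([2.0, 0.5, 0.1, -1.0])
probs = np.exp(logits) / np.exp(logits).sum()

def offroad_penalty(location_idx):
    """Non-differentiable prior knowledge: a black-box check of whether a
    location violates lane geometry (here, locations 2 and 3 are off-road)."""
    return 1.0 if location_idx >= 2 else 0.0

# Score-function (REINFORCE) estimate of d E[penalty] / d logits:
# grad = E[ penalty(x) * grad log p(x) ], with grad log p(x) = onehot(x) - probs.
samples = rng.choice(4, size=5000, p=probs)
grads = np.zeros_like(logits)
for x in samples:
    onehot = np.eye(4)[x]
    grads += offroad_penalty(x) * (onehot - probs)
grads /= len(samples)

# Exact gradient for comparison (tractable here because the space is tiny).
exact = sum(probs[x] * offroad_penalty(x) * (np.eye(4)[x] - probs) for x in range(4))
print(np.round(grads, 3))
```

Descending this gradient shifts probability mass away from locations the black-box penalty flags, without ever differentiating through the penalty itself.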
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
G05B 13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to obtain optimum performance in accordance with a predetermined criterion, electric
Systems, methods, tangible non-transitory computer-readable media, and devices for operating an autonomous vehicle are provided. For example, the disclosed technology can include receiving state data that includes information associated with states of an autonomous vehicle and an environment external to the autonomous vehicle. Responsive to the state data satisfying vehicle stoppage criteria, vehicle stoppage conditions can be determined to have occurred. A severity level of the vehicle stoppage conditions can be selected from a plurality of available severity levels respectively associated with a plurality of different sets of constraints. A motion plan can be generated based on the state data. The motion plan can include information associated with locations for the autonomous vehicle to traverse at time intervals corresponding to the locations. Further, the locations can include a current location of the autonomous vehicle and a destination location at which the autonomous vehicle stops traveling.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
B60T 7/22 - Brake-action initiating means for automatic initiation, not subject to the will of the driver or passenger, initiated by contact of the vehicle, e.g. its bumper, with an external object, e.g. another vehicle
B60W 30/09 - Taking automatic action to avoid collision, e.g. braking or steering
B60W 30/095 - Predicting travel path or likelihood of collision
Systems and methods for controlling a failover response of an autonomous vehicle are provided. In one example embodiment, a method includes determining, by one or more computing devices on-board an autonomous vehicle, an operational mode of the autonomous vehicle. The autonomous vehicle is configured to operate in at least a first operational mode in which a human driver is present in the autonomous vehicle and a second operational mode in which the human driver is not present in the autonomous vehicle. The method includes detecting a triggering event associated with the autonomous vehicle. The method includes determining one or more actions to be performed by the autonomous vehicle in response to the triggering event based at least in part on the operational mode. The method includes providing one or more control signals to one or more systems on-board the autonomous vehicle to perform the one or more actions in response to the triggering event.
B60W 50/08 - Interaction between the driver and the driving assistance system
B60W 10/18 - Conjoint control of vehicle sub-units of different type or different function including control of braking systems
B60W 10/20 - Conjoint control of vehicle sub-units of different type or different function including control of steering systems
B60W 50/029 - Adapting to failures or working around them by alternative solutions, e.g. avoiding the use of failed parts
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
10.
Fault-Tolerant Control of an Autonomous Vehicle with Multiple Control Lanes
In one example embodiment, a computer-implemented method includes receiving data representing a motion plan of an autonomous vehicle via a plurality of control lanes configured to implement the motion plan to control a motion of the autonomous vehicle, the plurality of control lanes including at least a first control lane and a second control lane, and controlling the first control lane to implement the motion plan. The method includes detecting one or more faults associated with implementation of the motion plan by the first control lane or the second control lane, or with generation of the motion plan, and, in response to the one or more faults, controlling the first control lane or the second control lane to adjust the motion of the autonomous vehicle based at least in part on one or more fault reaction parameters associated with the one or more faults.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
B60W 50/029 - Adapting to failures or working around them by alternative solutions, e.g. avoiding the use of failed parts
B60W 50/023 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of road vehicle drive control systems not related to the control of a particular sub-unit for safeguarding in case of failure of the driving assistance system, e.g. by diagnosing or mitigating a malfunction; Eliminating failures by using redundant parts
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
G01C 21/16 - Navigation; Navigational instruments not provided for in groups by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
A computing system can input first relative location embedding data into an interaction transformer model and receive, as an output of the interaction transformer model, motion forecast data for actors relative to a vehicle. The computing system can input the motion forecast data into a prediction model to receive respective trajectories for the actors for a current time step and respective projected trajectories for the actors for a subsequent time step. The computing system can generate second relative location embedding data based on the respective projected trajectories for the subsequent time step. The computing system can produce second motion forecast data using the interaction transformer model based on the second relative location embedding data. The computing system can determine second respective trajectories for the actors using the prediction model based on the second motion forecast data.
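The two-pass rollout described above can be sketched with a toy attention step standing in for the interaction transformer: forecast, advance positions, recompute the relative-location embeddings, and forecast again. All functions here are illustrative stand-ins for the learned models.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def interaction_attention(states, rel_embed):
    """One attention step over actors: relative-location embeddings bias the
    attention scores, so nearby actors influence each other's forecast most."""
    scores = states @ states.T + rel_embed          # (n_actors, n_actors)
    return softmax(scores, axis=1) @ states

def relative_embedding(positions):
    """Pairwise embedding from relative positions (negative squared distance)."""
    diff = positions[:, None, :] - positions[None, :, :]
    return -np.sum(diff ** 2, axis=-1)

positions = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 10.0]])
states = np.tile(positions, (1, 2))                 # toy 4-d actor state

# Two-step rollout: forecast, advance positions, re-embed, forecast again.
for step in range(2):
    rel = relative_embedding(positions)
    forecast = interaction_attention(states, rel)
    positions = positions + 0.1 * forecast[:, :2]   # projected trajectories for next step
print(positions.shape)                              # (3, 2)
```

The key point is the feedback loop: the second embedding is built from the first pass's projected trajectories rather than from the original observations.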
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of road vehicle drive control systems not related to the control of a particular sub-unit
G06N 3/044 - Recurrent networks, e.g. Hopfield networks
The present disclosure provides a sensor cleaning system that cleans one or more sensors of an autonomous vehicle. Each sensor can have one or more corresponding sensor cleaning units that are configured to clean such sensor using a fluid (e.g., a gas or a liquid). Thus, the sensor cleaning system can include both a gas cleaning system and a liquid cleaning system. According to one aspect, the sensor cleaning system can provide individualized cleaning of the autonomous vehicle sensors. According to another aspect, a liquid cleaning system can be pressurized or otherwise powered by the gas cleaning system or other gas system.
B60S 1/54 - Cleaning windscreens, windows or optical devices using gas, e.g. hot air
B60S 1/56 - Cleaning windscreens, windows or optical devices specially adapted for cleaning other parts or devices than front windows or windscreens
14.
Automatically Adjustable Partition Wall for an Autonomous Vehicle
Systems and methods for automatically adjusting the interior cabin of an autonomous vehicle are provided. In one example embodiment, an autonomous vehicle can include a main body including a floor and a ceiling that at least partially define an interior cabin of the autonomous vehicle. The autonomous vehicle can include a partition wall that is movable within the interior cabin of the autonomous vehicle. The partition wall can extend between the floor to the ceiling of the main body. The autonomous vehicle can include a computing system configured to receive data indicative of one or more service assignments associated with the autonomous vehicle and to adjust a position of the partition wall within the interior cabin based at least in part on the one or more service assignments.
B60R 13/08 - Insulating elements, e.g. for sound insulation
B60N 2/30 - Non-dismountable seats storable in a non-use position, e.g. foldable spare seats
B60N 2/02 - Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles, the seat or part thereof being movable, e.g. adjustable
G01C 21/34 - Route searching; Route guidance
G01G 19/12 - Weighing apparatus or methods adapted for special purposes not provided for in groups for incorporation in vehicles having electrical weight-sensitive devices
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
G05D 1/02 - Control of position or course in two dimensions
G06Q 10/0631 - Resource planning, allocation, distributing or scheduling for enterprises or organisations
G06Q 30/0283 - Price estimation or determination
G06Q 50/28 - Logistics, e.g. warehousing, loading, distribution or shipping
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G08G 1/01 - Detecting movement of traffic to be counted or controlled
15.
Systems and Methods for Interactive Prediction and Planning
Example aspects of the present disclosure describe determining, using a machine-learned model framework, a motion trajectory for an autonomous platform. The motion trajectory can be determined based at least in part on a plurality of costs based at least in part on a distribution of probabilities determined conditioned on the motion trajectory.
Systems and methods for controlling an autonomous vehicle are provided. In one example embodiment, a computer-implemented method includes obtaining, from an autonomy system, data indicative of a planned trajectory of the autonomous vehicle through a surrounding environment. The method includes determining a region of interest in the surrounding environment based at least in part on the planned trajectory. The method includes controlling one or more first sensors to obtain data indicative of the region of interest. The method includes identifying one or more objects in the region of interest, based at least in part on the data obtained by the one or more first sensors. The method includes controlling the autonomous vehicle based at least in part on the one or more objects identified in the region of interest.
Generally, the disclosed systems and methods utilize multi-task machine-learned models for object intention determination in autonomous driving applications. For example, a computing system can receive sensor data obtained relative to an autonomous vehicle and map data associated with a surrounding geographic environment of the autonomous vehicle. The sensor data and map data can be provided as input to a machine-learned intent model. The computing system can receive a jointly determined prediction from the machine-learned intent model for multiple outputs including at least one detection output indicative of one or more objects detected within the surrounding environment of the autonomous vehicle, a first corresponding forecasting output descriptive of a trajectory indicative of an expected path of the one or more objects towards a goal location, and/or a second corresponding forecasting output descriptive of a discrete behavior intention determined from a predefined group of possible behavior intentions.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
18.
Systems and Methods for Generating Synthetic Sensor Data via Machine Learning
The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned model can predict one or more dropout probabilities for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
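A minimal sketch of the two stages follows: a physics-based ray cast to produce the initial point cloud, then a learned dropout step that removes points a real sensor would likely miss. The flat ground plane and the hand-written sigmoid dropout model are both assumptions for illustration, standing in for the 3-D map and the machine-learned model.

```python
import numpy as np

rng = np.random.default_rng(2)

def raycast(origin, directions, ground_z=-2.0):
    """Toy physics-based ray casting against a flat ground plane at z = ground_z:
    returns the 3-D point where each downward ray hits the plane."""
    t = (ground_z - origin[2]) / directions[:, 2]   # ray parameter at intersection
    return origin + t[:, None] * directions

def dropout_probability(points):
    """Stand-in for the learned model: farther returns are more likely dropped,
    mimicking weak-intensity LiDAR returns that a real sensor misses."""
    dist = np.linalg.norm(points, axis=1)
    return 1.0 / (1.0 + np.exp(-(dist - 25.0) / 5.0))   # sigmoid over range

origin = np.array([0.0, 0.0, 1.5])                  # sensor height
angles = np.linspace(0, 2 * np.pi, 100, endpoint=False)
directions = np.stack([np.cos(angles), np.sin(angles), -0.2 * np.ones(100)], axis=1)

cloud = raycast(origin, directions)                 # initial simulated point cloud
keep = rng.random(100) > dropout_probability(cloud)
adjusted = cloud[keep]                              # adjusted, more realistic cloud
print(cloud.shape, adjusted.shape)
```

In the disclosed approach the dropout probabilities come from a trained network conditioned on the scene, so the simulated cloud inherits the sensor's real-world failure patterns.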
Systems and methods for vehicle message signing are provided. A method includes obtaining, by a vehicle computing system of an autonomous vehicle, a computing system state associated with the vehicle computing system and a message from at least one remote process running on a computing device remote from the vehicle computing system. The message is associated with an intended recipient process running on the vehicle computing system. The method includes determining an originating sender for the message. The originating sender is indicative of a remote process that generated the message. The method includes determining a routing action for the message based on a comparison of the originating sender and the computing system state. The routing action includes at least one of a discarding action or a forwarding action to the intended recipient process. The method includes performing the routing action for the message.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04W 4/44 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians, for communication between vehicles and infrastructures, e.g. vehicle-to-cloud or vehicle-to-home
20.
Multi-Model Switching on a Collision Mitigation System
Systems and methods for controlling an autonomous vehicle are provided. In one example embodiment, a computer-implemented method includes receiving data indicative of an operating mode of the vehicle, wherein the vehicle is configured to operate in a plurality of operating modes. The method includes determining one or more response characteristics of the vehicle based at least in part on the operating mode of the vehicle, each response characteristic indicating how the vehicle responds to a potential collision. The method includes controlling the vehicle based at least in part on the one or more response characteristics.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
B60W 10/184 - Conjoint control of vehicle sub-units of different type or different function including control of braking systems with wheel brakes
B60W 10/20 - Conjoint control of vehicle sub-units of different type or different function including control of steering systems
B60W 30/08 - Predicting or avoiding probable or impending collision
G01S 13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
G05D 1/02 - Control of position or course in two dimensions
B60W 50/08 - Interaction between the driver and the driving assistance system
B60W 30/095 - Predicting travel path or likelihood of collision
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60W 50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
21.
End-To-End Interpretable Motion Planner for Autonomous Vehicles
Systems and methods for generating motion plans including target trajectories for autonomous vehicles are provided. An autonomous vehicle may include or access a machine-learned motion planning model including a backbone network configured to generate a cost volume including data indicative of a cost associated with future locations of the autonomous vehicle. The cost volume can be generated from raw sensor data as part of motion planning for the autonomous vehicle. The backbone network can generate intermediate representations associated with object detections and object predictions. The motion planning model can include a trajectory generator configured to evaluate one or more potential trajectories for the autonomous vehicle and to select a target trajectory based at least in part on the cost volume generated by the backbone network.
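Selecting a target trajectory against a cost volume can be sketched as follows: sample candidate trajectories, sum the cost volume along each, and pick the cheapest. The cost volume, the straight-line candidate sampler, and the grid layout below are all illustrative assumptions in place of the learned backbone and real trajectory sampler.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical cost volume: cost_volume[t, y, x] is the cost of the ego vehicle
# occupying cell (x, y) at future time step t (stand-in for the backbone output).
T, H, W = 5, 20, 20
cost_volume = rng.random((T, H, W))
cost_volume[:, :, 10:] += 5.0                       # e.g. the oncoming lane is expensive

def trajectory_cost(traj, cost_volume):
    """Sum the cost volume along a candidate trajectory of (x, y) cells."""
    return sum(cost_volume[t, y, x] for t, (x, y) in enumerate(traj))

def candidate(vx, vy, start=(5, 5), steps=5):
    """A simple straight-line candidate trajectory (real samplers obey vehicle dynamics)."""
    x0, y0 = start
    return [(min(W - 1, x0 + vx * t), min(H - 1, y0 + vy * t)) for t in range(steps)]

candidates = [candidate(vx, vy) for vx in range(0, 3) for vy in range(0, 3)]
costs = [trajectory_cost(c, cost_volume) for c in candidates]
best = candidates[int(np.argmin(costs))]
print(len(best))                                    # the selected target trajectory has 5 waypoints
```

Because the selection is a min over explicit per-cell costs, the planner's choice remains interpretable: the cost volume shows why one trajectory beat another.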
G05D 1/02 - Control of position or course in two dimensions
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
G01C 21/32 - Structuring or formatting of map data
G01C 21/34 - Route searching; Route guidance
A LIDAR sensor for an autonomous vehicle (AV) can include one or more lasers outputting one or more laser beams, one or more non-mechanical optical components to (i) receive the one or more laser beams, (ii) configure a field of view of the LIDAR sensor, and (iii) output modulated frequencies from the one or more laser beams, and one or more photodetectors to detect return signals based on the outputted modulated frequencies from the one or more laser beams.
G01S 17/42 - Simultaneous measurement of distance and other co-ordinates
G01S 17/89 - Lidar systems specially adapted for specific applications for mapping or imaging
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups , , of systems according to group
G01S 17/931 - Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G02B 26/12 - Scanning systems using multifaceted mirrors
23.
Jointly Learnable Behavior and Trajectory Planning for Autonomous Vehicles
Systems and methods for generating motion plans for autonomous vehicles are provided. An autonomous vehicle can include a machine-learned motion planning system including one or more machine-learned models configured to generate target trajectories for the autonomous vehicle. The model(s) include a behavioral planning stage configured to receive situational data based at least in part on the one or more outputs of the set of sensors and to generate behavioral planning data based at least in part on the situational data and a unified cost function. The model(s) include a trajectory planning stage configured to receive the behavioral planning data from the behavioral planning stage and to generate target trajectory data for the autonomous vehicle based at least in part on the behavioral planning data and the unified cost function.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
G05D 1/02 - Control of position or course in two dimensions
24.
Systems and Methods for Actor Motion Forecasting within a Surrounding Environment of an Autonomous Vehicle
Systems and methods are provided for forecasting the motion of actors within a surrounding environment of an autonomous platform. For example, a computing system of an autonomous platform can use machine-learned model(s) to generate actor-specific graphs with past motions of actors and the local map topology. The computing system can project the actor-specific graphs of all actors to a global graph. The global graph can allow the computing system to determine which actors may interact with one another by propagating information over the global graph. The computing system can distribute the interactions determined using the global graph to the individual actor-specific graphs. The computing system can then predict a motion trajectory for an actor based on the associated actor-specific graph, which captures the actor-to-actor interactions and actor-to-map relations.
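The global-graph propagation step can be sketched as simple message passing over a proximity graph: actors close enough to interact exchange features, and the mixed features are sent back to the per-actor graphs. The features, radius threshold, and averaging rule below are illustrative stand-ins for the learned components.

```python
import numpy as np

# Per-actor features summarizing past motion and local map topology
# (stand-ins for learned actor-specific graph encodings).
actor_feats = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
positions = np.array([[0.0, 0.0], [2.0, 0.0], [50.0, 50.0]])

def global_adjacency(positions, radius=10.0):
    """Connect actors in the global graph only if they are close enough to interact."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    adj = (dist < radius).astype(float)
    return adj / adj.sum(axis=1, keepdims=True)     # row-normalize for averaging

def propagate(feats, adj, steps=2):
    """Message passing over the global graph: each actor's feature is mixed
    with its interacting neighbors', then sent back to the per-actor graphs."""
    for _ in range(steps):
        feats = 0.5 * feats + 0.5 * (adj @ feats)
    return feats

adj = global_adjacency(positions)
interacted = propagate(actor_feats, adj)

# Actors 0 and 1 interact, so their features converge; actor 2 is isolated.
print(np.round(interacted, 3))
```

Each actor's final feature thus captures both its own history and the influence of nearby actors, which is what lets the per-actor trajectory predictor account for interactions.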
The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned geometry model can predict one or more adjusted depths for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
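The hybrid pipeline above has two steps worth sketching: a physics-based ray cast that yields an initial point cloud, and a learned geometry model that adjusts per-point depths. The linear correction below is a stand-in for the deep network, and the flat-wall scene and coefficients are invented:

```python
def ray_cast(ray_dirs, wall_depth=20.0):
    """Idealized physics step: every ray hits a flat wall at wall_depth,
    producing an initial (direction, depth) point cloud."""
    return [(d, wall_depth) for d in ray_dirs]

def adjust_depths(points, w=0.95, b=0.3):
    """Learned correction (stand-in for the geometry model): depth' = w*depth + b,
    nudging idealized returns toward realistic sensor behavior."""
    return [(d, w * depth + b) for d, depth in points]

initial = ray_cast([0.0, 0.1, 0.2])
adjusted = adjust_depths(initial)
```

The structure, not the numbers, is the point: simulation realism comes from composing a cheap physical prior with a learned residual correction.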
Systems and methods for controlling the motion of an autonomous vehicle are provided. In one example embodiment, a computer-implemented method includes obtaining data associated with an object within a surrounding environment of an autonomous vehicle. The data associated with the object is indicative of a predicted motion trajectory of the object. The method includes determining a vehicle action sequence based at least in part on the predicted motion trajectory of the object. The vehicle action sequence is indicative of a plurality of vehicle actions for the autonomous vehicle at a plurality of respective time steps associated with the predicted motion trajectory. The method includes determining a motion plan for the autonomous vehicle based at least in part on the vehicle action sequence. The method includes causing the autonomous vehicle to initiate motion control in accordance with at least a portion of the motion plan.
B60W 30/095 - Predicting travel path or likelihood of collision
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. automatic pilot
G05D 1/02 - Control of position or course in two dimensions
B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVER ASSISTANCE SYSTEMS NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of road vehicle driver assistance systems not related to the control of a particular sub-unit
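A vehicle action sequence time-aligned with an object's predicted trajectory, as described in the abstract above, can be sketched as follows. The 1-D world, the GO/BRAKE action set, and the safety gap are all hypothetical:

```python
def action_sequence(predicted_obj, ego_start=0.0, speed=5.0, safe_gap=8.0):
    """One vehicle action per predicted-object time step: GO if advancing keeps
    a safe gap to the object at that step, else BRAKE (hold position)."""
    actions, ego = [], ego_start
    for obj_pos in predicted_obj:
        nxt = ego + speed
        if abs(obj_pos - nxt) >= safe_gap:
            actions.append(("GO", nxt))
            ego = nxt
        else:
            actions.append(("BRAKE", ego))
    return actions

obj_traj = [20.0, 18.0, 16.0, 30.0]     # object approaches, then clears
plan = action_sequence(obj_traj)
```

Each action is keyed to a time step of the prediction, so the resulting sequence can feed directly into a motion plan covering the same horizon.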
27.
Multi-Channel Light Detection and Ranging (LIDAR) Unit Having a Telecentric Lens Assembly and Single Circuit Board for Emitters and Detectors
A LIDAR unit includes a housing defining a cavity. The LIDAR unit further includes a plurality of emitters disposed on a circuit board within the cavity. Each of the emitters emits a laser beam along a transmit path. The LIDAR unit further includes a first telecentric lens assembly positioned within the cavity and along the transmit path such that the laser beam emitted from each of the plurality of emitters passes through the first telecentric lens assembly. The LIDAR unit further includes a second telecentric lens assembly positioned within the cavity and along a receive path such that a plurality of reflected laser beams entering the cavity pass through the second telecentric lens assembly. The first telecentric lens assembly and the second telecentric lens assembly each include a field flattening lens and at least one other lens.
Systems and methods for facilitating communication with autonomous vehicles are provided. In one example embodiment, a computing system can obtain a first type of sensor data (e.g., camera image data) associated with a surrounding environment of an autonomous vehicle and/or a second type of sensor data (e.g., LIDAR data) associated with the surrounding environment of the autonomous vehicle. The computing system can generate overhead image data indicative of at least a portion of the surrounding environment of the autonomous vehicle based at least in part on the first and/or second types of sensor data. The computing system can determine one or more lane boundaries within the surrounding environment of the autonomous vehicle based at least in part on the overhead image data indicative of at least the portion of the surrounding environment of the autonomous vehicle and a machine-learned lane boundary detection model.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G05D 1/02 - Control of position or course in two dimensions
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups G01S 13/00, G01S 15/00 or G01S 17/00
G01S 17/89 - Lidar systems specially adapted for specific applications for mapping or imaging
G01S 17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
G01S 17/931 - Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
G06V 20/56 - Context or environment of the image exterior to a vehicle from on-board sensors
G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
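The lane boundary pipeline above fuses two sensor modalities into an overhead image before a learned detector runs on it. A minimal sketch follows; the element-wise fusion, the column-mean threshold detector (standing in for the machine-learned lane boundary model), and all values are assumptions:

```python
def overhead_image(camera, lidar_intensity):
    """Element-wise fusion of two equally sized overhead rasters."""
    return [[(c + l) / 2.0 for c, l in zip(cam_row, lid_row)]
            for cam_row, lid_row in zip(camera, lidar_intensity)]

def lane_boundary_columns(img, thresh=0.5):
    """Toy detector: columns whose mean response exceeds thresh are flagged
    as lane-boundary candidates."""
    cols = range(len(img[0]))
    return [x for x in cols
            if sum(row[x] for row in img) / len(img) > thresh]

cam   = [[0.9, 0.1, 0.8], [0.8, 0.2, 0.9]]   # overhead camera raster
lidar = [[0.7, 0.0, 0.9], [0.9, 0.1, 0.7]]   # overhead LIDAR intensity raster
boundaries = lane_boundary_columns(overhead_image(cam, lidar))
```

Working in a common overhead frame is what lets the two sensor types be combined with simple per-cell arithmetic before detection.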
Systems, methods, tangible non-transitory computer-readable media, and devices associated with the operation of a vehicle are provided. For example, a vehicle computing system can receive occupancy data that includes information associated with occupancy of a vehicle that includes seats. One or more states of the vehicle can be determined. The states of the vehicle can include a disposition of any object that is within the vehicle. Further, a configuration of the seats in the vehicle can be determined based on the occupancy data and the states of the vehicle. The configuration can include a disposition of the seats inside the vehicle. Furthermore, at least one of the seats can be adjusted based on the configuration that was determined.
B60N 2/01 - Arrangement of seats relative to one another
B60N 2/02 - Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles the seat or part thereof being movable, e.g. adjustable
B60N 3/00 - Arrangements or adaptations of other passenger fittings, not otherwise provided for
B60N 2/06 - Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles the seat or part thereof being movable, e.g. adjustable the whole seat being slidable
B60N 2/00 - Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles
A47C 3/04 - Stackable chairs
B60N 2/14 - Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles the seat or part thereof being movable, e.g. adjustable the whole seat being movable rotatable, e.g. to permit easy access
Systems, methods, tangible non-transitory computer-readable media, and devices for associating objects are provided. For example, the disclosed technology can receive sensor data associated with the detection of objects over time. An association dataset can be generated and can include information associated with object detections of the objects at a most recent time interval and object tracks of the objects at time intervals in the past. A subset of the association dataset including the object detections that satisfy one or more association subset criteria can be determined. Association scores for the object detections in the subset of the association dataset can be determined. Further, the object detections can be associated with the object tracks based on the association scores for each of the object detections in the subset of the association dataset that satisfy one or more association criteria.
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 20/30 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING Scene-specific elements in albums, collections or shared content, e.g. social network photos or video
G06V 20/56 - Context or environment of the image exterior to a vehicle from on-board sensors
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
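The gate-score-commit flow in the association abstract above can be sketched as follows. The distance gate (an example "association subset criterion"), the inverse-distance score, the minimum-score "association criterion", and greedy one-to-one matching are all illustrative choices, not the disclosed method:

```python
def associate(detections, tracks, gate=10.0, min_score=0.2):
    """Gate detection-track pairs by distance, score survivors, then greedily
    commit the best-scoring pairs one-to-one."""
    pairs = []
    for di, d in enumerate(detections):
        for ti, t in enumerate(tracks):
            dist = abs(d - t)
            if dist <= gate:                    # association-subset criterion
                pairs.append((1.0 / (1.0 + dist), di, ti))
    pairs.sort(reverse=True)                    # best score first
    used_d, used_t, out = set(), set(), {}
    for score, di, ti in pairs:
        if score >= min_score and di not in used_d and ti not in used_t:
            out[di] = ti                        # association criterion satisfied
            used_d.add(di)
            used_t.add(ti)
    return out

# detections at the most recent time step, tracks extrapolated from the past
matches = associate([0.5, 9.0, 40.0], [0.0, 10.0])
```

Detection 2 sits outside every gate, so it survives as an unmatched detection, typically seeding a new track.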
31.
Light Detection and Ranging (LIDAR) System Having a Polarizing Beam Splitter
A LIDAR system includes a plurality of LIDAR units. Each of the LIDAR units includes a housing defining a cavity. Each of the LIDAR units further includes a plurality of emitters disposed within the cavity. Each of the plurality of emitters is configured to emit a laser beam. The LIDAR system includes a rotating mirror and a retarder. The retarder is configurable in at least a first mode and a second mode to control a polarization state of a plurality of laser beams emitted from each of the plurality of LIDAR units. The LIDAR system includes a polarizing beam splitter positioned relative to the retarder such that the polarizing beam splitter receives a plurality of laser beams exiting the retarder. The polarizing beam splitter is configured to transmit or reflect the plurality of laser beams exiting the retarder based on the polarization state of the laser beams exiting the retarder.
G01S 7/499 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups G01S 13/00, G01S 15/00 or G01S 17/00 using polarisation effects
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01S 17/931 - Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
B60W 30/095 - Predicting travel path or likelihood of collision
B60W 30/09 - Taking automatic action to avoid collision, e.g. braking and steering
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G02B 6/27 - Optical coupling means with polarisation selective and adjusting means
32.
Systems and Methods for Seat Reconfiguration for Autonomous Vehicles
Systems and methods for reconfiguring seats of an autonomous vehicle are provided. The method includes obtaining service request data that includes a service selection and request characteristics. The method includes obtaining data describing an initial seat configuration for each of a plurality of seats of an autonomous vehicle assigned to the service request. The initial seat configuration can include a seat position and a seat orientation for each of the plurality of seats. The method includes generating, based on the initial seat configuration and the service request data, seat adjustment instructions configured to adjust the initial seat configuration of at least one of the seats. The method includes providing the seat adjustment instructions to the autonomous vehicle assigned to the service request.
B60N 2/02 - Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles the seat or part thereof being movable, e.g. adjustable
Systems and methods for performing semantic segmentation of three-dimensional data are provided. In one example embodiment, a computing system can be configured to obtain sensor data including three-dimensional data associated with an environment. The three-dimensional data can include a plurality of points and can be associated with one or more times. The computing system can be configured to determine data indicative of a two-dimensional voxel representation associated with the environment based at least in part on the three-dimensional data. The computing system can be configured to determine a classification for each point of the plurality of points within the three-dimensional data based at least in part on the two-dimensional voxel representation associated with the environment and a machine-learned semantic segmentation model. The computing system can be configured to initiate one or more actions based at least in part on the per-point classifications.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06T 3/00 - Geometric image transformations in the plane of the image
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups G01S 13/00, G01S 15/00 or G01S 17/00
G06V 20/40 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING Scene-specific elements in video content
G06V 20/56 - Context or environment of the image exterior to a vehicle from on-board sensors
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/776 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation - Performance evaluation
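The "two-dimensional voxel representation" in the segmentation abstract above can be sketched as a bird's-eye grid: 3-D points are binned into ground-plane cells, a cell-level rule (a toy height threshold standing in for the machine-learned semantic segmentation model) labels cells, and each point inherits its cell's label. Cell size, labels, and threshold are assumptions:

```python
def to_2d_voxels(points, cell=1.0):
    """Map (x, y, z) points to a {(ix, iy): max_z} bird's-eye grid, so height
    becomes a per-cell feature of a 2-D representation."""
    grid = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        grid[key] = max(grid.get(key, float("-inf")), z)
    return grid

def classify_points(points, grid, cell=1.0, tall=1.0):
    """Per-point classification via the point's cell: 'obstacle' if the cell's
    max height exceeds `tall`, else 'ground'."""
    labels = []
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        labels.append("obstacle" if grid[key] > tall else "ground")
    return labels

pts = [(0.2, 0.3, 0.0), (0.8, 0.4, 2.0), (3.5, 3.5, 0.1)]
labels = classify_points(pts, to_2d_voxels(pts))
```

Note the low point sharing a cell with a tall one is labeled by its cell, which is the characteristic trade-off of pushing 3-D data through a 2-D representation.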
An autonomous vehicle computing system can include a primary perception system configured to receive a plurality of sensor data points as input and generate primary perception data representing a plurality of classifiable objects and a plurality of paths representing tracked motion of the plurality of classifiable objects. The autonomous vehicle computing system can include a secondary perception system configured to receive the plurality of sensor data points as input, cluster a subset of the plurality of sensor data points of the sensor data to generate one or more sensor data point clusters representing one or more unclassifiable objects that are not classifiable by the primary perception system, and generate secondary path data representing tracked motion of the one or more unclassifiable objects. The autonomous vehicle computing system can generate fused perception data based on the primary perception data and the one or more unclassifiable objects.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups G01S 13/00, G01S 15/00 or G01S 17/00
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. automatic pilot
G01S 17/66 - Tracking systems using electromagnetic waves other than radio waves
G01S 17/58 - Velocity or trajectory determination systems; Sense-of-movement determination systems
G05D 1/02 - Control of position or course in two dimensions
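The fallback path in the two-tier perception abstract above can be sketched simply: points the primary detector has claimed are removed, and the leftovers are grouped by distance-based single-link clustering (a stand-in for the secondary perception system's clustering), so unclassifiable objects still become trackable blobs. The 1-D points and `eps` gap are invented:

```python
def cluster_leftovers(points, claimed, eps=2.0):
    """Single-link clustering of 1-D sensor points not claimed by the primary
    perception system: consecutive points within `eps` join the same cluster."""
    rest = sorted(p for p in points if p not in claimed)
    clusters = []
    for p in rest:
        if clusters and p - clusters[-1][-1] <= eps:
            clusters[-1].append(p)
        else:
            clusters.append([p])
    return clusters

sensor_points = [0.0, 1.0, 1.5, 20.0, 21.0, 50.0]
primary_claimed = {0.0, 1.0, 1.5}          # e.g. points explained by a detected car
unknown_objects = cluster_leftovers(sensor_points, primary_claimed)
```

Fused perception then carries both the classified detections and these anonymous clusters, so nothing physical is dropped just because it lacks a class label.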
Systems and methods for controlling an autonomous vehicle are provided. In one example embodiment, a computer-implemented method includes obtaining sensor data indicative of a surrounding environment of the autonomous vehicle, the surrounding environment including one or more occluded sensor zones. The method includes determining that a first occluded sensor zone of the occluded sensor zone(s) is occupied based at least in part on the sensor data. The method includes, in response to determining that the first occluded sensor zone is occupied, controlling the autonomous vehicle to travel clear of the first occluded sensor zone.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. automatic pilot
36.
Systems and Methods for Generating Motion Forecast Data for Actors with Respect to an Autonomous Vehicle and Training a Machine Learned Model for the Same
Systems and methods for generating motion forecast data for actors with respect to an autonomous vehicle and training a machine learned model for the same are disclosed. The computing system can include an object detection model and a graph neural network including a plurality of nodes and a plurality of edges. The computing system can be configured to input sensor data into the object detection model; receive object detection data describing the location of the plurality of the actors relative to the autonomous vehicle as an output of the object detection model; input the object detection data into the graph neural network; iteratively update a plurality of node states respectively associated with the plurality of nodes; and receive, as an output of the graph neural network, the motion forecast data with respect to the plurality of actors.
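The "iteratively update a plurality of node states" step above is the message-passing core of a graph neural network. A minimal skeleton follows; the averaging update, mixing constant, and scalar states stand in for the learned update function and are not the disclosed model:

```python
def iterate_node_states(states, edges, rounds=2, mix=0.5):
    """Synchronous message passing: each node (actor) repeatedly blends the
    mean of its neighbors' states into its own state."""
    states = dict(states)
    neighbors = {n: [] for n in states}
    for a, b in edges:                     # undirected edges between actors
        neighbors[a].append(b)
        neighbors[b].append(a)
    for _ in range(rounds):
        nxt = {}
        for n, s in states.items():
            if neighbors[n]:
                agg = sum(states[m] for m in neighbors[n]) / len(neighbors[n])
                nxt[n] = (1 - mix) * s + mix * agg
            else:
                nxt[n] = s
        states = nxt                       # one synchronous round == one GNN layer
    return states

out = iterate_node_states({"a": 0.0, "b": 4.0}, [("a", "b")])
```

After enough rounds, connected actors' states reflect each other, which is what lets a per-node readout produce interaction-aware motion forecasts.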
Techniques for improving the performance of an autonomous vehicle (AV) are described herein. A system can determine a plan for the AV in a driving scenario that optimizes an initial cost function of a control algorithm of the AV. The system can obtain data describing an observed human driving path in the driving scenario. Additionally, the system can determine, for each cost dimension in a plurality of cost dimensions, a quantity that compares an estimated cost of the plan to the observed cost of the observed human driving path. Moreover, the system can determine a function of a sum of the quantities determined for each cost dimension. Subsequently, the system can use an optimization algorithm to adjust one or more weights of a plurality of weights applied to the plurality of cost dimensions to optimize the function of the sum of the quantities.
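The per-dimension comparison and weight adjustment described above can be sketched as one coordinate step. The difference-based quantity, the learning rate, and the sign convention are illustrative assumptions standing in for the abstract's optimization algorithm:

```python
def quantities(est_costs, human_costs):
    """One quantity per cost dimension: the planner's estimated cost minus the
    cost observed on the human driving path."""
    return [e - h for e, h in zip(est_costs, human_costs)]

def adjust_weights(weights, qs, lr=0.1):
    """Shrink a weight when its dimension charges the human path less than the
    planner's estimate (positive quantity), grow it otherwise; weights stay
    non-negative."""
    return [max(0.0, w - lr * q) for w, q in zip(weights, qs)]

est = [1.0, 2.0, 0.5]   # planner's estimated per-dimension costs
hum = [1.0, 0.5, 1.5]   # per-dimension costs measured on the human path
w = adjust_weights([1.0, 1.0, 1.0], quantities(est, hum))
```

Iterating this step drives the weighted cost function toward one under which the observed human path scores well, the usual inverse-optimization recipe.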
Aspects of the present disclosure involve systems, methods, and devices for mitigating Lidar cross-talk. Consistent with some embodiments, a Lidar system is configured to include one or more noise source detectors that detect noise signals that may produce noise in return signals received at the Lidar system. A noise source detector comprises a light sensor to receive a noise signal produced by a noise source and a timing circuit to provide a timing signal indicative of a direction of the noise source relative to an autonomous vehicle on which the Lidar system is mounted. A noise source may be an external Lidar system or a surface in the surrounding environment that is reflecting light signals such as those emitted by an external Lidar system.
G01S 7/4865 - Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
G01S 7/4863 - Detector arrays, e.g. charge-transfer gates
G01S 17/931 - Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups G01S 13/00, G01S 15/00 or G01S 17/00
G01S 7/495 - Counter-measures or counter-counter-measures
39.
Passenger Seats and Doors for an Autonomous Vehicle
An autonomous vehicle can include one or more configurable passenger seats to accommodate a plurality of different seating configurations. For instance, the one or more passenger seats can include a passenger seat defining a seating orientation. The passenger seat can be configurable in a first configuration in which the seating orientation is directed towards a forward end of the autonomous vehicle and a second configuration in which the seating orientation is directed towards a rear end of the autonomous vehicle. The passenger seat can include a seatback rotatable about a pivot point on a base of the passenger seat to switch between the first configuration and the second configuration. Alternatively, or additionally, the autonomous vehicle can include a door assembly pivotably fixed to a vehicle body of the autonomous vehicle such that a swept path of the door assembly when moving between an open position and a closed position is reduced.
B60N 2/02 - Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles the seat or part thereof being movable, e.g. adjustable
B60N 2/832 - Head-rests movable or adjustable vertically slidable movable to a non-use or stowed position
B60N 2/00 - Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles
B60N 2/90 - Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles - Details or parts not otherwise provided for
A planar-beam, light detection and ranging (PLADAR) system can include a laser scanner that emits a planar-beam, and a detector array that detects reflected light from the planar beam.
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01S 17/931 - Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
G01C 21/34 - Route searching; Route guidance
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. automatic pilot
41.
Systems and Methods for Remote Status Detection of Autonomous Vehicles
Systems and methods are provided for remotely detecting a status associated with an autonomous vehicle and generating control actions in response to such detections. In one example, a computing system can access a third-party communication associated with an autonomous vehicle. The computing system can determine, based at least in part on the third-party communication, a predetermined identifier associated with the autonomous vehicle. The computing system can determine, based at least in part on the third-party communication, a status associated with the autonomous vehicle, and transmit one or more control messages to the autonomous vehicle based at least in part on the predetermined identifier and the status associated with the autonomous vehicle.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. automatic pilot
G08G 1/017 - Detecting movement of traffic to be counted or controlled identifying vehicles
G07C 5/00 - Registering or indicating the working of vehicles
G05D 1/02 - Control of position or course in two dimensions
Example aspects of the present disclosure describe a scene generator for simulating scenes in an environment. For example, snapshots of simulated traffic scenes can be generated by sampling a joint probability distribution trained on real-world traffic scenes. In some implementations, samples of the joint probability distribution can be obtained by sampling a plurality of factorized probability distributions for a plurality of objects for sequential insertion into the scene.
G08G 1/01 - Detecting movement of traffic to be counted or controlled
G06V 20/54 - Traffic, e.g. cars on the road, trains or boats
G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
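Sequential insertion from factorized distributions, as in the scene-generator abstract above, can be sketched with a toy conditional: each new object is drawn given the partial scene built so far. The uniform proposal, rejection rule, and minimum gap are invented stand-ins for distributions learned from real-world traffic scenes:

```python
import random

def sample_scene(n_objects, seed=0, min_gap=5.0):
    """Draw objects one at a time; each draw is conditioned on the partial
    scene by rejecting positions that violate a plausibility constraint."""
    rng = random.Random(seed)
    scene = []
    for _ in range(n_objects):
        while True:
            pos = rng.uniform(0.0, 100.0)     # factorized proposal for one object
            if all(abs(pos - p) >= min_gap for p in scene):
                scene.append(pos)             # conditionally accepted insertion
                break
    return scene

scene = sample_scene(4)
```

Chaining per-object conditionals like this is exactly how a joint distribution factorizes, so sampling object-by-object still yields a draw from the (approximate) joint.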
43.
Systems and Methods for Onboard Vehicle Certificate Distribution
Systems and methods for onboard vehicle certificate distribution are provided. A system can include a plurality of devices including a master device for authenticating processes and one or more requesting devices. The master device can include a master host security service configured to authenticate the one or more processes of the system. The master host security service can run a certificate authority to generate a root certificate and a private root key corresponding to the root certificate. A respective host security service can receive a request for a process manifest for a requesting process of a respective device from a respective orchestration service. The respective host security service can generate the process manifest for the requesting process and provide the process manifest to the requesting process. The requesting process can use the process manifest to communicate with the certificate authority to obtain an operational certificate based on the root certificate.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06F 3/06 - Digital input from, or digital output to, record carriers
B60R 25/24 - Means to switch the anti-theft system on or off using electronic identifiers containing a code not memorised by the user
44.
LIDAR Sensor Assembly Including Joint Coupling Features
A light detection and ranging (LIDAR) sensor assembly can comprise an optics assembly that includes a LIDAR sensor and a set of dovetail joint inserts. The LIDAR sensor assembly can further include a frame comprising a set of dovetail joint septums coupled to the set of dovetail joint inserts of the optics assembly.
An on-board computing system for a vehicle is configured to generate and selectively present a set of autonomy-switching directions within a navigation user interface for the operator of the vehicle. The autonomy-switching directions can inform the operator regarding changes to the vehicle's mode of autonomous operation. The on-board computing system can generate the set of autonomy-switching directions based on the vehicle's route and other information associated with the route, such as autonomous operation permissions (AOPs) for route segments that comprise the route. The on-board computing device can selectively present the autonomy-switching directions based on locations associated with anticipated changes in autonomous operations determined for the route of the vehicle, the vehicle's location, and the vehicle's speed. In addition, the on-board computing device is further configured to present audio alerts associated with the autonomy-switching directions to the operator of the vehicle.
G01C 21/36 - Input/output arrangements for on-board computers
B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
46.
Autonomous vehicle with independent auxiliary control units
An autonomous vehicle includes multiple independent control systems that provide redundancy in specific, critical safety situations that may be encountered while the autonomous vehicle is in operation.
B60W 30/08 - Predicting or avoiding probable or impending collision
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. automatic pilot
B60T 8/1755 - Brake regulation specially adapted to control the stability of the vehicle, e.g. taking into account yaw rate or transverse acceleration in a curve
G05D 1/02 - Control of position or course in two dimensions
B60W 50/02 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVER ASSISTANCE SYSTEMS NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of road vehicle driver assistance systems not related to the control of a particular sub-unit for ensuring safety in case of a failure of the driver assistance system, e.g. by diagnosing or circumventing a malfunction
47.
Systems and methods for a moveable cover panel of an autonomous vehicle
Systems and methods for a moveable cover panel of an autonomous vehicle are provided. A vehicle can include a front panel disposed proximate to the front end of the passenger compartment, a vehicle motion control device located at the front panel, and a cover panel located at the front panel. The cover panel is moveable relative to the front panel between an isolating position and an exposing position. The cover panel can isolate the vehicle motion control device from the passenger compartment when in the isolating position and expose the vehicle motion control device to the passenger compartment when in the exposing position. A method can include obtaining vehicle data identifying an operational mode, state, and/or status of the vehicle, determining a first position of the cover panel, and initiating a positional change for the cover panel based on the vehicle data and the first position.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60R 21/205 - Arrangements for storing inflatable members in their non-use or deflated condition; Arrangement or mounting of air bag modules or components in dashboards
B62D 1/183 - Steering columns yieldable or adjustable, e.g. tiltable adjustable between in-use and out-of-use positions, e.g. to improve access
48.
Continuous convolution and fusion in neural networks
Systems and methods are provided for machine-learned models including convolutional neural networks that generate predictions using continuous convolution techniques. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can perform, with a machine-learned convolutional neural network, one or more convolutions over input data using a continuous filter relative to a support domain associated with the input data, and receive a prediction from the machine-learned convolutional neural network. A machine-learned convolutional neural network in some examples includes at least one continuous convolution layer configured to perform convolutions over input data with a parametric continuous kernel.
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p.ex. des objets vidéo
G06V 10/80 - Fusion, c. à d. combinaison des données de diverses sources au niveau du capteur, du prétraitement, de l’extraction des caractéristiques ou de la classification
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
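As a rough illustration of the parametric continuous kernel idea (not the disclosed network architecture; the tiny tanh MLP, the neighborhood radius, and all weights below are assumptions), a continuous convolution over unordered points can be sketched as:

```python
import math

def mlp_kernel(offset, w1, w2):
    # Tiny MLP g(x_j - x_i): one tanh hidden layer producing a scalar weight.
    hidden = [math.tanh(sum(w * o for w, o in zip(row, offset))) for row in w1]
    return sum(w * h for w, h in zip(w2, hidden))

def continuous_conv(points, feats, w1, w2, radius=1.0):
    """For each point, weight neighbor features by an MLP of the continuous
    offset vector (no fixed grid) and sum: a parametric continuous kernel."""
    out = []
    for pi in points:
        acc = 0.0
        for pj, fj in zip(points, feats):
            if math.dist(pi, pj) <= radius:
                offset = [a - b for a, b in zip(pj, pi)]
                acc += mlp_kernel(offset, w1, w2) * fj
        out.append(acc)
    return out
```

Because the kernel is a function of the continuous offset rather than a grid-indexed weight table, the same layer applies to irregular supports such as LiDAR point clouds.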
49.
Automatic Robotically Steered Sensor for Targeted High Performance Perception and Vehicle Control
Disclosed are methods, systems, and non-transitory computer readable media that control an autonomous vehicle via at least two sensors. One aspect includes capturing an image of a scene ahead of the vehicle with a first sensor, identifying an object in the scene at a confidence level based on the image, determining that the confidence level of the identification is below a threshold, directing, in response to the confidence level being below the threshold, a second sensor having a field of view smaller than that of the first sensor to generate a second image including a location of the identified object, further identifying the object in the scene based on the second image, and controlling the vehicle based on the further identification of the object.
G05D 1/02 - Commande de la position ou du cap par référence à un système à deux dimensions
G06T 3/40 - Changement d'échelle d'une image entière ou d'une partie d'image
G08G 1/04 - Détection du mouvement du trafic pour le comptage ou la commande utilisant des détecteurs optiques ou ultrasonores
G08G 1/015 - Détection du mouvement du trafic pour le comptage ou la commande avec des dispositions pour distinguer différents types de véhicules, p.ex. pour distinguer les automobiles des cycles
G08G 1/01 - Détection du mouvement du trafic pour le comptage ou la commande
G06V 20/58 - Reconnaissance d’objets en mouvement ou d’obstacles, p.ex. véhicules ou piétons; Reconnaissance des objets de la circulation, p.ex. signalisation routière, feux de signalisation ou routes
G06V 20/56 - Contexte ou environnement de l’image à l’extérieur d’un véhicule à partir de capteurs embarqués
H04N 23/695 - Commande de la direction de la caméra pour modifier le champ de vision, p. ex. par un panoramique, une inclinaison ou en fonction du suivi des objets
G06V 10/80 - Fusion, c. à d. combinaison des données de diverses sources au niveau du capteur, du prétraitement, de l’extraction des caractéristiques ou de la classification
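The confidence-gated two-sensor flow described above can be sketched as a simple control loop. This is a minimal sketch under assumptions: the sensors and the detector are stand-in callables, and the threshold value is illustrative, not from the disclosure.

```python
def perceive_and_control(wide_cam, zoom_cam, detector, threshold=0.8):
    """If the wide-FOV detection is uncertain, re-image the object's location
    with a narrower-FOV (higher resolution) sensor and re-identify."""
    image = wide_cam()
    label, confidence, location = detector(image)
    if confidence < threshold:
        # Steer the second sensor at the identified object's location.
        zoomed = zoom_cam(location)
        label, confidence, location = detector(zoomed)
    return label, confidence
```

The vehicle would then be controlled from the (possibly refined) identification, which is why the second pass only runs when the first falls below the threshold.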
50.
Discrete Decision Architecture for Motion Planning System of an Autonomous Vehicle
The present disclosure provides autonomous vehicle systems and methods that include or otherwise leverage a motion planning system that generates constraints as part of determining a motion plan for an autonomous vehicle (AV). In particular, a scenario generator within a motion planning system can generate constraints based on where objects of interest are predicted to be relative to an autonomous vehicle. A constraint solver can identify navigation decisions for each of the constraints that provide a consistent solution across all constraints. The solution provided by the constraint solver can be in the form of a trajectory path determined relative to constraint areas for all objects of interest. The trajectory path represents a set of navigation decisions such that a navigation decision relative to one constraint doesn’t sacrifice an ability to satisfy a different navigation decision relative to one or more other constraints.
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
B60W 30/095 - Prévision du trajet ou de la probabilité de collision
B60W 30/12 - Maintien de la trajectoire dans une voie de circulation
B60W 50/00 - COMMANDE CONJUGUÉE DE PLUSIEURS SOUS-ENSEMBLES D'UN VÉHICULE, DE FONCTION OU DE TYPE DIFFÉRENTS; SYSTÈMES DE COMMANDE SPÉCIALEMENT ADAPTÉS AUX VÉHICULES HYBRIDES; SYSTÈMES D'AIDE À LA CONDUITE DE VÉHICULES ROUTIERS, NON LIÉS À LA COMMANDE D'UN SOUS-ENSEMBLE PARTICULIER - Détails des systèmes d'aide à la conduite des véhicules routiers qui ne sont pas liés à la commande d'un sous-ensemble particulier
B60W 30/16 - Contrôle de la distance entre les véhicules, p.ex. pour maintenir la distance avec le véhicule qui précède
G05D 1/02 - Commande de la position ou du cap par référence à un système à deux dimensions
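A toy version of the constraint-solver step can be sketched as a brute-force search for one navigation decision per constraint such that every pair of decisions is mutually satisfiable. The constraint encoding and compatibility predicate below are illustrative assumptions, not the disclosed solver.

```python
from itertools import product

def solve_constraints(constraints, compatible):
    """Pick one discrete navigation decision per constraint such that the
    chosen decisions are pairwise consistent; return None if impossible."""
    options = [c["decisions"] for c in constraints]
    for choice in product(*options):
        if all(compatible(a, b) for i, a in enumerate(choice)
               for b in choice[i + 1:]):
            return dict(zip((c["id"] for c in constraints), choice))
    return None
```

In a real planner the compatibility check would reason over constraint areas and a shared trajectory path rather than a symbolic predicate.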
Systems and methods for basis path generation are provided. In particular, a computing system can obtain a target nominal path. The computing system can determine a current pose for an autonomous vehicle. The computing system can determine, based at least in part on the current pose of the autonomous vehicle and the target nominal path, a lane change region. The computing system can determine one or more merge points on the target nominal path. The computing system can, for each respective merge point in the one or more merge points, generate a candidate basis path from the current pose of the autonomous vehicle to the respective merge point. The computing system can generate a suitability classification for each candidate basis path. The computing system can select one or more candidate basis paths based on the suitability classification for each respective candidate basis path.
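The per-merge-point loop can be sketched as follows. This is a deliberately simplified assumption-laden sketch: the candidate path is a straight-line blend to the merge point, and suitability is a toy lateral-deviation threshold rather than the disclosed classification.

```python
def generate_basis_paths(current_pose, nominal_path, merge_indices,
                         max_lateral_jump=2.0):
    """One candidate basis path per merge point on the target nominal path,
    filtered by a toy suitability classification (poses are (x, y) tuples)."""
    selected = []
    for idx in merge_indices:
        merge_point = nominal_path[idx]
        candidate = [current_pose, merge_point] + list(nominal_path[idx + 1:])
        lateral_deviation = abs(merge_point[1] - current_pose[1])
        if lateral_deviation <= max_lateral_jump:  # suitability check
            selected.append(candidate)
    return selected
```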
The present disclosure provides systems and methods to obtain feedback descriptive of autonomous vehicle failures. In particular, the systems and methods of the present disclosure can detect that a vehicle failure event occurred at an autonomous vehicle and, in response, provide an interactive user interface that enables a human located within the autonomous vehicle to enter feedback that describes the vehicle failure event. Thus, the systems and methods of the present disclosure can actively prompt and/or enable entry of feedback in response to a particular instance of a vehicle failure event, thereby enabling improved and streamlined collection of information about autonomous vehicle failures.
G06F 17/00 - TRAITEMENT ÉLECTRIQUE DE DONNÉES NUMÉRIQUES Équipement ou méthodes de traitement de données ou de calcul numérique, spécialement adaptés à des fonctions spécifiques
G07C 5/08 - Enregistrement ou indication de données de marche autres que le temps de circulation, de fonctionnement, d'arrêt ou d'attente, avec ou sans enregistrement des temps de circulation, de fonctionnement, d'arrêt ou d'attente
B60W 50/14 - Moyens d'information du conducteur, pour l'avertir ou provoquer son intervention
B60K 35/00 - Agencement ou adaptations des instruments
G01C 21/36 - Dispositions d'entrée/sortie pour des calculateurs embarqués
G05D 1/02 - Commande de la position ou du cap par référence à un système à deux dimensions
Generally, the disclosed systems and methods implement improved detection of objects in three-dimensional (3D) space. More particularly, an improved 3D object detection system can exploit continuous fusion of multiple sensors and/or integrated geographic prior map data to enhance effectiveness and robustness of object detection in applications such as autonomous driving. In some implementations, geographic prior data (e.g., geometric ground and/or semantic road features) can be exploited to enhance three-dimensional object detection for autonomous vehicle applications. In some implementations, object detection systems and methods can be improved based on dynamic utilization of multiple sensor modalities. More particularly, an improved 3D object detection system can exploit both LIDAR systems and cameras to perform very accurate localization of objects within three-dimensional space relative to an autonomous vehicle. For example, multi-sensor fusion can be implemented via continuous convolutions to fuse image data samples and LIDAR feature maps at different levels of resolution.
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p.ex. des objets vidéo
G06V 10/80 - Fusion, c. à d. combinaison des données de diverses sources au niveau du capteur, du prétraitement, de l’extraction des caractéristiques ou de la classification
G06V 20/56 - Contexte ou environnement de l’image à l’extérieur d’un véhicule à partir de capteurs embarqués
54.
Systems and Methods for Vehicle Spatial Path Sampling
Systems and methods for vehicle spatial path sampling are provided. The method includes obtaining an initial travel path for an autonomous vehicle from a first location to a second location and vehicle configuration data indicative of one or more physical constraints of the autonomous vehicle. The method includes determining one or more secondary travel paths for the autonomous vehicle from the first location to the second location based on the initial travel path and the vehicle configuration data. The method includes generating a spatial envelope based on the one or more secondary travel paths that indicates a plurality of lateral offsets from the initial travel path. Finally, the method includes generating a plurality of trajectories for the autonomous vehicle to travel from the first location to the second location such that each of the plurality of trajectories includes one or more lateral offsets identified by the spatial envelope.
G05D 1/02 - Commande de la position ou du cap par référence à un système à deux dimensions
B60W 60/00 - Systèmes d’aide à la conduite spécialement adaptés aux véhicules routiers autonomes
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
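One way to picture the spatial envelope is as per-waypoint lateral bounds derived from the spread of the secondary paths around the initial path. The representation below (paths as lists of scalar lateral positions per waypoint) is an illustrative assumption, not the disclosed data structure.

```python
def spatial_envelope(initial_path, secondary_paths):
    """Per-waypoint (min, max) lateral offsets of the secondary paths
    relative to the initial path; the initial path itself (offset 0) is
    always inside the envelope."""
    envelope = []
    for i, base in enumerate(initial_path):
        offsets = [p[i] - base for p in secondary_paths]
        envelope.append((min(offsets + [0.0]), max(offsets + [0.0])))
    return envelope
```

Sampled trajectories would then be constrained to pick lateral offsets inside these per-waypoint bounds.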
Aspects of the present disclosure involve systems, methods, and devices for fault detection in a Lidar system. A fault detection system obtains incoming Lidar data output by a Lidar system during operation of an AV system. The incoming Lidar data includes one or more data points corresponding to a fault detection target on an exterior of a vehicle of the AV system. The fault detection system accesses historical Lidar data that is based on data previously output by the Lidar system. The historical Lidar data corresponds to the fault detection target. The fault detection system performs a comparison of the incoming Lidar data with the historical Lidar data to identify any differences between the two sets of data. The fault detection system detects a fault condition occurring at the Lidar system based on the comparison.
G01S 17/931 - Systèmes lidar, spécialement adaptés pour des applications spécifiques pour prévenir les collisions de véhicules terrestres
G05D 1/02 - Commande de la position ou du cap par référence à un système à deux dimensions
G01S 7/48 - DÉTERMINATION DE LA DIRECTION PAR RADIO; RADIO-NAVIGATION; DÉTERMINATION DE LA DISTANCE OU DE LA VITESSE EN UTILISANT DES ONDES RADIO; LOCALISATION OU DÉTECTION DE LA PRÉSENCE EN UTILISANT LA RÉFLEXION OU LA RERADIATION D'ONDES RADIO; DISPOSITIONS ANALOGUES UTILISANT D'AUTRES ONDES - Détails des systèmes correspondant aux groupes , , de systèmes selon le groupe
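The incoming-versus-historical comparison over a fixed fault-detection target can be sketched with a simple drift test. The mean-intensity statistic and relative tolerance below are assumptions for illustration; the disclosure only requires that differences between the two data sets be identified.

```python
def detect_fault(incoming, historical, tolerance=0.1):
    """Flag a Lidar fault when returns from the known target on the vehicle
    exterior drift from the historical baseline by more than `tolerance`
    (relative). Inputs are lists of intensity (or range) values."""
    mean_in = sum(incoming) / len(incoming)
    mean_hist = sum(historical) / len(historical)
    return abs(mean_in - mean_hist) > tolerance * abs(mean_hist)
```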
56.
Systems and methods for remote status detection of autonomous vehicles
Systems and methods are provided for remotely detecting a status associated with an autonomous vehicle and generating control actions in response to such detections. In one example, a computing system can access a third-party communication associated with an autonomous vehicle. The computing system can determine, based at least in part on the third-party communication, a predetermined identifier associated with the autonomous vehicle. The computing system can determine, based at least in part on the third-party communication, a status associated with the autonomous vehicle, and transmit one or more control messages to the autonomous vehicle based at least in part on the predetermined identifier and the status associated with the autonomous vehicle.
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
G08G 1/017 - Détection du mouvement du trafic pour le comptage ou la commande par identification des véhicules
G05D 1/02 - Commande de la position ou du cap par référence à un système à deux dimensions
G07C 5/00 - Enregistrement ou indication du fonctionnement de véhicules
57.
System and methods for controlling state transitions using a vehicle controller
The present disclosure is directed to controlling state transitions in an autonomous vehicle. In particular, a computing system can initiate the autonomous vehicle into a no-authorization state upon startup. The computing system can receive an authorization request. The computing system can determine whether the authorization request includes a request to enter a first or a second mode of operations, wherein the first mode of operations is associated with the autonomous vehicle being operated without a human operator and the second mode of operations is associated with the autonomous vehicle being operable by a human operator. The computing system can transition the autonomous vehicle from the no-authorization state into a standby state in response to determining that the authorization request includes a request to enter the first mode of operations, or into a manual-controlled state in response to determining that the authorization request includes a request to enter the second mode of operations.
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
G05D 1/02 - Commande de la position ou du cap par référence à un système à deux dimensions
58.
Driving surface friction estimations using vehicle steering
Systems and methods are provided for generating data indicative of a friction associated with a driving surface, and for using the friction data in association with one or more vehicles. In one example, a computing system can detect a stop associated with a vehicle and initiate a steering action of the vehicle during the stop. The steering action is associated with movement of at least one tire of the vehicle relative to a driving surface. The computing system can obtain operational data associated with the steering action during the stop of the vehicle. The computing system can determine a friction associated with the driving surface based at least in part on the operational data associated with the steering action. The computing system can generate data indicative of the friction associated with the driving surface.
B60T 8/1763 - Régulation des freins spécialement adaptée pour la prévention du dérapage excessif des roues pendant la décélération, p.ex. ABS en fonction du coefficient de frottement entre les roues et la surface du sol
B60T 8/171 - Détection des paramètres utilisés pour la régulation; Mesure des valeurs utilisées pour la régulation
B60T 8/1755 - Régulation des freins spécialement adaptée pour la commande de la stabilité du véhicule, p.ex. en tenant compte du taux d'embardée ou de l'accélération transversale dans une courbe
B60W 40/068 - Calcul ou estimation des paramètres de fonctionnement pour les systèmes d'aide à la conduite de véhicules routiers qui ne sont pas liés à la commande d'un sous-ensemble particulier liés aux conditions ambiantes liés à l'état de la route coefficient de friction de la route
B60W 50/00 - COMMANDE CONJUGUÉE DE PLUSIEURS SOUS-ENSEMBLES D'UN VÉHICULE, DE FONCTION OU DE TYPE DIFFÉRENTS; SYSTÈMES DE COMMANDE SPÉCIALEMENT ADAPTÉS AUX VÉHICULES HYBRIDES; SYSTÈMES D'AIDE À LA CONDUITE DE VÉHICULES ROUTIERS, NON LIÉS À LA COMMANDE D'UN SOUS-ENSEMBLE PARTICULIER - Détails des systèmes d'aide à la conduite des véhicules routiers qui ne sont pas liés à la commande d'un sous-ensemble particulier
B60W 60/00 - Systèmes d’aide à la conduite spécialement adaptés aux véhicules routiers autonomes
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
G05D 1/02 - Commande de la position ou du cap par référence à un système à deux dimensions
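As a very rough physical intuition for the steering-based estimate (a toy Coulomb-friction sketch; the peak-torque model, parameter names, and values are all illustrative assumptions, not the disclosed method), the friction coefficient can be approximated from the torque needed to scrub a stationary tire:

```python
def estimate_friction(steering_torques, normal_force, tire_moment_arm):
    """Toy estimate: mu ~ peak steering torque observed while turning the
    tire in place, divided by (normal force on the tire x moment arm)."""
    peak_torque = max(steering_torques)
    return peak_torque / (normal_force * tire_moment_arm)
```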
Systems and methods of the present disclosure provide an improved approach for open-set instance segmentation by identifying both known and unknown instances in an environment. For example, a method can include receiving sensor point cloud input data including a plurality of three-dimensional points. The method can include determining a feature embedding and at least one of an instance embedding, class embedding, and/or background embedding for each of the plurality of three-dimensional points. The method can include determining a first subset of points associated with one or more known instances within the environment based on the class embedding and the background embedding associated with each point in the plurality of points. The method can include determining a second subset of points associated with one or more unknown instances within the environment based on the first subset of points. The method can include segmenting the input data into known and unknown instances.
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
G06K 9/62 - Méthodes ou dispositions pour la reconnaissance utilisant des moyens électroniques
G06N 3/084 - Rétropropagation, p.ex. suivant l’algorithme du gradient
G06V 20/56 - Contexte ou environnement de l’image à l’extérieur d’un véhicule à partir de capteurs embarqués
G06F 18/214 - Génération de motifs d'entraînement; Procédés de Bootstrapping, p.ex. ”bagging” ou ”boosting”
G06V 10/774 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant l’intégration et la réduction de données, p.ex. analyse en composantes principales [PCA] ou analyse en composantes indépendantes [ ICA] ou cartes auto-organisatrices [SOM]; Séparation aveugle de source méthodes de Bootstrap, p.ex. "bagging” ou “boosting”
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
G06V 10/44 - Extraction de caractéristiques locales par analyse des parties du motif, p.ex. par détection d’arêtes, de contours, de boucles, d’angles, de barres ou d’intersections; Analyse de connectivité, p.ex. de composantes connectées
G06V 20/58 - Reconnaissance d’objets en mouvement ou d’obstacles, p.ex. véhicules ou piétons; Reconnaissance des objets de la circulation, p.ex. signalisation routière, feux de signalisation ou routes
G06V 10/75 - Appariement de motifs d’image ou de vidéo; Mesures de proximité dans les espaces de caractéristiques utilisant l’analyse de contexte; Sélection des dictionnaires
60.
Systems and methods for generating synthetic light detection and ranging data via machine learning
The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned geometry model can predict one or more adjusted depths for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
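The two-stage structure (physics-based ray casting, then a learned per-point depth adjustment) can be sketched as below. The flat-ground raycast and the stubbed depth model are assumptions for illustration; the disclosure casts rays against a full three-dimensional map and uses a deep network for the adjustment.

```python
import math

def raycast_ground(origin_z, azimuths, elevation_deg):
    """Physics-based stage: rays from a sensor at height origin_z against a
    flat ground plane. Range is azimuth-independent for a flat plane; rays
    that never descend return inf."""
    el = math.radians(elevation_deg)
    if el >= 0:
        return [math.inf for _ in azimuths]
    return [origin_z / math.sin(-el) for _ in azimuths]

def synthesize_lidar(origin_z, azimuths, elevation_deg, depth_model):
    """Learned stage (stubbed as a callable): adjust each raycast depth to
    better match real-sensor statistics."""
    return [depth_model(r) for r in raycast_ground(origin_z, azimuths, elevation_deg)]
```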
Systems, methods, tangible non-transitory computer-readable media, and devices associated with motion flow estimation are provided. For example, scene data including representations of an environment over a first set of time intervals can be accessed. Extracted visual cues can be generated based on the representations and machine-learned feature extraction models. At least one of the machine-learned feature extraction models can be configured to generate a portion of the extracted visual cues based on a first set of the representations of the environment from a first perspective and a second set of the representations of the environment from a second perspective. The extracted visual cues can be encoded using energy functions. Three-dimensional motion estimates of object instances at time intervals subsequent to the first set of time intervals can be determined based on the energy functions and machine-learned inference models.
Provided are systems and methods that perform multi-task and/or multi-sensor fusion for three-dimensional object detection in furtherance of, for example, autonomous vehicle perception and control. In particular, according to one aspect of the present disclosure, example systems and methods described herein exploit simultaneous training of a machine-learned model ensemble relative to multiple related tasks to learn to perform more accurate multi-sensor 3D object detection. For example, the present disclosure provides an end-to-end learnable architecture with multiple machine-learned models that interoperate to reason about 2D and/or 3D object detection as well as one or more auxiliary tasks. According to another aspect of the present disclosure, example systems and methods described herein can perform multi-sensor fusion (e.g., fusing features derived from image data, light detection and ranging (LIDAR) data, and/or other sensor modalities) at both the point-wise and region of interest (ROI)-wise level, resulting in fully fused feature representations.
G06T 7/73 - Détermination de la position ou de l'orientation des objets ou des caméras utilisant des procédés basés sur les caractéristiques
G06T 11/60 - Edition de figures et de texte; Combinaison de figures ou de texte
G06T 7/55 - Récupération de la profondeur ou de la forme à partir de plusieurs images
G01S 17/89 - Systèmes lidar, spécialement adaptés pour des applications spécifiques pour la cartographie ou l'imagerie
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
G05D 1/02 - Commande de la position ou du cap par référence à un système à deux dimensions
Systems and methods for detecting a surprise or unexpected movement of an actor with respect to an autonomous vehicle are provided. An example computer-implemented method can include, for a first compute cycle, obtaining motion forecast data based on first sensor data collected with respect to an actor relative to an autonomous vehicle; and determining, based on the motion forecast data, failsafe region data representing an unexpected path or area where a likelihood of the actor following the unexpected path or entering the unexpected area is below a threshold. For a second compute cycle after the first compute cycle, the method can include obtaining second sensor data; determining, based on the second sensor data and the failsafe region data, that the actor has followed the unexpected path or entered the unexpected area; and in response to such determination, determining a deviation for controlling a movement of the autonomous vehicle.
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
B60W 60/00 - Systèmes d’aide à la conduite spécialement adaptés aux véhicules routiers autonomes
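The second-cycle check against the precomputed failsafe region can be sketched as follows. The axis-aligned-box region and the slow-down deviation are simplifying assumptions; the disclosure covers general unexpected paths or areas and general deviations.

```python
def in_failsafe_region(actor_xy, failsafe_box):
    """True if the actor has entered the 'unexpected' region computed in the
    previous compute cycle (box given as ((xmin, ymin), (xmax, ymax)))."""
    (xmin, ymin), (xmax, ymax) = failsafe_box
    x, y = actor_xy
    return xmin <= x <= xmax and ymin <= y <= ymax

def plan_with_failsafe(actor_xy, failsafe_box, nominal_speed, cautious_speed=2.0):
    # Deviate (here: slow down) only when the surprise movement is observed.
    return cautious_speed if in_failsafe_region(actor_xy, failsafe_box) else nominal_speed
```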
64.
Providing actionable uncertainties in autonomous vehicles
Systems and methods are provided for detecting objects of interest. A computing system can input sensor data to one or more first machine-learned models associated with detecting objects external to an autonomous vehicle. The computing system can obtain as an output of the first machine-learned models, data indicative of one or more detected objects. The computing system can determine data indicative of at least one uncertainty associated with the one or more detected objects and input the data indicative of the one or more detected objects and the data indicative of the at least one uncertainty to one or more second machine-learned models. The computing system can obtain as an output of the second machine-learned models, data indicative of at least one prediction associated with the one or more detected objects. The at least one prediction can be based at least in part on the detected objects and the uncertainty.
G05D 1/02 - Commande de la position ou du cap par référence à un système à deux dimensions
G06V 20/58 - Reconnaissance d’objets en mouvement ou d’obstacles, p.ex. véhicules ou piétons; Reconnaissance des objets de la circulation, p.ex. signalisation routière, feux de signalisation ou routes
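The cascade of the two machine-learned models, with the uncertainty passed explicitly between them, can be sketched with stand-in callables (the `score_var` field name and the stub models are illustrative assumptions):

```python
def detect_then_predict(sensor_data, detector, predictor):
    """First model: detect objects plus a per-object uncertainty.
    Second model: make a prediction conditioned on both the detection
    and its uncertainty (both models stubbed as callables)."""
    objects = detector(sensor_data)
    uncertainties = [o["score_var"] for o in objects]
    return [predictor(o, u) for o, u in zip(objects, uncertainties)]
```

Making the uncertainty an explicit input lets the downstream model behave more conservatively for poorly-observed objects.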
Systems and methods for training machine-learned models are provided. A method can include receiving a rasterized image associated with a training object and generating a predicted trajectory of the training object by inputting the rasterized image into a first machine-learned model. The method can include converting the predicted trajectory into a rasterized trajectory that spatially corresponds to the rasterized image. The method can include utilizing a second machine-learned model to determine an accuracy of the predicted trajectory based on the rasterized trajectory. The method can include determining an overall loss for the first machine-learned model based on the accuracy of the predicted trajectory as determined by the second machine-learned model. The method can include training the first machine-learned model by minimizing the overall loss for the first machine-learned model.
Systems and methods are directed to automated delivery systems. In one example, a vehicle is provided that includes a drive system, a passenger cabin, and a delivery service pod provided relative to the passenger cabin. The delivery service pod includes an access unit configured to allow for loading and unloading of a plurality of delivery crates into the delivery service pod. The delivery service pod further includes a conveyor unit comprising multiple delivery crate holding positions, the delivery crate holding positions being defined by neighboring sidewalls spaced apart within the delivery service pod such that a respective delivery crate of the plurality of delivery crates can be positioned between neighboring sidewalls, wherein the conveyor unit is configured to be rotated to align each of the delivery crate holding positions with the access unit.
B60P 3/00 - Véhicules adaptés pour transporter, porter ou comporter des charges ou des objets spéciaux
A47L 7/00 - Aspirateurs adaptés à d'autres emplois; Tables avec orifices d'aspiration en vue du nettoyage; Récipients pour articles de nettoyage par aspiration; Aspirateurs conçus pour le nettoyage des brosses; Aspirateurs conçus pour l'absorption de liquides
B60S 1/64 - Autres accessoires sur véhicules pour le nettoyage pour nettoyer les intérieurs de véhicule, p.ex. aspirateurs incorporés
B60P 1/36 - Véhicules destinés principalement au transport des charges et modifiés pour faciliter le chargement, la fixation de la charge ou son déchargement utilisant des chaînes sans fin ou des courroies
67.
Multiple stage image based object detection and recognition
Systems, methods, tangible non-transitory computer-readable media, and devices for autonomous vehicle operation are provided. For example, a computing system can receive object data that includes portions of sensor data. The computing system can determine, in a first stage of a multiple stage classification using hardware components, one or more first stage characteristics of the portions of sensor data based on a first machine-learned model. In a second stage of the multiple stage classification, the computing system can determine second stage characteristics of the portions of sensor data based on a second machine-learned model. The computing system can generate an object output based on the first stage characteristics and the second stage characteristics. The object output can include indications associated with detection of objects in the portions of sensor data.
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
G06F 18/214 - Génération de motifs d'entraînement; Procédés de Bootstrapping, p.ex. ”bagging” ou ”boosting”
G06F 18/241 - Techniques de classification relatives au modèle de classification, p.ex. approches paramétriques ou non paramétriques
G06F 18/243 - Techniques de classification relatives au nombre de classes
G06V 10/28 - Quantification de l’image, p.ex. seuillage par histogramme visant à discriminer entre les formes d’arrière-plan et d’avant-plan
G06V 10/50 - Extraction de caractéristiques d’images ou de vidéos en utilisant l’addition des valeurs d’intensité d’image; Analyse de projection
G06V 10/56 - Extraction de caractéristiques d’images ou de vidéos relative à la couleur
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p.ex. des objets vidéo
G06V 20/58 - Reconnaissance d’objets en mouvement ou d’obstacles, p.ex. véhicules ou piétons; Reconnaissance des objets de la circulation, p.ex. signalisation routière, feux de signalisation ou routes
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
Aspects of the present disclosure involve a vehicle computer system comprising a computer-readable storage medium storing a set of instructions, and a method for online light detection and ranging (Lidar) intensity normalization. Consistent with some embodiments, the method may include accumulating point data output by a channel of a Lidar unit during operation of an autonomous or semi-autonomous vehicle. The accumulated point data includes raw intensity values that correspond to a particular surface type. The method further includes calculating a median intensity value based on the raw intensity values and generating an intensity normalization multiplier for the channel based on the median intensity value. The intensity normalization multiplier, when applied to the median intensity value, results in a reflectivity value that corresponds to the particular surface type. The method further includes applying the intensity normalization multiplier to the point data output by the channel to produce normalized intensity values.
G01S 17/87 - Combinaisons de systèmes utilisant des ondes électromagnétiques autres que les ondes radio
G01S 17/89 - Systèmes lidar, spécialement adaptés pour des applications spécifiques pour la cartographie ou l'imagerie
G01S 7/48 - DÉTERMINATION DE LA DIRECTION PAR RADIO; RADIO-NAVIGATION; DÉTERMINATION DE LA DISTANCE OU DE LA VITESSE EN UTILISANT DES ONDES RADIO; LOCALISATION OU DÉTECTION DE LA PRÉSENCE EN UTILISANT LA RÉFLEXION OU LA RERADIATION D'ONDES RADIO; DISPOSITIONS ANALOGUES UTILISANT D'AUTRES ONDES - Détails des systèmes correspondant aux groupes , , de systèmes selon le groupe
G01S 17/06 - Systèmes déterminant les données relatives à la position d'une cible
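The per-channel normalization described above (median of raw intensities over a known surface type mapped to that surface's reflectivity) can be sketched directly:

```python
import statistics

def normalization_multiplier(raw_intensities, target_reflectivity):
    """Multiplier such that the channel's median raw intensity over a known
    surface maps to that surface's reflectivity value."""
    median = statistics.median(raw_intensities)
    return target_reflectivity / median

def normalize(points, multiplier):
    # Apply the channel's multiplier to subsequent raw intensity output.
    return [p * multiplier for p in points]
```

For example, a channel whose median raw intensity over a road surface of reflectivity 0.5 is 100 gets a multiplier of 0.005 (the surface type and values here are illustrative).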
Systems and methods for power and thermal management of autonomous vehicles are provided. In one example embodiment, a computing system includes processor(s) and one or more tangible, non-transitory, computer readable media that collectively store instructions that when executed by the processor(s) cause the computing system to perform operations. The operations include obtaining data associated with an autonomous vehicle. The operations include identifying one or more vehicle parameters associated with the autonomous vehicle based at least in part on the data associated with the autonomous vehicle. The operations include determining a modification to one or more operating characteristics of one or more systems onboard the autonomous vehicle based at least in part on the one or more vehicle parameters. The operations include controlling a heat generation of at least a portion of the autonomous vehicle via implementation of the modification of the operating characteristic(s) of the system(s) onboard the autonomous vehicle.
G08G 1/0967 - Systèmes impliquant la transmission d'informations pour les grands axes de circulation, p.ex. conditions météorologiques, limites de vitesse
B60H 1/00 - Dispositifs de chauffage, de refroidissement ou de ventilation
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
Systems, methods, tangible non-transitory computer-readable media, and devices associated with trajectory prediction are provided. For example, trajectory data and goal path data can be accessed. The trajectory data can be associated with an object's predicted trajectory. The predicted trajectory can include waypoints associated with waypoint position uncertainty distributions that can be based on an expectation maximization technique. The goal path data can be associated with a goal path and include locations the object is predicted to travel. Solution waypoints for the object can be determined based on application of optimization techniques to the waypoints and waypoint position uncertainty distributions. The optimization techniques can include operations to maximize the probability of each of the solution waypoints. Stitched trajectory data can be generated based on the solution waypoints. The stitched trajectory data can be associated with portions of the solution waypoints and the goal path.
Systems and methods are described that probabilistically predict dynamic object behavior. In particular, in contrast to existing systems which attempt to predict object trajectories directly (e.g., directly predict a specific sequence of well-defined states), a probabilistic approach is instead leveraged that predicts discrete probability distributions over object state at each of a plurality of time steps. In one example, systems and methods predict future states of dynamic objects (e.g., pedestrians) such that an autonomous vehicle can plan safer actions/movement.
G06V 20/58 - Reconnaissance d’objets en mouvement ou d’obstacles, p.ex. véhicules ou piétons; Reconnaissance des objets de la circulation, p.ex. signalisation routière, feux de signalisation ou routes
G06K 9/62 - Méthodes ou dispositions pour la reconnaissance utilisant des moyens électroniques
B60W 30/09 - Entreprenant une action automatiquement pour éviter la collision, p.ex. en freinant ou tournant
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
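The discrete-distribution idea above can be sketched as follows: instead of a single trajectory, the output is, for each future time step, a normalized probability over a small grid of candidate positions. The scores below are stand-ins for the output of a learned model.

```python
import math

def softmax(scores):
    """Convert unnormalized scores into a probability distribution."""
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# scores_per_step[t][cell]: score that the pedestrian occupies `cell` at step t
scores_per_step = [
    [2.0, 1.0, 0.0],   # t = 1
    [0.5, 2.5, 0.5],   # t = 2
]
distributions = [softmax(s) for s in scores_per_step]
```

A planner can then reason about all cells with non-negligible probability at each step, rather than a single predicted state.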
Disclosed herein are methods and systems for performing instance segmentation that can provide improved estimation of object boundaries. Implementations can include a machine-learned segmentation model trained to estimate an initial object boundary based on a truncated signed distance function (TSDF) generated by the model. The model can also generate outputs for optimizing the TSDF over a series of iterations to produce a final TSDF that can be used to determine the segmentation mask.
G06T 7/73 - Détermination de la position ou de l'orientation des objets ou des caméras utilisant des procédés basés sur les caractéristiques
G06V 10/77 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant l’intégration et la réduction de données, p.ex. analyse en composantes principales [PCA] ou analyse en composantes indépendantes [ICA] ou cartes auto-organisatrices [SOM]; Séparation aveugle de source
G06F 18/213 - Extraction de caractéristiques, p.ex. en transformant l'espace des caractéristiques; Synthétisations; Mappages, p.ex. procédés de sous-espace
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p.ex. des objets vidéo
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
G06V 20/56 - Contexte ou environnement de l’image à l’extérieur d’un véhicule à partir de capteurs embarqués
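The last step described above, going from a (refined) TSDF to a segmentation mask, can be sketched with a simple zero-level-set threshold. The truncation bound and grid values are hand-picked for illustration, not model output.

```python
TRUNCATION = 1.0   # assumed truncation bound for the signed distances

def truncate(d):
    """Clamp a signed distance to the truncation band."""
    return max(-TRUNCATION, min(TRUNCATION, d))

def tsdf_to_mask(tsdf):
    """Threshold a 2D TSDF grid at the zero level set to get a binary mask."""
    return [[1 if v >= 0.0 else 0 for v in row] for row in tsdf]

raw_distances = [
    [-3.0, -0.2, -3.0],
    [-0.2,  0.8, -0.2],
    [-3.0, -0.2, -3.0],
]
tsdf = [[truncate(v) for v in row] for row in raw_distances]
mask = tsdf_to_mask(tsdf)
```

Cells with non-negative signed distance lie inside the estimated object boundary and form the instance mask.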
Systems, methods, tangible non-transitory computer-readable media, and devices associated with radar validation and calibration are provided. For example, target positions for targets can be determined based on one or more imaging devices. The targets can be located at respective predetermined positions relative to the imaging devices. Radar detections of the targets can be generated based on radar devices. The radar devices can be located at a predetermined position relative to the imaging devices. Filtered radar detections can be generated based on performance of filtering operations on the radar detections. A detection error can be determined for the radar devices based on calibration operations performed using the filtered radar detections and the target positions determined based on the one or more imaging devices. Furthermore, the radar devices can be calibrated based on the detection error.
G01S 7/292 - Récepteurs avec extraction de signaux d'échos recherchés
G01S 13/42 - Mesure simultanée de la distance et d'autres coordonnées
G01S 13/86 - Combinaisons de systèmes radar avec des systèmes autres que radar, p.ex. sonar, chercheur de direction
G01S 5/14 - Localisation par coordination de plusieurs déterminations de direction ou de ligne de position; Localisation par coordination de plusieurs déterminations de distance utilisant les ondes radioélectriques déterminant des distances absolues à partir de plusieurs points espacés d'emplacement connu
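As a hedged sketch of the calibration step above: given camera-derived target positions and already-filtered radar detections of the same targets, one simple detection-error model is the mean 2D offset, applied as a correction. The disclosure's actual calibration operations may be more involved.

```python
def mean_offset(targets, detections):
    """Estimate the detection error as the mean (dx, dy) offset."""
    n = len(targets)
    dx = sum(d[0] - t[0] for t, d in zip(targets, detections)) / n
    dy = sum(d[1] - t[1] for t, d in zip(targets, detections)) / n
    return (dx, dy)

def calibrate(detections, offset):
    """Subtract the estimated error from each radar detection."""
    return [(x - offset[0], y - offset[1]) for x, y in detections]

targets = [(10.0, 0.0), (20.0, 5.0)]     # positions from the imaging devices
radar = [(10.5, 0.2), (20.5, 5.2)]       # radar shows a consistent bias
error = mean_offset(targets, radar)      # ~ (0.5, 0.2)
corrected = calibrate(radar, error)
```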
Systems and methods for determining a location based on image data are provided. A method can include receiving, by a computing system, a query image depicting a surrounding environment of a vehicle. The query image can be input into a machine-learned image embedding model and a machine-learned feature extraction model to obtain a query embedding and a query feature representation, respectively. The method can include identifying a subset of candidate embeddings that have embeddings similar to the query embedding. The method can include obtaining a respective feature representation for each image associated with the subset of candidate embeddings. The method can include determining a set of relative displacements between each image associated with the subset of candidate embeddings and the query image and determining a localized state of a vehicle based at least in part on the set of relative displacements.
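The retrieval-then-displacement flow above can be illustrated with toy values: compare the query embedding against candidate embeddings by cosine similarity, keep the top-k, and localize by averaging the poses implied by each retrieved image's relative displacement. The embeddings, poses, and displacements below are assumptions for illustration, not model output.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def localize(query_emb, candidates, k=2):
    """candidates: list of (embedding, candidate_pose, displacement_to_query)."""
    top = sorted(candidates, key=lambda c: cosine(query_emb, c[0]), reverse=True)[:k]
    xs = [pose[0] + disp[0] for _, pose, disp in top]
    ys = [pose[1] + disp[1] for _, pose, disp in top]
    return (sum(xs) / k, sum(ys) / k)

query = [1.0, 0.0]
candidates = [
    ([0.9, 0.1], (5.0, 5.0), (1.0, 0.0)),
    ([0.8, 0.2], (7.0, 5.0), (-1.0, 0.0)),
    ([0.0, 1.0], (0.0, 0.0), (9.0, 9.0)),   # dissimilar; pruned by top-k
]
state = localize(query, candidates)
```

Both retained candidates agree on the same query pose here, so the averaged localized state is exact.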
Systems, methods, tangible non-transitory computer-readable media, and devices associated with the operation of a vehicle are provided. For example, a vehicle computing system can receive occupancy data that includes information associated with occupancy of a vehicle that includes seats. One or more states of the vehicle can be determined. The states of the vehicle can include a disposition of any object that is within the vehicle. Further, a configuration of the seats in the vehicle can be determined based on the occupancy data and the states of the vehicle. The configuration can include a disposition of the seats inside the vehicle. Furthermore, at least one of the seats can be adjusted based on the configuration that was determined.
B60N 2/01 - Agencement des sièges les uns par rapport aux autres
B60N 2/02 - Sièges spécialement adaptés aux véhicules; Agencement ou montage des sièges dans les véhicules le siège ou l'une de ses parties étant mobile, p.ex. réglable
B60N 2/06 - Sièges spécialement adaptés aux véhicules; Agencement ou montage des sièges dans les véhicules le siège ou l'une de ses parties étant mobile, p.ex. réglable le siège entier étant mobile coulissant
B60N 3/00 - Aménagements ou adaptations d'autres accessoires pour passagers, non prévus ailleurs
A47C 3/04 - Chaises s'emboîtant les unes dans les autres
B60N 2/00 - Sièges spécialement adaptés aux véhicules; Agencement ou montage des sièges dans les véhicules
B60N 2/14 - Sièges spécialement adaptés aux véhicules; Agencement ou montage des sièges dans les véhicules le siège ou l'une de ses parties étant mobile, p.ex. réglable le siège entier étant mobile rotatif, p.ex. pour faciliter l'accès
76.
Systems and Methods For Deploying Warning Devices From an Autonomous Vehicle
Systems and methods are directed to deploying warning devices by an autonomous vehicle. In one example, a system includes one or more processors and memory including instructions that, when executed by the one or more processors, cause the one or more processors to perform operations. The operations include obtaining data indicating that a vehicle stop maneuver is to be implemented for an autonomous vehicle. The operations further include determining whether one or more warning devices should be dispensed from the autonomous vehicle during the vehicle stop maneuver based in part on the obtained data. The operations further include, in response to determining one or more warning devices should be dispensed from the autonomous vehicle, determining a dispensing maneuver for the one or more warning devices, and providing one or more command signals to one or more vehicle systems to perform the dispensing maneuver for the one or more warning devices.
A control system of a self-driving tractor can access sensor data to determine a set of trailer configuration parameters of a cargo trailer coupled to the self-driving tractor. Based on the set of trailer configuration parameters, the control system can configure a motion planning model for autonomously controlling the acceleration, braking, and steering systems of the tractor.
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
B60W 30/00 - Fonctions des systèmes d'aide à la conduite des véhicules routiers non liées à la commande d'un sous-ensemble particulier, p.ex. de systèmes comportant la commande conjuguée de plusieurs sous-ensembles du véhicule
B62D 13/00 - Direction spécialement adaptée aux remorques
G05D 1/02 - Commande de la position ou du cap par référence à un système à deux dimensions
G06V 20/56 - Contexte ou environnement de l’image à l’extérieur d’un véhicule à partir de capteurs embarqués
78.
System and method for determining object intention through visual attributes
Systems and methods for determining object intentions through visual attributes are provided. A method can include determining, by a computing system, one or more regions of interest. The regions of interest can be associated with a surrounding environment of a first vehicle. The method can include determining, by the computing system, spatial features and temporal features associated with the regions of interest. The spatial features can be indicative of a vehicle orientation associated with a vehicle of interest. The temporal features can be indicative of a semantic state associated with signal lights of the vehicle of interest. The method can include determining, by the computing system, a vehicle intention. The vehicle intention can be based on the spatial and temporal features. The method can include initiating, by the computing system, an action. The action can be based on the vehicle intention.
B60W 50/14 - Moyens d'information du conducteur, pour l'avertir ou provoquer son intervention
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
G05D 1/02 - Commande de la position ou du cap par référence à un système à deux dimensions
G06V 10/25 - Détermination d’une région d’intérêt [ROI] ou d’un volume d’intérêt [VOI]
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p.ex. des objets vidéo
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
G06V 20/58 - Reconnaissance d’objets en mouvement ou d’obstacles, p.ex. véhicules ou piétons; Reconnaissance des objets de la circulation, p.ex. signalisation routière, feux de signalisation ou routes
79.
Multi-task machine-learned models for object intention determination in autonomous driving
Generally, the disclosed systems and methods utilize multi-task machine-learned models for object intention determination in autonomous driving applications. For example, a computing system can receive sensor data obtained relative to an autonomous vehicle and map data associated with a surrounding geographic environment of the autonomous vehicle. The sensor data and map data can be provided as input to a machine-learned intent model. The computing system can receive a jointly determined prediction from the machine-learned intent model for multiple outputs including at least one detection output indicative of one or more objects detected within the surrounding environment of the autonomous vehicle, a first corresponding forecasting output descriptive of a trajectory indicative of an expected path of the one or more objects towards a goal location, and/or a second corresponding forecasting output descriptive of a discrete behavior intention determined from a predefined group of possible behavior intentions.
G06V 20/58 - Reconnaissance d’objets en mouvement ou d’obstacles, p.ex. véhicules ou piétons; Reconnaissance des objets de la circulation, p.ex. signalisation routière, feux de signalisation ou routes
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p.ex. des objets vidéo
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
G06V 40/20 - Mouvements ou comportement, p.ex. reconnaissance des gestes
The present disclosure provides a sensor cleaning system that cleans one or more sensors of an autonomous vehicle. Each sensor can have one or more corresponding sensor cleaning units that are configured to clean such sensor using a fluid (e.g., a gas or a liquid). Thus, the sensor cleaning system can include both a gas cleaning system and a liquid cleaning system. According to one aspect, the sensor cleaning system can provide individualized cleaning of the autonomous vehicle sensors. According to another aspect, a liquid cleaning system can be pressurized or otherwise powered by the gas cleaning system or other gas system.
B60S 1/54 - Nettoyage des pare-brise, fenêtres ou dispositifs optiques utilisant un gaz, p.ex. air chaud
B60S 1/56 - Nettoyage des pare-brise, fenêtres ou dispositifs optiques spécialement adaptés pour nettoyer d'autres parties ou dispositifs que les fenêtres avant ou les pare-brise
81.
Systems and methods for providing a ridesharing vehicle service using an autonomous vehicle
Systems and methods for providing an autonomous vehicle service are provided. A method can include obtaining data indicative of a service associated with a user, and obtaining data indicative of a transportation of an autonomous robot. The method can include determining one or more service configurations for the service. The method can include obtaining data indicative of a selected service configuration from among the one or more service configurations, and determining a service assignment for an autonomous vehicle based at least in part on the selected service configuration. The service assignment can indicate that the autonomous vehicle is to transport the user from a service-start location to a service-end location. The method can include communicating data indicative of the service assignment to the autonomous vehicle to perform the service.
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
G01C 21/34 - Recherche d'itinéraire; Guidage en matière d'itinéraire
G06Q 10/02 - Réservations, p.ex. pour billetterie, services ou manifestations
Systems, methods, tangible non-transitory computer-readable media, and devices for autonomous vehicle operation are provided. For example, a computing system can receive object data that includes portions of sensor data. The computing system can determine, in a first stage of a multiple stage classification using hardware components, one or more first stage characteristics of the portions of sensor data based on a first machine-learned model. In a second stage of the multiple stage classification, the computing system can determine second stage characteristics of the portions of sensor data based on a second machine-learned model. The computing system can generate an object output based on the first stage characteristics and the second stage characteristics. The object output can include indications associated with detection of objects in the portions of sensor data.
G06V 10/28 - Quantification de l’image, p.ex. seuillage par histogramme visant à discriminer entre les formes d’arrière-plan et d’avant-plan
G06V 10/50 - Extraction de caractéristiques d’images ou de vidéos en utilisant l’addition des valeurs d’intensité d’image; Analyse de projection
G06V 10/56 - Extraction de caractéristiques d’images ou de vidéos relative à la couleur
G06V 20/58 - Reconnaissance d’objets en mouvement ou d’obstacles, p.ex. véhicules ou piétons; Reconnaissance des objets de la circulation, p.ex. signalisation routière, feux de signalisation ou routes
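A minimal cascade in the spirit of the multi-stage classification described above: a cheap first stage (standing in for the hardware stage) gates sensor-data patches, and a second stage scores only the survivors. Both "models" here are hypothetical threshold rules, not the machine-learned models of the disclosure.

```python
def first_stage(patch):
    """Cheap gate: does the patch carry enough signal to bother with stage two?"""
    return sum(patch) / len(patch) > 0.2

def second_stage(patch):
    """Finer score: fraction of strong returns in the patch."""
    return sum(1 for v in patch if v > 0.5) / len(patch)

def detect(patches, threshold=0.5):
    """Return indices of patches that pass both stages."""
    outputs = []
    for i, patch in enumerate(patches):
        if first_stage(patch) and second_stage(patch) >= threshold:
            outputs.append(i)
    return outputs

patches = [
    [0.0, 0.1, 0.0, 0.1],   # rejected by stage one
    [0.9, 0.8, 0.7, 0.1],   # passes both stages
    [0.3, 0.4, 0.3, 0.4],   # passes stage one, rejected by stage two
]
detections = detect(patches)
```

The benefit of the cascade is that the expensive second stage only runs on the subset of patches the first stage admits.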
The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned model can predict one or more dropout probabilities for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
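The second half of that pipeline, applying learned dropout probabilities to a ray-cast point cloud, can be sketched as a per-point Bernoulli draw. The distance-based probability function below is a fake stand-in for the machine-learned model.

```python
import random

def fake_dropout_probability(point):
    """Stand-in for the learned model: farther points drop out more often."""
    x, y, z = point
    dist = (x * x + y * y + z * z) ** 0.5
    return min(0.9, dist / 100.0)

def apply_dropout(points, rng):
    """Keep each point with probability 1 - dropout, mimicking missed returns."""
    kept = []
    for p in points:
        if rng.random() >= fake_dropout_probability(p):
            kept.append(p)
    return kept

rng = random.Random(0)   # seeded for reproducibility
cloud = [(1.0, 0.0, 0.0), (50.0, 0.0, 0.0), (95.0, 0.0, 0.0)]
adjusted = apply_dropout(cloud, rng)
```

The adjusted cloud is a subset of the ray-cast cloud, with distant points the most likely casualties, which is the qualitative behavior of real sensors that the disclosure aims to reproduce.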
In one example embodiment, a computer-implemented method for autonomous vehicle control includes determining whether a cargo container is attached to a vehicle base associated with an autonomous vehicle. The method includes controlling a front shield associated with the autonomous vehicle to move from a closed position to an opened position when the cargo container is determined to be attached to the vehicle base. The method includes controlling the front shield to move from the opened position to the closed position when the cargo container is not attached to the vehicle base.
B62D 33/08 - Carrosseries pour véhicules à marchandises caractérisées par l'assemblage entre la carrosserie et le châssis de véhicule comportant des moyens de réglage
G05D 1/02 - Commande de la position ou du cap par référence à un système à deux dimensions
B62D 35/00 - Caisses de véhicule caractérisées par leur aérodynamisme
B60Q 1/28 - Agencement des dispositifs de signalisation optique ou d'éclairage, leur montage, leur support ou les circuits à cet effet les dispositifs ayant principalement pour objet d'indiquer le contour du véhicule ou de certaines de ses parties, ou pour engendrer des signaux au bénéfice d'autres véhicules pour indiquer l'avant du véhicule
85.
Determining specular reflectivity characteristics using LiDAR
Aspects of the present disclosure involve systems, methods, and devices for determining specular reflectivity characteristics of objects using a Lidar system of an autonomous vehicle (AV) system. A method includes transmitting at least two light signals directed at a target object utilizing the Lidar system of the AV system. The method further includes determining at least two reflectivity values for the target object based on return signals corresponding to the at least two light signals. The method further includes classifying specular reflectivity characteristics of the target object based on a comparison of the at least two reflectivity values. The method further includes updating a motion plan for the AV system based on the specular reflectivity characteristics of the target object.
G01C 3/08 - Utilisation de détecteurs électriques de radiations
G01S 7/48 - DÉTERMINATION DE LA DIRECTION PAR RADIO; RADIO-NAVIGATION; DÉTERMINATION DE LA DISTANCE OU DE LA VITESSE EN UTILISANT DES ONDES RADIO; LOCALISATION OU DÉTECTION DE LA PRÉSENCE EN UTILISANT LA RÉFLEXION OU LA RERADIATION D'ONDES RADIO; DISPOSITIONS ANALOGUES UTILISANT D'AUTRES ONDES - Détails des systèmes correspondant aux groupes , , de systèmes selon le groupe
G01S 17/93 - Systèmes lidar, spécialement adaptés pour des applications spécifiques pour prévenir les collisions
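The comparison step above can be sketched as follows: two reflectivity values measured for the same object are compared, and a large relative spread suggests a specular (mirror-like) surface while a small spread suggests a diffuse one. The threshold is an illustrative assumption, not a value from the disclosure.

```python
SPECULAR_RATIO_THRESHOLD = 0.5   # assumed spread threshold for illustration

def classify_reflectivity(r1, r2):
    """Classify a surface from two reflectivity measurements of the same object."""
    spread = abs(r1 - r2) / max(r1, r2)
    return "specular" if spread > SPECULAR_RATIO_THRESHOLD else "diffuse"

label_sign = classify_reflectivity(0.9, 0.1)    # strong view dependence
label_wall = classify_reflectivity(0.4, 0.35)   # nearly view-independent
```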
Systems, methods, tangible non-transitory computer-readable media, and devices associated with object association and tracking are provided. Input data can be obtained. The input data can be indicative of a detected object within a surrounding environment of an autonomous vehicle and an initial object classification of the detected object at an initial time interval and object tracks at time intervals preceding the initial time interval. Association data can be generated based on the input data and a machine-learned model. The association data can indicate whether the detected object is associated with at least one of the object tracks. An object classification probability distribution can be determined based on the association data. The object classification probability distribution can indicate a probability that the detected object is associated with each respective object classification. The association data and the object classification probability distribution for the detected object can be outputted.
G06V 20/58 - Reconnaissance d’objets en mouvement ou d’obstacles, p.ex. véhicules ou piétons; Reconnaissance des objets de la circulation, p.ex. signalisation routière, feux de signalisation ou routes
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
G05D 1/02 - Commande de la position ou du cap par référence à un système à deux dimensions
G06V 40/10 - Corps d’êtres humains ou d’animaux, p.ex. occupants de véhicules automobiles ou piétons; Parties du corps, p.ex. mains
An autonomous robot is provided. In one example embodiment, an autonomous robot can include a main body including one or more compartments. The one or more compartments can be configured to provide support for transporting an item. The autonomous robot can include a mobility assembly affixed to the main body and a sensor configured to obtain sensor data associated with a surrounding environment of the autonomous robot. The autonomous robot can include a computing system configured to plan a motion of the autonomous robot based at least in part on the sensor data. The computing system can be operably connected to the mobility assembly for controlling a motion of the autonomous robot. The autonomous robot can include a coupling assembly configured to temporarily secure the autonomous robot to an autonomous vehicle. The autonomous robot can include a power system and a ventilation system that can interface with the autonomous vehicle.
G06Q 20/18 - Architectures de paiement impliquant des terminaux en libre-service, des distributeurs automatiques, des bornes ou des terminaux multimédia
B25J 9/08 - Manipulateurs à commande programmée caractérisés par des éléments de construction modulaires
G05D 1/02 - Commande de la position ou du cap par référence à un système à deux dimensions
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
In one example embodiment, a computer-implemented method for transporting cargo using smart pallets includes determining receipt of a first cargo onto a platform of a first smart pallet at a first distribution hub. The method includes generating one or more signals that control a loading of the first smart pallet and the first cargo onto a trailer located at the first distribution hub. The method includes determining a coordination with one or more second smart pallets associated with the trailer to determine a first position inside the trailer for the first smart pallet and the first cargo. The method includes generating one or more signals that position the first smart pallet and the first cargo at the first position inside the trailer.
Generally, the disclosed systems and methods implement improved detection of objects in three-dimensional (3D) space. More particularly, an improved 3D object detection system can exploit continuous fusion of multiple sensors and/or integrated geographic prior map data to enhance effectiveness and robustness of object detection in applications such as autonomous driving. In some implementations, geographic prior data (e.g., geometric ground and/or semantic road features) can be exploited to enhance three-dimensional object detection for autonomous vehicle applications. In some implementations, object detection systems and methods can be improved based on dynamic utilization of multiple sensor modalities. More particularly, an improved 3D object detection system can exploit both LIDAR systems and cameras to perform very accurate localization of objects within three-dimensional space relative to an autonomous vehicle. For example, multi-sensor fusion can be implemented via continuous convolutions to fuse image data samples and LIDAR feature maps at different levels of resolution.
G01S 7/48 - DÉTERMINATION DE LA DIRECTION PAR RADIO; RADIO-NAVIGATION; DÉTERMINATION DE LA DISTANCE OU DE LA VITESSE EN UTILISANT DES ONDES RADIO; LOCALISATION OU DÉTECTION DE LA PRÉSENCE EN UTILISANT LA RÉFLEXION OU LA RERADIATION D'ONDES RADIO; DISPOSITIONS ANALOGUES UTILISANT D'AUTRES ONDES - Détails des systèmes correspondant aux groupes , , de systèmes selon le groupe
G01S 17/89 - Systèmes lidar, spécialement adaptés pour des applications spécifiques pour la cartographie ou l'imagerie
G06K 9/66 - Méthodes ou dispositions pour la reconnaissance utilisant des moyens électroniques utilisant des comparaisons ou corrélations simultanées de signaux images avec une pluralité de références, p.ex. matrice de résistances avec des références réglables par une méthode adaptative, p.ex. en s'instruisant
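The fusion idea above can be illustrated at its simplest: project each LIDAR point into the image, gather the image feature at the projected pixel, and pair it with the point's own feature. The pinhole intrinsics and feature values below are toy assumptions; the disclosure's continuous convolutions operate on learned feature maps at multiple resolutions.

```python
FX, FY, CX, CY = 100.0, 100.0, 2.0, 2.0   # assumed pinhole intrinsics

def project(point):
    """Project a 3D point (camera frame, z forward) to integer pixel coords."""
    x, y, z = point
    return int(round(FX * x / z + CX)), int(round(FY * y / z + CY))

def fuse(points, point_feats, image_feats):
    """Return per-point [lidar_feature, image_feature] pairs."""
    fused = []
    for p, f in zip(points, point_feats):
        u, v = project(p)
        fused.append([f, image_feats[v][u]])
    return fused

image = [[0.0] * 5 for _ in range(5)]
image[2][3] = 7.0                  # a salient image feature
points = [(0.1, 0.0, 10.0)]        # projects to pixel (3, 2)
fused = fuse(points, [1.5], image)
```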
90.
Systems and methods for autonomous vehicle controls
Systems and methods for controlling an autonomous vehicle are provided. A method can include obtaining, by a computing system, data indicative of a plurality of objects in a surrounding environment of the autonomous vehicle. The method can further include determining, by the computing system, one or more clusters of the objects based at least in part on the data indicative of the plurality of objects. The method can further include determining, by the computing system, whether to enter an operation mode having one or more limited operational capabilities based at least in part on one or more properties of the one or more clusters. In response to determining that the operation mode is to be entered by the autonomous vehicle, the method can include controlling, by the computing system, the operation of the autonomous vehicle based at least in part on the one or more limited operational capabilities.
G06V 20/58 - Reconnaissance d’objets en mouvement ou d’obstacles, p.ex. véhicules ou piétons; Reconnaissance des objets de la circulation, p.ex. signalisation routière, feux de signalisation ou routes
B60W 30/085 - Ajustant automatiquement la position du véhicule en préparation de la collision, p.ex. en freinant pour piquer du nez
B60W 60/00 - Systèmes d’aide à la conduite spécialement adaptés aux véhicules routiers autonomes
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
G06V 10/762 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant le regroupement, p.ex. de visages similaires sur les réseaux sociaux
G06V 40/10 - Corps d’êtres humains ou d’animaux, p.ex. occupants de véhicules automobiles ou piétons; Parties du corps, p.ex. mains
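The clustering decision above can be sketched as: group objects within a distance threshold, and enter a limited (cautious) operation mode when a cluster is both large and close to the vehicle. All thresholds below are illustrative assumptions, and the greedy grouping is a simplification of whatever clustering the disclosure uses.

```python
CLUSTER_RADIUS = 3.0    # assumed grouping distance, meters
CROWD_SIZE = 3          # assumed cluster size that triggers caution
NEAR_DISTANCE = 15.0    # assumed distance from the vehicle (at the origin)

def clusters(objects):
    """Greedy single-link clustering of 2D object positions."""
    groups = []
    for obj in objects:
        for g in groups:
            if any((obj[0]-o[0])**2 + (obj[1]-o[1])**2 <= CLUSTER_RADIUS**2 for o in g):
                g.append(obj)
                break
        else:
            groups.append([obj])
    return groups

def limited_mode(objects):
    """True if any cluster is large enough and near enough to warrant caution."""
    for g in clusters(objects):
        cx = sum(o[0] for o in g) / len(g)
        cy = sum(o[1] for o in g) / len(g)
        if len(g) >= CROWD_SIZE and (cx*cx + cy*cy) ** 0.5 <= NEAR_DISTANCE:
            return True
    return False

crowd = [(10.0, 0.0), (11.0, 1.0), (12.0, 0.5)]   # dense, nearby group
sparse = [(10.0, 0.0), (40.0, 0.0), (80.0, 5.0)]  # scattered objects
```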
Systems, methods, tangible non-transitory computer-readable media, and devices associated with vehicle control based on risk-based interactions are provided. For example, vehicle data and perception data can be accessed. The vehicle data can include the speed of an autonomous vehicle in an environment. The perception data can include location information and classification information associated with an object in the environment. A scenario exposure can be determined based on the vehicle data and perception data. Prediction data including predicted trajectories of the object can be accessed. Expected speed data can be determined based on hypothetical speeds and hypothetical distances between the vehicle and the object. A speed profile that satisfies threshold criteria can be determined, over a distance, based on the scenario exposure, the prediction data, and the expected speed data. A motion plan to control the autonomous vehicle can be generated based on the speed profile.
In one example embodiment, a computer-implemented method includes receiving data representing a motion plan of an autonomous vehicle via a plurality of control lanes configured to implement the motion plan to control a motion of the autonomous vehicle, the plurality of control lanes including at least a first control lane and a second control lane, and controlling the first control lane to implement the motion plan. The method includes detecting one or more faults associated with implementation of the motion plan by the first control lane or the second control lane, or in generation of the motion plan, and, in response to the one or more faults, controlling the first control lane or the second control lane to adjust the motion of the autonomous vehicle based at least in part on one or more fault reaction parameters associated with the one or more faults.
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
B60W 50/029 - Adaptation aux défaillances ou contournement par solutions alternatives, p.ex. en évitant l'utilisation de parties défaillantes
B60W 50/023 - COMMANDE CONJUGUÉE DE PLUSIEURS SOUS-ENSEMBLES D'UN VÉHICULE, DE FONCTION OU DE TYPE DIFFÉRENTS; SYSTÈMES DE COMMANDE SPÉCIALEMENT ADAPTÉS AUX VÉHICULES HYBRIDES; SYSTÈMES D'AIDE À LA CONDUITE DE VÉHICULES ROUTIERS, NON LIÉS À LA COMMANDE D'UN SOUS-ENSEMBLE PARTICULIER - Détails des systèmes d'aide à la conduite des véhicules routiers qui ne sont pas liés à la commande d'un sous-ensemble particulier pour préserver la sécurité en cas de défaillance du système d'aide à la conduite, p.ex. en diagnostiquant ou en palliant à un dysfonctionnement Élimination des défaillances en utilisant des éléments redondants
G06F 11/20 - Détection ou correction d'erreur dans une donnée par redondance dans le matériel en utilisant un masquage actif du défaut, p.ex. en déconnectant les éléments défaillants ou en insérant des éléments de rechange
93.
LIGHT DETECTION AND RANGING (LIDAR) SYSTEM HAVING AN OPTIC TO WIDEN A FIELD OF VIEW
A LIDAR system defining a first axis and a second axis is provided. The LIDAR system includes a first plurality of emitters and a second plurality of emitters. At least one of the first plurality of emitters is configured to emit a first laser beam at a first wavelength. Additionally, at least one of the second plurality of emitters is configured to emit a second laser beam at a second wavelength that is different than the first wavelength. The LIDAR system includes an optic configured to direct the first laser beam and the second laser beam in different directions to widen a field of view of the LIDAR system along the second axis.
Systems and methods of the present disclosure are directed to a computer-implemented method. The method can include obtaining a first plurality of testing parameters for an autonomous vehicle testing scenario associated with a plurality of performance metrics based at least in part on a first sampling rule. The method can include simulating the autonomous vehicle testing scenario using the first plurality of testing parameters to obtain a first scenario output. The method can include evaluating an optimization function over the first scenario output to obtain simulation error data that corresponds to a performance metric. The method can include determining a second sampling rule associated with the performance metric. The method can include obtaining a second plurality of testing parameters for the autonomous vehicle testing scenario based at least in part on the second sampling rule.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. automatic pilot
G06F 30/20 - Design optimisation, verification or simulation
G07C 5/08 - Registering or indicating performance data other than driving, working, idle or waiting time, with or without registering driving, working, idle or waiting time
B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of driver assistance systems for road vehicles not related to the control of a particular sub-unit
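The two-stage sampling loop described in the abstract above (simulate under a first sampling rule, evaluate an optimization function, then derive a second sampling rule focused on the worst-performing metric) can be sketched as follows. The simulator, parameter ranges, and metrics are all stand-ins invented for the example, not from the disclosure.

```python
# Sketch: adaptive sampling of scenario parameters. The "simulator",
# metrics, and ranges below are hypothetical placeholders.

import random

random.seed(0)

def simulate(params):
    """Stand-in for the scenario simulator: returns per-metric error."""
    return {"stopping_distance": abs(params["speed"] - 10.0),
            "lateral_error": abs(params["offset"])}

def sample(rule, n=8):
    """Draw n parameter sets from a sampling rule (uniform ranges here)."""
    return [{k: random.uniform(lo, hi) for k, (lo, hi) in rule.items()}
            for _ in range(n)]

# First sampling rule: broad uniform ranges over the scenario parameters.
first_rule = {"speed": (0.0, 30.0), "offset": (-2.0, 2.0)}
outputs = [simulate(p) for p in sample(first_rule)]

# Optimization function: identify the metric with the largest mean error.
mean_err = {m: sum(o[m] for o in outputs) / len(outputs)
            for m in outputs[0]}
worst_metric = max(mean_err, key=mean_err.get)

# Second sampling rule: concentrate sampling on the region that drives
# the worst-performing metric.
second_rule = dict(first_rule)
if worst_metric == "stopping_distance":
    second_rule["speed"] = (5.0, 15.0)  # narrow to the critical speed band
second_params = sample(second_rule)
print(worst_metric, len(second_params))
```

The design point is that the second rule is conditioned on the error data from the first pass, so later simulation effort is spent where the first pass performed worst.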
96.
Systems and Methods for Generation and Utilization of Vehicle Testing Knowledge Structures for Autonomous Vehicle Simulation
Systems and methods of the present disclosure are directed to a computer-implemented method. The method can include obtaining a first vehicle testing tuple comprising a plurality of first testing parameters and a second vehicle testing tuple comprising a plurality of second testing parameters. The method can include determining that the plurality of first testing parameters are associated with an evaluated operating condition. The method can include appending the first vehicle testing tuple to a first portion of a plurality of portions of a vehicle testing knowledge structure. The method can include determining that a second testing parameter is associated with an unevaluated operating condition. The method can include evaluating the unevaluated operating condition. The method can include generating, for the vehicle testing knowledge structure, a second portion comprising the second vehicle testing tuple.
B60W 50/02 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of driver assistance systems for road vehicles not related to the control of a particular sub-unit for ensuring safety in case of failure of the assistance system, e.g. by diagnosing or circumventing a malfunction
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G07C 5/08 - Registering or indicating performance data other than driving, working, idle or waiting time, with or without registering driving, working, idle or waiting time
97.
Autonomous Vehicle Interface System With Multiple Interface Devices Featuring Redundant Vehicle Commands
The present disclosure provides an autonomous vehicle and associated interface system that includes multiple vehicle interface computing devices that provide redundant vehicle commands. As one example, an autonomous vehicle interface system can include a first vehicle interface computing device located within the autonomous vehicle and physically coupled to the autonomous vehicle. The first vehicle interface computing device can provide a first plurality of selectable vehicle commands to a human passenger of the autonomous vehicle. The autonomous vehicle interface system can further include a second vehicle interface computing device that provides a second plurality of selectable vehicle commands to the human passenger. For example, the second vehicle interface computing device can be the passenger's own device (e.g., smartphone). The second plurality of selectable vehicle commands can include at least some of the same vehicle commands as the first plurality of selectable vehicle commands.
Systems and methods for autonomous vehicle motion planning are provided. Sensor data describing an environment of an autonomous vehicle and an initial travel path for the autonomous vehicle through the environment can be obtained. A number of trajectories for the autonomous vehicle are generated based on the sensor data and the initial travel path. The trajectories can be evaluated by generating a number of costs for each trajectory. The costs can include a safety cost and a total cost. Each cost is generated by a cost function created in accordance with a number of relational propositions defining desired relationships between the number of costs. A subset of trajectories can be determined from the trajectories based on the safety cost and an optimal trajectory can be determined from the subset of trajectories based on the total cost. The autonomous vehicle can control its motion in accordance with the optimal trajectory.
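The two-stage trajectory selection described in the abstract above (filter candidates to a subset by safety cost, then choose the optimal trajectory by total cost) can be sketched minimally as follows. The trajectory names, cost values, and threshold are illustrative assumptions; the disclosure's cost functions and relational propositions are not reproduced here.

```python
# Sketch: select among candidate trajectories using a safety cost to
# form a subset, then a total cost to pick the optimum. All values are
# hypothetical.

trajectories = [
    {"id": "keep_lane",  "safety": 0.1, "total": 2.0},
    {"id": "nudge_left", "safety": 0.3, "total": 1.5},
    {"id": "hard_brake", "safety": 0.9, "total": 0.8},
]

# Assumed bound standing in for the safety-related relational proposition.
SAFETY_THRESHOLD = 0.5

# Stage 1: keep only trajectories whose safety cost is acceptable.
safe_subset = [t for t in trajectories if t["safety"] <= SAFETY_THRESHOLD]

# Stage 2: among safe trajectories, minimize the total cost.
optimal = min(safe_subset, key=lambda t: t["total"])
print(optimal["id"])  # → nudge_left
```

Note that "hard_brake" has the lowest total cost but is excluded in stage 1, which is the point of ordering safety ahead of total cost in the selection.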
An autonomous vehicle includes multiple independent control systems that provide redundancy for specific, critical safety situations that may be encountered when the autonomous vehicle is in operation.
B60W 30/08 - Predicting or avoiding probable or impending collision
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. automatic pilot
B60T 8/1755 - Brake regulation specially adapted for vehicle stability control, e.g. taking into account yaw rate or transverse acceleration in a curve
G05D 1/02 - Control of position or course in two dimensions
B60W 50/02 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of driver assistance systems for road vehicles not related to the control of a particular sub-unit for ensuring safety in case of failure of the assistance system, e.g. by diagnosing or circumventing a malfunction
100.
Light Detection and Ranging (LIDAR) Assembly Having a Switchable Mirror
A LIDAR assembly is provided. The LIDAR assembly includes a LIDAR unit. The LIDAR unit includes a housing defining a cavity. The LIDAR unit further includes a plurality of emitters disposed within the cavity. Each of the plurality of emitters is configured to emit a laser beam. The LIDAR assembly further includes a switchable mirror. The switchable mirror is positioned relative to the LIDAR unit such that the switchable mirror receives a plurality of laser beams exiting the housing of the LIDAR unit. The switchable mirror is configurable in at least a reflective state to direct the plurality of laser beams along a first path and a transmissive state to direct the plurality of laser beams along a second path that is different than the first path to widen a field of view of the LIDAR unit along a first axis.
G01S 17/931 - Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
G01S 17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light