The technology involves evaluating components of an autonomous vehicle that can be meaningfully evaluated over a single operational iteration. One or more snapshots of single iteration scenarios can be tested quickly and efficiently, either on the vehicle or via a back-end system. Each snapshot corresponds to a particular point in time when a given component runs. Each snapshot contains a serialized set of inputs necessary to evaluate the particular component. These inputs comprise the minimal amount of information needed to accurately and faithfully recreate what the component did or does. Each snapshot is triggered at the particular point in time based on one or more criteria associated with either a driving scenario or a component of the vehicle during autonomous driving. A serialized snapshot may be retrieved from storage and deserialized, so that the system may evaluate the state of the component at the particular point in time.
B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
G06F 16/11 - File system administration, e.g. details of archiving or snapshots
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60W 40/12 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to parameters of the vehicle itself
B60W 50/04 - Monitoring the functioning of the control system
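As a concrete illustration of the snapshot flow in the entry above, the following is a minimal sketch in Python. The names (ComponentSnapshot, maybe_capture, replay), the JSON serialization, and the trigger criterion are illustrative assumptions, not the disclosed interfaces.

import json
import time
from dataclasses import dataclass, asdict

@dataclass
class ComponentSnapshot:
    # The minimal serialized inputs needed to faithfully re-run one
    # component over a single operational iteration.
    component_name: str
    timestamp: float
    inputs: dict

def maybe_capture(component_name, inputs, trigger):
    # Capture only when a scenario- or component-based criterion fires.
    if trigger(inputs):
        snap = ComponentSnapshot(component_name, time.time(), inputs)
        return json.dumps(asdict(snap))  # serialized form, ready for storage
    return None

def replay(serialized, component):
    # Retrieve, deserialize, and re-evaluate the component's state at the
    # captured point in time, on the vehicle or on a back-end system.
    snap = ComponentSnapshot(**json.loads(serialized))
    return component(snap.inputs)

# Usage: snapshot a planner component whenever a hard-brake criterion fires.
blob = maybe_capture(
    "planner",
    {"speed_mps": 12.0, "brake_request": 0.9},
    trigger=lambda x: x["brake_request"] > 0.8,
)
if blob is not None:
    result = replay(blob, component=lambda inp: {"decel_mps2": inp["brake_request"] * 8.0})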
The technology relates to an exterior sensor system for a vehicle configured to operate in an autonomous driving mode. The technology includes a close-in sensing (CIS) camera system to address blind spots around the vehicle. The CIS system is used to detect objects within a few meters of the vehicle. Based on object classification, the system is able to make real-time driving decisions. Classification is enhanced by employing cameras in conjunction with lidar sensors. The specific arrangement of multiple sensors in a single sensor housing is also important to object detection and classification. Thus, the positioning of the sensors and support components is selected to avoid occlusion and to otherwise prevent interference between the various sensor housing elements.
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G01S 17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
B60W 10/18 - Conjoint control of vehicle sub-units of different type or different function including control of braking systems
B60W 10/20 - Conjoint control of vehicle sub-units of different type or different function including control of steering systems
G01S 7/02 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 13/00
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01S 13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
G01S 13/89 - Radar or analogous systems, specially adapted for specific applications for mapping or imaging
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G05D 1/02 - Control of position or course in two dimensions
H04N 5/235 - Circuitry for compensating for variation in the brightness of the object
The present disclosure relates to systems and methods involving Light Detection and Ranging (LIDAR or lidar) systems. Namely, an example method includes causing a light source of a LIDAR system to emit light along an emission vector. The method also includes adjusting the emission vector of the emitted light and determining an elevation angle component of the emission vector. The method further includes dynamically adjusting a per pulse energy of the emitted light based on the determined elevation angle component. An example system includes a vehicle and a light source coupled to the vehicle. The light source is configured to emit light along an emission vector toward an environment of the vehicle. The system also includes a controller operable to determine an elevation angle component of the emission vector and dynamically adjust a per pulse energy of the emitted light based on the determined elevation angle component.
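The elevation-dependent energy adjustment above can be illustrated with a simple interpolation; the scan band limits and energy range below are assumed values for the sketch, not figures from the disclosure.

def per_pulse_energy(elevation_deg, e_min_uj=0.2, e_max_uj=1.0, band=(-15.0, 5.0)):
    """Scale per-pulse energy with the elevation angle component of the
    emission vector: pulses aimed down at nearby road surface need less
    energy than pulses aimed toward the horizon."""
    lo, hi = band
    clamped = min(max(elevation_deg, lo), hi)
    t = (clamped - lo) / (hi - lo)
    return e_min_uj + t * (e_max_uj - e_min_uj)

# A downward-pointing pulse is assigned less energy than a horizontal one.
assert per_pulse_energy(-15.0) < per_pulse_energy(0.0)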
Aspects of the disclosure provide for determining when to provide and providing secondary disengage alerts for a vehicle having autonomous and manual driving modes. For instance, while the vehicle is being controlled in the autonomous driving mode, user input is received at one or more user input devices of the vehicle. In response to receiving the user input, the vehicle may be transitioned from the autonomous driving mode to a manual driving mode and provide a primary disengage alert to an occupant of the vehicle regarding the transition. Whether to provide a secondary disengage alert may be determined based on at least circumstances of the user input. After the transition, the secondary disengage alert may be provided based on the determination.
Aspects of the disclosure relate to controlling a vehicle in an autonomous driving mode where the vehicle has a drive-by-wire braking system. For instance, while the vehicle is being controlled in the autonomous driving mode, a signal corresponding to input at a brake pedal of the drive-by-wire braking system may be received. An amount of braking may be determined based on the received signal. The amount of braking may be used to determine a trajectory for the vehicle to follow. The vehicle may be controlled in the autonomous driving mode using the trajectory.
One example system for preventing data loss during memory blackout events comprises a memory device, a sensor, and a controller operably coupled to the memory device and the sensor. The controller is configured to perform one or more operations that coordinate at least one memory blackout event of the memory device and at least one data transmission of the sensor.
Aspects of the disclosure relate to identifying and adjusting speed plans for controlling a vehicle in an autonomous driving mode. In one example, an initial speed plan for controlling speed of the vehicle for a first predetermined period of time corresponding to an amount of time along a route of the vehicle is identified. Data identifying an object and characteristics of that object is received from a perception system of the vehicle. A trajectory for the object that will intersect with the route at an intersection point at a particular point in time is predicted using the data. A set of constraints is generated based on at least the trajectory. The speed plan is adjusted in order to satisfy the set of constraints for a second predetermined period of time corresponding to the amount of time. The vehicle is maneuvered in the autonomous driving mode according to the adjusted speed plan.
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training trajectory prediction neural networks using distillation.
A system includes a memory device, and a processing device, operatively coupled to the memory device, to receive a set of input data including a representation of a drivable space for an autonomous vehicle (AV), generate, based on the representation of the drivable space, a motion planning blueprint from a flow field modeling the drivable space, and identify, using the motion planning blueprint, a driving path of the AV within the drivable space.
Aspects of the disclosure provide for enabling autonomous vehicles to pull over into driveways when picking up or dropping off passengers or goods. For instance, a request for a trip identifying a first location and a second location may be received. The first location may be a location of a client computing device, and the second location may be a starting location or a destination for the trip. A user preference for the trip indicating that a pickup for the trip be in a driveway may be identified. That the first location corresponds with the second location may be identified. Based on the determination that the first location corresponds with the second location, dispatch instructions may be sent to an autonomous vehicle. The dispatch instructions may identify a polygon for a driveway at the second location in order to cause the autonomous vehicle to pull over into the driveway.
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating simulated trajectories using parallel beam search.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G05B 13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
B60W 50/06 - Improving the dynamic response of the control system, e.g. improving the speed of regulation or avoiding hunting or overshoot
12.
ARRANGING PASSENGER PICKUPS FOR AUTONOMOUS VEHICLES
Aspects of the disclosure relate to arranging pick up and drop off locations between a driverless vehicle and a passenger. As an example, a method of doing so may include receiving a request for a vehicle from a client computing device, wherein the request identifies a first location. Pre-stored map information and the first location are used to identify a recommended point according to a set of heuristics. Each heuristic of the set of heuristics has a ranking such that the recommended point corresponds to a location that satisfies at least one of the heuristics having a first rank and such that no other location satisfies any other heuristic of the set of heuristics having a higher rank than the first rank. The pre-stored map information identifies a plurality of pre-determined locations for the vehicle to stop, and the recommended point is one of the plurality of pre-determined locations. The recommended point is then provided to the client computing device for display on a display of the client computing device with a map.
Example embodiments relate to enhanced depth of focus cameras using variable apertures and pixel binning. An example embodiment includes a device. The device includes an image sensor. The image sensor includes an array of light-sensitive pixels and a readout circuit. The device also includes a variable aperture. Additionally, the device includes a controller that is configured to cause: the variable aperture to adjust to a first aperture size when a high-light condition is present, the variable aperture to adjust to a second aperture size when a low-light condition is present, the readout circuit to perform a first level of pixel binning when the high-light condition is present, and the readout circuit to perform a second level of pixel binning when the low-light condition is present. The second aperture size is larger than the first aperture size. The second level of pixel binning is greater than the first level of pixel binning.
H04N 5/347 - Extracting pixel data from an image sensor by controlling scanning circuits, e.g. by modifying the number of pixels having been sampled or to be sampled by combining or binning pixels in SSIS
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
H04N 5/378 - Readout circuits, e.g. correlated double sampling [CDS] circuits, output amplifiers or A/D converters
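A compact sketch of the aperture/binning pairing in the entry above; the luminance threshold, f-numbers, and binning factors are illustrative assumptions. The relationships match the entry: the low-light aperture is larger and its binning level greater.

def configure_camera(scene_luminance, high_light_threshold=50.0):
    """Pair the smaller aperture with less pixel binning in high light and
    the larger aperture with greater binning in low light."""
    if scene_luminance >= high_light_threshold:  # high-light condition
        return {"aperture": "f/8", "binning_level": 1}
    return {"aperture": "f/2", "binning_level": 4}  # low-light condition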
14.
Virtual Validation and Verification Model Structure for Motion Control
The technology employs a model structure for motion control in a vehicle configured to operate in an autonomous driving mode. The model structure has components including a vehicle dynamics system module, a column dynamics module, a rack dynamics module, and an actuation control module. A virtual validation and verification model is configurable based on the components of the model structure. Configuration is performed according to a set of operational requirements based on at least one of a vehicle type, occupant loading information, a center of gravity, or tire pressure as per a cold nominal setpoint. The virtual validation and verification model can be executed so that an electric power steering (EPS) module of the model structure components is configured for at least one of: a software-in-loop model, functional EPS assist, angle control, or to emulate an EPS controller.
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for the generation and use of a surfel map with semantic labels. One of the methods includes receiving a surfel map that includes a plurality of surfels, wherein each surfel has associated data that includes one or more semantic labels; obtaining sensor data for one or more locations in the environment, the sensor data having been captured by one or more sensors of a first vehicle; determining one or more surfels corresponding to the one or more locations of the obtained sensor data; identifying one or more semantic labels for the one or more surfels corresponding to the one or more locations of the obtained sensor data; and performing, for each surfel corresponding to the one or more locations of the obtained sensor data, a label-specific detection process for the surfel.
Aspects of the disclosure provide a method of facilitating communications from an autonomous vehicle to a user. For instance, a method may include, while attempting to pick up the user and prior to the user entering a vehicle, inputting a current location of the vehicle and map information into a model in order to identify a type of communication action for communicating a location of the vehicle to the user; enabling a first communication based on the type of the communication action; determining, from received sensor data, whether the user has responded to the first communication; and enabling a second communication based on the determination of whether the user has responded to the first communication.
B60Q 5/00 - Arrangement or adaptation of acoustic signal devices
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G08G 1/00 - Traffic control systems for road vehicles
B60Q 1/26 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic
17.
Pipeline Architecture for Road Sign Detection and Evaluation
The technology provides a sign detection and classification methodology. A unified pipeline approach incorporates generic sign detection with a robust parallel classification strategy. Sensor information such as camera imagery and lidar depth, intensity and height (elevation) information is applied to a sign detector module. This enables the system to detect the presence of a sign in a vehicle's external environment. A modular classification approach is applied to the detected sign. This includes selective application of one or more trained machine learning classifiers, as well as a text and symbol detector. Annotations help to tie the classification information together and to address any conflicts between the outputs from different classifiers. Identification of where the sign is in the vehicle's surrounding environment can provide contextual details. Identified signage can be associated with other objects in the vehicle's driving environment, which can be used to aid the vehicle in autonomous driving.
Aspects of the disclosure provide a method of identifying off-road entry lane waypoints. For instance, a polygon representative of a driveway or parking area may be identified from map information. A nearest lane may be identified based on the polygon. A plurality of lane waypoints may be identified. Each of the lane waypoints may correspond to a location within at least one lane. The polygon and the plurality of lane waypoints may be input into a model. A lane waypoint of the plurality of lane waypoints may be selected as an off-road entry lane waypoint. The off-road entry lane waypoint may be associated with the nearest lane. The association may be provided to an autonomous vehicle in order to allow the autonomous vehicle to use the association to control the autonomous vehicle in an autonomous driving mode.
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for open vehicle doors prediction using a neural network model. One of the methods includes: obtaining sensor data (i) that includes a portion of a point cloud generated by a laser sensor of an autonomous vehicle and (ii) that characterizes a vehicle that is in a vicinity of the autonomous vehicle in an environment; and processing the sensor data using an open door prediction neural network to generate an open door prediction that predicts a likelihood score that the vehicle has an open door.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
20.
SIMULATIONS WITH MODIFIED AGENTS FOR TESTING AUTONOMOUS VEHICLE SOFTWARE
The disclosure relates to testing software for operating an autonomous vehicle. In one instance, a simulation may be run using log data collected by a vehicle operating in an autonomous driving mode. The simulation may be run using the software to control a simulated vehicle and by modifying a characteristic of an agent identified in the log data. During the running of the simulation, that a particular type of interaction between the simulated vehicle and the modified agent will occur may be determined. In response to determining that the particular type of interaction will occur, the modified agent may be replaced by an interactive agent that simulates a road user corresponding to the modified agent and that is capable of responding to actions performed by simulated vehicles. That the particular type of interaction between the simulated vehicle and the interactive agent has occurred in the simulation may then be determined.
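The swap-on-predicted-interaction logic reads naturally as a loop over simulation ticks. Below is a minimal sketch under assumed interfaces (agents as dicts, a will_interact predicate, and a make_interactive factory); it is not the disclosed test harness.

def run_simulation(sim_vehicle, agents, will_interact, make_interactive, ticks=100):
    """Replay log agents, one of which has a modified characteristic; when a
    qualifying interaction with the simulated vehicle is predicted, replace
    the modified log-following agent with a responsive interactive agent."""
    for _ in range(ticks):
        for i, agent in enumerate(agents):
            if agent.get("modified") and will_interact(sim_vehicle, agent):
                interactive = make_interactive(agent)  # reacts to the sim vehicle
                interactive["modified"] = False        # swap happens only once
                agents[i] = interactive
        # ... advance sim_vehicle and all agents by one tick (omitted) ...
    return agents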
Methods and devices for actively modifying a field of view of an autonomous vehicle in view of constraints are disclosed. In one embodiment, an example method is disclosed that includes causing a sensor in an autonomous vehicle to sense information about an environment in a first field of view, where a portion of the environment is obscured in the first field of view. The example method further includes determining a desired field of view in which the portion of the environment is not obscured and, based on the desired field of view and a set of constraints for the vehicle, determining a second field of view in which the portion of the environment is less obscured than in the first field of view. The example method further includes modifying a position of the vehicle, thereby causing the sensor to sense information in the second field of view.
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
Example embodiments relate to beam homogenization for occlusion avoidance. One embodiment includes a light detection and ranging (LIDAR) device. The LIDAR device includes a transmitter and a receiver. The transmitter includes a light emitter. The light emitter emits light that diverges along a fast-axis and a slow-axis. The transmitter also includes a fast-axis collimation (FAC) lens optically coupled to the light emitter. The FAC lens is configured to receive light emitted by the light emitter and reduce a divergence of the received light along the fast-axis of the light emitter to provide reduced-divergence light. The transmitter further includes a transmit lens optically coupled to the FAC lens. The transmit lens is configured to receive the reduced-divergence light from the FAC lens and provide transmit light. The FAC lens is positioned relative to the light emitter such that the reduced-divergence light is expanded at the transmit lens.
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium that determine yield behavior for an autonomous vehicle, and can include identifying an agent that is in a vicinity of an autonomous vehicle navigating through a scene at a current time point. Scene features can be obtained and can include features of (i) the agent and (ii) the autonomous vehicle. An input that can include the scene features can be processed using a first machine learning model that is configured to generate (i) a crossing intent prediction that includes a crossing intent score that represents a likelihood that the agent intends to cross a roadway in a future time window after the current time, and (ii) a crossing action prediction that includes a crossing action score that represents a likelihood that the agent will cross the roadway in the future time window after the current time.
B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
Example methods and systems for detecting weather conditions using vehicle onboard sensors are provided. An example method includes receiving laser data collected for an environment of a vehicle, and the laser data includes a plurality of laser data points. The method also includes associating, by a computing device, laser data points of the plurality of laser data points with one or more objects in the environment, and determining given laser data points of the plurality of laser data points that are unassociated with the one or more objects in the environment as being representative of an untracked object. The method also includes, based on one or more untracked objects being determined, identifying, by the computing device, an indication of a weather condition of the environment.
The technology relates to providing an enhanced user experience for riders in autonomous vehicles. Two or more displays (e.g., LRR of Fig. 3D) may be arranged at different locations within a vehicle to provide notifications, alerts and control options. Information may be dynamically and automatically switched between these displays, as well as a rider's own personal communication device(s) (506 of Fig. 5). What information to present on each screen can depend on factors including how many riders are in the vehicle, their seating within the vehicle (1002 of Fig. 10), how their attention is focused (420 of Fig. 4B) and/or display location and size. Certain information may be mirrored or otherwise duplicated among multiple screens (880 of Fig. 8E) while other information can be presented asymmetrically on different screens (820 of Fig. 8B). Presented information may include a "monologue" from the vehicle explaining why a driving action is taken or not taken, alerts about important conditions (660 of Fig. 6D), buttons to control certain functionality of the vehicle (760 of Fig. 7D), or other information that may be of interest to the rider.
B60W 50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
B60W 40/08 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to drivers or passengers
B60K 35/00 - Arrangement or adaptations of instruments
B60K 37/06 - Arrangement of fittings on dashboard of controls, e.g. control knobs
26.
PLANNING SYSTEM FOR AUTONOMOUSLY NAVIGATING AROUND LANE-SHARING ROAD AGENTS
A system for estimating a spacing profile for a road agent includes a first module and a second module. The first module includes instructions that cause one or more processors to receive data related to characteristics of the road agent and road agent behavior detected in an environment of an autonomous vehicle, initiate an analysis of the road agent behavior, and estimate the spacing profile of the road agent as part of the analysis. The spacing profile includes a lateral gap preference and predicted behaviors of the road agent related to changes in lateral gap. The second module includes instructions that cause the one or more processors to determine one or more components of an autonomous vehicle maneuver based on the estimated spacing profile and send control instructions for performing the autonomous vehicle maneuver.
The technology involves pullover maneuvers for vehicles operating in an autonomous driving mode. A vehicle operating in an autonomous driving mode is able to identify parking locations that may be occluded or otherwise not readily visible. A best case estimation for any potential parking locations, including partially or fully occluded areas, is compared against predictions for identified visible parking locations. This can include calculating a cost for each potential parking location and comparing that cost to a baseline cost. Upon determining that an occluded location would provide a more viable option than any visible parking locations, the vehicle is able to initiate a driving adjustment (e.g., slow down) prior to arriving at the occluded location. This enables the vehicle to minimize passenger discomfort while also providing notice to other road users of an intent to pull over.
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for predicting how likely it is that a target agent in an environment will yield to another agent when the pair of agents are predicted to have overlapping future paths. In one aspect, a method comprises obtaining a first trajectory prediction specifying a predicted future path for a target agent in an environment; obtaining a second trajectory prediction specifying a predicted future path for another agent in the environment; determining that, at an overlapping region, the predicted future path for the target agent overlaps with the predicted future path for the other agent; and in response: providing as input to a machine learning model respective features for the target agent and the other agent; and obtaining the likelihood score as output from the machine learning model.
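A sketch of the overlap-then-score flow, assuming trajectories discretized into grid cells and a model that is any callable returning a likelihood; the actual feature set and model architecture are not specified in the abstract.

def yield_likelihood(target_path, other_path, model, features):
    """Query the learned model only when the two predicted future paths
    share an overlapping region; otherwise no yield prediction is needed."""
    if not set(target_path) & set(other_path):
        return None
    return model(features)  # likelihood that the target agent yields

# Usage with toy grid-cell paths and a stub model.
score = yield_likelihood(
    target_path=[(0, 0), (1, 0), (2, 0)],
    other_path=[(2, -1), (2, 0), (2, 1)],
    model=lambda f: 0.7,
    features={"target_speed_mps": 1.4, "other_speed_mps": 8.0},
)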
Aspects of the disclosure relate to determining perceptive range of a vehicle in real time. For instance, a static object defined in pre-stored map information may be identified. Sensor data generated by a sensor of the vehicle may be received. The sensor data may be processed to determine when the static object is first detected in an environment of the vehicle. A distance between the object and a location of the vehicle when the static object was first detected may be determined. This distance may correspond to a perceptive range of the vehicle with respect to the sensor. The vehicle may be controlled in an autonomous driving mode based on the distance.
B60R 11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
G05D 1/02 - Control of position or course in two dimensions
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
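The perceptive-range computation in the entry above reduces to "distance at first detection." A minimal sketch, assuming per-frame detection flags and 2D positions:

import math

def perceptive_range(detected_flags, vehicle_positions, static_object_pos):
    """Return the vehicle-to-object distance at the first frame in which a
    mapped static object is detected; that distance is the sensor's
    perceptive range with respect to that object."""
    for frame, detected in enumerate(detected_flags):
        if detected:
            vx, vy = vehicle_positions[frame]
            ox, oy = static_object_pos
            return math.hypot(ox - vx, oy - vy)
    return None  # the object was never detected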
Systems and methods for automated vehicle sensor calibration and verification are provided. One example method involves monitoring a vehicle using one or more external sensors of a vehicle calibration facility to obtain sensor data. The sensor data may be indicative of a relative position of the vehicle in the vehicle calibration facility. The method also involves causing the vehicle to navigate in an autonomous driving mode, based on the sensor data, from a current position of the vehicle to a first calibration position in the vehicle calibration facility. The method also involves causing a first sensor of the vehicle to perform a first calibration measurement while the vehicle is at the first calibration position. The method also involves calibrating the first sensor based on at least the first calibration measurement.
The technology relates to providing an enhanced user experience for riders in autonomous vehicles. Two or more displays may be arranged at different locations within a vehicle to provide notifications, alerts and control options. Information may be dynamically and automatically switched between these displays, as well as a rider's own personal communication device(s). What information to present on each screen can depend on factors including how many riders are in the vehicle, their seating within the vehicle, how their attention is focused and/or display location and size. Certain information may be mirrored or otherwise duplicated among multiple screens while other information can be presented asymmetrically on different screens. Presented information may include a “monologue” from the vehicle explaining why a driving action is taken or not taken, alerts about important conditions, buttons to control certain functionality of the vehicle, or other information that may be of interest to the rider.
Systems and methods described herein relate to the manufacture of optical elements and optical systems. An example method includes providing a first substrate that has a plurality of light-emitter devices disposed on a first surface. The method includes providing a second substrate that has a mounting surface defining a reference plane. The method includes forming a structure and an optical spacer on the mounting surface of the second substrate. The method additionally includes coupling the first and second substrates together such that the first surface of the first substrate faces the mounting surface of the second substrate at an angle with respect to the reference plane.
The technology relates to using on-board sensor data, off-board information and a deep learning model to classify road wetness and/or to perform a regression analysis on road wetness based on a set of input information. Such information includes on-board and/or off-board signals obtained from one or more sources including on-board perception sensors, other on-board modules, external weather measurements, external weather services, etc. The ground truth includes measurements of water film thickness and/or ice coverage on road surfaces. The ground truth, on-board and off-board signals are used to build the model. The constructed model can be deployed in autonomous vehicles for classifying/regressing road wetness with on-board and/or off-board signals as the input, without referring to the ground truth. The model can be applied in a variety of ways to enhance autonomous vehicle operation, for instance by altering current driving actions, modifying planned routes or trajectories, activating on-board cleaning systems, etc.
One example system comprises a light source configured to emit light. The system also comprises a waveguide configured to guide the emitted light from a first end of the waveguide toward a second end of the waveguide. The waveguide has an output surface between the first end and the second end. The system also comprises a plurality of mirrors including a first mirror and a second mirror. The first mirror reflects a first portion of the light toward the output surface. The second mirror reflects a second portion of the light toward the output surface. The first portion propagates out of the output surface toward a scene as a first transmitted light beam. The second portion propagates out of the output surface toward the scene as a second transmitted light beam.
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for predicting gaze and awareness using a neural network model. One of the methods includes obtaining sensor data (i) that is captured by one or more sensors of an autonomous vehicle and (ii) that characterizes an agent that is in a vicinity of the autonomous vehicle in an environment at a current time point. The sensor data is processed using a gaze prediction neural network to generate a gaze prediction that predicts a gaze of the agent at the current time point. The gaze prediction neural network includes an embedding subnetwork that is configured to process the sensor data to generate an embedding characterizing the agent, and a gaze subnetwork that is configured to process the embedding to generate the gaze prediction.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
36.
Systems and Methods for Contact Immersion Lithography
The present application relates to contact immersion lithography exposure units and methods of their use. An example contact exposure unit includes a container configured to contain a fluid material and a substrate disposed within the container. The substrate has a first surface and a second surface, and the substrate includes a photoresist material on at least the first surface. The contact exposure unit includes a photomask disposed within the container. The photomask is optically coupled to the photoresist material by way of a gap comprising the fluid material. The contact exposure unit also includes an inflatable balloon configured to be controllably inflated so as to apply a desired force to the second surface of the substrate to controllably adjust the gap between the photomask and the photoresist material.
A method for determining hard example sensor data inputs for training a task neural network is described. The task neural network is configured to receive a sensor data input and to generate a respective output for the sensor data input to perform a machine learning task. The method includes: receiving one or more sensor data inputs depicting a same scene of an environment, wherein the one or more sensor data inputs are taken during a predetermined time period; generating a plurality of predictions about a characteristic of an object of the scene; determining a level of inconsistency between the plurality of predictions; determining that the level of inconsistency exceeds a threshold level; and in response to the determining that the level of inconsistency exceeds a threshold level, determining that the one or more sensor data inputs comprise a hard example sensor data input.
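The inconsistency test lends itself to a few lines. In this sketch the predictions are scalars for one object characteristic across inputs depicting the same scene, and inconsistency is their standard deviation; the disclosure leaves the exact measure open.

def is_hard_example(predictions, threshold=0.25):
    """Flag sensor inputs of the same scene as a hard example when the
    per-input predictions about an object disagree beyond a threshold."""
    mean = sum(predictions) / len(predictions)
    std = (sum((p - mean) ** 2 for p in predictions) / len(predictions)) ** 0.5
    return std > threshold

# Five inputs of the same scene with an unstable prediction: hard example.
assert is_hard_example([0.1, 0.9, 0.2, 0.8, 0.15])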
The described aspects and implementations enable fast and accurate verification of radar detection of objects in autonomous vehicle (AV) applications using combined processing of radar data and camera images. In one implementation, disclosed is a method and a system to perform the method that includes obtaining radar data characterizing intensity of radar reflections from an environment of the AV, identifying, based on the radar data, a candidate object, obtaining a camera image depicting a region where the candidate object is located, and processing the radar data and the camera image using one or more machine-learning models to obtain a classification measure representing a likelihood that the candidate object is a real object.
G01S 7/41 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/62 - Methods or arrangements for recognition using electronic means
Aspects of the disclosure provide for automatically generating labels for sensor data. For instance, first sensor data for a vehicle may be identified. This first sensor data may have been captured by a first sensor of the vehicle at a first location during a first point in time and may be associated with a first label for an object. Second sensor data for the vehicle may be identified. The second sensor data may have been captured by a second sensor of the vehicle at a second location at a second point in time outside of the first point in time. The second location is different from the first location. A determination may be made as to whether the object is a static object. Based on the determination that the object is a static object, the first label may be used to automatically generate a second label for the second sensor data.
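A sketch of the static-object shortcut described above: because a static object cannot have moved between the two capture times, the first label's geometry can be reused for the second sensor's data once expressed in that sensor's frame. The to_second_frame transform is an assumed helper.

def propagate_label(first_label, object_is_static, to_second_frame):
    """Auto-generate a label for second sensor data from a first label,
    valid only when the labeled object is static."""
    if not object_is_static:
        return None  # a moving object may no longer be at the labeled pose
    second_label = dict(first_label)
    second_label["box"] = to_second_frame(first_label["box"])
    return second_label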
Aspects of the disclosure provide for a method of controlling an autonomous vehicle in an autonomous driving mode. For instance, a predicted future trajectory for an object detected in a driving environment of the autonomous vehicle may be received. A routing intent for a planned trajectory for the autonomous vehicle may be received. That the predicted future trajectory and the routing intent intersect with one another may be determined. When the predicted future trajectory and the routing intent are determined to intersect with one another, a time gap may be applied to a predicted future state of the object defined in the predicted future trajectory. A planned trajectory may be determined for the autonomous vehicle based on the applied time gap. The autonomous vehicle may be controlled in the autonomous driving mode based on the planned trajectory.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60W 30/095 - Predicting travel path or likelihood of collision
B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network. In one aspect, a method for training a policy neural network configured to receive a scene data input and to generate a policy output to be followed by a target agent comprises: maintaining a set of training data, the set of training data comprising (i) training scene inputs and (ii) respective target policy outputs; at each training iteration: generating additional training scene inputs; generating a respective target policy output for each additional training scene input using a trained expert policy neural network that has been trained to receive an expert scene data input comprising (i) data characterizing the current scene and (ii) data characterizing a future state of the target agent; updating the set of training data; and training the policy neural network on the updated set of training data.
The described aspects and implementations enable fast and accurate object identification in autonomous vehicle (AV) applications by combining radar data with camera images. In one implementation, disclosed is a method and a system to perform the method that includes obtaining a radar image of a first hypothetical object in an environment of the AV, obtaining a camera image of a second hypothetical object in the environment of the AV, and processing the radar image and the camera image using one or more machine-learning models (MLMs) to obtain a prediction measure representing a likelihood that the first hypothetical object and the second hypothetical object correspond to a same object in the environment of the AV.
G01S 13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
G01S 7/41 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
G01S 13/931 - Radar or analogous systems, specially adapted for specific applications for anti-collision purposes of land vehicles
44.
DISTANCE-VELOCITY DISAMBIGUATION IN HYBRID LIGHT DETECTION AND RANGING DEVICES
The subject matter of this specification can be implemented in, among other things, a system that includes a first light source to produce a pulsed beam and a second light source to produce a continuous beam, a modulator to impart a modulation to the second beam, and an optical interface subsystem to transmit the pulsed beam and the continuous beam to an outside environment and to detect a plurality of signals reflected from the outside environment. The system further includes one or more circuits configured to identify associations of various reflected pulsed signals, used to detect distance to various objects in the environment, with correct reflected continuous signals, used to detect velocities of the objects. The one or more circuits identify the associations based on the modulation of the detected continuous signals.
Example embodiments relate to pedestrian countdown signal classification to increase pedestrian behavior legibility. An example embodiment includes a method that includes obtaining, by a computing system of a vehicle, a camera image patch. The method further includes determining, by the computing system, using the camera image patch and a pedestrian countdown signal classifier model, a state of a pedestrian countdown signal. The method also includes determining, by the computing system based on the state of the pedestrian countdown signal, a prediction of whether a pedestrian will enter a crosswalk governed by the pedestrian countdown signal. And the method includes, based on the prediction, causing, by the computing system, the vehicle to perform an invitation action that invites a pedestrian to enter the crosswalk.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
B60Q 5/00 - Arrangement or adaptation of acoustic signal devices
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for planning the future trajectory of an autonomous vehicle in an environment. In one aspect, a method comprises obtaining multiple types of scene data characterizing a scene in an environment that includes an autonomous vehicle and multiple agents; receiving route data specifying an intended route for the autonomous vehicle; for each data type, processing at least the data type using a respective encoder network to generate a respective encoding of the data type; processing a sequence of the encodings using an encoder network to generate a respective alternative representation for each data type; and processing the alternative representations using a decoder network to generate a trajectory planning output that comprises respective scores for candidate trajectories that represent predicted likelihoods that the candidate trajectory is closest to resulting in the autonomous vehicle successfully navigating the intended route.
A portable sensor calibration target includes a frame assembly, a first panel, and a second panel. The frame assembly may include three legs and a plurality of frame edges that is configured to form a first frame and a second frame and is configured to be held at a pre-selected height above ground by the legs. The first panel is removably attached to the first frame in an unfolded position, and includes a plurality of boards and a plurality of hinges connecting the plurality of boards. The first panel is configured to fold at the plurality of hinges into a folded position. The second panel is removably attached to the second frame adjacent to the first frame. The first panel and the second panel meet to form an edge, which is detectable by a detection system of a vehicle for calibrating the detection system.
Aspects of the disclosure provide for controlling an autonomous vehicle using no block costs in space and time. For instance, a trajectory for the autonomous vehicle to traverse in order to follow a route to a destination may be generated. A set of no-block zones through which the trajectory traverses may be identified. A no-block zone may be a region where the autonomous vehicle should not stop but can drive through in an autonomous driving mode. For each given no-block zone of the set, a penetration cost that increases towards a center of the no-block zone and decreases towards edges of the no-block zone may be determined. Whether the autonomous vehicle should follow the trajectory may be determined based on the penetration cost. An autonomous vehicle may be controlled in the autonomous driving mode according to the trajectory based on the determination of whether the autonomous vehicle should follow the trajectory.
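The penetration cost described above (zero at the zone edges, maximal at the center) can be illustrated with a triangular profile along the trajectory's arc length; the shape and magnitude below are assumptions for the sketch.

def penetration_cost(s, zone_start, zone_end, max_cost=10.0):
    """Cost of coming to rest at arc length s: peaks at the center of the
    no-block zone and tapers to zero at its edges, so the planner prefers
    either clearing the zone entirely or stopping before it."""
    if not (zone_start <= s <= zone_end):
        return 0.0
    center = 0.5 * (zone_start + zone_end)
    half_width = 0.5 * (zone_end - zone_start)
    return max_cost * (1.0 - abs(s - center) / half_width)

# Highest cost dead-center in the zone, zero exactly at its boundary.
assert penetration_cost(10.0, 5.0, 15.0) == 10.0
assert penetration_cost(5.0, 5.0, 15.0) == 0.0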
Aspects of the disclosure relate to generating assets for a simulation in order to test autonomous vehicle control software. For instance, data identifying a vehicle object, a polygon for the vehicle object, and a polygon for an open-door object associated with the vehicle object may be received. A rectangle may be fit to the polygon for the vehicle object. A polygon for the vehicle object without the open-door object may be determined based on the rectangle. A vehicle asset representing a vehicle may be adjusted based on the polygon for the vehicle object without the open-door object. A position and rotation of the rectangle may be used to position an open-door asset representing an open door for the simulation. The open-door asset is adjusted based on the adjustment to the vehicle asset. The simulation may be run with the positioned and adjusted open-door asset in order to test the autonomous vehicle control software.
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, that determine yield behavior for an autonomous vehicle. An agent that is in a vicinity of an autonomous vehicle can be identified. An obtained crossing intent prediction characterizes a predicted likelihood that the agent intends to cross a roadway during a future time period. First features of the agent and of the autonomous vehicle are obtained. An input that includes the first features and the crossing intent prediction is processed using a machine learning model to generate an intent yielding score that represents a likelihood that the autonomous vehicle should perform a yielding behavior due to the intent of the agent to cross the roadway. From at least the intent yielding score, an intent yield behavior signal is determined and indicates whether the autonomous vehicle should perform the yielding behavior prior to reaching the first crossing region.
Examples described relate to systems, methods, and apparatus for transmitting image data. The apparatus may comprise a memory buffer configured to store data elements of an image frame and generate a signal indicating that each of the data elements of the image frame has been written to the memory buffer, a processor configured to initiate a flush operation for reading out at least one data element of the image frame from the memory buffer and to output the at least one data element at a first rate, a rate adjustment unit configured to receive the at least one data element from the processor at the first rate and to output the at least one data element at a second rate, and a multiplexer configured to receive the at least one data element from the processor at the first rate and configured to receive the at least one data element from the rate adjustment unit at the second rate. The multiplexer may select the at least one data element at the second rate for transmitting on a bus in response to receiving the signal from the memory buffer.
Example implementations are provided for an arrangement of co-aligned rotating sensors. One example device includes a light detection and ranging (LIDAR) transmitter that emits light pulses toward a scene according to a pointing direction of the device. The device also includes a LIDAR receiver that detects reflections of the emitted light pulses reflecting from the scene. The device also includes an image sensor that captures an image of the scene based on at least external light originating from one or more external light sources. The device also includes a platform that supports the LIDAR transmitter, the LIDAR receiver, and the image sensor in a particular relative arrangement. The device also includes an actuator that rotates the platform about an axis to adjust the pointing direction of the device.
An integrated hybrid rotary assembly is configured to provide power, torque and bi-directional communication to a rotatable sensor, such as a lidar, radar or optical sensor. A common ferrite core is shared by a motor, rotary transformer and radio frequency communication link. This hybrid configuration reduces cost, simplifies the manufacturing process, and can improve system reliability by employing a minimum number of parts. The assembly can be integrated with the sensor unit, which may be used in vehicles and other systems.
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an optical flow label from a lidar point cloud. One of the methods includes obtaining data specifying a training example, including a first image of a scene in an environment captured at a first time point and a second image of the scene in the environment captured at a second time point. For each of a plurality of lidar points, a respective second corresponding pixel in the second image is obtained and a respective velocity estimate for the lidar point at the second time point is obtained. A respective first corresponding pixel in the first image is determined using the velocity estimate for the lidar point. A proxy optical flow ground truth for the training example is generated based on an estimate of optical flow of the pixel between the first and second images.
G06K 9/62 - Methods or arrangements for recognition using electronic means
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G01S 17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 17/00
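A sketch of the label-generation step in the entry above, assuming a project() callable that maps a 3D point to a pixel; the actual camera model and point-cloud bookkeeping are outside the abstract.

def proxy_flow_labels(lidar_points_t2, project, dt):
    """For each lidar point observed at the second time point, back out its
    position at the first time point from its velocity estimate, project
    both positions to pixels, and use the pixel displacement as a proxy
    optical flow ground truth."""
    labels = []
    for point in lidar_points_t2:
        x2, y2, z2 = point["pos"]
        vx, vy, vz = point["vel"]
        pos_t1 = (x2 - vx * dt, y2 - vy * dt, z2 - vz * dt)
        u1, v1 = project(pos_t1)        # first corresponding pixel
        u2, v2 = project((x2, y2, z2))  # second corresponding pixel
        labels.append({"pixel": (u1, v1), "flow": (u2 - u1, v2 - v1)})
    return labels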
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating motion detection based on optical flow. One of the methods includes obtaining a first image of a scene in an environment taken by an agent at a first time point and a second image of the scene at a second later time point. A point cloud characterizing the scene in the environment is obtained. A predicted optical flow is determined between the first image and the second image. A respective initial flow prediction for the point that represents motion of the point between the two time points is determined. A respective ego motion flow estimate for the point that represents a motion of the point induced by ego motion of the agent is determined. A respective motion prediction that indicates whether the point was static or in motion between the two time points is determined.
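The static/dynamic decision in the entry above amounts to comparing the observed flow against the flow ego motion alone would induce. A sketch with 2D pixel flows and an assumed residual threshold:

def motion_prediction(observed_flow, ego_motion_flow, threshold_px=1.0):
    """Classify a point as in motion when its observed image flow deviates
    from the ego-motion-induced flow estimate at that point."""
    du = observed_flow[0] - ego_motion_flow[0]
    dv = observed_flow[1] - ego_motion_flow[1]
    residual = (du * du + dv * dv) ** 0.5
    return "dynamic" if residual > threshold_px else "static"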
Aspects of the disclosure provide for controlling an autonomous vehicle. For instance, a trajectory for the autonomous vehicle to traverse in order to follow a route to a destination may be generated. A first error value for a boundary of an object, a second error value for a location of the autonomous vehicle, a third error value for a predicted future location of the object may be received. An uncertainty value for the object may be determined by combining the first error value, the second error value, and the third error value. A lateral gap threshold for the object may be determined based on the uncertainty value. The autonomous vehicle may be controlled in an autonomous driving mode based on the lateral gap threshold for the object.
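The abstract does not say how the three error values are combined; the sketch below assumes a root-sum-square combination and a linear gap rule purely for illustration.

def lateral_gap_threshold(boundary_err_m, localization_err_m, prediction_err_m,
                          base_gap_m=0.5, scale=2.0):
    """Combine object-boundary, ego-localization, and predicted-location
    errors into a single uncertainty value and widen the required lateral
    gap as that uncertainty grows."""
    uncertainty = (boundary_err_m ** 2 +
                   localization_err_m ** 2 +
                   prediction_err_m ** 2) ** 0.5
    return base_gap_m + scale * uncertainty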
The subject matter of this specification can be implemented in, among other things, systems and methods of optical sensing that utilize optimized processing of multiple sensing channels for efficient and reliable scanning of environments. The optical sensing includes multiple optical communication lines that include coupling portions configured to facilitate efficient collection of various received beams. The optical sensing system further includes multiple light detectors configured to process collected beams and produce data representative of a velocity of an object that generated the received beam and/or a distance to that object.
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 17/00
G01S 17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
G02B 6/122 - Basic optical elements, e.g. light-guiding paths
An autonomous vehicle configured for active sensing may also be configured to weigh expected information gains from active-sensing actions against risk costs associated with the active-sensing actions. An example method involves: (a) receiving information from one or more sensors of an autonomous vehicle, (b) determining a risk-cost framework that indicates risk costs across a range of degrees to which an active-sensing action can be performed, wherein the active-sensing action comprises an action that is performable by the autonomous vehicle to potentially improve the information upon which at least one of the control processes for the autonomous vehicle is based, (c) determining an information-improvement expectation framework across the range of degrees to which the active-sensing action can be performed, and (d) applying the risk-cost framework and the information-improvement expectation framework to determine a degree to which the active-sensing action should be performed.
B60W 30/08 - Predicting or avoiding probable or impending collision
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
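The two frameworks in the entry above reduce to a search over degrees of the action for the best expected information gain net of risk. A minimal sketch, with callables standing in for both frameworks:

def choose_action_degree(degrees, info_gain, risk_cost):
    """Pick the degree of an active-sensing action (e.g., how far to creep
    forward for a better view) that maximizes expected information gain
    minus the risk cost of performing the action to that degree."""
    return max(degrees, key=lambda d: info_gain(d) - risk_cost(d))

# Creep distances in meters: diminishing info gain versus quadratic risk.
best = choose_action_degree(
    degrees=[0.0, 0.5, 1.0, 1.5, 2.0],
    info_gain=lambda d: 2.0 * d ** 0.5,
    risk_cost=lambda d: 0.8 * d ** 2,
)  # -> 0.5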
The present disclosure relates to systems, vehicles, and methods for detecting optical defects in an optical path of a camera system. An example system may include an image sensor configured to provide images of a field of view via an optical path that extends through an optical window. The system also includes at least one phase-detection device and a controller. The controller is configured to execute instructions stored in the memory so as to carry out various operations, including receiving, from the image sensor, first pixel information indicative of an image of the field of view. The operations additionally include receiving, from the at least one phase-detection device, second pixel information indicative of a portion of the field of view. The operations yet further include determining, based on the first pixel information and the second pixel information, at least one optical defect associated with the optical path.
The technology relates to approaches for determining appropriate stopping locations at intersections for vehicles operating in a self-driving mode. While many intersections have stop lines painted on the roadway, many others have no such lines. Even if a stop line is present, the physical location may not match what is in stored map data, which may be out of date due to construction or line repainting. Aspects of the technology employ a neural network that utilizes input training data and detected sensor data to perform classification, localization and uncertainty estimation processes. Based on these processes, the system is able to evaluate distribution information for possible stop locations. The vehicle uses such information to determine an optimal stop point, which may or may not correspond to a stop line in the map data. This information is also used to update the existing map data, which can be shared with other vehicles.
An in-vehicle communication network of a vehicle is monitored. An illicit signal is detected on the in-vehicle communication network. Whether the illicit signal satisfies a threshold severity condition is determined. A denial of service (DoS) operation with respect to at least part of the in-vehicle communication network is performed responsive to determining that the illicit signal satisfies the threshold severity condition.
An illicit signal is detected on an in-vehicle communication network of an autonomous vehicle. A severity level corresponding to the illicit signal is identified, among multiple severity levels, based on one or more characteristics associated with the illicit signal. The severity level is indicative of a level of adverse impact on safety related to an autonomous vehicle environment. The adverse impact is to be caused by the autonomous vehicle when the autonomous vehicle is compromised by the illicit signal. A security operation is selected from multiple security operations based on the identified severity level. The security operation is performed to mitigate the adverse impact on safety related to the autonomous vehicle environment.
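A minimal sketch of the severity-identification and operation-selection steps described above follows. The severity levels, signal characteristics, and security operations are hypothetical stand-ins, not the patented set.

# Hypothetical mapping from identified severity level to a security operation.
SECURITY_OPERATIONS = {
    "low": "log_and_monitor",
    "medium": "isolate_subnetwork",
    "high": "denial_of_service_on_segment",
    "critical": "safe_stop_and_shutdown",
}

def classify_severity(signal):
    """Derive a severity level from assumed characteristics of the illicit signal."""
    if signal["targets_actuation"]:
        return "critical" if signal["repeated"] else "high"
    return "medium" if signal["spoofed_source"] else "low"

signal = {"targets_actuation": True, "repeated": False, "spoofed_source": True}
level = classify_severity(signal)
print(level, "->", SECURITY_OPERATIONS[level])  # high -> denial_of_service_on_segment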
Aspects of the disclosure provide for automatically generating labels for sensor data. For instance, first sensor data for a first vehicle is identified. The first sensor data is captured during a first timeframe and defined in both a global coordinate system and a local coordinate system for the first vehicle. A second vehicle is identified based on a second location of the second vehicle within a threshold distance of the first vehicle within the first timeframe. The second vehicle is associated with second sensor data that is further associated with a label identifying a location of an object, and the location of the object is defined in a local coordinate system of the second vehicle. A conversion from the local coordinate system of the second vehicle to the local coordinate system of the first vehicle may be determined and used to transfer the label from the second sensor data to the first sensor data.
G06K 9/62 - Methods or arrangements for recognition using electronic means
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
H04W 4/021 - Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
G01C 21/36 - Input/output arrangements for on-board computers
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
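The coordinate conversion in the abstract above amounts to composing rigid transforms through the shared global frame. The following Python sketch shows the idea in 2-D with invented poses; the pose values and the planar simplification are assumptions for illustration.

import numpy as np

def pose_to_matrix(x, y, theta):
    """Homogeneous 2-D transform from a vehicle's local frame to the global frame."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Assumed global poses of the two vehicles during the shared timeframe.
first_local_to_global = pose_to_matrix(10.0, 5.0, np.pi / 2)
second_local_to_global = pose_to_matrix(12.0, 8.0, 0.0)

# Conversion from the second vehicle's local frame to the first vehicle's:
# local(2) -> global -> local(1).
second_to_first = np.linalg.inv(first_local_to_global) @ second_local_to_global

# Transfer a label location defined in the second vehicle's local frame.
label_in_second = np.array([3.0, -1.0, 1.0])  # homogeneous coordinates
label_in_first = second_to_first @ label_in_second
print(label_in_first[:2])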
This technology relates to a system for cooling sensor components. The cooling system may include a sensor which has a sensor housing, a motor, a main vent, and a side vent. Internal sensor components may be positioned within the sensor housing. The motor may be configured to rotate the sensor housing around an axis. The rotation of the sensor housing may pull air into an interior portion of the sensor housing through the main vent, and the air pulled into the interior portion of the sensor housing may be exhausted out of the interior portion of the sensor housing through the side vent.
An apparatus includes an optical emitter configured to emit a first optical signal along an optical path towards a target object in an outdoor environment. The apparatus includes an optical detector positioned collinearly with respect to the optical emitter. The optical detector is configured to detect a second optical signal that is retro-reflected from the target object. The apparatus includes a lock-in amplifier coupled to the optical detector. The lock-in amplifier is configured to generate, based on the first optical signal and the second optical signal, a signal indicative of a retro-reflectivity of the target object in the outdoor environment.
The present disclosure relates to optical systems, vehicles, and methods that are configured to illuminate and image a wide field of view of an environment. An example optical system includes a camera having an optical axis and an outer lens element disposed along the optical axis. The optical system also includes a plurality of illumination modules, each of which includes at least one light-emitter device configured to emit light along a respective emission axis and a secondary optical element optically coupled to the at least one light-emitter device. The secondary optical element is configured to provide a light emission pattern having an azimuthal angle extent of at least 170 degrees so as to illuminate a portion of an environment of the optical system.
The disclosure relates to collecting feedback from passengers of autonomous vehicles. For instance, it may be determined that a triggering circumstance for triggering a feedback request has been met. The triggering circumstance may include a driving event, a presence of other road users, or a trip state. A display requirement and data collection parameters for the feedback request are identified based on the determination. The display requirement defines when the feedback request is displayed and the data collection parameters identify information that the feedback request is to collect. The feedback request is provided for display based on the display requirement and data collection parameters. In response, feedback from a passenger of the autonomous vehicle is received and stored for later use.
The subject matter of this specification can be implemented in, among other things, systems and methods that enable lidar devices capable of detecting and processing multiple optical modes present in a beam reflected from a target object. Different received optical modes can be spatially separated, and electronic signals can be generated that are representative of coherence information contained in the various optical modes. Multiple generated electronic signals can be amplified, phase-shifted, mixed, etc., to identify signals, individually or in combination, that can be used for identification of the range and velocity of the target object with the highest accuracy.
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups G01S 13/00, G01S 15/00 or G01S 17/00, of systems according to group G01S 17/00
Aspects of the disclosure provide for generating distributions for hypothetical or potentially occluded objects. For instance, a location for which to generate one or more distributions may be identified. Observations of road users by perception systems of a plurality of autonomous vehicles may be accessed. Each of these observations may identify a characteristic of one of the road users. A distribution of the characteristic for the location may be determined based on the observations. The distribution may be provided to one or more autonomous vehicles in order to enable the one or more autonomous vehicles to use the distribution to generate a characteristic for a hypothetical occluded road user and to respond to the hypothetical occluded road user.
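A small Python sketch of the aggregation step described above follows: fleet observations of a characteristic (here, speed) are grouped per location and summarized into a distribution a vehicle could sample for a hypothetical occluded road user. The data, the characteristic, and the normal-fit choice are assumptions for illustration.

import statistics
from collections import defaultdict

# (location_id, observed road-user speed in m/s) — invented observations.
observations = [("school_zone_12", 3.1), ("school_zone_12", 2.7),
                ("school_zone_12", 3.4), ("highway_on_ramp_7", 18.2)]

speeds_by_location = defaultdict(list)
for location, speed in observations:
    speeds_by_location[location].append(speed)

# Summarize each location's observations as a (mean, stdev) distribution.
distributions = {loc: (statistics.mean(v), statistics.pstdev(v))
                 for loc, v in speeds_by_location.items()}
mean, stdev = distributions["school_zone_12"]
print(f"hypothetical occluded pedestrian speed ~ N({mean:.2f}, {stdev:.2f}^2)")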
Example embodiments relate to radar reflection filtering using a vehicle sensor system. A computing device may detect a first object in radar data from a radar unit coupled to a vehicle and, responsive to determining that information corresponding to the first object is unavailable from other vehicle sensors, use the radar data to determine a position and a velocity for the first object relative to the radar unit. The computing device may also detect a second object aligned with a vector extending between the radar unit and the first object. Based on a geometric relationship between the vehicle, the first object, and the second object, the computing device may determine that the first object is a self-reflection of the vehicle caused at least in part by the second object and control the vehicle based on this determination.
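The geometric relationship described above can be approximated with a simple bearing-and-range test: a ghost detection from a self-reflection lies along the same bearing as the reflecting object and at roughly twice its range. The following Python sketch is a heuristic illustration; the tolerances and the doubled-range rule are assumptions, not the patented criteria.

import numpy as np

def is_self_reflection(first_pos, second_pos, tol_angle=0.05, tol_range=0.5):
    """Heuristic: the 'first' detection is a self-reflection if a second object
    lies along the same bearing from the radar unit and the first detection's
    range is roughly twice the second object's range. Positions are relative
    to the radar unit; thresholds are assumed."""
    u1 = first_pos / np.linalg.norm(first_pos)
    u2 = second_pos / np.linalg.norm(second_pos)
    aligned = np.arccos(np.clip(u1 @ u2, -1.0, 1.0)) < tol_angle
    doubled = abs(np.linalg.norm(first_pos) - 2.0 * np.linalg.norm(second_pos)) < tol_range
    return aligned and doubled

print(is_self_reflection(np.array([20.0, 0.1]), np.array([10.0, 0.0])))  # True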
Aspects of the disclosure relate to training a labeling model to automatically generate labels for objects detected in a vehicle's environment. In this regard, one or more computing devices may receive sensor data corresponding to a series of frames perceived by the vehicle, each frame being captured at a different time point during a trip of the vehicle. The computing devices may also receive bounding boxes generated by a first labeling model for objects detected in the series of frames. The computing devices may receive user inputs including an adjustment to at least one of the bounding boxes, where the adjustment corrects a displacement of the at least one of the bounding boxes caused by a sensing inaccuracy. The computing devices may train a second labeling model using the sensor data, the bounding boxes, and the adjustment to increase accuracy of the second labeling model when automatically generating bounding boxes.
G01S 17/58 - Velocity or trajectory determination systems; Sense-of-movement determination systems
G01S 17/04 - Systems determining the presence of a target
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
75.
Unique signaling for vehicles to preserve user privacy
Aspects of the present disclosure relate to protecting the privacy of a user of a dispatching service for driverless vehicles. For example, a request for a vehicle that identifies user information is received. A client computing device may be identified based on the user information. In response to the request, a driverless vehicle may be dispatched to the location of the client computing device. Signaling information may be generated based on a set of rules including a first rule that the signaling information does not identify, indirectly or directly, the user as well as a second rule that the signaling information does not identify, indirectly or directly, the user information. The location of the client computing device and the signaling information may be sent to the driverless vehicle for display. In addition, the signaling information may also be sent to the client computing device for display.
B60Q 1/50 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic for indicating other intentions or conditions, e.g. request for waiting or overtaking
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G08G 1/00 - Traffic control systems for road vehicles
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining the likelihood that a particular event would occur during a navigation interaction using simulations generated by sampling from agent data. In one aspect, a method comprises: identifying an instance of a navigation interaction that includes an autonomous vehicle and agents navigating in an environment; generating multiple simulated interactions corresponding to the instance, comprising, for each simulated interaction: identifying one or more agents; for each identified agent and for each property that characterizes behavior of the identified agent, obtaining a probability distribution for the property; sampling a respective value from each of the probability distributions; and simulating the navigation interaction in accordance with the sampled values; and determining a likelihood that the particular event would occur during the navigation interaction based on whether the particular event occurred during each of the simulated interactions.
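The sampling loop described above is a Monte Carlo estimate over agent-property distributions. The following Python sketch shows the shape of that loop with a stub simulator; the agent, its properties, the distributions, and simulate() are all hypothetical stand-ins.

import random

# Assumed property distributions for one agent in the interaction.
agent_property_distributions = {
    "cyclist": {"speed_mps": lambda: random.gauss(5.0, 1.0),
                "reaction_time_s": lambda: random.uniform(0.5, 1.5)},
}

def simulate(sampled_properties):
    """Stand-in for the full interaction simulation; returns True if the
    particular event (e.g., a hard-braking event) occurred."""
    return sampled_properties["cyclist"]["speed_mps"] > 6.5

num_simulations = 10_000
event_count = 0
for _ in range(num_simulations):
    # Sample a value from each property distribution, then simulate.
    sampled = {agent: {prop: draw() for prop, draw in props.items()}
               for agent, props in agent_property_distributions.items()}
    event_count += simulate(sampled)

print(f"estimated event likelihood: {event_count / num_simulations:.3f}")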
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing an accelerated convolution on sparse inputs. In one aspect, a method comprises receiving sensor data input comprising input features for input spatial locations; and processing the sensor data input using a convolutional neural network having a first convolutional layer with a filter having multiple filter spatial locations to generate a network output comprising output features for output spatial locations, wherein processing the sensor data input comprises: obtaining a rule book tensor that identifies for each filter spatial location (i) a subset of the input features, and (ii) for each input feature in the subset, a respective output feature; for each particular filter spatial location: generating input tile, filter tile, and output tile sets in accordance with the rule book tensor; and generating the output features in the output tile set based on the tile sets.
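The rule-book pattern above is the classic gather-matmul-scatter formulation of sparse convolution. The Python sketch below illustrates it with invented shapes and a toy rule book; the dictionary layout of the rule book is an assumption for readability, not the tensor format of the disclosure.

import numpy as np

num_in, num_out, c_in, c_out, k = 6, 4, 8, 16, 9  # k = filter spatial locations

input_features = np.random.randn(num_in, c_in)
filters = np.random.randn(k, c_in, c_out)          # one weight matrix per location
output_features = np.zeros((num_out, c_out))

# rule_book[f] lists (input_index, output_index) pairs: which input feature
# contributes through filter spatial location f, and to which output feature.
rule_book = {0: [(0, 0), (2, 1)], 4: [(1, 0), (3, 2), (5, 3)]}

for f, pairs in rule_book.items():
    in_idx = np.array([i for i, _ in pairs])
    out_idx = np.array([o for _, o in pairs])
    in_tile = input_features[in_idx]               # gather the input tile
    out_tile = in_tile @ filters[f]                # apply the filter tile
    np.add.at(output_features, out_idx, out_tile)  # scatter-accumulate the output tile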
Aspects of the disclosure relate to an autonomous vehicle that may detect other nearby vehicles and identify them as parked or unparked. This identification may be based on visual indicia displayed by the detected vehicles as well as traffic control factors relating to the detected vehicles. Detected vehicles that are in a known parking spot may automatically be identified as parked. In addition, detected vehicles that satisfy conditions that are indications of being parked may also be identified as parked. The autonomous vehicle may then base its control strategy on whether or not a vehicle has been identified as parked.
G08G 1/015 - Detecting movement of traffic to be counted or controlled with provision for distinguishing between motor cars and cycles
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G08G 1/052 - Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
Aspects of the disclosure relate to classifying the status of objects. For example, one or more computing devices detect an object from an image of a vehicle's environment. The object is associated with a location. The one or more computing devices receive data corresponding to the surfaces of objects in the vehicle's environment and identify data within a region around the location of the object. The one or more computing devices also determine whether the data within the region corresponds to a planar surface extending away from an edge of the object. Based on this determination, the one or more computing devices classify the status of the object.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06K 9/62 - Methods or arrangements for recognition using electronic means
G05D 1/02 - Control of position or course in two dimensions
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
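The planarity determination in the abstract above can be illustrated with a standard least-squares plane fit. The Python sketch below fits a plane to surface points via SVD and thresholds the residual; the residual threshold and the synthetic points are assumptions, not the disclosed test.

import numpy as np

def region_is_planar(points, max_rms=0.05):
    """Fit a plane to 3-D surface points and report whether the residual is
    small enough to call the region planar; the threshold is assumed."""
    centered = points - points.mean(axis=0)
    _, _, vh = np.linalg.svd(centered, full_matrices=False)
    normal = vh[-1]                                 # direction of least variance
    rms = np.sqrt(np.mean((centered @ normal) ** 2))
    return rms < max_rms, normal

# Example: noisy samples of the plane z = 0.
pts = np.random.randn(200, 3) * [1.0, 1.0, 0.01]
planar, normal = region_is_planar(pts)
print(planar, normal)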
80.
MASS DISTRIBUTION-INFORMED OPTIMIZATION FOR AUTONOMOUS DRIVING SYSTEMS
A method includes identifying sensor data associated with corresponding distal ends of one or more axles of an autonomous vehicle (AV). The method further includes determining, based on the sensor data, mass distribution data of the AV. The mass distribution data is associated with a first load proximate a first distal end of a first axle of the AV and a second load proximate a second distal end of the first axle of the AV. The method further includes causing, based on the mass distribution data, performance of a corrective action associated with the AV.
The present disclosure relates to contaminant detection systems and related optical systems and methods. An example contaminant detection system includes an optical coupler configured to couple light into and/or out of an optical element. The contaminant detection system also includes a plurality of light-emitter devices configured to emit emission light toward the optical coupler. The contaminant detection system additionally includes a plurality of detector devices configured to detect at least a portion of the emission light by way of the optical element and the optical coupler. The plurality of detector devices is also configured to provide detector signals indicative of a presence of a contaminant on the optical element.
Aspects of the technology employ sensors having high speed rotating mirror assemblies. For instance, the sensors may be Lidar sensors configured to detect people and other objects in an area of interest. A given mirror assembly may have a triangular or other geometric cross-sectional shape. The reflective faces of the mirror assembly may connect along edges or corners. In order to minimize wind drag and torque issues, the corners are rounded, filleted, beveled, chamfered or otherwise truncated. Such truncation may extend the length of the mirror side. The mirror assembly may employ one or more beam stops, light baffles and/or acoustic/aerodynamic baffles. These sensors may be employed with self-driving or manually driven vehicles or other equipment. The sensors may also be used in and around buildings.
A vehicle configured to operate in an autonomous mode can obtain sensor data from one or more sensors observing one or more aspects of an environment of the vehicle. At least one aspect of the environment of the vehicle that is not observed by the one or more sensors could be inferred based on the sensor data. The vehicle could be controlled in the autonomous mode based on the at least one inferred aspect of the environment of the vehicle.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
B60W 40/00 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit
B60W 30/09 - Taking automatic action to avoid collision, e.g. braking and steering
G01S 13/88 - Radar or analogous systems, specially adapted for specific applications
G01S 15/88 - Sonar systems specially adapted for specific applications
G01S 17/88 - Lidar systems, specially adapted for specific applications
84.
Systems and Methods for Retroreflector Mitigation Using Lidar
The present disclosure relates to light detection and ranging (lidar) systems, lidar-equipped vehicles, and associated methods. An example method includes causing a firing circuit to trigger emission of an initial group of detection pulses from at least one light-emitter device of a lidar system in accordance with an initial set of one or more light-emission parameters. The method also includes causing the firing circuit to trigger emission of one or more test pulses and receiving, from at least one detector, information indicative of one or more return test pulses. The method yet further includes determining a presence of a retroreflector based on an intensity of the one or more return test pulses indicated by the received information. The method additionally includes determining a subsequent set of light-emission parameters and causing the firing circuit to trigger emission of a subsequent group of detection pulses in accordance with the subsequent set of light-emission parameters.
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups G01S 13/00, G01S 15/00 or G01S 17/00, of systems according to group G01S 17/00
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
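A minimal sketch of the parameter-update step described in the abstract above follows. The intensity threshold and the power-scaling rule are assumptions for illustration, not the patented values.

RETRO_INTENSITY_THRESHOLD = 0.9  # normalized detector intensity, assumed

def next_emission_parameters(current_params, return_test_pulse_intensity):
    """Choose the subsequent set of light-emission parameters based on
    whether the return test pulse indicates a retroreflector."""
    if return_test_pulse_intensity >= RETRO_INTENSITY_THRESHOLD:
        # Retroreflector present: lower per-pulse power to avoid saturating
        # the detector on subsequent detection pulses.
        return {**current_params, "power": current_params["power"] * 0.25}
    return current_params

params = {"power": 1.0, "pulse_width_ns": 4}
print(next_emission_parameters(params, 0.95))  # power reduced to 0.25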
85.
HANDLING MANEUVER LIMITS FOR AUTONOMOUS DRIVING SYSTEMS
A method includes identifying mass distribution data of an autonomous vehicle (AV). The mass distribution data is associated with a first load proximate a first distal end of a first axle of the AV and a second load proximate a second distal end of the first axle of the AV. The method further includes determining, based on the mass distribution data, one or more handling maneuver limits for the AV. The method further includes causing the AV to travel a route based on the one or more handling maneuver limits.
The disclosure relates to detecting performance regressions in software used to control autonomous vehicles. For instance, a simulation may be run using a first version of the software. While the simulation is running, CPU and memory usage by one or more functions of the first version of the software may be sampled. The sampled CPU and memory usage may be compared to CPU or memory usage by each of the one or more functions in a plurality of simulations each running a corresponding second version of the software. Based on the comparisons, an anomaly corresponding to a performance regression in the first version of the software relating to one of the one or more functions may be identified. In response to detecting the anomaly, the first version of the software and the one of the one or more functions may be flagged for review.
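One simple way to realize the comparison step above is an outlier test of the new sample against the baseline runs. The following Python sketch flags a function whose usage sits far above the baseline distribution; the z-score rule, threshold, and sample values are assumptions for illustration.

import statistics

def is_regression(sample, baseline_samples, z_threshold=3.0):
    """Flag a function's CPU or memory usage as anomalous when it sits more
    than z_threshold standard deviations above the baseline runs."""
    mean = statistics.mean(baseline_samples)
    stdev = statistics.stdev(baseline_samples) or 1e-9  # guard zero spread
    return (sample - mean) / stdev > z_threshold

baseline_cpu_ms = [102, 98, 105, 101, 99]   # runs of the second software version
print(is_regression(147, baseline_cpu_ms))  # True: likely performance regression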
Aspects of the disclosure provide a method of providing a destination to an autonomous vehicle in order to enable the autonomous vehicle to collect data according to a targeted driving goal. For instance, a current location of an autonomous vehicle may be received. A set of destinations may be selected from a plurality of predetermined destinations. A route may be determined for each destination. A relevance score may be determined for each destination based on the determined routes and the targeted driving goal. Each destination may be assigned to one of a set of two or more buckets based on the relevance scores. A destination of the set may be selected based on a predetermined sampling probability. The selected destination is sent to the autonomous vehicle in order to cause the autonomous vehicle to travel to the selected destination in an autonomous driving mode.
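The bucketing and sampling steps above can be illustrated in a few lines of Python. The relevance scores, bucket boundaries, and sampling probabilities below are invented for illustration; only the overall score-bucket-sample structure follows the abstract.

import random

destinations = {"depot_a": 0.91, "mall_b": 0.55, "airport_c": 0.12}  # assumed relevance scores

def bucket_for(score):
    # Assumed bucket boundaries.
    return "high" if score >= 0.7 else "medium" if score >= 0.3 else "low"

buckets = {"high": [], "medium": [], "low": []}
for dest, score in destinations.items():
    buckets[bucket_for(score)].append(dest)

# Predetermined sampling probabilities per bucket (assumed; must sum to 1).
bucket_probs = {"high": 0.7, "medium": 0.2, "low": 0.1}
chosen_bucket = random.choices(list(bucket_probs),
                               weights=list(bucket_probs.values()))[0]
fallback = [max(destinations, key=destinations.get)]  # if the bucket is empty
selected = random.choice(buckets[chosen_bucket] or fallback)
print(selected)  # destination sent to the autonomous vehicle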
Example embodiments relate to techniques for detecting and mitigating automotive radar interference. Electromagnetic signals propagating in the environment can be received by a radar unit that limits reception to a particular angle of arrival, using reception antennas that limit the received signals to a particular polarization. Filters can be applied to the signals to remove portions that are outside an expected time range and an expected frequency range that depend on radar signal transmission parameters used by the radar unit. In addition, a model representing an expected electromagnetic signal digital representation can be used to remove portions of the signals that are indicative of spikes and plateaus associated with signal interference. A computing device can then generate an environment representation that indicates positions of surfaces relative to the vehicle using the remaining portions of the signals.
G01S 13/931 - Radar or analogous systems, specially adapted for specific applications for anti-collision purposes of land vehicles
G01S 13/89 - Radar or analogous systems, specially adapted for specific applications for mapping or imaging
G01S 7/02 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups G01S 13/00, G01S 15/00 or G01S 17/00, of systems according to group G01S 13/00
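The time- and frequency-gating step described in the abstract above can be sketched as follows in Python with NumPy. The sample rate, gate windows, and the stand-in received signal are all assumptions; this illustrates only the gating idea, not the disclosed filter design.

import numpy as np

fs = 1_000_000  # sample rate in Hz (assumed)
rx = np.random.randn(4096)  # stand-in received baseband signal

expected_time_gate = (1000, 3000)       # sample indices (assumed)
expected_freq_gate = (50_000, 150_000)  # Hz (assumed)

# Zero out samples outside the expected time window.
gated = np.zeros_like(rx)
gated[expected_time_gate[0]:expected_time_gate[1]] = \
    rx[expected_time_gate[0]:expected_time_gate[1]]

# Zero out spectral content outside the expected frequency window.
spectrum = np.fft.rfft(gated)
freqs = np.fft.rfftfreq(gated.size, d=1 / fs)
spectrum[(freqs < expected_freq_gate[0]) | (freqs > expected_freq_gate[1])] = 0.0
filtered = np.fft.irfft(spectrum, n=gated.size)  # remaining signal portions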
The present disclosure relates to optical devices and systems, specifically those related to light detection and ranging (LIDAR) systems. An example device includes a shaft defining a rotational axis. The shaft includes a first material having a first coefficient of thermal expansion. The device also includes a rotatable mirror disposed about the shaft. The rotatable mirror includes a multi-sided structure having an exterior surface and an interior surface. The multi-sided structure includes a second material having a second coefficient of thermal expansion. The second coefficient of thermal expansion is different from the first coefficient of thermal expansion. The multi-sided structure also includes a plurality of reflective surfaces disposed on the exterior surface of the multi-sided structure. The multi-sided structure yet further includes one or more support members coupled to the interior surface and the shaft.
A system includes multiple microphone arrays positioned at different locations on a roof of an autonomous vehicle. Each microphone array includes two or more microphones. Internal clocks of each microphone array are synchronized by a processor and used to generate timestamps indicating when microphones capture a sound. Based on the timestamps, the processor is configured to localize a source of the sound.
H04R 1/40 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
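Localizing the source from synchronized timestamps is a time-difference-of-arrival (TDOA) problem. The Python sketch below recovers an assumed source position from the capture timestamps with a coarse grid search standing in for a proper least-squares solver; the array layout and source position are invented for illustration.

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

# Assumed positions of four microphone arrays on the roof (meters).
mics = np.array([[1.0, 0.5, 1.6], [1.0, -0.5, 1.6],
                 [-1.0, 0.5, 1.6], [-1.0, -0.5, 1.6]])

true_source = np.array([15.0, 8.0, 1.0])
# Synchronized timestamps at which each array captured the sound.
timestamps = np.linalg.norm(mics - true_source, axis=1) / SPEED_OF_SOUND

def tdoa_residuals(source):
    """Time-difference-of-arrival residuals relative to the first array."""
    times = np.linalg.norm(mics - source, axis=1) / SPEED_OF_SOUND
    return (times - times[0]) - (timestamps - timestamps[0])

# Coarse grid search over candidate source positions at an assumed height.
grid = [(x, y, 1.0) for x in np.arange(-20, 21, 0.5)
                    for y in np.arange(-20, 21, 0.5)]
best = min(grid, key=lambda p: np.sum(tdoa_residuals(np.array(p)) ** 2))
print(best)  # (15.0, 8.0, 1.0): the localized source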
Aspects of the disclosure may enable autonomous vehicles to respond to emergency vehicles. For instance, sensor data identifying an emergency vehicle approaching the autonomous vehicle may be received. A predicted trajectory for the emergency vehicle may be received. Whether the autonomous vehicle is impeding the emergency vehicle may be determined based on the predicted trajectory and map information identifying a road on which the autonomous vehicle is currently traveling. Based on a determination that the autonomous vehicle is impeding the emergency vehicle, the autonomous vehicle may be controlled in an autonomous driving mode in order to respond to the emergency vehicle.
The technology relates to a system for clearing a sensor cover. The system may include a wiper, comprising a wiper support and a wiper blade, and a sensor cover. The wiper blade may be configured to clear the sensor cover of debris, and the sensor cover may be configured to house one or more sensors. A wiper motor may rotate the wiper and a sensor motor may rotate the sensor cover. The wiper blade may comprise a first edge attached to the wiper support and a second edge configured to be in contact with the sensor cover. The wiper blade may extend in a corkscrew shape around the wiper support. The wiper motor may be configured to rotate the wiper in a first direction and the sensor motor may be configured to rotate the sensor cover in a second direction opposite the first direction.
Rotatable mirror assemblies and light detection and ranging systems containing rotatable mirror assemblies are described herein. An example rotatable mirror assembly may include (1) a housing having a top end, a bottom end, and a longitudinal axis intersecting the top and bottom ends, and (2) a set of reflective surfaces, where each reflective surface in the set is coupled to the top end of the housing and the bottom end of the housing such that each reflective surface possesses limited freedom of movement with respect to the housing.
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating lane path descriptors for use by autonomous vehicles. One of the methods includes receiving data that defines valid lane paths in a scene in an environment. Each valid lane path represents a path through the scene that can be traversed by a vehicle. User interface presentation data can be provided to a user device. The user interface can contain: (i) a first display area that displays a first visual representation of a sensor measurement of the scene; and (ii) a second display area that displays a second visual representation of the set of valid lane paths. User input modifying the second visual representation of the set of valid lane paths can be received; and in response to receiving the user input, the set of valid lane paths of the scene in the environment can be modified.
A system includes a memory device, and a processing device, operatively coupled to the memory device, to receive a set of input data including a roadgraph. The roadgraph includes an autonomous vehicle driving path. The processing device is further to determine that the autonomous vehicle driving path is affected by one or more obstacles, identify a set of candidate paths that avoid the one or more obstacles, each candidate path of the set of candidate paths being associated with a cost value, select, from the set of candidate paths, a candidate path with an optimal cost value to obtain a selected candidate path, generate a synthetic scene based on the selected candidate path, and train a machine learning model to navigate an autonomous vehicle based on the synthetic scene.
A system includes a memory device, and a processing device, operatively coupled to the memory device, to receive a set of input data including a roadgraph, the roadgraph including an autonomous vehicle driving path, modify the roadgraph to obtain a modified roadgraph by adjusting a trajectory of the autonomous vehicle driving path, place a set of artifacts along one or more lane boundaries of the modified roadgraph to generate a synthetic scene, and train a machine learning model used to navigate an autonomous vehicle based on the synthetic scene.
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating roadway crossing intent labels for training a machine learning model to perform roadway crossing intent predictions. One of the methods includes obtaining data identifying a training input, the training input including data characterizing an agent in an environment as of a given time, wherein the agent is located in a vicinity of a roadway in the environment at the given time. Future data characterizing (i) the agent, (ii) the environment or (iii) both over a future time period that is after the given time is obtained. From the future data, an intent label that indicates a likelihood that the agent intended to cross the roadway at the given time is determined. The training input is associated with the intent label in training data for training the machine learning model.
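The label-derivation step above can be sketched as a simple rule over the future data: the agent is labeled as intending to cross if it enters the roadway within an assumed horizon after the given time. The horizon, the binary labeling rule, and the state fields in this Python sketch are assumptions, not the disclosed procedure.

from dataclasses import dataclass

@dataclass
class AgentState:
    time: float
    on_roadway: bool  # whether the agent occupies the roadway at this time

def crossing_intent_label(future_states, given_time, horizon_s=10.0):
    """Derive an intent label from future data: 1.0 if the agent enters the
    roadway within the horizon after the given time, else 0.0 (assumed rule)."""
    window = [s for s in future_states
              if given_time < s.time <= given_time + horizon_s]
    return 1.0 if any(s.on_roadway for s in window) else 0.0

states = [AgentState(2.0, False), AgentState(5.5, True)]
print(crossing_intent_label(states, given_time=1.0))  # 1.0: label as intending to cross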
Aspects of the disclosure provide for the evaluation of scheduling system software for managing autonomous vehicle scheduling and dispatching. For instance, a problem condition for a simulation may be identified. The simulation may be run using the identified problem condition. The simulation may include a plurality of simulated autonomous vehicles each utilizing its own autonomous vehicle control software and map information common to each simulated autonomous vehicle. The problem condition may relate to a particular simulated autonomous vehicle of the plurality. Output of the simulation may be analyzed to determine a score for the scheduling system software. The scheduling system software may be evaluated using the score.
Aspects and implementations of the present disclosure relate to modeling of positional uncertainty of moving objects using precomputed polygons, for example, for the purposes of computing autonomous vehicle (AV) trajectories. An example method includes: receiving, by a data processing system of an AV, data descriptive of an agent state of an object; generating a polygon representative of the agent state; identifying extreme vertices of the polygon along a longitudinal axis parallel to a heading direction of the object or along a lateral axis orthogonal to the heading direction; and applying, based on the extreme vertices, at least one expansion transformation to the polygon along the longitudinal axis or the lateral axis to generate a precomputed polygon.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60W 30/095 - Predicting travel path or likelihood of collision
B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
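A simplified Python sketch of the expansion transformation described in the abstract above follows: vertices are pushed outward along the longitudinal (heading) axis and the lateral axis according to which extreme they sit nearest. The sign-based expansion rule, margins, and square example are simplifications invented for illustration.

import numpy as np

def expand_polygon(vertices, heading, longitudinal_margin, lateral_margin):
    """Expand a polygon along the heading-aligned (longitudinal) axis and its
    orthogonal (lateral) axis to bound positional uncertainty (assumed rule)."""
    lon = np.array([np.cos(heading), np.sin(heading)])
    lat = np.array([-lon[1], lon[0]])
    center = vertices.mean(axis=0)
    out = []
    for v in vertices:
        d = v - center
        # Push each vertex outward along both axes, away from the center.
        out.append(v + np.sign(d @ lon) * longitudinal_margin * lon
                     + np.sign(d @ lat) * lateral_margin * lat)
    return np.array(out)

square = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])
print(expand_polygon(square, heading=0.0,
                     longitudinal_margin=0.5, lateral_margin=0.2))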
A neural network system for identifying positions of objects in an input image can include an object detector neural network, a memory interface subsystem, and an external memory. The object detector neural network is configured to, at each time step of multiple successive time steps, (i) receive a first neural network input that represents the input image and a second neural network input that identifies a first set of positions of the input image that have each been classified as showing a respective object of the set of objects, and (ii) process the first and second inputs to generate a set of output scores that each represents a respective likelihood that an object that is not one of the objects shown at any of the positions in the first set of positions is shown at a respective position of the input image that corresponds to the output score.
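The time-stepped decoding loop described above can be sketched independently of the network itself: at each step the scorer sees the image plus the positions already found, and the loop stops when no new position clears a threshold. In the Python sketch below, score_fn is a toy stand-in for the trained object detector neural network, and the threshold is assumed.

import numpy as np

def detect_all(score_fn, image, max_objects=10, threshold=0.5):
    """Iterative detection loop: at each time step, feed the image plus the
    set of already-found positions; take the best-scoring new position until
    no score clears the threshold."""
    found = []  # plays the role of the external memory of found positions
    for _ in range(max_objects):
        scores = score_fn(image, found)
        pos = int(np.argmax(scores))
        if scores[pos] < threshold:
            break
        found.append(pos)
    return found

def toy_score_fn(image, found):
    scores = image.flatten().copy()
    scores[found] = 0.0  # positions already in memory cannot be re-detected
    return scores

img = np.array([[0.9, 0.1], [0.2, 0.8]])
print(detect_all(toy_score_fn, img))  # [0, 3]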