Example embodiments relate to selective deactivation of light emitters for interference mitigation in light detection and ranging (lidar) devices. An example method includes deactivating one or more light emitters within a lidar device during a firing cycle. The method also includes identifying whether interference is influencing measurements made by the lidar device. Identifying whether interference is influencing measurements made by the lidar device includes determining, for each light detector of the lidar device that is associated with the one or more light emitters deactivated during the firing cycle, whether a light signal was detected during the firing cycle.
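The detection logic in this abstract can be sketched as follows. This is an illustrative reduction, not the patented implementation: emitters that were deactivated for a firing cycle should produce no returns on their associated detectors, so any signal seen there points to interference. All names and the emitter-to-detector mapping are hypothetical.

```python
# Hypothetical sketch of the interference check: a detector whose paired
# emitter was deactivated this cycle should see nothing, so any detection
# there is evidence of interference from another light source.
def detect_interference(deactivated_emitters, emitter_to_detector, detections):
    """Return detectors that registered light despite their emitter being off.

    deactivated_emitters: iterable of emitter ids disabled this firing cycle.
    emitter_to_detector: dict mapping emitter id -> associated detector id.
    detections: set of detector ids that registered a light signal this cycle.
    """
    return [emitter_to_detector[e] for e in deactivated_emitters
            if emitter_to_detector[e] in detections]

# Example: emitter 2 was off, yet its detector (102) still saw a signal.
flagged = detect_interference(
    deactivated_emitters=[2, 5],
    emitter_to_detector={2: 102, 5: 105},
    detections={101, 102, 103},
)
```

A non-empty `flagged` list would indicate that measurements in this cycle may be influenced by interference.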
The technology relates to an exterior sensor system for a vehicle configured to operate in an autonomous driving mode. The technology includes a close-in sensing (CIS) camera system to address blind spots around the vehicle. The CIS system is used to detect objects within a few meters of the vehicle. Based on object classification, the system is able to make real-time driving decisions. Classification is enhanced by employing cameras in conjunction with lidar sensors. The specific arrangement of multiple sensors in a single sensor housing is also important to object detection and classification. Thus, the positioning of the sensors and support components is selected to avoid occlusion and to otherwise prevent interference between the various sensor housing elements.
G01S 17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
B60R 11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
B60R 21/0134 - Electrical circuits for triggering safety arrangements in case of vehicle accidents or impending vehicle accidents including means for detecting collisions, impending collisions or roll-over responsive to imminent contact with an obstacle
B60W 10/18 - Conjoint control of vehicle sub-units of different type or different function including control of braking systems
B60W 10/20 - Conjoint control of vehicle sub-units of different type or different function including control of steering systems
B60W 30/08 - Predicting or avoiding probable or impending collision
B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
The technology relates to an exterior sensor system for a vehicle configured to operate in an autonomous driving mode. The technology includes a close-in sensing (CIS) camera system to address blind spots around the vehicle. The CIS system is used to detect objects within a few meters of the vehicle. Based on object classification, the system is able to make real-time driving decisions. Classification is enhanced by employing cameras in conjunction with lidar sensors. The specific arrangement of multiple sensors in a single sensor housing is also important to object detection and classification. Thus, the positioning of the sensors and support components is selected to avoid occlusion and to otherwise prevent interference between the various sensor housing elements.
G01S 17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60R 1/22 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
G01S 13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
G01S 13/931 - Radar or analogous systems, specially adapted for specific applications for anti-collision purposes of land vehicles
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating vehicle intent predictions using a neural network. One of the methods includes obtaining an input characterizing one or more vehicles in an environment; generating, from the input, features of each of the vehicles; and for each of the vehicles: processing the features of the vehicle using each of a plurality of intent-specific neural networks, wherein each of the intent-specific neural networks corresponds to a respective intent from a set of intents, and wherein each intent-specific neural network is configured to process the features of the vehicle to generate an output for the corresponding intent.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
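The per-intent architecture described in the abstract above can be sketched in miniature. This is an illustrative reduction under stated assumptions: each "intent-specific neural network" is collapsed to a linear scoring head, and the intent set, weights, and feature vector are all made up for the example.

```python
# Illustrative sketch (not the patented implementation): one scoring head
# per intent; every head processes the same vehicle feature vector
# independently and emits an output for its corresponding intent.
INTENTS = ("go_straight", "turn_left", "turn_right")  # hypothetical intent set

def linear_head(weights):
    """Stand-in for an intent-specific network, reduced to a dot product."""
    def head(features):
        return sum(w * f for w, f in zip(weights, features))
    return head

heads = {
    "go_straight": linear_head([1.0, 0.0]),
    "turn_left": linear_head([0.0, 1.0]),
    "turn_right": linear_head([0.0, -1.0]),
}

def predict_intents(features):
    # Each head sees the same features and produces its own intent score.
    return {intent: head(features) for intent, head in heads.items()}

scores = predict_intents([0.2, 0.9])
```

In a real system each head would be a trained network rather than a fixed linear map, but the fan-out structure — shared features, one output per intent — is the point being illustrated.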
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing point cloud data using spatio-temporal-interactive networks. Embodiments describe a system, implemented as computer programs on one or more computers in one or more locations, that processes a temporal sequence of point cloud data inputs to make predictions about agents, e.g., pedestrians, vehicles, bicyclists, motorcyclists, or other moving objects, characterized by the point cloud data inputs.
G06K 9/62 - Methods or arrangements for recognition using electronic means
B60W 30/095 - Predicting travel path or likelihood of collision
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating a depth map of a scene from a camera image using a neural network. One of the methods includes obtaining an image captured by a first sensor. A neural network processes the image to generate a respective score for each of a plurality of locations in the image. Known depth data specifying respective known depth values for some of the locations in the image is obtained. A depth output is generated that assigns a depth value to some of the locations in the image, including determining whether the score for a location exceeds a threshold; and when the score exceeds the threshold and the known depth value is available for the location, assigning the known depth value for the location to the location in the depth output.
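The fusion rule in the abstract above can be sketched directly. This is a minimal illustration, not the patented method: the network score gates whether a known depth value (e.g. from another sensor) is trusted at a location; the locations, scores, and threshold are hypothetical.

```python
# Sketch of the depth-fusion rule: assign the known depth value to a
# location only when the network's score for that location exceeds a
# threshold AND a known depth value is actually available there.
def fuse_depth(scores, known_depth, threshold=0.5):
    """scores: {location: score}; known_depth: {location: depth value}."""
    depth_out = {}
    for loc, score in scores.items():
        if score > threshold and loc in known_depth:
            depth_out[loc] = known_depth[loc]
    return depth_out

out = fuse_depth(
    scores={(0, 0): 0.9, (0, 1): 0.2, (1, 0): 0.8},
    known_depth={(0, 0): 4.2, (0, 1): 7.5},  # no known depth at (1, 0)
)
```

Here only location `(0, 0)` clears both conditions: `(0, 1)` fails the score threshold, and `(1, 0)` has no known depth available.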
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for agent trajectory prediction using vectorized inputs.
B60W 50/00 - Details of control systems for road vehicle drive control systems not related to the control of a particular sub-unit
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G05D 1/02 - Control of position or course in two dimensions
Aspects of the disclosure relate to signaling for turns for a vehicle having an autonomous driving mode. For instance, a trajectory that the vehicle will follow for some period of time into the future may be received. At least a portion of the trajectory may be processed to identify a turning event, the turning event corresponding to a location where the vehicle plans to turn and for which the vehicle will need to use a turn signal. Whether the trajectory includes a negative turning event located some threshold distance before the turning event may be determined. The negative turning event corresponds to a location along the trajectory where the vehicle could make a turn, but does not plan to make a turn. While the vehicle is operating in the autonomous driving mode, the turn signal of the vehicle may be activated based on the turning event and the determination.
B60Q 1/34 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic for indicating change of drive direction
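The negative-turning-event check from the abstract above can be sketched as a simple scan along the trajectory. This is an illustrative sketch, not the disclosed implementation: the trajectory encoding, distances, and threshold are all hypothetical.

```python
# Sketch of the check: if a location where the vehicle could turn but will
# not (a "negative turning event") lies within a threshold distance before
# the planned turn, signaling early could mislead other road users.
def has_negative_turning_event(trajectory, turn_at, threshold=50.0):
    """trajectory: list of (distance_along_route, could_turn, will_turn)."""
    for dist, could_turn, will_turn in trajectory:
        if could_turn and not will_turn and turn_at - threshold <= dist < turn_at:
            return True  # a possible-but-unplanned turn precedes the real one
    return False

# A possible turn at 40 m that the vehicle will skip, then the real turn at 80 m.
traj = [(40.0, True, False), (80.0, True, True)]
found = has_negative_turning_event(traj, turn_at=80.0, threshold=50.0)
```

The determination returned here would then feed into when the turn signal is actually activated, per the abstract.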
Aspects of the disclosure provide a method of facilitating communications from an autonomous vehicle to a user. For instance, a method may include, while attempting to pick up the user and prior to the user entering the vehicle, inputting a current location of the vehicle and map information into a model in order to identify a type of communication action for communicating a location of the vehicle to the user; enabling a first communication based on the type of the communication action; determining from received sensor data whether the user has responded to the first communication; and enabling a second communication based on the determination of whether the user has responded to the first communication.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60Q 1/08 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments the devices being headlights adjustable, e.g. remotely-controlled from inside vehicle automatically
B60Q 5/00 - Arrangement or adaptation of acoustic signal devices
B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
B60W 50/00 - Details of control systems for road vehicle drive control systems not related to the control of a particular sub-unit
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G05D 1/02 - Control of position or course in two dimensions
Systems and methods described herein relate to LIDAR systems and their operation. An example method includes partitioning a plurality of light-emitter devices into a plurality of groups. Each light-emitter device is associated with a given group of the plurality of groups. The method also includes selecting a group from the plurality of groups according to a predetermined group order and selecting one or more light-emitter devices of the selected group according to a firing order. The method yet further includes, at a predetermined shot dither time, causing the one or more selected light-emitter devices to emit at least one light pulse. The predetermined shot dither time is based on a shot dither schedule. The method may additionally include repeating the method to provide a complete scan in which each light-emitter device of the plurality of light-emitter devices has emitted at least one light pulse.
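The firing scheme described above can be roughly simulated. This is a hedged sketch under stated assumptions: the group order, per-group firing order, base period, and dither schedule are all invented for illustration, and real shot dithering would be chosen to decorrelate returns between lidar units.

```python
# Rough simulation of the scheme: emitters are partitioned into groups,
# groups fire in a predetermined order, and each shot time is offset by a
# dither value drawn cyclically from a predetermined schedule.
def complete_scan(groups, group_order, dither_schedule, base_period=10.0):
    """Return (time, emitter) firing events covering every emitter once.

    groups: {group_id: [emitter ids in firing order]}.
    group_order: sequence of group ids giving the predetermined group order.
    dither_schedule: per-shot time offsets, reused cyclically.
    """
    events, shot = [], 0
    for gid in group_order:
        for emitter in groups[gid]:  # firing order within the group
            t = shot * base_period + dither_schedule[shot % len(dither_schedule)]
            events.append((t, emitter))
            shot += 1
    return events

groups = {0: ["e0", "e1"], 1: ["e2"]}
events = complete_scan(groups, group_order=[1, 0], dither_schedule=[0.0, 1.3, -0.7])
```

Every emitter fires exactly once per pass, which corresponds to the "complete scan" condition in the abstract.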
The technology relates to detecting possible imaging sensor occlusion. In one example, a system including an imaging sensor and one or more processors may be configured to capture first image data using the imaging sensor. The one or more processors may encode the first image data into an uncompressed image file and generate a compressed image file based on the uncompressed image file. The file size of the compressed image file may be determined and, based on the file size of the compressed image file, the system may determine that the imaging sensor is possibly occluded.
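The heuristic behind this abstract compresses well into a few lines. The sketch below is illustrative, not the disclosed implementation: an occluded sensor tends to capture a low-detail frame that compresses far better than a normal scene, so an unusually small compressed size is evidence of possible occlusion. The threshold and frame contents are made up, and `zlib` stands in for whatever image codec a real system would use.

```python
import random
import zlib

# Sketch of the occlusion heuristic: compress the raw frame and flag the
# sensor as possibly occluded when the compressed size is suspiciously small
# (a blocked lens yields a near-uniform, highly compressible image).
def possibly_occluded(raw_image_bytes, min_compressed_size=200):
    compressed = zlib.compress(raw_image_bytes)
    return len(compressed) < min_compressed_size

# A flat frame (as from a fully blocked lens) versus a detailed frame.
flat_frame = bytes(1000)  # all zero bytes: compresses to a few dozen bytes
random.seed(0)
busy_frame = bytes(random.randrange(256) for _ in range(1000))  # incompressible
```

The design choice worth noting is that compressed file size acts as a cheap proxy for image entropy, so no separate image-analysis pass is needed.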
A circuit for performing computations for a neural network comprising multiple neural network (NN) layers. The circuit includes a processing device that provides programming data for performing the computations and a core in data communication with the processing device to receive the programming data. The core includes activation memory that stores inputs for a layer and parameter memory that stores parameters for a first NN layer. The core also includes a rotation unit that rotates accessing the inputs from the activation memory based on the programming data and a computation unit that receives a respective input and a parameter for the first NN layer and generates an output of the first NN layer using the input and the parameter. The core also includes a crossbar unit that causes the output to be stored, in the activation memory, in accordance with a bank assignment pattern.
13.
MULTIPLE DESTINATION TRIPS FOR AUTONOMOUS VEHICLES
Aspects of the disclosure relate to a method of managing a fleet of autonomous vehicles providing trip services. The method includes receiving information identifying an intermediate destination and a final destination for a trip. In this example, the intermediate destination is a destination where an autonomous vehicle will drop off and wait for a passenger in order to continue the trip, and the final destination is a destination where the trip ends. The method also includes determining an amount of waiting time the vehicle is likely to be waiting for the passenger at the intermediate destination, determining how a vehicle of the fleet of autonomous vehicles should spend the amount of waiting time, and sending an instruction to the vehicle, based on the determination of how the vehicle should spend the amount of waiting time.
The technology relates to partially redundant equipment architectures for vehicles able to operate in an autonomous driving mode. Aspects of the technology employ fallback configurations, such as two or more fallback sensor configurations that provide some minimum amount of field of view (FOV) around the vehicle. For instance, different sensor arrangements are logically associated with different operating domains of the vehicle. Fallback configurations for computing resources and/or power resources are also provided. Each fallback configuration may have different reasons for being triggered (1408), and may result in different types of fallback modes of operation. Triggering conditions may relate, e.g., to a type of failure, fault or other reduction in component capability, the current driving mode, environmental conditions in the vicinity of the vehicle or along a planned route, or other factors. Fallback modes may involve altering a previously planned trajectory, altering vehicle speed, and/or altering a destination of the vehicle.
B60W 10/20 - Conjoint control of vehicle sub-units of different type or different function including control of steering systems
B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
B60W 50/02 - Ensuring safety in case of control system failures, e.g. by diagnosing, circumventing or fixing failures
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
15.
SYSTEMS FOR IMPLEMENTING FALLBACK BEHAVIORS FOR AUTONOMOUS VEHICLES
Aspects of the disclosure relate to controlling a vehicle 100 in an autonomous driving mode. The system includes a plurality of sensors configured to generate sensor data. The system also includes a first computing system 110 configured to generate trajectories using the sensor data and send the generated trajectories to a second computing system 120. The second computing system is configured to cause the vehicle to follow a received trajectory. The system also includes a third computing system 130 configured to, when there is a failure of the first computing system, generate and send trajectories to the second computing system based on whether the vehicle is located on a highway or a surface street.
B60W 50/029 - Adapting to failures or work around with other constraints, e.g. circumvention by avoiding use of failed parts
B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
Systems and methods described herein relate to the manufacture of optical elements and optical systems. An example system may include an optical component configured to direct light from a light source to illuminate a photoresist material at a desired angle and to expose at least a portion of an angled structure in the photoresist material, where the photoresist material overlays at least a portion of a top surface of a substrate. The optical component includes a container containing a light-coupling material that is selected based in part on the desired angle. The optical component also includes a mirror arranged to reflect at least a portion of the light to illuminate the photoresist material at the desired angle.
The present disclosure relates to systems and methods that provide information about a scene based on a time-of-flight (ToF) sensor and a structured light pattern. In an example embodiment, a sensor system could include at least one ToF sensor configured to receive light from a scene. The sensor system could also include at least one light source configured to emit a structured light pattern and a controller that carries out operations. The operations include causing the at least one light source to illuminate at least a portion of the scene with the structured light pattern and causing the at least one ToF sensor to provide information indicative of a depth map of the scene based on the structured light pattern.
Example embodiments relate to LIDAR systems with multi-faceted mirrors. An example embodiment includes a LIDAR system. The system includes a multi-faceted mirror that includes a plurality of reflective facets, which rotates about a first rotational axis. The system also includes a light emitter configured to emit a light signal toward one or more regions of a scene. Further, the system includes a light detector configured to detect a reflected light signal. In addition, the system includes an optical window positioned between the multi-faceted mirror and the one or more regions of the scene such that light reflected from one or more of the reflective facets is transmitted through the optical window. The optical window is positioned such that the optical window is non-perpendicular to the direction toward which the light emitted along the optical axis is directed for all angles of the multi-faceted mirror.
Example embodiments relate to underbody radar units. An example radar system may involve a set of radar units coupled to an underbody of a vehicle such that each radar unit has a field of view below a bumper line of the vehicle. The set of radar units may include a first radar unit configured to measure an environment of the vehicle in a first direction and a second radar unit configured to measure the environment of the vehicle in a second direction. The second direction differs from the first direction. In some implementations, the first radar unit is positioned proximate a front bumper of the vehicle, and the second radar unit is positioned proximate a back bumper of the vehicle. Other example configurations may involve using more or fewer radar units coupled to the underbody of a vehicle.
Examples described herein relate to an imaging sensor used by a vehicle that includes a light sensor. The light sensor comprises a plurality of cells aligned in a plurality of horizontal rows and a plurality of vertical columns. The apparatus further includes an optical system configured to provide the light sensor with a field of view of an external environment of the apparatus. Additionally, the system includes a processing unit configured to: divide the plurality of horizontal rows of the light sensor into one or more enabled rows and one or more disabled rows; obtain image data from the light sensor by sampling one or more cells in the one or more enabled rows; and store the received image data in a memory.
H04N 25/441 - Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by partially reading an SSIS array by reading contiguous pixels from selected rows or columns of the array, e.g. interlaced scanning
B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
H04N 25/76 - Addressed sensors, e.g. MOS or CMOS sensors
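The enabled/disabled-row readout described above reduces to a simple selective sampling step. The sketch below is illustrative only: a real sensor would gate the readout in hardware, whereas here a 2D list stands in for the cell array.

```python
# Sketch of the selective readout: only cells in enabled rows are sampled,
# so disabled rows contribute no data to the stored image.
def sample_enabled_rows(sensor_cells, enabled_rows):
    """sensor_cells: 2D list of cell values; enabled_rows: set of row indices.

    Returns {row_index: [cell values]} for enabled rows only.
    """
    return {r: list(sensor_cells[r]) for r in sorted(enabled_rows)}

# A 4x2 cell array with rows 0 and 2 enabled, rows 1 and 3 disabled.
sensor = [[0, 1], [2, 3], [4, 5], [6, 7]]
image_data = sample_enabled_rows(sensor, enabled_rows={0, 2})
```

Halving the sampled rows halves the data read out, which is the bandwidth/power motivation behind this kind of scheme.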
An optical system for a vehicle may be configured with a plurality of camera sensors. Each camera sensor may be configured to create respective image data of a respective field of view. The optical system is further configured with a plurality of image processing units coupled to the plurality of camera sensors. The image processing units are configured to compress the image data captured by the camera sensors. A computing system is configured to store the compressed image data in a memory. The computing system is further configured with a vehicle-control processor configured to control the vehicle based on the compressed image data. The optical system and the computing system can be communicatively coupled by a data bus.
Examples described herein relate to an imaging sensor used by a vehicle that includes a light sensor. The light sensor comprises a plurality of cells aligned in a plurality of horizontal rows and a plurality of vertical columns. The apparatus further includes an optical system configured to provide the light sensor with a field of view of an external environment of the apparatus. Additionally, the system includes a processing unit configured to: divide the plurality of horizontal rows of the light sensor into one or more enabled rows and one or more disabled rows; obtain image data from the light sensor by sampling one or more cells in the one or more enabled rows; and store the received image data in a memory.
B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
23.
SINGLE OPTIC FOR LOW LIGHT AND HIGH LIGHT LEVEL IMAGING
The present disclosure relates to multiple view optical systems. An example optical system includes at least one primary optical element configured to receive incident light from a scene and a plurality of relay mirrors optically coupled to the at least one primary optical element. The optical system also includes a lens optically coupled to the plurality of relay mirrors, and an image sensor configured to receive focused light from the lens. The image sensor includes a first light-sensitive area and a second light-sensitive area. The primary optical element, the plurality of relay mirrors, and the lens interact with the incident light to form a first focused light portion and a second focused light portion. The first focused light portion forms a first image portion of the scene on the first light-sensitive area and the second focused light portion forms a second image portion of the scene on the second light-sensitive area.
H04N 23/55 - Optical parts specially adapted for electronic image sensors; Mounting thereof
G03B 30/00 - Camera modules comprising integrated lens units and imaging units, specially adapted for being embedded in other devices, e.g. mobile phones or vehicles
H04N 23/95 - Computational photography systems, e.g. light-field imaging systems
Example embodiments relate to multiple operating modes to expand dynamic range. An example embodiment includes a camera system. The camera system may include a first image sensor having a first dynamic range corresponding to a first range of luminance levels in a scene. The system may also include a second image sensor having a second dynamic range corresponding to a second range of luminance levels in the scene. The camera system may further include a processor coupled to the first image sensor and the second image sensor. The processor may be configured to execute instructions to identify objects of a first type in a first image of the scene captured by the first image sensor and identify objects of a second object type in a second image of the scene captured by the second image sensor.
An antenna includes a plurality of waveguide antenna elements arranged in a first array configured to operate with a first polarization. The antenna also includes a plurality of waveguide output ports arranged in a second array configured to operate with a second polarization. The second polarization is different from the first polarization. The antenna further includes a polarization-modification layer with channels defined therein. The polarization-modification layer is disposed between the waveguide antenna elements and the waveguide output ports. The channels are oriented at a first angle with respect to the waveguide antenna elements and at a second angle with respect to the waveguide output ports. The channels are configured to receive input electromagnetic waves having the first polarization and transmit output electromagnetic waves having a first intermediate polarization. The waveguide output ports are configured to receive input electromagnetic waves and radiate electromagnetic waves having the second polarization.
H01Q 1/24 - Supports; Mounting means by structural association with other equipment or articles with receiving set
G01S 7/03 - Details of HF subsystems specially adapted therefor, e.g. common to transmitter and receiver
G05D 1/02 - Control of position or course in two dimensions
H01Q 21/24 - Combinations of antenna units polarised in different directions for transmitting or receiving circularly and elliptically polarised waves or waves linearly polarised in any direction
26.
USING PREDICTION MODELS FOR SCENE DIFFICULTY IN VEHICLE ROUTING
A route is selected for travel by an autonomous vehicle based on at least a level of difficulty of traversing the driving environment along that route. Vehicle signals, provided by one or more autonomous vehicles, indicating a difficulty associated with traveling a portion of a route are collected and used to predict a most favorable driving route for a given time. The signals may indicate a probability of disengaging from autonomous driving mode, a probability of being stuck for an unduly long time, traffic density, etc. A difficulty score may be computed for each road segment of a route, and then the scores of all of the road segments of the route are added together. The scores are based on the number of previous disengagements, previous requests for remote assistance, unprotected left or right turns, whether parts of the driving area are occluded, etc. The difficulty score is used to compute a cost for a particular route, which may be compared to costs computed for other possible routes. Based on such information, a route may be selected.
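The scoring-and-selection flow in the abstract above can be sketched numerically. This is an illustrative reduction, not the disclosed model: the signal names, weights, base costs, and the linear combination are all hypothetical choices made for the example.

```python
# Sketch of the routing idea: score each road segment on difficulty
# signals, sum per route, fold the total into the route cost, and pick the
# route with the lowest overall cost.
WEIGHTS = {  # hypothetical weights per difficulty signal
    "disengagements": 3.0,
    "assist_requests": 2.0,
    "unprotected_turns": 1.5,
    "occluded_area": 1.0,
}

def segment_difficulty(signals):
    """signals: {signal_name: count} for one road segment."""
    return sum(WEIGHTS[name] * count for name, count in signals.items())

def route_cost(segments, base_cost, difficulty_weight=1.0):
    difficulty = sum(segment_difficulty(s) for s in segments)
    return base_cost + difficulty_weight * difficulty

# A shorter but harder route versus a longer but easier one.
route_a = [{"disengagements": 1, "unprotected_turns": 2}]  # cost 10 + 6 = 16
route_b = [{"occluded_area": 1}]                           # cost 12 + 1 = 13
best = min([("a", route_cost(route_a, 10.0)),
            ("b", route_cost(route_b, 12.0))], key=lambda rc: rc[1])
```

Note how the difficulty term can overturn the raw distance/time ordering: route B wins despite its higher base cost.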
The present disclosure relates to systems and methods operable to provide point cloud information about an environment based on reconfigurable spatial light emission patterns and reconfigurable light detector arrangements that correspond to the light emission patterns. Additionally, a LIDAR device with a plurality of light emitters and photodetectors may be operated in a first mode of operation or a second mode of operation. The first mode of operation could be a normal mode of operation. The second mode of operation could be a failsafe mode of operation that is used when a fault condition is detected.
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01S 7/483 - Details of pulse systems
G05D 1/02 - Control of position or course in two dimensions
28.
USING DISCOMFORT FOR SPEED PLANNING FOR AUTONOMOUS VEHICLES
Aspects of the disclosure relate to controlling a first vehicle in an autonomous driving mode. While doing so, a second vehicle may be identified. Geometry for a future trajectory of the first vehicle may be identified, and an initial allowable discomfort value may be identified. An attempt may be made to determine a speed profile for the geometry that meets the value by determining a discomfort value for the speed profile based on a set of factors relating to at least discomfort of a passenger of the first vehicle and discomfort of a passenger of the second vehicle. When a speed profile that meets the value cannot be determined, the value may be adjusted until a speed profile that meets the value is determined. The speed profile that meets an adjusted value is used to control the first vehicle in the autonomous driving mode.
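The relax-until-feasible loop in the abstract above has a compact shape. The sketch below is hypothetical: the candidate profiles, their discomfort values, and the step size are invented, and a real planner would recompute candidate profiles rather than filter a fixed list.

```python
# Sketch of the relaxation loop: try to find a speed profile within the
# allowable discomfort budget; if none qualifies, loosen the budget and
# retry until some profile meets the (adjusted) value.
def plan_speed(candidates, initial_budget, step=0.1):
    """candidates: list of (profile_name, discomfort_value) pairs."""
    budget = initial_budget
    while True:
        feasible = [name for name, discomfort in candidates
                    if discomfort <= budget]
        if feasible:
            return feasible[0], budget
        budget += step  # adjust the allowable discomfort value and retry

profile, final_budget = plan_speed(
    candidates=[("gentle", 0.45), ("brisk", 0.7)], initial_budget=0.2)
```

Starting at a budget of 0.2, three relaxations are needed before the "gentle" profile qualifies, mirroring the abstract's "adjusted until a speed profile that meets the value is determined."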
Aspects of the disclosure provide for a system for a power over data line (PoDL) system. The system includes a ground plane (104) that has a cutout (116). In addition, the system includes an alternating current (AC) capacitor pad (112) configured to establish a bidirectional data channel. The AC capacitor pad is positioned in the cutout of the ground plane. Similarly, a PoDL pad (106) connected to one or more inductors and a direct current (DC) power source is positioned in the cutout of the ground plane and is in series with the AC capacitor pad.
An example method includes making a first determination that a load of a cooling system of a vehicle is expected to increase and become greater than a capacity of the cooling system; operating, in response to making the first determination, the vehicle in a first mode where a combustion engine and an electric motor operate such that a charge level of a power supply of the vehicle increases or is maintained above a threshold charge level; making, after operating the vehicle in the first mode, a second determination that the load of the cooling system has become greater than the capacity of the cooling system; and operating, in response to making the second determination, the vehicle in a second mode where the combustion engine and the electric motor operate such that the charge level of the power supply decreases or is maintained below the threshold charge level.
B60K 11/06 - Arrangement in connection with cooling of propulsion units with air cooling
B60K 6/24 - Arrangement or mounting of plural diverse prime-movers for mutual or common propulsion, e.g. hybrid propulsion systems comprising electric motors and internal combustion engines the prime-movers consisting of electric motors and internal combustion engines, e.g. HEVs characterised by apparatus, components or means specially adapted for HEVs characterised by the combustion engines
B60K 6/26 - Arrangement or mounting of plural diverse prime-movers for mutual or common propulsion, e.g. hybrid propulsion systems comprising electric motors and internal combustion engines the prime-movers consisting of electric motors and internal combustion engines, e.g. HEVs characterised by apparatus, components or means specially adapted for HEVs characterised by the motors or the generators
B60K 6/28 - Arrangement or mounting of plural diverse prime-movers for mutual or common propulsion, e.g. hybrid propulsion systems comprising electric motors and internal combustion engines the prime-movers consisting of electric motors and internal combustion engines, e.g. HEVs characterised by apparatus, components or means specially adapted for HEVs characterised by the electric energy storing means, e.g. batteries or capacitors
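The two-mode strategy above can be sketched as a simple mode selector. The mode names and the decision rule are assumptions for illustration: bank battery charge while the cooling-load exceedance is only expected, then draw the battery down once the load actually exceeds capacity.

```python
def select_mode(expected_load, current_load, capacity):
    """Pick an operating mode from cooling loads (all in the same units)."""
    if current_load > capacity:
        # Second determination: load now exceeds capacity, so discharge the
        # power supply and let the electric motor offload the engine.
        return "MODE_2_DISCHARGE"
    if expected_load > capacity:
        # First determination: exceedance expected, so raise or hold the
        # charge level above the threshold while there is still headroom.
        return "MODE_1_CHARGE"
    return "NORMAL"
```

For example, an expected load of 12 kW against a 10 kW capacity triggers the charging mode before the load materializes.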
31.
DETECTING AND RESPONDING TO TRAFFIC REDIRECTION FOR AUTONOMOUS VEHICLES
The technology relates to controlling a vehicle in an autonomous driving mode. For instance, a vehicle 100 may be maneuvered in the autonomous driving mode using pre-stored map information identifying traffic flow directions. Data may be received from a perception system of the vehicle identifying objects in an external environment of the vehicle related to a traffic redirection not identified in the map information. The received data may be used to identify one or more corridors 910, 920 of the traffic redirection. One of the one or more corridors may be selected based on a direction of traffic flow through the selected corridor. The vehicle may then be controlled in the autonomous driving mode to enter and follow the selected corridor based on the determined direction of traffic flow through each of the one or more corridors.
The technology relates to controlling a vehicle in an autonomous driving mode. For example, sensor data identifying a plurality of objects may be received. Pairs of objects of the plurality of objects may be identified. For each identified pair of objects, a similarity value which indicates whether the objects of that identified pair can be responded to by the vehicle as a group may be determined. The objects of one of the identified pairs of objects may be clustered together based on the similarity value. The vehicle may be controlled in the autonomous driving mode by responding to each object in the cluster in a same way.
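The pairwise clustering described above can be sketched with a small union-find. The similarity test here (same type and close together) is an invented stand-in for "can be responded to as a group"; only the cluster-merging structure is the point.

```python
def similar(a, b, max_gap=2.0):
    # Stand-in similarity: same object type and within max_gap metres.
    return a["type"] == b["type"] and abs(a["x"] - b["x"]) <= max_gap

def cluster(objects):
    parent = list(range(len(objects)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Merge every similar pair into one group.
    for i in range(len(objects)):
        for j in range(i + 1, len(objects)):
            if similar(objects[i], objects[j]):
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(objects)):
        groups.setdefault(find(i), []).append(objects[i]["id"])
    return sorted(sorted(g) for g in groups.values())

objs = [
    {"id": "p1", "type": "pedestrian", "x": 0.0},
    {"id": "p2", "type": "pedestrian", "x": 1.5},
    {"id": "car", "type": "vehicle", "x": 1.0},
]
```

Here the two nearby pedestrians end up in one cluster, which the vehicle could respond to as a single group, while the vehicle-type object stays separate.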
A vehicle having a communication system is disclosed. The system includes two electrical couplings, coupled by way of a rotary joint. Each electrical coupling includes an interface waveguide configured to couple to external signals. Each electrical coupling also includes a waveguide section configured to propagate electromagnetic signals between the interface waveguide and the rotary joint. Additionally, the rotary joint is configured to allow one electrical coupling to rotate with respect to the other electrical coupling. An axis of rotation of the rotary joint is defined by a center of a portion of the waveguides. Yet further, the rotary joint allows electromagnetic energy to propagate between the waveguides of the electrical couplings.
B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
CA 03080002 2020-03-27
(12) INTERNATIONAL APPLICATION PUBLISHED UNDER THE PATENT COOPERATION TREATY (PCT)
(10) International Publication Number: WO 2019/067283 A1; (43) International Publication Date: 04 April 2019
(51) International Patent Classification: G06T 7/80 (2017.01); G06K 9/46 (2006.01)
(21) International Application Number: PCT/US2018/051690; (22) International Filing Date: 19 September 2018
(25) Filing Language: English; (26) Publication Language: English
(30) Priority Data: US 15/720,979, 29 September 2017
(71) Applicant: WAYMO LLC [US/US]; 1600 Amphitheatre Parkway, Mountain View, CA 94043 (US)
(72) Inventors: WENDEL, Andreas; GRABE, Volker; DITTMER, Jeremy; MORRISS, Zachary; all of 1600 Amphitheatre Parkway, Mountain View, CA 94043 (US)
(74) Agent: VELZEN, Andrew H.; McDonnell Boehnen Hulbert & Berghoff LLP, 300 South Wacker Drive, Chicago, IL 60606 (US)
(54) Title: TARGET, METHOD, AND SYSTEM FOR CAMERA CALIBRATION
FIG. 7: (702) Record a calibration image of a target using a camera; (704) determine locations and identifications of one or more fiducial markers in the calibration image; (706) based on the determined locations and identifications, calibrate the camera.
(57) Abstract: The present disclosure relates to a target, a method, and a system for calibrating a camera. One example embodiment includes a target. The target includes a first pattern of fiducial markers. The target also includes a second pattern of fiducial markers. The first pattern of fiducial markers is a scaled version of the second pattern of fiducial markers, such that a calibration image captured of the target simulates multiple images of a single pattern captured at multiple calibration perspectives.
Aspects of the present disclosure relate to vehicle systems including one or more control computing devices configured to send commands to one or more actuators of a vehicle 100 in order to control deceleration, acceleration, and steering. The vehicle may include user input devices for allowing a driver to control the one or more actuators in order to control deceleration, acceleration, and steering. The computing devices are configured to operate in a manual driving mode where commands from the control computing devices are invalidated and ignored by the actuators; a first autonomous driving mode where the control computing devices are configured to send the commands to control the actuators and inputs from the user input devices are prioritized over the commands; and a second autonomous driving mode where the control computing devices are configured to send the commands to control the actuators and the commands are prioritized over inputs from the user input devices.
Aspects of the disclosure relate to stopping a vehicle. For instance, a vehicle is controlled in an autonomous driving mode by generating first commands for acceleration control and sending the first commands to an acceleration and/or steering actuator of an acceleration system of the vehicle in order to cause the vehicle to accelerate. Acceleration and/or orientation of the vehicle is monitored while the vehicle is being operated in an autonomous driving mode. The monitored acceleration and/or orientation is compared with the first commands. An error with the acceleration and/or steering system is determined based on the comparison. When the error is determined, the vehicle is controlled in the autonomous driving mode by generating second commands which do not require any acceleration and/or steering.
B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
37.
SYNCHRONIZED SPINNING LIDAR AND ROLLING SHUTTER CAMERA SYSTEM
One example system comprises a LIDAR sensor that rotates about an axis to scan an environment of the LIDAR sensor. The system also comprises one or more cameras that detect external light originating from one or more external light sources. The one or more cameras together provide a plurality of rows of sensing elements. The rows of sensing elements are aligned with the axis of rotation of the LIDAR sensor. The system also comprises a controller that operates the one or more cameras to obtain a sequence of image pixel rows. A first image pixel row in the sequence is indicative of external light detected by a first row of sensing elements during a first exposure time period. A second image pixel row in the sequence is indicative of external light detected by a second row of sensing elements during a second exposure time period.
Examples relating to vehicle motion detection using radar technology are described. An example method may be performed by a computing system and may involve receiving, from at least one radar sensor mounted on an autonomous vehicle, radar data representative of an environment of the vehicle. The method may involve, based on the radar data, detecting at least one scatterer present in the environment and making a determination of a likelihood that the at least one scatterer is stationary with respect to the vehicle. The method may involve, in response to the likelihood being at least equal to a predefined confidence threshold, calculating a velocity of the vehicle based on the radar data, where calculating the velocity comprises one of: determining an indication that the vehicle is stationary, and determining an angular and linear velocity of the vehicle. And the method may involve controlling the vehicle based on the calculated velocity.
One example system includes a light source that emits light. The system also includes a waveguide that guides the emitted light from a first side of the waveguide toward a second side of the waveguide opposite the first side. The waveguide has a third side extending between the first side and the second side. The system also includes a mirror that reflects the guided light toward the third side of the waveguide. At least a portion of the reflected light propagates out of the waveguide toward a scene. The system also includes a light detector, and a lens that focuses light from the scene toward the waveguide and the light detector.
Aspects of the disclosure relate to arranging a stopping location for a driverless vehicle. As an example, a method of doing so may include receiving a request for a vehicle (101, 101A) from a client computing device (120, 130), wherein the request identifies a first location. Pre-stored map information and the first location are used to identify a recommended point according to a set of heuristics. Each heuristic of the set of heuristics has a ranking such that the recommended point corresponds to a location that satisfies at least one of the heuristics having a first rank and such that no other location satisfies any other heuristic of the set of heuristics having a higher rank than the first rank. The recommended point is then provided to the client computing device and used to dispatch a vehicle to the stopping location.
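The ranked-heuristic selection above can be sketched like this. The particular heuristics (legality, walking distance) and their ordering are invented for illustration; the structure is what matters: walk the heuristics from highest rank down and take a candidate satisfying the first rank that any candidate satisfies.

```python
def recommend_point(candidates, heuristics):
    """heuristics: list of predicates ordered highest rank first. Returns a
    candidate satisfying the highest-ranked satisfiable heuristic."""
    for rule in heuristics:
        satisfying = [c for c in candidates if rule(c)]
        if satisfying:
            return satisfying[0]
    return None  # no heuristic satisfiable by any candidate

heuristics = [
    lambda c: c["legal"] and c["walk_m"] <= 20,  # rank 1: legal and close
    lambda c: c["legal"],                        # rank 2: at least legal
]
spots = [
    {"name": "far_legal", "legal": True, "walk_m": 80},
    {"name": "near_legal", "legal": True, "walk_m": 10},
]
```

Here the nearby legal spot satisfies the rank-1 heuristic, so the rank-2 fallback is never consulted.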
Aspects of the disclosure provide systems and methods for confirming the identity of a passenger and changing destination of a vehicle. This may include receiving dispatching instructions to pick up a first passenger at a pickup location and to drop off the first passenger at a first destination as well as authentication information for authenticating a first client computing device of the first passenger. Once the client device is authenticated and a second passenger enters the vehicle, the vehicle is maneuvered towards the first destination. While doing so, a location of the vehicle is compared to location information received from the client computing device. A notification is sent to a dispatching server based on the comparison and a second destination location is received in response. The vehicle is then maneuvered towards the second destination instead of the first destination.
B60W 40/08 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to drivers or passengers
Aspects of the present disclosure relate to context aware stopping of a vehicle 100 without a driver. As an example, after a passenger has entered the vehicle, the vehicle is maneuvered by one or more processors 120 in an autonomous driving mode towards a destination location along a route. The route is divided into two or more stages. A signal is received by the one or more processors. The signal indicates that the passenger is requesting that the vehicle stop or pull over. In response to the signal, the one or more processors determine a current stage of the route based on a current distance of the vehicle from a pickup location where the passenger entered the vehicle or a current distance of the vehicle from the destination location. The one or more processors then stop the vehicle in accordance with the determined current stage.
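The stage determination above can be sketched with distance checks. The stage names, the two-boundary split, and the zone sizes are assumptions; the abstract only says the current stage is determined from the distances from the pickup location and to the destination.

```python
def current_stage(dist_from_pickup, dist_to_destination,
                  start_zone=100.0, end_zone=100.0):
    """Classify the route stage from distances in metres."""
    if dist_from_pickup <= start_zone:
        return "STARTING"   # just left pickup: could stop almost immediately
    if dist_to_destination <= end_zone:
        return "ARRIVING"   # near destination: continue to the planned stop
    return "EN_ROUTE"       # middle of route: find the next safe pullover
```

A pull-over request would then be handled differently per stage, e.g. stopping promptly in the STARTING stage but completing the approach in the ARRIVING stage.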
Aspects of the disclosure provide for stopping a vehicle 100 to pick up or drop off a passenger at a location. An example method includes maneuvering a vehicle towards the location 680. An amount of time for the passenger to enter or exit a vehicle is estimated. Once the vehicle is a predetermined distance from the location, a set of possible places to stop the vehicle is determined. For each place of the set of possible places, a corresponding threshold value is determined. For each place of the set of possible places, the estimated amount of time is compared to the corresponding threshold value. A particular one of the set of possible places is identified based on the comparisons, and the vehicle is stopped at the particular one to allow the passenger to enter or exit the vehicle.
B60W 40/10 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to vehicle motion
B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
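The per-place comparison in the stopping-place abstract above can be sketched as follows. The field names (`max_dwell_s` as the place's threshold, `dist_m` as a tie-breaker) are assumptions for illustration.

```python
def pick_stop(places, estimated_seconds):
    """Keep places whose threshold covers the estimated boarding time, then
    prefer the viable place closest to the requested location."""
    viable = [p for p in places if estimated_seconds <= p["max_dwell_s"]]
    return min(viable, key=lambda p: p["dist_m"]) if viable else None

places = [
    {"name": "lane_edge", "max_dwell_s": 15, "dist_m": 5},
    {"name": "pullout", "max_dwell_s": 120, "dist_m": 40},
]
```

A quick drop-off can use the closer lane-edge spot, while a slower boarding (e.g. with luggage) forces the pullout further away.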
44.
RECOGNIZING ASSIGNED PASSENGERS FOR AUTONOMOUS VEHICLES
Aspects of the disclosure provide systems and methods for recognizing an assigned passenger. For instance, dispatching instructions to pick up a passenger at a pickup location 770 are received. The instructions include authentication information for authenticating a client computing device 420, 430 associated with the passenger. A vehicle 100, 100A is maneuvered in an autonomous driving mode towards the pickup location. The client device is then authenticated. After authentication, a set of pedestrians within a predetermined distance 702 of the vehicle are identified from sensor information generated by a sensor of the vehicle, and location information is received over a period of time from the client device. The received location information is used to estimate a velocity of the passenger. This estimated velocity is used to identify a subset of the set of pedestrians that is likely to be the passenger. The vehicle is stopped to allow the passenger to enter the vehicle based on the subset.
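The velocity-matching step above can be sketched as follows. All data, field names, and the matching tolerance are invented: the device's reported locations over time give an average velocity, which is compared against velocities of tracked pedestrians.

```python
def estimate_velocity(samples):
    """samples: list of (t_seconds, x_m, y_m); average velocity as (vx, vy)."""
    (t0, x0, y0) = samples[0]
    (t1, x1, y1) = samples[-1]
    dt = t1 - t0
    return ((x1 - x0) / dt, (y1 - y0) / dt)

def likely_passenger(pedestrians, device_velocity, tol=0.5):
    """IDs of pedestrians whose tracked velocity matches the device's."""
    vx, vy = device_velocity
    return [p["id"] for p in pedestrians
            if abs(p["vx"] - vx) <= tol and abs(p["vy"] - vy) <= tol]

device = [(0.0, 0.0, 0.0), (4.0, 4.0, 0.0)]  # ~1 m/s eastward over 4 s
peds = [{"id": "a", "vx": 1.1, "vy": 0.0},
        {"id": "b", "vx": -1.0, "vy": 0.2}]
```

Pedestrian "a", walking at roughly the device's speed and direction, survives the filter as the likely assigned passenger.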
The present disclosure relates to systems and methods that include a monolithic, single- chip receiver. An example system includes a plurality of macropixels, each made up of an array of single photon avalanche diodes (SPADs). The system also includes a plurality of pipelined adders communicatively coupled to a respective portion of the plurality of macropixels. The system additionally includes a controller configured to carry out operations. The operations include during a listening period, receiving, at each pipelined adder of the plurality of pipelined adders, respective photosignals from the respective portion of the plurality of macropixels. The operations also include causing each pipelined adder of the plurality of pipelined adders to provide an output that includes a series of frames that provide an average number of SPADs of the respective portion of the plurality of macropixels that were triggered during a given listening period.
A computing system may operate a LIDAR device to emit and detect light pulses in accordance with a time sequence including standard detection period(s) that establish a nominal detection range for the LIDAR device and extended detection period(s) having durations longer than those of the standard detection period(s). The system may then make a determination that the LIDAR detected return light pulse(s) during extended detection period(s) that correspond to particular emitted light pulse(s). Responsively, the computing system may determine that the detected return light pulse(s) have detection times relative to corresponding emission times of particular emitted light pulse(s) that are indicative of one or more ranges. Given this, the computing system may make a further determination of whether or not the one or more ranges indicate that an object is positioned outside of the nominal detection range, and may then engage in object detection in accordance with the further determination.
G01S 7/491 - Details of non-pulse systems
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G05D 1/02 - Control of position or course in two dimensions
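The range determination in the extended-detection-period abstract above reduces to a time-of-flight computation plus a comparison against the nominal range. This is a sketch with assumed units; the actual LIDAR timing logic is far more involved.

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_time(tof_seconds):
    """Round-trip time of flight to one-way range in metres."""
    return C * tof_seconds / 2.0

def beyond_nominal(tof_seconds, nominal_range_m):
    """True when a return detected in an extended detection period implies
    an object positioned outside the nominal detection range."""
    return range_from_time(tof_seconds) > nominal_range_m
```

For example, a return 2 µs after emission corresponds to roughly 300 m; against a 200 m nominal range that return flags an out-of-range object, which only an extended detection period could have caught.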
47.
METHODS AND SYSTEMS FOR VEHICLE OCCUPANCY CONFIRMATION
Example implementations relate to vehicle occupancy confirmation. An example implementation involves receiving, at a computing system from a camera positioned inside a vehicle, an image representing an occupancy within the vehicle. The implementation further involves, responsive to receiving the image, displaying the image on a display interface, and receiving an operator input confirming the occupancy meets a desired occupancy. The implementation additionally includes transmitting an occupancy confirmation from the computing system to the vehicle. In some instances, in response to receiving the occupancy confirmation, the vehicle executes an autonomous driving operation.
B60W 50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
B60R 11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
B60R 21/015 - Electrical circuits for triggering safety arrangements in case of vehicle accidents or impending vehicle accidents including means for detecting the presence or position of passengers, passenger seats or child seats, e.g. for disabling triggering
B60W 40/08 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to drivers or passengers
48.
METHODS AND SYSTEMS FOR VEHICLE OCCUPANCY CONFIRMATION
Example implementations relate to vehicle occupancy confirmation. An example implementation involves receiving, at a computing system from a camera positioned inside a vehicle, an image representing an occupancy within the vehicle. The implementation further involves, responsive to receiving the image, displaying the image on a display interface, and receiving an operator input confirming the occupancy meets a desired occupancy. The implementation additionally includes transmitting an occupancy confirmation from the computing system to the vehicle. In some instances, in response to receiving the occupancy confirmation, the vehicle executes an autonomous driving operation.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60W 50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
B60R 1/29 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area inside the vehicle, e.g. for viewing passengers or cargo
B60R 21/015 - Electrical circuits for triggering safety arrangements in case of vehicle accidents or impending vehicle accidents including means for detecting the presence or position of passengers, passenger seats or child seats, e.g. for disabling triggering
B60R 22/48 - Control systems, alarms, or interlock systems, for the correct application of the belt or harness
49.
METHODS AND SYSTEMS FOR PROVIDING REMOTE ASSISTANCE TO A VEHICLE
Examples described may enable provision of remote assistance for an autonomous vehicle. An example method includes a computing system operating in a rewind mode. In the rewind mode, the system may be configured to provide information to a remote assistance operator based on a remote-assistance triggering criterion being met. When the triggering criterion is met, the remote assistance system may provide the remote assistance operator with data captured of the environment of the autonomous vehicle during the time leading up to when the triggering criterion was met. Based on viewing the data, the remote assistance operator may provide an input to the system that causes a command to be issued to the autonomous vehicle.
B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
50.
LIGHT DETECTION AND RANGING (LIDAR) DEVICE RANGE ALIASING RESILIENCE BY MULTIPLE HYPOTHESES
A computing system may operate a LIDAR device to emit light pulses in accordance with a time sequence including a time-varying dither. The system may then determine that the LIDAR detected return light pulses during corresponding detection periods for each of two or more emitted light pulses. Responsively, the system may determine that the detected return light pulses have (i) detection times relative to corresponding emission times of a plurality of first emitted light pulses that are indicative of a first set of ranges and (ii) detection times relative to corresponding emission times of a plurality of second emitted light pulses that are indicative of a second set of ranges. Given this, the system may select between using the first set of ranges as a basis for object detection and using the second set of ranges as a basis for object detection, and may then engage in object detection accordingly.
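The hypothesis selection above can be sketched with a consistency test. The idea (not spelled out in the abstract, so this selection rule is an assumption): with a dithered emission schedule, pairing returns with the wrong pulses yields shot-to-shot range jitter, while the correct pairing yields a stable range, so the more self-consistent range set is preferred.

```python
C = 299_792_458.0  # speed of light, m/s

def ranges(detect_times, emit_times):
    return [C * (d - e) / 2.0 for d, e in zip(detect_times, emit_times)]

def spread(values):
    return max(values) - min(values)

def select_hypothesis(detect_times, emit_a, emit_b):
    """Pick the emission pairing whose implied ranges vary least."""
    ra = ranges(detect_times, emit_a)
    rb = ranges(detect_times, emit_b)
    return ("A", ra) if spread(ra) <= spread(rb) else ("B", rb)

detect = [1.0e-6, 11.2e-6, 20.9e-6]   # invented detection times, seconds
emit_a = [0.0, 10.2e-6, 19.9e-6]      # pairing with the same-period pulses
emit_b = [-10.0e-6, 0.0, 10.2e-6]     # aliased pairing with prior pulses
```

Under hypothesis A every return is a constant 1 µs after its pulse, so its ranges agree; the dither makes hypothesis B's ranges disagree by tens of metres, and A is selected.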
In some implementations, an image classification system of an autonomous or semi-autonomous vehicle is capable of improving multi-object classification by reducing repeated incorrect classification of objects that are considered rarely occurring objects. The system can include a common instance classifier that is trained to identify and recognize general objects (e.g., commonly occurring objects and rarely occurring objects) as belonging to specified object categories, and a rare instance classifier that is trained to compute one or more rarity scores representing likelihoods that an input image is correctly classified by the common instance classifier. The output of the rare instance classifier can be used to adjust the classification output of the common instance classifier such that the likelihood of input images being incorrectly classified is reduced.
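The adjustment step above can be sketched as follows. The blending rule (scaling confidences by one minus the rarity score) and the confidence threshold are assumptions; the abstract only says the rarity score is used to adjust the common classifier's output so misclassifications become less likely.

```python
def adjust_scores(common_scores, rarity_score):
    """Scale per-class confidences down as rarity grows (rarity in [0, 1])."""
    weight = 1.0 - rarity_score
    return {label: s * weight for label, s in common_scores.items()}

def classify(common_scores, rarity_score, min_confidence=0.5):
    """Final label, or UNCERTAIN when the adjusted confidence is too low."""
    adjusted = adjust_scores(common_scores, rarity_score)
    label, score = max(adjusted.items(), key=lambda kv: kv[1])
    return label if score >= min_confidence else "UNCERTAIN"
```

A confidently scored common object keeps its label, while the same scores on a likely-rare instance fall below the threshold and are deferred rather than trusted.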
An example Printed Circuit Board (PCB) may include a via extending through at least one layer of the PCB. The PCB may also include a first catch pad connected to the via and located within a first metal layer of the PCB. The first catch pad may have a first size. The PCB may further include a second catch pad connected to the via and located within a second metal layer of the PCB. The second catch pad may have a second size greater than the first size. The second catch pad may overlap horizontally with a portion of a metallic feature in the first metal layer to obstruct light incident on a first side of the PCB from transmission to a second side of the PCB through a region of dielectric material near the via.
The technology relates to camera systems for vehicles having an autonomous driving mode. An example system includes a first camera mounted on a vehicle 100 in order to capture images of the vehicle's environment. The first camera 300 has a first exposure time and does not have an ND filter. The system also includes a second camera 350 mounted on the vehicle in order to capture images of the vehicle's environment, the second camera having an ND filter and a second exposure time. The system also includes one or more processors configured to capture images using the first camera and the first exposure time, capture images using the second camera and the second exposure time, use the images captured using the second camera to identify illuminated objects, use the images captured using the first camera to identify the locations of objects, and use the identified illuminated objects and identified locations of objects to control the vehicle in an autonomous driving mode.
B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
B60R 11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
54.
EARLY BOARDING OF PASSENGERS IN AUTONOMOUS VEHICLES
The technology relates to actively looking for an assigned passenger prior to a vehicle 100 reaching a pickup location (represented by marker 770). For instance, information identifying the pickup location and client device information for authenticating the assigned passenger is received. Sensor data is received from a perception system (172) of the vehicle identifying objects in an environment of the vehicle. When the vehicle is within a predetermined distance (represented by distance bar 772) of the pickup location, authenticating a client device (420, 430) using the client device information is attempted. When the client device has been authenticated, the sensor data is used to determine whether a pedestrian is within a first threshold distance (D1) of the vehicle. When a pedestrian (650, 652, 654, 656) is determined to be within the first threshold distance of the vehicle, the vehicle is stopped prior to reaching the pickup location, to wait for the pedestrian within the first threshold distance of the vehicle to enter the vehicle.
The present disclosure relates to systems and methods that facilitate light detection and ranging operations. An example transmit block includes at least one substrate with a plurality of angled facets. The plurality of angled facets provides a corresponding plurality of elevation angles. A set of angle differences between adjacent elevation angles includes at least two different angle difference values. A plurality of light-emitter devices is configured to emit light into an environment along the plurality of elevation angles toward respective target locations so as to provide a desired resolution and/or a respective elevation angle. The present disclosure also relates to adjusting shot power and a shot schedule based on the desired resolution and/or a respective elevation angle.
57.
USING WHEEL ORIENTATION TO DETERMINE FUTURE HEADING
The technology relates to determining a future heading of an object. In order to do so, sensor data, including information identifying a bounding box (420) representing an object in a vehicle's environment and locations of sensor data points corresponding to the object, may be received. Based on dimensions of the bounding box (420), an area (610) corresponding to a wheel of the object may be identified. An orientation of the wheel may then be estimated based on the sensor data points having locations within the area (610) by fitting the sensor data points within the area (610) to a plane (710). The estimation may then be used to determine a future heading of the object.
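The wheel-orientation estimate above fits the sensor points inside the wheel area to a plane. As a simplified sketch (an assumption, not the patented fit), the points' ground-plane projections can be fit to a line by least squares, with the slope standing in for the wheel's heading:

```python
def wheel_heading(points):
    """Least-squares fit of y = m*x + b to (x, y) projections of the points
    inside the wheel area; returns the slope m (the fitted heading)."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# Invented points along a wheel whose plane has slope 0.5 in the ground frame.
pts = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0), (3.0, 1.5)]
```

The recovered slope (here 0.5, about 26.6 degrees from the body axis) would then feed the future-heading prediction for the object.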
In one example, a LIDAR device includes a light source that emits light and a transmit lens that directs the emitted light to illuminate a region of an environment with a field-of-view defined by the transmit lens. The LIDAR device also includes a receive lens that focuses at least a portion of incoming light propagating from the illuminated region of the environment along a predefined optical path. The LIDAR device also includes an array of light detectors positioned along the predefined optical path. The LIDAR device also includes an offset light detector positioned outside the predefined optical path. The LIDAR device also includes a controller that determines whether collected sensor data from the array of light detectors includes data associated with another light source different than the light source of the device based on output from the offset light detector.
The present disclosure relates to optical systems, specifically light detection and ranging (LIDAR) systems. An example optical system includes a laser light source operable to emit laser light along a first axis and a mirror element with a plurality of reflective surfaces. The mirror element is configured to rotate about a second axis. The plurality of reflective surfaces is disposed about the second axis. The mirror element and the laser light source are coupled to a base structure, which is configured to rotate about a third axis. While the rotational angle of the mirror element is within an angular range, the emitted laser light interacts with both a first reflective surface and a second reflective surface of the plurality of reflective surfaces and is reflected into the environment by the first and second reflective surfaces.
Aspects of the disclosure relate to adjusting a virtual camera's orientation when a vehicle is making a turn. One or more computing devices may receive the vehicle's original heading prior to making the turn and the vehicle's current heading. Based on the vehicle's original heading and the vehicle's current heading, the one or more computing devices may determine an angle of the turn the vehicle is performing. The one or more computing devices may then determine a camera rotation angle, adjust the virtual camera's orientation relative to the vehicle to an updated orientation by rotating the virtual camera by the camera rotation angle, and generate a video corresponding to the virtual camera's updated orientation. The video may be displayed on the display by the one or more computing devices.
B60R 1/28 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with an adjustable field of view
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
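The heading-difference and camera-rotation steps of the abstract above admit a short sketch. This is an illustrative assumption-laden version, not the disclosed implementation: headings are in degrees, the turn angle is the signed normalized difference of the two headings, and the camera rotation is taken as a damped fraction of that turn angle (the 0.5 damping factor is purely hypothetical).

```python
def signed_angle_diff(original_heading_deg, current_heading_deg):
    """Signed difference current - original, normalized to (-180, 180]."""
    diff = (current_heading_deg - original_heading_deg) % 360.0
    if diff > 180.0:
        diff -= 360.0
    return diff

def camera_rotation_angle(original_heading_deg, current_heading_deg, damping=0.5):
    """Rotate the virtual camera by a damped fraction of the turn angle.
    The damping factor is an illustrative assumption."""
    return damping * signed_angle_diff(original_heading_deg, current_heading_deg)
```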
Described herein are methods and systems for protecting a light detection and ranging (LIDAR) device against external light that is originated at a light source other than a light source of the LIDAR device and that is being emitted towards the LIDAR device. In particular, the LIDAR device may be equipped with a mitigation system that includes an interference filter, an absorptive filter, an adaptive filter, and/or a spatial filter. Additionally or alternatively, the LIDAR device may be operated to carry out reactive and/or proactive mitigation operations. For example, the LIDAR device may be operated to vary over time characteristics with which light is being emitted and to only detect light having characteristics that match the characteristics with which light is being emitted. In another example, the LIDAR device may be operated to activate a shutter to block the external light from being detected by the LIDAR device.
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups , , of systems according to group
B60R 21/013 - Electrical circuits for triggering safety arrangements in case of vehicle accidents or impending vehicle accidents including means for detecting collisions, impending collisions or roll-over
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
A route (660) for a trip to a destination is generated using map information. A set of no-go roadway segments, where a vehicle (100) is not able to drive in an autonomous mode, relevant to the route is identified from the map information. A local region (800) around a current location of the vehicle is determined. A local map region (900) including roadway segments of the map information that correspond to locations within the local region is determined. The set of the plurality of no-go roadway segments is filtered from the roadway segments of the local map region. A cost value is assigned to each roadway segment of the filtered roadway segments of the local map region. Any assigned cost values are used to determine a plan for maneuvering the vehicle for a predetermined period into the future. The vehicle is maneuvered according to the plan.
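The filter-then-cost planning flow above can be sketched as follows. This is a minimal illustration under assumed data structures (segments as ids, a plan as a sequence of segment ids, candidate plans supplied externally), not the disclosed planner.

```python
def plan_from_local_map(local_segments, no_go, cost_fn, candidate_plans):
    """Filter no-go segments from the local map region, assign each
    remaining segment a cost, and pick the candidate plan with the
    lowest total cost. Plans touching a filtered-out or unknown
    segment are rejected."""
    drivable = {s: cost_fn(s) for s in local_segments if s not in no_go}
    best_plan, best_cost = None, float("inf")
    for plan in candidate_plans:
        if any(seg not in drivable for seg in plan):
            continue  # plan uses a no-go or out-of-region segment
        cost = sum(drivable[seg] for seg in plan)
        if cost < best_cost:
            best_plan, best_cost = plan, cost
    return best_plan, best_cost
```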
Described herein is a LIDAR device that may include a transmitter, first and second receivers, and a rotating platform. The transmitter may be configured to emit light having a vertical beam width. The first receiver may be configured to detect light at a first resolution while scanning the environment with a first FOV and the second receiver may be configured to detect light at a second resolution while scanning the environment with a second FOV. In this arrangement, the first resolution may be higher than the second resolution, the first FOV may be at least partially different from the second FOV, and the vertical beam width may encompass at least a vertical extent of the first and second FOVs. Further, the rotating platform may be configured to rotate about an axis such that the transmitter and first and second receivers each move based on the rotation.
The present disclosure relates to limitation of noise on light detectors using an aperture. One example embodiment includes a system. The system includes a lens disposed relative to a scene and configured to focus light from the scene onto a focal plane. The system also includes an aperture defined within an opaque material disposed at the focal plane of the lens. The aperture has a cross-sectional area. In addition, the system includes an array of light detectors disposed on a side of the focal plane opposite the lens and configured to intercept and detect diverging light focused by the lens and transmitted through the aperture. A cross-sectional area of the array of light detectors that intercepts the diverging light is greater than the cross-sectional area of the aperture.
Aspects of the disclosure relate to generating a speed plan for an autonomous vehicle. As an example, a vehicle is maneuvered in an autonomous driving mode along a route using pre-stored map information. This information identifies a plurality of keep clear regions where the vehicle should not stop but can drive through in the autonomous driving mode. Each keep clear region of the plurality of keep clear regions is associated with a priority value. A subset of the plurality of keep clear regions is identified based on the route. A speed plan for stopping the vehicle is generated based on the priority values associated with the keep clear regions of the subset. The speed plan identifies a location for stopping the vehicle. The speed plan is used to stop the vehicle in the location.
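The priority-aware stop-location selection above lends itself to a short sketch. This is an illustrative version under assumed representations (candidate stops as distances along the route, keep-clear regions as intervals with numeric priorities), not the disclosed speed planner: it prefers a stop in no keep-clear region and otherwise minimizes the worst overlapping priority.

```python
def choose_stop_location(candidates, keep_clear_regions):
    """Pick a stopping location from candidate positions (distances along
    the route). keep_clear_regions: list of (start, end, priority), higher
    priority meaning more important to keep clear. Prefer a spot in no
    region; otherwise the spot whose worst overlapping region has the
    lowest priority."""
    def worst_priority(pos):
        overlapping = [p for (s, e, p) in keep_clear_regions if s <= pos <= e]
        return max(overlapping, default=0)
    return min(candidates, key=worst_priority)
```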
The present disclosure relates to systems and circuits that may facilitate sub-5 nanosecond laser diode operation. An example system includes a trigger source, a laser diode, a first field effect transistor and a second field effect transistor. The laser diode is coupled to a supply voltage and a drain terminal of the first field effect transistor. A source terminal of the first field effect transistor is coupled to ground and a gate terminal of the first field effect transistor is coupled to the trigger source. A drain terminal of the second field effect transistor is coupled to the supply voltage. A source terminal of the second field effect transistor and a gate terminal of the second field effect transistor are coupled to ground. In an example embodiment, the first field effect transistor and the second field effect transistor comprise gallium nitride (GaN).
68.
METHOD AND SYSTEM FOR DETERMINING AND DYNAMICALLY UPDATING A ROUTE AND DRIVING STYLE FOR PASSENGER COMFORT
The disclosure provides for a method for determining a route for passenger comfort and operating a vehicle according to the determined route. To start, a set of routes from a start location to an end location may be determined. Each route includes one or more portions. For each route of the set of routes, a total motion sickness value is determined based on a sway motion sickness value, a surge motion sickness value, and a heave motion sickness value for each of the given portions. The total motion sickness value for a route reflects a likelihood that a user will experience motion sickness while in a vehicle along the route. A route may then be selected from the set of routes based on the total motion sickness value of each route of the set of routes, and the vehicle may be maneuvered according to the selected route.
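The per-portion combination and route selection described above can be sketched compactly. A simple additive model is assumed here for illustration; the actual combination of sway, surge, and heave values could be weighted or nonlinear.

```python
def total_motion_sickness(route_portions):
    """route_portions: list of (sway, surge, heave) values, one tuple per
    portion of the route. Additive combination is an assumption."""
    return sum(sway + surge + heave for sway, surge, heave in route_portions)

def select_route(routes):
    """routes: dict mapping route id -> list of per-portion values.
    Return the id of the route with the lowest total value."""
    return min(routes, key=lambda r: total_motion_sickness(routes[r]))
```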
An optical system and method may relate to capturing extended dynamic range images. In an example embodiment, the optical system may include a lens element configured to receive incident light and a beam splitter optically coupled to the lens element. The beam splitter is configured to separate the incident light into at least a first portion having a first photon flux and a second portion having a second photon flux. The first photon flux is at least an order of magnitude greater than the second photon flux. A controller may be configured to cause a first image sensor to capture a first image of the first portion of the incident light according to first exposure parameters and cause a second image sensor to capture a second image of the second portion of the incident light according to second exposure parameters.
Aspects of the disclosure relate to maneuvering a vehicle. As an example, sensor information identifying a set of objects as well as a set of characteristics for each object of the set of objects is received from a perception system of a vehicle. The set of objects is filtered to remove objects corresponding to vehicles, bicycles, and pedestrians. An object within an expected future path of the vehicle is selected from the filtered set of objects. The object is classified as drivable or not drivable based on the set of characteristics. Drivable indicates that the vehicle can drive over the object without causing damage to the vehicle. The vehicle is maneuvered based on the classification such that when the object is classified as drivable, maneuvering the vehicle includes driving the vehicle over the object by not altering the expected future path of the vehicle.
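The drivable/not-drivable decision above can be illustrated with a toy rule-based classifier. The feature names and thresholds below are entirely hypothetical; the patented system would classify based on the perception system's actual set of characteristics.

```python
def classify_drivable(obj, max_height_m=0.15, rigid_materials=("metal", "concrete")):
    """Toy classifier over an object's characteristics dict. Returns True
    when the vehicle could drive over the object without damage under
    these illustrative rules."""
    if obj.get("type") in ("vehicle", "bicycle", "pedestrian"):
        return False  # such objects are filtered out before classification
    if obj.get("height_m", float("inf")) > max_height_m:
        return False  # too tall to drive over safely
    if obj.get("material") in rigid_materials:
        return False  # rigid debris could damage the vehicle
    return True
```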
Aspects of the disclosure relate to testing predictions of an autonomous vehicle relating to another vehicle or object in a roadway. For instance, one or more processors may plan to maneuver a first vehicle autonomously to complete an action and predict that a second vehicle will take a responsive action. The first vehicle is maneuvered towards completing the action in a way that would allow the first vehicle to cancel completing the action without causing a collision between the first vehicle and the second vehicle, and in order to indicate to the second vehicle or a driver of the second vehicle that the first vehicle is attempting to complete the action. Thereafter, when the first vehicle is determined to be able to take the action, the action is completed by controlling the first vehicle autonomously using the determination of whether the second vehicle begins to take the particular responsive action.
B60W 30/09 - Taking automatic action to avoid collision, e.g. braking and steering
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
G05D 1/02 - Control of position or course in two dimensions
73.
ARRANGING PASSENGER PICKUPS FOR AUTONOMOUS VEHICLES
Aspects of the disclosure relate to arranging a pickup between a driverless vehicle and a passenger. For instance, dispatch instructions dispatching the vehicle to a predetermined pickup area in order to pick up the passenger are received by the vehicle, which begins maneuvering to the predetermined pickup area. While doing so, the vehicle receives from the passenger's client computing device the device's location. An indication that the passenger is interested in a fly-by pickup is identified. The fly-by pickup allows the passenger to safely enter the vehicle at a location outside of the predetermined pickup area and before the one or more processors have maneuvered the vehicle to the predetermined pickup area. The vehicle determines that the fly-by pickup is appropriate based on at least the location of the client computing device and the indication, and based on the determination, maneuvers itself in order to attempt the fly-by pickup.
The present application describes a method including transmitting at least two radar signals by a radar unit of a vehicle, where a first signal is transmitted from a first location and a second signal is transmitted from a second location. The method also includes receiving a respective reflection signal associated with each of the transmitted signals. Additionally, the method includes determining, by a processor, at least one stationary object that caused a reflection. Further, the method includes, based on the determined stationary object, determining, by the processor, an offset for the radar unit. The method yet further includes operating the radar unit based on the determined offset. Furthermore, the method includes controlling an autonomous vehicle based on the radar unit being operated with the determined offset.
B60R 21/0134 - Electrical circuits for triggering safety arrangements in case of vehicle accidents or impending vehicle accidents including means for detecting collisions, impending collisions or roll-over responsive to imminent contact with an obstacle
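The offset-determination step in the radar abstract above can be illustrated with a simple calibration sketch. This assumes the offset is a fixed angular bias estimated as the mean difference between measured bearings to known stationary objects and the bearings expected from the map and vehicle pose; the actual offset model in the disclosure may differ.

```python
def estimate_radar_offset(measured_angles_deg, expected_angles_deg):
    """Estimate a fixed angular offset for a radar unit as the mean
    difference between measured and expected bearings to known
    stationary objects."""
    diffs = [m - e for m, e in zip(measured_angles_deg, expected_angles_deg)]
    return sum(diffs) / len(diffs)

def correct_bearing(raw_angle_deg, offset_deg):
    """Apply the calibration offset to a raw radar bearing."""
    return raw_angle_deg - offset_deg
```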
75.
DEVICES AND METHODS FOR A ROTARY JOINT WITH MULTIPLE WIRELESS LINKS
A device is provided that includes a first platform having a first side, and a second platform having a second side positioned within a predetermined distance to the first side. The device also includes an actuator configured to cause a relative rotation between the first platform and the second platform such that the first side of the first platform remains within the predetermined distance to the second side of the second platform. The device also includes a probe mounted to the first platform, and a plurality of probes mounted to the second platform. The device also includes a signal conditioner coupled to the plurality of probes. The signal conditioner may select one of the plurality of probes based on an orientation of the first platform relative to the second platform. The signal conditioner may then use the selected probe for wireless communication with the probe on the first platform.
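The orientation-based probe selection above reduces to a short lookup when the probes are assumed to be evenly spaced around the rotation axis. That even-spacing assumption, and the nearest-probe rule, are illustrative only.

```python
def select_probe(num_probes, relative_angle_deg):
    """Select which of the probes on the second platform to use for the
    wireless link, given the first platform's orientation relative to
    the second. Probes are assumed evenly spaced around the axis; the
    nearest probe is chosen."""
    spacing = 360.0 / num_probes
    return int(round((relative_angle_deg % 360.0) / spacing)) % num_probes
```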
A method for controlling a vehicle includes generating, by a primary computing system, a nominal trajectory from a location to achieve a mission goal and a fallback trajectory from the location in order to safely stop the vehicle. The nominal and fallback trajectories are identical between the location and a divergent point and diverge after the divergent point. The fallback trajectory is sent to and received by a secondary computing system. The secondary computing system waits for an updated trajectory from the primary computing system while controlling the vehicle according to the fallback trajectory. When the vehicle reaches a threshold point on the fallback trajectory and an updated trajectory has not yet been received by the secondary computing system, the secondary computing system continues to control the vehicle according to the fallback trajectory in order to safely stop the vehicle.
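One decision step of the secondary computing system described above can be sketched as follows. Trajectories are treated as opaque values and position as a scalar distance along the fallback path; these representations are assumptions for illustration.

```python
def secondary_control_step(position, fallback, threshold_point, updated_trajectory):
    """One decision step of the secondary computing system: follow an
    updated trajectory if one has arrived; otherwise follow the fallback,
    and once past the threshold point commit to the fallback's safe stop.
    Returns (trajectory_to_follow, committed_to_stop)."""
    if updated_trajectory is not None:
        return updated_trajectory, False
    committed = position >= threshold_point
    return fallback, committed
```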
The technology relates to facilitating transportation services between a user and a vehicle having an autonomous driving mode. For instance, one or more server computing devices having one or more processors may receive information identifying the current location of the vehicle. The one or more server computing devices may determine that the user is likely to want to take a trip to a particular destination based on prior location history for the user. The one or more server computing devices may dispatch the vehicle to cause the vehicle to travel in the autonomous driving mode towards a location of the user. In addition, after dispatching, the one or more server computing devices may send a notification to a client computing device associated with the user indicating that the vehicle is currently available to take the passenger to the particular destination.
A method is provided that involves identifying a target region of an environment of an autonomous vehicle to be monitored for presence of moving objects. The method also involves operating a first sensor to obtain a scan of a portion of the environment that includes at least a portion of the target region and an intermediate region between the autonomous vehicle and the target region. The method also involves determining whether a second sensor has a sufficiently clear view of the target region based on at least the scan obtained by the first sensor. The method also involves operating the second sensor to monitor the target region for presence of moving objects based on at least a determination that the second sensor has a sufficiently clear view of the target region. Also provided is an autonomous vehicle configured to perform the method.
Aspects of the disclosure provide systems and methods for providing suggested locations for pick up and destination locations. Pick up locations may include locations where an autonomous vehicle (100) can pick up a passenger, while destination locations may include locations where the vehicle can wait for an additional passenger, stop and wait for a passenger to perform some task and return to the vehicle, or for the vehicle to drop off a passenger. As such, a request for a vehicle may be received from a client computing device (120, 130). The request may identify a first location. A set of one or more suggested locations may be selected by comparing the predetermined locations to the first location. The set may be provided to the client computing device.
A side view mirror assembly includes a retention mechanism, a securing mechanism, and a reflective surface. The retention mechanism may be configured to engage at least a portion of a windowsill of a vehicle. The securing mechanism may be connected to the retention mechanism and configured to removably secure the mirror assembly to a surface of the vehicle. The reflective surface may be connected to the retention mechanism.
Systems and methods are described that relate to a light detection and ranging (LIDAR) device. The LIDAR device includes a fiber laser configured to emit light within a wavelength range, a scanning portion configured to direct the emitted light in a reciprocating manner about a first axis, and a plurality of detectors configured to sense light within the wavelength range. The device additionally includes a controller configured to receive target information, which may be indicative of an object, a position, a location, or an angle range. In response to receiving the target information, the controller may cause the rotational mount to rotate so as to adjust a pointing direction of the LIDAR. The controller is further configured to cause the LIDAR to scan a field-of-view (FOV) of the environment. The controller may determine a three-dimensional (3D) representation of the environment based on data from scanning the FOV.
A vehicle is provided that includes one or more wheels positioned at a bottom side of the vehicle. The vehicle also includes a first light detection and ranging device (LIDAR) positioned at a top side of the vehicle opposite to the bottom side. The first LIDAR is configured to scan an environment around the vehicle based on rotation of the first LIDAR about an axis. The first LIDAR has a first resolution. The vehicle also includes a second LIDAR configured to scan a field-of-view of the environment that extends away from the vehicle along a viewing direction of the second LIDAR. The second LIDAR has a second resolution. The vehicle also includes a controller configured to operate the vehicle based on the scans of the environment by the first LIDAR and the second LIDAR.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
83.
VEHICLE WITH MULTIPLE LIGHT DETECTION AND RANGING DEVICES (LIDARS)
A vehicle is provided that includes one or more wheels positioned at a bottom side of the vehicle. The vehicle also includes a first light detection and ranging device (LIDAR) positioned at a top side of the vehicle opposite to the bottom side. The first LIDAR is configured to scan an environment around the vehicle based on rotation of the first LIDAR about an axis. The first LIDAR has a first resolution. The vehicle also includes a second LIDAR configured to scan a field-of-view of the environment that extends away from the vehicle along a viewing direction of the second LIDAR. The second LIDAR has a second resolution. The vehicle also includes a controller configured to operate the vehicle based on the scans of the environment by the first LIDAR and the second LIDAR.
G01S 17/88 - Lidar systems, specially adapted for specific applications
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
84.
VEHICLE WITH MULTIPLE LIGHT DETECTION AND RANGING DEVICES (LIDARS)
A vehicle is provided that includes one or more wheels positioned at a bottom side of the vehicle. The vehicle also includes a first light detection and ranging device (LIDAR) positioned at a top side of the vehicle opposite to the bottom side. The first LIDAR is configured to scan an environment around the vehicle based on rotation of the first LIDAR about an axis. The first LIDAR has a first resolution. The vehicle also includes a second LIDAR configured to scan a field-of-view of the environment that extends away from the vehicle along a viewing direction of the second LIDAR. The second LIDAR has a second resolution. The vehicle also includes a controller configured to operate the vehicle based on the scans of the environment by the first LIDAR and the second LIDAR.
G01S 17/87 - Combinations of systems using electromagnetic waves other than radio waves
G01S 17/931 - Lidar systems, specially adapted for specific applications for anti-collision purposes of land vehicles
B60W 40/02 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to ambient conditions
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01S 13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
G01S 17/66 - Tracking systems using electromagnetic waves other than radio waves
Aspects of the present disclosure relate to a vehicle 100 for maneuvering a passenger to a destination autonomously. The vehicle 100 includes one or more computing devices 110 and a set of user input buttons 612, 614 for communicating requests to stop the vehicle and to initiate a trip to the destination with the one or more computing devices. The set of user input buttons consists essentially of a dual-purpose button 614 and an emergency stopping button 612, different from the dual-purpose button 614, that is configured to stop the vehicle. The dual-purpose button 614 has a first purpose for communicating a request to initiate the trip to the destination and a second purpose for communicating a request to pull the vehicle over and stop the vehicle. The vehicle has no steering wheel and no user inputs for the steering, acceleration, and deceleration of the vehicle other than the set of user input buttons.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60R 16/02 - Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric
87.
DEVICES AND METHODS FOR AN ENERGY-ABSORBING END OF A VEHICLE
A vehicle is provided that includes a frame and a mount to couple a first end of an apparatus to the frame. The apparatus comprises a central region that includes a first energy-absorbing material. A first side of the central region is included in the first end of the apparatus coupled to the frame. The apparatus comprises a side region that includes a second energy-absorbing material. The side region is positioned along a second side of the central region. The side region is configured to be positioned above a wheel of the vehicle.