In an embodiment, a navigation system for a host vehicle may include at least one processor comprising circuitry and a memory. The memory may include instructions that, when executed by the circuitry, cause the at least one processor to receive at least one image from a camera on the host vehicle; analyze the at least one image to identify at least one object represented in the image; generate a feature vector representative of the at least one object; compare the generated feature vector to a plurality of feature vectors stored in a database; and, in response to a determination that the generated feature vector does not match an entry in the database, send the generated feature vector to a server, wherein the server is configured to generate an updated feature vector database in response to the generated feature vector sent by the host vehicle navigation system in combination with feature vectors received from a plurality of additional vehicles.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
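The match-then-upload flow described in the abstract above can be sketched minimally as follows, assuming a cosine-similarity match against the stored vectors with a hypothetical threshold and a caller-supplied upload callback; all names and the threshold value are illustrative, not details from the patent:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_or_upload(feature, database, send_to_server, threshold=0.9):
    """Return the best database match, or hand the vector to the
    upload callback when no stored entry is similar enough."""
    if not database:
        send_to_server(feature)
        return None
    best = max(database, key=lambda entry: cosine_similarity(feature, entry))
    if cosine_similarity(feature, best) >= threshold:
        return best
    send_to_server(feature)  # server aggregates vectors from many vehicles
    return None
```

A server receiving such uploads could then merge them with vectors from other vehicles into an updated database, as the abstract describes.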
Systems and methods are provided for navigating a host vehicle. In one implementation, a system may include at least one processor configured to receive at least one image acquired by an image capture device, the at least one image being representative of an environment of the host vehicle; analyze the at least one image to identify at least one characteristic associated with the environment of the host vehicle; determine a navigational action for the host vehicle based on: the at least one characteristic associated with the environment of the host vehicle, and a steering limit corresponding to a maximum allowable lateral acceleration for the host vehicle; and cause one or more actuators associated with the host vehicle to implement the determined navigational action.
The invention relates to a method and system that determine a certainty score, e.g. an aleatoric certainty score, as well as an uncertainty score, e.g. an epistemic uncertainty, based on input data. The input data includes static and dynamic data. A trained behavioral model generates trajectory predictions together with the certainty and uncertainty scores. A vehicle-based function is controlled based on the certainty score and the uncertainty score, e.g. to increase the following distance or to hand over control to a human driver.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
4.
DETECTING AN OPEN DOOR USING A SPARSE REPRESENTATION
A computer-implemented method for navigating a host vehicle includes receiving an image frame from an image capture device, the image frame representing an environment of the host vehicle and including a representation of a target vehicle; analyzing the image frame to determine a sparse representation of a portion of the image frame; providing the sparse representation to an object detection network; receiving an identifier of a candidate region identified by the object detection network; based on the identifier, extracting the candidate region from the image frame; providing the candidate region to an open door detection network; determining a navigational action for the host vehicle in response to an indication from the open door detection network that the candidate region includes a representation of a door of the target vehicle in an open condition; and causing an actuator associated with the host vehicle to implement the navigational action.
Techniques are disclosed to enable an adaptive vehicle advanced driver assistance system (ADAS) unit, also referred to as a "smart" ADAS. The smart ADAS unit transmits vehicle ADAS messages, which are received and aggregated by a remote computing system. The remote computing system may optionally include, in the aggregated data set, supplemental data such as weather information, traffic data, etc. The remote computing system identifies, from the aggregated data set, ADAS alert events and their corresponding locations, and uses predetermined rule sets to identify potential ADAS alert configuration settings that may be updated by vehicles in a service range. The ADAS configuration messages provide each vehicle with instructions regarding if, when, and how the ADAS configuration settings should be adjusted, which may comprise adjusting ADAS alert sensitivity settings to dynamically change the manner in which ADAS alerts are issued for each ADAS alert event.
B60W 50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
G05B 13/00 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
B60W 10/18 - Conjoint control of vehicle sub-units of different type or different function including control of braking systems
B60W 10/20 - Conjoint control of vehicle sub-units of different type or different function including control of steering systems
B60W 30/095 - Predicting travel path or likelihood of collision
G08G 1/017 - Detecting movement of traffic to be counted or controlled identifying vehicles
Systems and methods are provided for generating a crowd-sourced map for use in vehicle navigation. In one implementation, a system may include at least one processor configured to receive drive information collected from vehicles that traversed a junction; aggregate the received drive information to determine positions of traffic lights and spline representations for drivable paths; input the determined positions and the spline representations to a trained model configured to generate a traffic light relevancy mapping indicating a traffic light relevancy for traffic-light-to-drivable-path pairs of the junction; input an observed vehicle behavior to the trained model to generate an updated traffic light relevancy mapping; store in the crowd-sourced map indicators of traffic light relevancy for the traffic-light-to-drivable-path pairs; and transmit the crowd-sourced map to a vehicle for use in navigating the junction.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 20/54 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Techniques are disclosed for performing knowledge distillation in the context of machine learning model training. A fixed pool of "teacher" models that meet predefined conditions is provided. A machine learning model is then trained to provide a "student" by applying a random selection of the teachers to unlabeled sample data, which accelerates the training process. The algorithm implemented for this purpose ensures that the trained student exhibits low error.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
G06N 7/01 - Probabilistic graphical models, e.g. probabilistic networks
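The random-teacher distillation idea can be sketched with a deliberately tiny stand-in model: a scalar linear "student" fit to pseudo-labels produced by a randomly chosen teacher function on each unlabeled sample. The function names, the scalar model, and the squared-error objective are illustrative assumptions, not details from the disclosure:

```python
import random

def distill(student_w, teachers, unlabeled_samples, lr=0.1, epochs=200, seed=0):
    """Train a linear student y = w*x to mimic a randomly selected
    teacher on each unlabeled sample (no human-provided labels)."""
    rng = random.Random(seed)
    w = student_w
    for _ in range(epochs):
        for x in unlabeled_samples:
            teacher = rng.choice(teachers)   # random teacher selection
            target = teacher(x)              # teacher output as pseudo-label
            pred = w * x
            w -= lr * (pred - target) * x    # squared-error gradient step
    return w
```

With consistent teachers, the student converges to the shared teacher behavior; with a diverse pool, it approaches an averaged behavior over the pool.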
A navigation system for a host vehicle may be configured to receive, from an image capture device associated with the host vehicle, a captured image representative of an environment of the host vehicle; to provide the captured image to a first trained network configured to generate a first output indicative of a state of a traffic light; to provide the captured image to a second trained network configured to generate a second output indicative of a proposed navigational action for the host vehicle relative to the traffic light; to determine, based on both the first output from the first trained network and the second output from the second trained network, a planned navigational action for the host vehicle; and to cause the host vehicle to take the planned navigational action.
B60T 7/12 - Brake-action initiating means for initiation not subject to will of driver or passenger
B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
A server-based system for generating a map for storing a turn signal activation location along a road segment may include at least one processor comprising circuitry and a memory. The memory may include instructions that when executed by the circuitry cause the at least one processor to receive drive information from each of a plurality of vehicles that traversed a road segment, wherein the drive information includes turn signal activation information indicating a detected change in state of a turn signal of at least one target vehicle and a location where the detected change in state of the turn signal of the target vehicle occurred.
B60Q 1/34 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic for indicating change of drive direction
G01C 21/36 - Input/output arrangements for on-board computers
G08G 1/01 - Detecting movement of traffic to be counted or controlled
B60W 50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
Systems and methods are provided for training and using a model to predict image blockages. In one implementation, a system may comprise at least one processor. The at least one processor may be programmed to obtain a plurality of training images, each of the plurality of training images being associated with a blockage indicator representing a presence of a blockage or an absence of a blockage; analyze intensities of pixels located at corresponding pixel coordinates of the plurality of training images; and cause the model to undergo at least one training process based on the plurality of training images, the blockage indicator associated with each of the plurality of training images, and the analysis of the intensities of the pixels.
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06V 10/98 - Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
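One hedged way to realize the described per-pixel intensity analysis is to compute the mean and variance of the intensity at each pixel coordinate across the training set; a region whose variance stays low across many frames can hint at a static blockage. The helper below is an illustrative sketch under that assumption, not the disclosed training procedure:

```python
def intensity_statistics(images):
    """Per-pixel mean and variance of intensity across equally sized
    grayscale training images (each image is a list of rows)."""
    h, w = len(images[0]), len(images[0][0])
    n = len(images)
    means, variances = [], []
    for r in range(h):
        mrow, vrow = [], []
        for c in range(w):
            vals = [img[r][c] for img in images]
            m = sum(vals) / n
            mrow.append(m)
            vrow.append(sum((v - m) ** 2 for v in vals) / n)  # population variance
        means.append(mrow)
        variances.append(vrow)
    return means, variances
```

These statistics could then feed the training process alongside the per-image blockage indicators mentioned in the abstract.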
11.
APPLICATION OF MEAN TIME BETWEEN FAILURE (MTBF) MODELS FOR AUTONOMOUS VEHICLES
To receive authority certification for mass deployment of autonomous vehicles (AVs), manufacturers need to demonstrate that their AVs operate more safely than human drivers. This in turn creates the need to estimate and model the collision rate (failure rate) of an AV, taking all possible errors and driving situations into account. In other words, there is a strong demand for comprehensive Mean Time Between Failure (MTBF) models for AVs. The disclosure describes such a generic and scalable model that creates a link from errors in the perception system to vehicle-level failures (collisions). Using this model, requirements for the perception quality may then be derived based on the desired vehicle-level MTBF, or vice versa, to obtain an MTBF value given a certain mission profile and perception quality.
G06F 11/36 - Preventing errors by testing or debugging of software
G06F 30/20 - Design optimisation, verification or simulation
B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
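The forward and inverse uses of such an MTBF model can be illustrated with a deliberately simplified calculation, assuming independent perception errors that each lead to a collision with a fixed probability. The function names and rates are hypothetical examples, not values from the disclosure:

```python
def vehicle_mtbf_hours(perception_error_rate_per_hour, p_error_to_collision):
    """Vehicle-level MTBF, assuming independent perception errors that
    each cause a collision with a fixed probability."""
    collision_rate = perception_error_rate_per_hour * p_error_to_collision
    return 1.0 / collision_rate

def required_perception_error_rate(target_mtbf_hours, p_error_to_collision):
    """Inverse direction: maximum tolerable perception error rate for a
    desired vehicle-level MTBF."""
    return 1.0 / (target_mtbf_hours * p_error_to_collision)
```

For example, 0.01 perception errors per hour, of which 1 in 1000 cause a collision, would yield a 100,000-hour vehicle-level MTBF under these simplifying assumptions.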
Systems and methods for navigating a host vehicle are disclosed. In one implementation, a system includes a processor configured to receive from a camera onboard the host vehicle a captured image representative of an environment of the host vehicle. The captured image is provided to a trained system. The trained system is configured to infer from the captured image an output indicating a presence of a curved road segment, wherein the curved road segment is associated with a road on which the host vehicle is traveling. The processor is configured to receive the output provided by the trained system. The output includes at least one speed value for the host vehicle. The at least one speed value output from the trained system is based on a proximity of the host vehicle to the curved road segment and on at least one characteristic of the curved road segment represented in the captured image. The processor is configured to cause the host vehicle to take at least one navigational action based on the determined at least one speed value.
Techniques are disclosed for improving the detection of objects having different relative angular velocities with respect to the vehicle cameras. The techniques function to selectively weight pixel exposure values to favor longer or shorter exposure times for certain pixels within the pixel array over others. A selective pixel exposure weighting system is disclosed that functions to weight the exposure values for pixels acquired within a pixel array based upon the position of the pixel within the pixel array and other factors such as the movement and/or orientation of the vehicle. The techniques advantageously enable an autonomous vehicle (AV) or advanced driver-assistance systems (ADAS) to make better use of existing cameras and eliminate motion blur and other artifacts.
H04N 23/743 - Bracketing, i.e. taking a series of images with varying exposure conditions
H04N 23/73 - Circuitry for compensating brightness variation in the scene by influencing the exposure time
H04N 23/68 - Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
H04N 25/58 - Control of the dynamic range involving two or more exposures
B60R 1/20 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
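A position-dependent exposure weighting like the one described might, under strong simplifying assumptions, look like the following sketch, where pixel columns farther from the image center are de-weighted in proportion to a motion term such as yaw rate (shorter effective exposure to reduce motion blur). The function name and weighting law are illustrative, not the disclosed scheme:

```python
def exposure_weight(col, width, yaw_rate, base=1.0, gain=0.5):
    """Per-column exposure weight: columns near the image edges see
    higher apparent angular velocity during a turn, so they receive a
    lower weight (favoring a shorter effective exposure)."""
    # offset in [-1, 1]: -1 at the left edge, +1 at the right edge
    offset = (2.0 * col / (width - 1)) - 1.0
    w = base - gain * abs(yaw_rate) * abs(offset)
    return max(w, 0.1)  # clamp so every pixel still integrates some light
```

With zero yaw rate the weighting degenerates to a uniform exposure; during a turn, edge columns are progressively de-weighted.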
The invention relates to a vehicle comprising a memory configured to store instructions, and processing circuitry that is part of an advanced driver-assistance system (ADAS) of the vehicle, the processing circuitry being configured to execute the instructions stored in the memory to: (502) determine a location of the vehicle; (504) identify a hotspot location that is associated with a location at which safety-based warnings previously issued by other vehicles meet a predefined criterion; (506) determine a safety driving model (SDM) rule associated with the safety-based warnings; and (508) generate one or more trigger signals based upon the SDM rule and the location of the vehicle with respect to the identified hotspot location, wherein the one or more trigger signals cause the ADAS to (510) adjust one or more SDM parameters of the SDM rule.
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
15.
STEREO-ASSIST NETWORK FOR DETERMINING AN OBJECT'S LOCATION
Systems and methods for navigating a host vehicle are disclosed. In one implementation, a system includes a processor configured to receive a first image acquired by a first camera and a second image acquired by a second camera onboard the host vehicle; identify a first representation of an object in the first image and a second representation of the object in the second image; input to a first trained model at least a portion of the first image; input to a second trained model at least a portion of the second image; receive the first signature encoding determined by the first trained model and the second signature encoding determined by the second trained model; input to a third trained model the first signature encoding and the second signature encoding; and receive an indicator of a location of the object determined by the third trained model.
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Systems and methods for navigating a host vehicle based on RADAR-camera fusion are disclosed. In one implementation, a system includes a processor configured to receive images acquired by a camera onboard the host vehicle; identify a representation of a target vehicle in one of the images; receive from a RADAR system an indicator of a range between the host vehicle and the target vehicle; based on analysis of the images, identify in the image a ground intersection point associated with the target vehicle and a road surface; and determine an elevation value for the road surface based on the indicator of the range between the host vehicle and the target vehicle, the determined ground intersection point, and an angle of inclination between an optical axis of the camera and a ray directed toward a location of the ground intersection point.
Techniques are disclosed for using augmented reality (AR) displays to convey graphical representations of different types of information to a user. The information may be obtained from various sources, such as objects and/or features dynamically detected by a vehicle, or from user-generated content. The graphical representations may also be based upon information obtained via the use of crowdsourced AV map data. The graphical representations may be presented as an AR view in a medium (such as a vehicle windshield) that matches a field of view of a user, thus accurately blending holographic information with the physical objects in the scene.
Techniques are disclosed for reducing false positives when generating warnings to avoid potential collisions between a vehicle and vulnerable road users (VRUs). This is accomplished via an onboard vehicle safety system that uses crowdsourced map data to determine whether a vehicle is capable of performing a maneuver that results in a lateral shift of the vehicle (which may include a lane-shifting or turning maneuver) within a predetermined threshold time period. The ability of the vehicle to make the turning maneuver, among other driving scenarios, may be used by the safety system to intelligently determine whether a warning or other action is needed to avoid a potential collision with a VRU. In this way, the occurrence and number of false warnings/interventions are minimized or at least reduced, leading to more attentive drivers and thereby improving VRU safety.
G08G 1/01 - Detecting movement of traffic to be counted or controlled
G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C 1/00-G01C 19/00
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
Techniques are disclosed to implement vehicle camera lens cleaning systems that selectively utilize different cleaning modes for the vehicle camera lenses based upon various conditions. The different cleaning modes may be used on a per-camera basis to provide specific types of cleaning to each of the vehicle cameras. The cleaning modes may include the use of air and/or liquid cleaning, which may be provided to each of the vehicle cameras independently.
Systems and methods for identifying objects in an environment of a host vehicle are disclosed. In one implementation, a system includes a processor configured to receive images representative of the environment of the host vehicle; assign first pixel descriptor values to a plurality of pixels associated with a first image and second pixel descriptor values to a plurality of pixels associated with a second image; identify object representations in the first image and the second image based on the first pixel descriptor values and the second pixel descriptor values, respectively; determine a first object descriptor and a second object descriptor based on the first pixel descriptor values and the second pixel descriptor values, respectively; and, based on a comparison of the first object descriptor and the second object descriptor, output an indication that the object representations in the first image and the second image represent a common object.
A method includes: receiving input code that comprises a loop that operates on a first array of elements and a second array of elements, wherein during an iteration of the loop a first operation is performed on an element of the first array of elements, or a second operation is performed on an element of the second array of elements; generating a first compound operation that operates on a predetermined number of elements of the first array of elements, the first compound operation resulting in a first intermediate vector; generating a second compound operation that operates on the predetermined number of elements of the second array of elements, the second compound operation resulting in a second intermediate vector; interleaving the first intermediate vector and the second intermediate vector and storing the interleaved result in a temporary vector; and summing the interleaved result in the temporary vector using an order-preserving sum.
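The chunked, interleaved, order-preserving reduction described in this method can be simulated in plain scalar code; the sketch below is an illustrative model of the data flow (compound operations on fixed-width chunks, interleaving into a temporary vector, then a strict left-to-right sum), not the claimed compiler transformation itself:

```python
def interleaved_sum(first, second, op1, op2, width=4):
    """Apply op1/op2 to fixed-width chunks of each array, interleave
    the two intermediate vectors into a temporary vector, and reduce
    with a left-to-right (order-preserving) sum."""
    total = 0.0
    for i in range(0, len(first), width):
        inter1 = [op1(x) for x in first[i:i + width]]   # first compound op
        inter2 = [op2(x) for x in second[i:i + width]]  # second compound op
        temp = []
        for a, b in zip(inter1, inter2):                # interleave
            temp.extend((a, b))
        for v in temp:                                  # order-preserving sum
            total += v
    return total
```

Preserving the summation order matters for floating-point reproducibility: the vectorized form yields the same rounding behavior as a fixed-order scalar loop would.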
A host vehicle-based feature harvester is disclosed. In one implementation, the feature harvester includes memory and a processor configured to receive a plurality of images captured by a camera onboard the host vehicle, the plurality of images being representative of an environment of the host vehicle; analyze at least one image from the plurality of images to identify a representation of a lane mark; select at least one sample area of the representation of the lane mark, wherein the at least one sample area is associated with an image location of at least a portion of the representation of the lane mark; determine a location identifier of the at least one sample area; determine a surface quality indicator associated with the at least one sample area; and cause transmission of the location identifier and the surface quality indicator to an entity remotely located relative to the host vehicle.
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
B60G 17/0165 - Resilient suspensions having means for adjusting the spring or vibration-damper characteristics, for regulating the distance between a supporting surface and a sprung part of the vehicle, or for locking suspension during use to meet varying vehicular or surface conditions, the regulating means comprising electric or electronic elements characterised by their responsiveness, when the vehicle is travelling, to specific motion, a specific condition, or driver input to an external condition, e.g. rough road surface, side wind
A system for collecting and distributing navigation information relative to a road segment is disclosed. In one embodiment, the system includes at least one processor programmed to receive drive information collected from each of a plurality of vehicles that traversed the road segment, wherein the drive information received from each of the plurality of vehicles includes indicators of speed traveled by one of the plurality of vehicles during a drive traversing the road segment; determine, based on the indicators of speed included in the drive information received from each of the plurality of vehicles, at least one aggregated common speed profile for the road segment; store the at least one aggregated common speed profile in an autonomous vehicle road navigation model associated with the road segment; and distribute the autonomous vehicle road navigation model to one or more autonomous vehicles for use in navigating along the road segment.
A system for automatically mapping a road segment may include: at least one processor programmed to: receive, from at least one camera mounted on a vehicle, a plurality of images acquired as the vehicle traversed the road segment; convert each of the plurality of images to a corresponding top view image to provide a plurality of top view images; aggregate the plurality of top view images to provide an aggregated top view image of the road segment; analyze the aggregated top view image to identify at least one road feature associated with the road segment; automatically annotate the at least one road feature relative to the aggregated top view image; and output to at least one memory the aggregated top view image including the annotated at least one road feature.
System and techniques for test scenario verification, for a simulation of an autonomous vehicle safety action, are described. In an example, measuring performance of a test scenario used in testing an autonomous driving safety requirement includes: defining a test environment for a test scenario that tests compliance with a safety requirement including a minimum safe distance requirement; identifying test procedures to use in the test scenario that define actions for testing the minimum safe distance requirement; identifying test parameters to use with the identified test procedures, such as velocity, amount of braking, timing of braking, and rate of acceleration or deceleration; and creating the test scenario for use in an autonomous driving test simulator. Use of the test scenario includes applying the identified test procedures and the identified test parameters to identify a response of a test vehicle to the minimum safe distance requirement.
System and techniques for vehicle operation safety model (VOSM) grade measurement are described herein. A data set of parameter measurements-defined by the VOSM-of multiple vehicles are obtained. A statistical value is then derived from a portion of the parameter measurements. A measurement from a subject vehicle is obtained that corresponds to the portion of the parameter measurements from which the statistical value was derived. The measurement is then compared to the statistical value to produce a safety grade for the subject vehicle.
G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
G07C 5/08 - Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle, or waiting time
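As a toy illustration of comparing a subject vehicle's measurement against a fleet-derived statistic, one could use a percentile-style grade; the grading rule below is a hypothetical example (lower measurement treated as safer), not the disclosed VOSM metric:

```python
def safety_grade(fleet_measurements, subject_measurement):
    """Grade as the fraction of fleet measurements that the subject
    vehicle matches or beats, assuming lower values are safer."""
    better_or_equal = sum(1 for m in fleet_measurements
                          if subject_measurement <= m)
    return better_or_equal / len(fleet_measurements)
```

A grade near 1.0 would indicate the subject vehicle outperforms most of the fleet on that parameter; a grade near 0.0 would flag it as an outlier.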
27.
SAFETY AND CRITICAL INFORMATION LOGGING MECHANISM FOR VEHICLES
Various aspects of methods, systems, and use cases for safety logging in a vehicle are described. In an example, an approach for data logging in a vehicle includes use of logging triggers, public and private data buckets, and defined data formats, for data provided during autonomous vehicle operation. Data logging operations may be triggered in response to safety conditions, such as detecting a dangerous situation from a failure of the vehicle to comply with safety criteria of a vehicle operational safety model. Data logging operations may include logging data in response to detection of the dangerous situation, including storage of a first portion of data in a public data store, and storage of a second portion of privacy-sensitive data in a private data store, where the data stored in the private data store is encrypted, and where access to the private data store is controlled.
B60W 30/08 - Predicting or avoiding probable or impending collision
B60T 8/171 - Detecting parameters used in the regulation; Measuring values used in the regulation
B60T 17/22 - Devices for monitoring or checking brake systems; Signal devices
B60T 8/172 - Determining control parameters used in the regulation, e.g. by calculations involving measured or detected parameters
B60T 8/32 - Arrangements for adjusting wheel-braking force to meet varying vehicular or ground-surface conditions, e.g. limiting or varying distribution of braking force responsive to a speed condition, e.g. acceleration or deceleration
28.
LIGHTWEIGHT IN-VEHICLE CRITICAL SCENARIO EXTRACTION SYSTEM
Various aspects of methods, systems, and use cases for critical scenario identification and extraction from vehicle operations are described. In an example, an approach for lightweight analysis and detection includes capturing data from sensors associated with (e.g., located within, or integrated into) a vehicle, detecting the occurrence of a critical scenario, extracting data from the sensors in response to detecting the occurrence of the critical scenario, and outputting the extracted data. The critical scenario may be specifically detected based on a comparison of the operation of the vehicle to at least one requirement specified by a vehicle operation safety model. Reconstruction and further data processing may be performed on the extracted data, such as with the creation of a simulation from extracted data that is communicated to a remote service.
G07C 5/08 - Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle, or waiting time
System and techniques for vehicle operation safety model (VOSM) compliance measurement are described herein. A subject vehicle is tested in a vehicle-following scenario against VOSM parameter compliance. The test measures the subject vehicle activity during phases of the following scenario in which a lead vehicle slows, and produces log data and calculations that form the basis of a VOSM compliance measurement.
Systems and methods evaluate navigation system capabilities. In one implementation, at least one processing device is programmed to acquire characteristics of one or more sensors included in the host vehicle; establish a testing domain, wherein the testing domain includes at least one mapped representation of a geographic region; and simulate operation of the one or more sensors relative to the testing domain. Based on the simulated operation of the one or more sensors, the at least one processing device may determine whether one or more regions exist within the geographic region where outputs of the one or more sensors are insufficient for ensuring that each navigational action implemented by the navigation system of the host vehicle will not result in an accident for which the host vehicle is at fault.
Techniques are disclosed for the generation of automatic software tests for complex software systems, such as operating systems (OS) and/or systems that may be implemented as part of an autonomous vehicle (AV) or advanced driving assistance system (ADAS). The technique generates tests using a tool, such as a stressor, which stresses a particular system under test in multiple ways. For every run of the stressor, the functions of the system that are invoked during the test are captured. A check is then performed to determine if this set of functions corresponds to one of the test scenarios for which testing is desired. If the set of functions that were invoked matches the set of functions that defines the test, then the configuration of the stressor is stored, and this stressor configuration is considered as the test for a particular scenario.
A system for correlating drive information from multiple road segments is disclosed. In one embodiment, the system includes memory and a processor configured to receive drive information from vehicles that traversed a first road segment and vehicles that traversed a second road segment. The processor is configured to correlate the drive information from the vehicles to provide a first road model segment representative of the first road segment and a second road model segment representative of the second road segment. The processor correlates the first road model segment with the second road model segment to provide a correlated road segment model if a drivable distance between a first point associated with the first road segment and a second point associated with the second road segment is less than or equal to a predetermined distance threshold, and stores the correlated road segment model as part of a sparse navigational map.
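The distance-gated correlation described above can be sketched as follows. This is a minimal illustration, assuming two-dimensional point lists in a shared frame and approximating the drivable distance by the straight-line distance between the nearest endpoints (a real system would measure distance along the road network); all names are hypothetical.

```python
import math

def correlate_segments(first_segment, second_segment, distance_threshold):
    """Link two road model segments into one correlated road segment model
    when the gap between them is within the predetermined threshold.

    Each segment is a list of (x, y) points; the drivable distance is
    approximated by the straight-line distance between the end of the
    first segment and the start of the second (an assumption).
    """
    first_point = first_segment[-1]
    second_point = second_segment[0]
    gap = math.dist(first_point, second_point)
    if gap <= distance_threshold:
        # Store the two segments as one correlated road segment model.
        return first_segment + second_segment
    return None

# Two segments whose endpoints are 5 m apart correlate under a 10 m threshold.
joined = correlate_segments([(0, 0), (100, 0)], [(105, 0), (200, 0)], 10.0)
```

Segments whose gap exceeds the threshold remain separate models in the sparse map.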
A method for retrieving neural network coefficients may include executing neural network operations and storing, in at least one data memory, one or more intermediate results of the neural network operations. The method may also include retrieving, in an iterative manner, subsets of neural network coefficients related to a particular layer of a neural network associated with at least one of the neural network processors. Different ones of the neural network processors may use at least one of the subsets of the neural network coefficients. The retrieving the subsets of neural network coefficients may include caching the subsets in coefficient cache memory. At least some of the subsets may be cached in the coefficient cache memory for up to a first duration, and at least some of the intermediate results may be stored in the at least one data memory for a duration that exceeds the first duration.
A method for decompressing data may include receiving a first sequence of bits and performing a plurality of iterations. Each of the plurality of iterations may include scanning bits of the first sequence, starting from a starting point, to search for at least one of a variable length codeword or a bypass indicator, the starting point being either a starting point of the first sequence or a starting point defined in a previous iteration. The method may also include, for at least one of the plurality of iterations, when a bypass indicator is found, outputting a neural network coefficient related value (NNCRV) that is non-compressed and follows the bypass indicator, and defining a starting point that follows the NNCRV as a starting point for a next iteration.
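The iterative scan described above can be sketched as follows. The bit-string representation, the two-bit bypass prefix, the eight-bit raw value width, and the codebook contents are all illustrative assumptions; only the control flow (match a codeword or a bypass indicator, then advance the starting point) follows the abstract.

```python
def decompress(bits, codebook, bypass_prefix="11", raw_width=8):
    """Iteratively scan a bit string for variable-length codewords or a
    bypass indicator (format details here are illustrative assumptions).

    When the bypass prefix is found, the next `raw_width` bits are emitted
    as a non-compressed neural-network-coefficient-related value (NNCRV),
    and the next iteration starts just after that value.
    """
    out = []
    pos = 0
    while pos < len(bits):
        if bits.startswith(bypass_prefix, pos):
            start = pos + len(bypass_prefix)
            out.append(int(bits[start:start + raw_width], 2))  # raw NNCRV
            pos = start + raw_width  # starting point for the next iteration
            continue
        # Otherwise match a codeword from the prefix-free codebook.
        for word, value in codebook.items():
            if bits.startswith(word, pos):
                out.append(value)
                pos += len(word)
                break
        else:
            raise ValueError("no codeword at bit offset %d" % pos)
    return out

# '0' decodes to 0 and '10' to 1; '11' escapes an 8-bit literal value.
values = decompress("0" + "10" + "11" + "00101010", {"0": 0, "10": 1})
```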
A method for executing atomic commands may include receiving, by an interface of an atomic command execution unit and from a plurality of requestors, a plurality of memory mapped atomic commands. The method may also include executing the plurality of memory mapped atomic commands to provide output values. The method may further include storing, in a first memory unit of the atomic command execution unit, requestor specific information. Different entries of a plurality of entries of the first memory unit may be allocated to different requestors of the plurality of requestors. The method may also include storing, in a second memory unit of the atomic command execution unit, the output values of the plurality of memory mapped atomic commands, and outputting, by the interface and to at least one of the plurality of requestors, at least one indication indicating a completion of at least one of the atomic commands.
A system for navigating a host vehicle may include memory and at least one processor configured to receive a plurality of images acquired by a camera onboard the host vehicle; generate, based on analysis of the plurality of images, a road geometry model for a segment of road forward of the host vehicle; determine, based on analysis of at least one of the plurality of images, one or more indicators of an orientation of the host vehicle; and generate, based on the one or more indicators of orientation of the host vehicle and the road geometry model for the segment of road forward of the host vehicle, one or more output signals configured to cause a change in a pointing direction of a movable headlight onboard the host vehicle.
The present disclosure relates to systems (100) and methods for identifying a wheel slip condition. In one implementation, a processor (110) may receive a plurality of image frames acquired by an image capture device (120) of a vehicle (200). The processor (110) may also determine, based on analysis of the image frames, one or more indicators of a motion of the vehicle (200); and determine a predicted wheel rotation corresponding to the motion of the vehicle (200). The processor (110) may further receive sensor outputs indicative of measured wheel rotation associated with a wheel; and compare the predicted wheel rotation to the measured wheel rotation for the wheel. The processor (110) may additionally detect a wheel slip condition associated with the wheel based on a discrepancy between the predicted wheel rotation and the measured wheel rotation; and initiate at least one navigational action in response to the detected wheel slip condition associated with the wheel.
G01C 21/16 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
G01C 21/36 - Input/output arrangements for on-board computers
G01C 22/00 - Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers or using pedometers
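The prediction-versus-measurement comparison in the wheel slip abstract above can be sketched as follows. The function names, the fixed relative tolerance, and taking vehicle speed directly as an input are illustrative assumptions; the disclosure derives vehicle motion from image analysis rather than receiving it.

```python
def detect_wheel_slip(vehicle_speed, wheel_radius, measured_wheel_rate,
                      slip_tolerance=0.1):
    """Compare a predicted wheel rotation rate (from the estimated vehicle
    motion) with the measured rate from a wheel speed sensor, and flag a
    slip condition when the discrepancy exceeds a relative tolerance.
    """
    predicted_wheel_rate = vehicle_speed / wheel_radius  # rad/s
    discrepancy = abs(measured_wheel_rate - predicted_wheel_rate)
    return discrepancy > slip_tolerance * predicted_wheel_rate

# 20 m/s on 0.3 m wheels predicts ~66.7 rad/s; a wheel spinning at 80 rad/s slips.
slipping = detect_wheel_slip(20.0, 0.3, 80.0)
```

A detected slip condition would then trigger a navigational action such as reducing torque or adjusting the planned trajectory.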
The present disclosure relates to systems and methods for calibrating a multi-camera navigation system for a vehicle. In one implementation, at least one processing device may receive first and second image frames acquired by a first camera onboard the vehicle; receive first and second image frames acquired by a second camera onboard the vehicle; determine a first ego-motion signal, including an indication of a change in position of the first camera relative to capture times associated with the first and second image frames acquired by the first camera; determine a second ego-motion signal, including an indication of a change in position of the second camera relative to capture times associated with the first and second image frames acquired by the second camera; and determine a relative orientation between the first camera and the second camera based on the first ego-motion signal and the second ego-motion signal.
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
A computer-implemented method for navigating a host vehicle may include receiving an image frame acquired by an image capture device associated with the host vehicle; identifying in the image frame a representation of a target vehicle; determining an orientation indicator associated with the target vehicle; based on the determined orientation indicator, identifying a candidate region of the acquired image frame where a representation of a vehicle door of the target vehicle is expected in an open door condition; extracting the candidate region from the image frame; providing the candidate region to an open door detection network; determining a navigational action for the host vehicle in response to an indication from the open door detection network that the candidate region includes a representation of a door of the target vehicle in an open condition; and causing an actuator associated with the host vehicle to implement the navigational action.
G01C 21/16 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
G01C 21/00 - Navigation; Navigational instruments not provided for in groups
G06K 9/46 - Extraction of features or characteristics of the image
The present disclosure relates to navigation and to systems and methods for using a dual sensor readout channel to allow for frequency detection. In one implementation, at least one processing device may receive a plurality of images acquired by a camera onboard a host vehicle, wherein the plurality of images are received via a first channel and via a second channel, and wherein the first channel is associated with a first frame capture rate, and the second channel is associated with a second frame capture rate different from the first frame capture rate. The processing device may use images received via the first channel to detect flickering and non-flickering light sources in an environment of the host vehicle; and provide, based on images received via the second channel, images for showing on one or more human-viewable displays.
Techniques are disclosed to implement an on-vehicle computer system that detects traffic safety mirrors and analyzes the reflected image(s) in the mirror to detect otherwise unseen approaching vehicles. A vehicle detected in the mirror in this manner may be matched to a vehicle that is viewed directly and detected via the vehicle's cameras once the vehicle becomes visible. The vehicle's control system may further facilitate control of the vehicle by triggering various control-related actions, such as modifying the state of the vehicle in one or more ways (e.g., its speed, acceleration, or trajectory) to decrease the risk posed by the detected vehicle, for instance by appropriately slowing down or stopping the vehicle to wait for the other (detected) vehicle to pass.
Systems and methods are provided for vehicle navigation. In one implementation, a system for navigating a vehicle may include at least one processor configured to receive a first image frame; detect in the first image frame a representation of a traffic light and determine a color state associated with lamps included on the traffic light. The at least one processor may receive an additional image frame that includes a representation of the at least one traffic light; and determine, based on a comparison of the first image frame and the additional image frame, whether the at least one traffic light includes a blinking lamp. If the at least one traffic light includes a blinking lamp, the processor may cause the vehicle to implement a navigational action relative to the traffic light in accordance with the determination and also based on a detected color state for the blinking lamp.
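The frame-to-frame comparison for blink detection described above can be sketched as follows. Representing lamp states as a dict mapping lamp identifier to a color state (or `None` when off) is an illustrative assumption; a real system would compare detections across many frames to estimate a blink frequency.

```python
def detect_blinking(first_lamp_states, additional_lamp_states):
    """Compare lamp states between a first image frame and an additional
    frame, and flag lamps whose state changed (blinking lamp candidates).
    """
    blinking = []
    for lamp_id, state in first_lamp_states.items():
        if additional_lamp_states.get(lamp_id) != state:
            blinking.append(lamp_id)
    return blinking

# A yellow lamp lit in frame 1 but off in frame 2 is flagged as blinking.
flagged = detect_blinking({"red": None, "yellow": "on"},
                          {"red": None, "yellow": None})
```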
A method for evaluating flow control integrity, the method may include detecting that a flow reached a flow change command or is about to reach the flow change command, wherein the flow change command belongs to a current software environment, wherein the current software environment is identified by a current environment identifier; retrieving a shadow environment identifier that is a last environment identifier stored in a shadow stack, wherein the shadow environment identifier identifies a software environment having an entry region that was a last entry region accessed by the flow, wherein the entry region comprises a shadow stack update instruction that was executed by the flow; comparing the shadow environment identifier to the current environment identifier; and detecting a potential attack when the shadow environment identifier differs from the current environment identifier.
G06F 21/52 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure
G06F 9/30 - Arrangements for executing machine instructions, e.g. instruction decode
G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
G06F 21/54 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by adding security routines or objects to programs
G06F 21/55 - Detecting local intrusion or implementing counter-measures
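The shadow-stack check in the flow control integrity abstract above can be sketched as follows. The class and method names are hypothetical; the sketch shows only the core comparison between the current environment identifier and the last identifier pushed by an entry region's shadow stack update instruction.

```python
class FlowIntegrityChecker:
    """Minimal sketch of the described flow-control integrity check: each
    entry region pushes its environment identifier onto a shadow stack,
    and a flow change command is validated against the last pushed one.
    """

    def __init__(self):
        self.shadow_stack = []

    def on_entry_region(self, environment_id):
        # Shadow stack update instruction executed at an entry region.
        self.shadow_stack.append(environment_id)

    def check_flow_change(self, current_environment_id):
        # Retrieve the last environment identifier stored in the shadow stack.
        shadow_environment_id = self.shadow_stack[-1]
        # A mismatch indicates a potential attack (e.g., a hijacked jump
        # into an environment whose entry region was never executed).
        return shadow_environment_id == current_environment_id

checker = FlowIntegrityChecker()
checker.on_entry_region("env_A")
legitimate = checker.check_flow_change("env_A")    # flow entered through env_A
hijacked = not checker.check_flow_change("env_B")  # env_B was never entered
```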
A system may include a processor programmed to access a map and receive an output provided by a vehicle sensor. The processor may also be programmed to localize a vehicle relative to the map based on analysis of the output from the sensor, and determine an electronic horizon for the vehicle based on the localization of the vehicle relative to the map. The processor may further be programmed to generate a navigation information packet including information associated with the determined electronic horizon. The navigation information packet may include a header portion and a variable-sized payload portion. The header portion may specify what information is included in the variable-sized payload portion. The processor may also be programmed to output the generated navigation information packet to one or more navigation system processors configured to cause the vehicle to execute a navigational maneuver based on the information included in the navigation information packet.
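The header-plus-variable-payload framing described in the electronic horizon abstract above can be sketched with `struct`. The one-byte flags field, two-byte length field, and the specific field flags are illustrative assumptions, not the disclosed wire format.

```python
import struct

# Field flags the header can advertise (illustrative layout):
FIELD_GEOMETRY = 0x01
FIELD_SPEED_LIMITS = 0x02

def build_packet(flags, payload):
    """Pack a navigation information packet: a fixed header stating which
    fields follow and the payload length, then a variable-sized payload."""
    header = struct.pack("<BH", flags, len(payload))  # flags, payload size
    return header + payload

def parse_packet(packet):
    """Read the header, then slice out exactly the advertised payload."""
    flags, size = struct.unpack_from("<BH", packet)
    payload = packet[3:3 + size]
    return flags, payload

pkt = build_packet(FIELD_SPEED_LIMITS, b"\x32")  # 0x32 = 50 (e.g., km/h)
flags, payload = parse_packet(pkt)
```

The self-describing header lets downstream navigation system processors skip fields they do not consume without knowing the full payload schema in advance.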
A mechanism for evaluating the floating-point accuracy of a vehicle driving compatible compiler includes testing code compiled by the vehicle driving compatible compiler against code compiled by a testing environment compatible compiler. Executing the vehicle driving compatible compiled code involves executing addition-type floating point operations to provide a first floating point result; executing the testing environment compatible compiled code performs addition-type floating point operations to provide a second floating point result that corresponds to the first floating point result. The mechanism compares the first floating point result to the second floating point result to provide a comparison result, and determines the floating-point accuracy of the vehicle driving compatible compiler based on the comparison result.
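The result comparison at the end of that mechanism can be sketched as follows. Measuring the discrepancy in units in the last place (ULPs) and the fixed tolerance are illustrative choices; in practice the two result lists would come from running the two differently compiled binaries on identical inputs.

```python
import math

def compare_float_results(target_sums, reference_sums, tolerance_ulps=4):
    """Compare addition results from the vehicle-driving-compatible build
    against the testing-environment build; return (within tolerance,
    worst observed discrepancy in ULPs of the reference value)."""
    worst = 0.0
    for target, reference in zip(target_sums, reference_sums):
        ulp = math.ulp(reference)  # spacing of floats around `reference`
        worst = max(worst, abs(target - reference) / ulp)
    return worst <= tolerance_ulps, worst

# In IEEE-754 double precision, 0.1 + 0.2 equals 0.30000000000000004 exactly.
accurate, worst_ulps = compare_float_results([0.1 + 0.2],
                                             [0.30000000000000004])
```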
A system for navigating a vehicle may include a processor programmed to receive an output provided by a vehicle sensor, and determine a navigational maneuver for the vehicle along a road segment based on the output provided by the vehicle sensor. The processor may also be programmed to determine a yaw rate command and a speed command for implementing the navigational maneuver. The processor may also be programmed to determine a first vehicle steering angle based on the yaw rate and speed commands using a first control subsystem, and determine a second vehicle steering angle based on the yaw rate and speed commands using a second control subsystem. The processor may further be programmed to determine an overall steering command for the vehicle based on a combination of the first and second steering angles, and cause an actuator associated with the vehicle to implement the overall steering command.
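The two-subsystem steering combination described above can be sketched as follows. The kinematic bicycle model for the first subsystem, the additive feedback correction for the second, the wheelbase value, and the fixed blend weight are all illustrative assumptions standing in for the two unspecified control subsystems.

```python
import math

def overall_steering_command(yaw_rate_cmd, speed_cmd, wheelbase=2.8,
                             feedback_correction=0.0, blend=0.5):
    """Blend two steering-angle estimates derived from the same yaw rate
    and speed commands into one overall steering command (radians)."""
    # First subsystem: kinematic bicycle model, delta = atan(L * r / v).
    first_angle = math.atan2(wheelbase * yaw_rate_cmd, speed_cmd)
    # Second subsystem: same model plus a feedback correction term.
    second_angle = first_angle + feedback_correction
    return blend * first_angle + (1.0 - blend) * second_angle

# With zero feedback correction the two subsystems agree exactly.
angle = overall_steering_command(0.2, 10.0)
```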
A method for executing an atomic compare and exchange operation may include processing a compare command and a conditional exchange command while accounting for hardware failures.
A system for vehicle navigation may include a processor including circuitry and a memory. The memory may include instructions that when executed by the circuitry cause the processor to receive navigational information associated with the vehicle including an indicator of a location of the vehicle, and determine target navigational map segments to retrieve from a map database. The map database may include stored navigational map segments each corresponding to a real-world area. The determination of the target navigational map segments may be based on the indicator of vehicle location and on map segment connectivity information associated with the stored navigational map segments. The instructions may also cause the processor to initiate downloading of the target navigational map segments from the map database, and cause the vehicle to navigate along a target trajectory included in one or more of the target navigational map segments downloaded from the map database.
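The connectivity-based segment selection above can be sketched as a breadth-limited expansion over a segment adjacency map. Representing connectivity as a dict of segment id to connected segment ids, and the single-hop depth, are illustrative assumptions.

```python
def target_map_segments(location_segment, connectivity, depth=1):
    """Choose navigational map segments to download: the segment containing
    the vehicle plus every segment reachable within `depth` hops of the
    map segment connectivity information."""
    targets = {location_segment}
    frontier = {location_segment}
    for _ in range(depth):
        frontier = {n for seg in frontier for n in connectivity.get(seg, [])}
        targets |= frontier
    return targets

connectivity = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
# Vehicle located in segment B: download B and its connected neighbours.
segments = target_map_segments("B", connectivity)
```

Prefetching connected segments this way lets the vehicle keep navigating along the target trajectory without waiting on further downloads at each segment boundary.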
A method is disclosed for collecting data from a group of entitled members. The method may include receiving, by a collection unit, a message and a message signature; validating, by the collection unit, whether the message was received from any of the entitled members of the group, without identifying the entitled member that sent the message; wherein the validating comprises applying a second plurality of mathematical operations on first group secrets, second group secrets and a first part of the message signature; and rejecting, by the collection unit, the message when validating that the message was not received from any entitled member of the group.
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
50.
NAVIGATION SYSTEMS AND METHODS FOR DETERMINING OBJECT DIMENSIONS
Systems and methods are provided for vehicle navigation. In one implementation, a navigation system for a host vehicle may comprise at least one processor. The processor may be programmed to receive from a camera onboard the host vehicle a plurality of captured images representative of an environment of the host vehicle. The processor may provide each of the plurality of captured images to a target object analysis module including at least one trained model configured to generate an output for each of the plurality of captured images. The processor may receive from the target object analysis module the generated output. The processor may further determine at least one navigational action to be taken by the host vehicle based on the output generated by the target object analysis module. The processor may cause the at least one navigational action to be taken by the host vehicle.
G01C 21/16 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
G01C 21/36 - Input/output arrangements for on-board computers
G05D 1/02 - Control of position or course in two dimensions
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
51.
VEHICLE NAVIGATION WITH PEDESTRIANS AND DETERMINING VEHICLE FREE SPACE
Systems and methods are provided for vehicle navigation. In one implementation, a navigation system for a host vehicle may include at least one processor programmed to receive from a camera onboard the host vehicle at least one captured image representative of an environment of the vehicle; detect a pedestrian represented in the at least one captured image; analyze the at least one captured image to determine an indicator of angular rotation and an indicator of pitch angle associated with a head of the pedestrian represented in the at least one captured image; and cause at least one navigational action by the host vehicle based on the indicator of angular rotation and the indicator of pitch angle associated with the head of the pedestrian.
A navigation system for a host vehicle may include a processor programmed to determine at least one indicator of ego motion of the host vehicle. The processor may also be programmed to receive, from a LIDAR system, a first point cloud including a first representation of at least a portion of an object and a second point cloud including a second representation of the at least a portion of the object. The processor may further be programmed to determine a velocity of the object based on the at least one indicator of ego motion of the host vehicle, and based on a comparison of the first point cloud, including the first representation of the at least a portion of the object, and the second point cloud, including the second representation of the at least a portion of the object.
Systems and methods are provided for vehicle navigation. In one implementation, a host vehicle-based sparse map feature harvester system may include at least one processor programmed to receive a plurality of images captured by a camera onboard the host vehicle as the host vehicle travels along a road segment in a first direction, wherein the plurality of images are representative of an environment of the host vehicle; detect one or more semantic features represented in one or more of the plurality of images, the one or more semantic features each being associated with a predetermined object type classification; identify at least one position descriptor associated with each of the detected one or more semantic features; identify three-dimensional feature points associated with one or more detected objects represented in at least one of the plurality of images; receive position information, for each of the plurality of images, wherein the position information is indicative of a position of the camera when each of the plurality of images was captured; and cause transmission of drive information for the road segment to an entity remotely-located relative to the host vehicle, wherein the drive information includes the identified at least one position descriptor associated with each of the detected one or more semantic features, the identified three-dimensional feature points, and the position information.
Systems and methods are provided for vehicle navigation. In one implementation, a navigation system for a host vehicle may comprise at least one processor. The processor may be programmed to receive from a first camera at least a first captured image representative of an environment of the host vehicle. The processor may be programmed to receive from a second camera at least a second captured image representative of the environment of the host vehicle. Both the first captured image and the second captured image may include a representation of a traffic light, and the second camera may be configured to operate in a primary mode in which at least one operational parameter of the second camera is tuned to detect at least one feature of the traffic light. The processor may be further programmed to cause at least one navigational action by the vehicle based on analysis of the representation of the traffic light.
One or more processors may be configured to determine one or more prospective routes of an ego vehicle being at least partially controlled by a human driver; receive first sensor data representing one or more attributes of a second vehicle; determine a danger probability of the one or more prospective routes of the ego vehicle using at least the one or more attributes of the second vehicle from the first sensor data; and if each of the one or more prospective routes of the ego vehicle has a danger probability outside of a predetermined range, send a signal representing a safety intervention. Whenever a safety intervention signal is sent, the one or more processors may be configured to increment or decrement a counter.
B60K 31/00 - Vehicle fittings, acting on a single sub-unit only, for automatically controlling vehicle speed, i.e. preventing speed from exceeding an arbitrarily established velocity or maintaining speed at a particular velocity, as selected by the vehicle operator
A safety system (200) for a vehicle (100) is provided. The safety system (200) may include one or more processors (102). The one or more processors (102) may be configured to control a vehicle (100) to operate in accordance with predefined stored driving model parameters, to detect vehicle operation data during the operation of the vehicle (100), to determine whether to change the predefined driving model parameters based on the detected vehicle operation data and the driving model parameters, to change the driving model parameters to changed driving model parameters, and to control the vehicle (100) to operate in accordance with the changed driving model parameters.
B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
Provided is a device and a method for route planning. The route planning device (100) may include a data interface (128) coupled to a road and traffic data source (160); a user interface (170) configured to display a map and receive a route planning request from a user, the route planning request including a line of interest on the map; a processor (110) coupled to the data interface (128) and the user interface (170). The processor (110) may be configured to identify the line of interest in response to the route planning request; acquire, via the data interface (128), road and traffic information associated with the line of interest from the road and traffic data source (160); and calculate, based on the acquired road and traffic information, a navigation route that matches or corresponds to the line of interest and meets or satisfies predefined road and traffic constraints.
Systems and methods are provided for vehicle navigation. In one implementation, a system for a host vehicle includes at least one processor programmed to determine, based on an output of at least one sensor of the host vehicle, one or more target dynamics of a target vehicle; determine, based on one or more host dynamics of the host vehicle and the target dynamics, a time to collision; determine, based on the time to collision and the host dynamics, a host deceleration for the host vehicle to avoid the collision; determine, based on the time to collision and the target dynamics, a target deceleration for the target vehicle to avoid the collision; determine, based on the host deceleration and the target deceleration, a host deceleration threshold; and determine, based on a speed of the host vehicle and the host deceleration threshold, to brake the host vehicle.
B60T 7/22 - Brake-action initiating means for initiation not subject to will of driver or passenger initiated by contact of vehicle, e.g. bumper, with an external object, e.g. another vehicle
B60W 30/095 - Predicting travel path or likelihood of collision
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
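The decision chain in the braking abstract above (time to collision, then required decelerations, then a threshold, then the brake decision) can be sketched for a one-dimensional following scenario. The constant-speed model, the symmetric target deceleration, the threshold combination, and the 8 m/s² limit are all illustrative assumptions.

```python
def braking_decision(host_speed, target_speed, gap, max_decel=8.0):
    """Return True when the host vehicle should begin braking, following
    the chain: time to collision -> required host and target decelerations
    -> a host deceleration threshold -> a speed-based brake decision."""
    closing_speed = host_speed - target_speed
    if closing_speed <= 0:
        return False  # not on a collision course
    time_to_collision = gap / closing_speed
    # Constant deceleration that removes the closing speed exactly as the
    # gap is consumed: a = v_rel^2 / (2 * gap) = v_rel / (2 * TTC).
    host_decel = closing_speed / (2.0 * time_to_collision)
    target_decel = host_decel  # symmetric in this simplified 1-D model
    host_decel_threshold = 0.5 * (host_decel + target_decel)
    # Brake when the needed deceleration is a large fraction of the limit.
    return host_decel_threshold >= 0.5 * max_decel

# Closing at 10 m/s with only 10 m of gap demands 5 m/s^2: brake now.
brake = braking_decision(25.0, 15.0, 10.0)
```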
A system for determining safety of a road segment may include at least one processor programmed to receive, from a first vehicle, first navigation information associated with the road segment. The first navigation information may include information collected by a first sensor of the first vehicle from an environment of the first vehicle. The at least one processor may also be programmed to receive, from a second vehicle, second navigation information associated with the road segment. The second navigation information may include information collected by a second sensor of the second vehicle from an environment of the second vehicle. The at least one processor may further be programmed to determine, based on the first navigation information and the second navigation information, a score representative of the safety of the road segment, and transmit, to a third vehicle, the score representative of the safety of the road segment.
G01C 21/00 - Navigation; Navigational instruments not provided for in groups
G08G 1/01 - Detecting movement of traffic to be counted or controlled
G01C 21/28 - Navigation; Navigational instruments not provided for in groups specially adapted for navigation in a road network with correlation of data from several navigational instruments
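The two-vehicle score combination in the road safety abstract above can be sketched as follows. Summarizing each vehicle's navigation information as a hazard-event count over distance driven, and the inverse-rate scoring formula, are illustrative assumptions; the disclosure does not specify the scoring metric.

```python
def road_segment_safety_score(first_events, second_events,
                              first_km, second_km):
    """Combine navigation information from two vehicles into one safety
    score for a road segment: pooled events per kilometre, inverted so
    that a higher score means a safer segment (score in (0, 1])."""
    events = first_events + second_events
    distance = first_km + second_km
    rate = events / distance if distance else 0.0
    return 1.0 / (1.0 + rate)

# 3 hazard events over 6 km driven by the two vehicles: rate 0.5 per km.
score = road_segment_safety_score(1, 2, 2.0, 4.0)
```

The resulting score could then be transmitted to a third vehicle approaching the same road segment.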
A system for managing storage space for a computer may include at least one processor programmed to determine a maximum data space for a computing task. The at least one processor may also be programmed to create a file having a maximum size equal to or greater than the maximum data space. The at least one processor may further be programmed to create a virtual device linked to the file and mount a filesystem inside the virtual device. The at least one processor may also be programmed to mount the virtual device. The at least one processor may further be programmed to determine that the computing task is completed. The at least one processor may further be programmed to unmount the virtual device.
Techniques are disclosed for the implementation of machine learning model training utilities to generate models for advanced driving assistance system (ADAS), driving assistance, and/or automated vehicle (AV) systems. The techniques described herein may be implemented in conjunction with the utilization of open source and cloud-based machine learning training utilities to generate machine learning trained models. One example of such an open source solution includes TensorFlow, which is a free and open-source software library for dataflow and differentiable programming across a range of tasks. TensorFlow may be used in conjunction with many different types of machine learning utilities, such as Amazon's cloud-based SageMaker utility for instance, which is a fully-managed service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale.
Systems and methods are provided for vehicle navigation. In one implementation, a system for identifying objects in an environment of a host vehicle may comprise at least one processor. The processor may be programmed to receive, from an image capture device, an image representative of the environment of the host vehicle and analyze the image to detect objects represented in the image. The processor may compare position information for the objects to location information for mapped objects represented in a navigational map segment to determine a first estimated position of the host vehicle. The processor may further provide the image and an identifier associated with the map segment to a trained system and receive a second estimated position of the host vehicle. The processor may determine a navigational action based on a combination of the first and second estimated positions and cause the host vehicle to implement the determined navigational action.
Systems and methods are provided for predicting blind spot incursions for a host vehicle. In one implementation, a navigation system for a host vehicle may comprise a processor. The processor may be programmed to receive, from an image capture device located on a rear of the host vehicle, at least one image representative of an environment of the host vehicle. The processor may be programmed to analyze the at least one image to identify an object in the environment of the host vehicle and to determine kinematic information associated with the object. The processor may further be programmed to predict, based on the kinematic information, that the object will travel in a region outside of a field of view of the image capture device and perform a control action based on the prediction.
B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
G08G 1/01 - Detecting movement of traffic to be counted or controlled
Systems and methods are provided for monitoring traffic lane congestion. In one implementation, a system for monitoring traffic lane congestion may include at least one processor. The processor may be programmed to receive, from an image capture device of a host vehicle, at least one image representative of an environment of the host vehicle. The processor may be programmed to analyze the at least one image to identify a traffic lane in the environment of the host vehicle and analyze the at least one image to determine a traffic congestion condition of the traffic lane. The processor may further be programmed to transmit traffic information indicative of the traffic congestion condition.
Systems and methods are provided for vehicle navigation. In one implementation, a navigation system for a vehicle may comprise at least one processor. The at least one processor may be programmed to obtain a route from a location of the vehicle to a destination; receive signal information indicative of a quality characteristic of a signal associated with at least one location along the route; determine a quality parameter for at least one operational characteristic associated with a communication system; and determine at least one modification to the route for the vehicle based on the signal information and the quality parameter.
Systems and methods are provided for vehicle navigation. In one implementation, a navigation system for a vehicle may comprise at least one processor. The at least one processor may be programmed to receive, from at least one sensor of the vehicle, information captured from an environment of the vehicle and determine, based on the information, a first position of the vehicle relative to a road navigation model. The at least one processor may further determine, based on at least one signal received from a satellite, a second position of the vehicle and determine, based on a comparison of the first position and the second position, error information associated with the second position. The at least one processor may cause a transmission of the error information to a server.
G01C 21/28 - Navigation; Navigational instruments not provided for in groups specially adapted for navigation in a road network with correlation of data from several navigational instruments
G01S 19/07 - Cooperating elements; Interaction or communication between different cooperating elements or between cooperating elements and receivers providing data for correcting measured positioning data, e.g. DGPS [differential GPS] or ionosphere corrections
G01S 19/41 - Differential correction, e.g. DGPS [differential GPS]
G01S 19/45 - Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
G01S 19/48 - Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
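The error-information step in the abstract above amounts to differencing a map-relative fix against a raw satellite fix. Here is a minimal sketch under stated assumptions: positions are (east, north) metres in a shared local frame, and the report field names are hypothetical, not taken from the source:

```python
def gnss_error_report(model_position, gnss_position):
    """Compare a map-relative localization fix with a satellite-derived fix
    and package the offset as an error report for transmission to a server.

    Both positions are (east, north) tuples in metres in a common frame;
    the returned message format is illustrative only.
    """
    de = gnss_position[0] - model_position[0]
    dn = gnss_position[1] - model_position[1]
    return {
        "offset_east_m": de,
        "offset_north_m": dn,
        "magnitude_m": (de * de + dn * dn) ** 0.5,
    }
```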
67.
REDUCING STORED PARAMETERS FOR A NAVIGATION SYSTEM
Systems and methods are provided for vehicle navigation. In one implementation, at least one processor may receive, from a camera of the host vehicle, at least one image captured from an environment of the host vehicle and identify at least one object represented in the at least one image. The at least one object may be identified based on an output of a neural network configured to process the at least one image using a first plurality of kernels and a second plurality of kernels and at least one of the second plurality of kernels may be symmetric with respect to at least one of the first plurality of kernels. The processor may further determine a navigational action for the host vehicle based on the identified at least one object represented in the at least one image and cause the host vehicle to implement the determined navigational action.
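One way the symmetric-kernel arrangement above can reduce stored parameters is to keep only the first kernel bank and reconstruct the second bank as mirror images at run time. This is a rough sketch of that idea, not the patented method itself; the mirroring axis and data layout are assumptions:

```python
def expand_symmetric_kernels(stored):
    """Given a stored bank of k x k convolution kernels (lists of lists),
    derive a second bank whose kernels are horizontal mirror images of the
    first. Only the first bank needs to be kept in memory; the second is
    reconstructed on the fly, roughly halving storage for the pair of banks.
    """
    mirrored = [[row[::-1] for row in kernel] for kernel in stored]
    return stored, mirrored
```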
A camera module assembly can include a chimney and a lens assembly. The chimney can include a distal portion having a substantially spherical profile. The lens assembly can include a lens barrel, an optical device, and a flange extending radially from the lens barrel, where the flange can be securable to the distal portion of the chimney.
A61B 1/05 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances characterised by the image sensor, e.g. camera, being in the distal end portion
A61B 1/00 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
G02B 7/00 - Mountings, adjusting means, or light-tight connections, for optical elements
G02B 7/02 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses
G02B 23/24 - Instruments for viewing the inside of hollow bodies, e.g. fibrescopes
G02B 13/00 - Optical objectives specially designed for the purposes specified below
G02B 13/16 - Optical objectives specially designed for the purposes specified below for use in conjunction with image converters or intensifiers
Systems and methods are provided for vehicle navigation. In one implementation, at least one processor may receive, from a camera, at least one captured image representative of features in an environment of the vehicle. The processor may identify an intersection and a pedestrian in a vicinity of the intersection represented in the image. The processor may determine a navigational action for the vehicle relative to the intersection based on routing information for the vehicle; and determine a predicted path for the vehicle relative to the intersection based on the determined navigational action and a predicted path for the pedestrian based on analysis of the image. The processor may further determine whether the vehicle is projected to collide with the pedestrian based on the predicted paths; and, in response to a projected collision, cause a system associated with the vehicle to implement a collision mitigation action.
Systems and methods are provided for vehicle navigation. In one implementation, at least one processing device may receive, from a camera of the host vehicle, at least one captured image representative of an environment of the host vehicle. The processing device may analyze one or more pixels of the at least one captured image to determine whether the one or more pixels represent at least a portion of a target vehicle. For pixels determined to represent at least a portion of the target vehicle, the processing device may determine one or more estimated distance values from the one or more pixels to at least one edge of a face of the target vehicle; and generate, based on the analysis of the one or more pixels, including the determined one or more distance values associated with the one or more pixels, at least a portion of a boundary relative to the target vehicle.
Systems and methods are provided for vehicle navigation. The systems and methods may detect traffic lights. For example, one or more traffic lights may be detected using detection-redundant camera detection paths, a fusion of information from a traffic light transmitter and one or more cameras, based on contrast enhancement for night images, and based on low resolution traffic light candidate identification followed by high resolution candidate analysis. Additionally, the systems and methods may navigate based on a worst-case time-to-red estimation.
Systems and methods are provided for vehicle navigation. In one implementation, at least one processor may receive, from a camera of a vehicle, at least one image captured from an environment of the vehicle. The processor may analyze the at least one image to identify a road topology feature in the environment of the vehicle represented in the at least one image and at least one point associated with the at least one image. Based on the identified road topology feature, the processor may determine an estimated path in the environment of the vehicle associated with the at least one point. The processor may further cause the vehicle to implement a navigational action based on the estimated path.
Various systems and methods for modeling a scene. A device for modeling a scene includes a hardware interface to obtain a time-ordered sequence of images representative of a scene, the time-ordered sequence including a plurality of images, one of the sequence of images being a current image, the scene captured by a monocular imaging system; and processing circuitry to: provide a data set to an artificial neural network (ANN) to produce a three-dimensional structure of the scene, the data set including: a portion of the sequence of images, the portion of the sequence of images including the current image; and motion of a sensor that captured the sequence of images; and model the scene using the three-dimensional structure of the scene, wherein the three-dimensional structure is determined for both moving and fixed objects in the scene.
A system, computer readable medium and a method that may include performing multiple iterations of: determining, by each active initiator of the multiple initiators, a number of pending access requests generated by the active initiator, wherein each access request is a request to access a shared resource out of the shared resources; determining, by each active initiator, a priority level to be assigned to all pending access requests generated by the active initiator, wherein the determining is based on the number of pending access requests generated by the active initiator, a number of active initiators out of the multiple initiators, and a number of access requests serviceable by the shared resource; for each active initiator, informing an arbitration hardware of a network on chip about the priority level to be assigned to all pending access requests generated by the active initiator; and managing access to the shared resources, by the arbitration hardware, based on the priority level to be assigned to all pending access requests generated by each active initiator.
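The priority determination described above depends on three quantities: an initiator's pending request count, the number of active initiators, and the number of requests the shared resource can service. One plausible policy consistent with those inputs is a fair-share threshold; the two-level priority scheme and all thresholds below are illustrative assumptions, not the claimed method:

```python
def assign_priority(pending, active_initiators, serviceable):
    """Assign a priority level to all of an initiator's pending requests.

    Hypothetical policy: an initiator whose backlog exceeds its fair share
    of the shared resource's service capacity (serviceable requests divided
    by active initiators) gets "high" priority, otherwise "low".
    """
    fair_share = serviceable / max(active_initiators, 1)
    return "high" if pending > fair_share else "low"
```

The arbitration hardware of the network on chip would then service high-priority requests first, so a backlogged initiator drains toward its fair share rather than starving.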
The present disclosure relates to systems and methods for aligning navigation information from a plurality of vehicles. In one implementation, at least one processing device may receive first navigational information from a first vehicle and second navigational information from a second vehicle. The first and second navigational information may be associated with a common road segment. The processor may divide the common road segment into a first road section and a second road section that join at a common point. The processor may then align the first and second navigational information by rotating at least a portion of the first navigational information or the second navigational information relative to the common point. The processor may store the aligned navigational information in association with the common road segment and send the aligned navigational information to vehicles for use in navigating along the common road segment.
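The core geometric operation in the alignment above, rotating one road section's navigational information about the common join point, can be sketched as a plain 2D rotation. This is illustrative geometry only; how the rotation angle is chosen is not shown:

```python
import math

def rotate_about(points, pivot, angle_rad):
    """Rotate a polyline of (x, y) points about a pivot point, as one might
    do to align two vehicles' trajectories at the common point joining two
    road sections. Returns a new list of rotated points.
    """
    px, py = pivot
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    out = []
    for x, y in points:
        dx, dy = x - px, y - py
        out.append((px + c * dx - s * dy, py + s * dx + c * dy))
    return out
```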
System and method to send map data to a vehicle based on the potential travel envelope of the vehicle. The shape of the envelope is determined based on the speed, location, and direction of travel of the vehicle.
Systems and methods are provided for vehicle navigation. In one implementation, a navigation system for a host vehicle includes at least one processor programmed to: receive, from a camera of the host vehicle, one or more images captured from an environment of the host vehicle; analyze the one or more images to detect an indicator of an intersection; determine, based on output received from at least one sensor of the host vehicle, a stopping location of the host vehicle relative to the detected intersection; analyze the one or more images to determine an indicator of whether one or more other vehicles are in front of the host vehicle; and send the stopping location of the host vehicle and the indicator of whether one or more other vehicles are in front of the host vehicle to a server for use in updating a road navigation model.
Systems and methods are provided for vehicle navigation. In one implementation, at least one processor may be programmed to receive, from a camera, a captured image representative of features in an environment of the vehicle. The processor may generate a warped image based on the received captured image, which may simulate a view of the features in the environment of the vehicle from a simulated viewpoint elevated relative to an actual position of the camera. The processor may further identify a road feature represented in the warped image, which may be transformed in one or more respects relative to a representation of the road feature in the captured image. The processor may then determine a navigational action for the vehicle based on the identified feature represented in the warped image and cause at least one actuator system of the vehicle to implement the determined navigational action.
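Warping an image to a simulated elevated viewpoint, as in the abstract above, is conventionally done by applying a planar homography to image coordinates. The sketch below shows only that core point-mapping operation; the 3x3 matrix values in the test are illustrative, and a full implementation would warp every pixel (e.g., with an image-processing library):

```python
def apply_homography(H, point):
    """Apply a 3x3 homography (row-major nested lists) to an image point.

    This is the per-point operation behind warping a camera image to a
    simulated top-down viewpoint: map (x, y, 1) through H, then divide by
    the resulting homogeneous coordinate w.
    """
    x, y = point
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (u / w, v / w)
```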
Systems and methods are disclosed for aggregating informational reports. In one implementation, at least one processor may be programmed to receive an informational vehicle report identifying a detected event; store the report in a database in association with a first cell; query a second cell within a predetermined distance of the first cell; and determine whether the second cell is associated with the detected event. When the second cell is associated with the detected event the processor may aggregate information from the first and second cells to provide an aggregated cluster and generate an event report based on the aggregated cluster. When the second cell is not associated with an information cluster associated with the detected event, the processor may generate the event report based on the stored informational vehicle report. The processor may then transmit the event report to one or more vehicles.
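The cell-query-and-merge flow above can be sketched with a dictionary keyed by grid cell. The grid layout, the Chebyshev distance metric, and the data shapes below are assumptions for illustration, not the claimed storage scheme:

```python
def aggregate_event_reports(cells, first_cell, event, max_dist):
    """cells: dict mapping integer grid coordinates (cx, cy) to a list of
    event names reported there. Collect every cell within Chebyshev
    distance `max_dist` of `first_cell` that reports `event` into one
    aggregated cluster; when no neighbouring cell reports the event, the
    cluster contains only the first cell's own report.
    """
    cx, cy = first_cell
    cluster = []
    for (x, y), events in cells.items():
        if max(abs(x - cx), abs(y - cy)) <= max_dist and event in events:
            cluster.append((x, y))
    return sorted(cluster)
```

An event report would then be generated from the cluster, whether it spans several cells or just the original one.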
Systems and methods are disclosed for mapping lanes for use in vehicle navigation. In one implementation, at least one processing device may be programmed to receive navigational information from a first vehicle and a second vehicle that have navigated along a road segment including a lane split feature; receive at least one image associated with the road segment; determine, from the first navigational information, a first actual trajectory of the first vehicle and a second actual trajectory of the second vehicle; determine a divergence between the first actual trajectory and the second actual trajectory; determine, based on analysis of the at least one image, that the divergence between the first actual trajectory and the second actual trajectory is indicative of the lane split feature; and update a vehicle road navigation model to include a first target trajectory and a second target trajectory that branches from the first target trajectory after the lane split feature.
Systems and methods described herein can be used to improve camera modules (e.g., camera components), particularly when the camera lens depth of focus is very small. Improvement of a camera with a small depth of focus is particularly important in various applications, such as in cameras used in autonomous navigation (e.g., advanced driver assistance systems (ADAS) and autonomous vehicle (AV) systems).
The present subject matter provides various technical solutions to technical problems facing advanced driver assistance systems (ADAS) and autonomous vehicle (AV) systems. In particular, disclosed embodiments provide systems and methods that may use cameras and other sensors to detect objects and events and identify them as predefined signal classifiers, such as detecting and identifying a red stoplight. These signal classifiers are used within ADAS and AV systems to control the vehicle or alert a vehicle operator based on the type of signal. These ADAS and AV systems may provide full vehicle operation without requiring human input. The embodiments disclosed herein provide systems and methods that can be used as part of or in combination with ADAS and AV systems.
Systems and methods are disclosed for navigating a host vehicle. In one implementation, at least one processing device may be programmed to receive an image representative of an environment of the host vehicle, determine a planned navigational action for the host vehicle, analyze the image to identify a target vehicle with a direction of travel toward the host vehicle, and determine a next-state distance between the host vehicle and the target vehicle that would result if the planned navigational action was taken. The at least one processing device may further determine a stopping distance for the host vehicle based on a braking rate, a maximum acceleration capability, and a current speed of the host vehicle, determine a stopping distance for the target vehicle based on a braking rate, a maximum acceleration capability, and a current speed of the target vehicle, and implement the planned navigational action if the determined next-state distance is greater than a sum of the stopping distances for the host vehicle and the target vehicle.
B60W 30/095 - Predicting travel path or likelihood of collision
B60W 10/06 - Conjoint control of vehicle sub-units of different type or different function including control of propulsion units including control of combustion engines
B60W 10/18 - Conjoint control of vehicle sub-units of different type or different function including control of braking systems
B60W 10/20 - Conjoint control of vehicle sub-units of different type or different function including control of steering systems
B60W 50/12 - Limiting control by the driver depending on vehicle state, e.g. interlocking means for the control input for preventing unsafe operation
B60W 50/08 - Interaction between the driver and the control system
B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
B60T 8/86 - Arrangements for adjusting wheel-braking force to meet varying vehicular or ground-surface conditions, e.g. limiting or varying distribution of braking force responsive to a speed condition, e.g. acceleration or deceleration wherein the brakes are automatically applied in accordance with a speed condition and having means for overriding the automatic braking device when a skid condition occurs
B60W 30/09 - Taking automatic action to avoid collision, e.g. braking and steering
B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
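The safety condition in the abstract above (entry with stopping distances for host and target) reduces to a comparison of a next-state distance against the sum of two worst-case stopping distances. A minimal sketch follows; the 0.5 s response period during which the vehicle is assumed to accelerate at its maximum capability is an illustrative assumption:

```python
def stopping_distance(speed, accel_max, brake_rate, response_s=0.5):
    """Worst-case stopping distance: the vehicle accelerates at its maximum
    capability for a response period, then brakes at `brake_rate` (m/s^2)
    until it stops. All inputs in SI units; response period is assumed.
    """
    v_peak = speed + accel_max * response_s
    d_response = speed * response_s + 0.5 * accel_max * response_s ** 2
    d_brake = v_peak ** 2 / (2.0 * brake_rate)
    return d_response + d_brake

def action_is_safe(next_state_distance, host, target):
    """host and target are (speed, accel_max, brake_rate) tuples. The
    planned navigational action is implemented only if the next-state
    distance exceeds the sum of both vehicles' stopping distances."""
    return next_state_distance > stopping_distance(*host) + stopping_distance(*target)
```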
84.
ERROR CORRECTION CODING IN A DYNAMIC MEMORY MODULE
A method for error correction and a system. The method may include opening a selected row of a memory bank out of multiple memory banks of a dynamic memory module; and while the selected row is open: (i) receiving selected data sub-blocks that are targeted to be written to the selected row, (ii) calculating selected error correction code sub-blocks that are related to the selected data sub-blocks, (iii) caching the selected error correction code sub-blocks in a cache memory that differs from the dynamic memory module and (iv) writing, to the selected row, the selected error correction code sub-blocks.
A method for accessing a dynamic memory module, the method may include (i) receiving, by a memory controller, a set of access requests for accessing the dynamic memory module; (ii) converting the access requests to a set of commands, wherein the set of commands comprise (a) a first sub-set of commands that are related to a first group of memory banks, and (b) a second sub-set of commands that are related to a second group of memory banks; (iii) scheduling, by a scheduler of the memory controller, an execution of the first sub-set; (iv) scheduling an execution of the second sub-set to be interleaved with the execution of the first sub-set; and (v) executing the set of commands according to both schedules.
A camera for use in automotive applications includes a lens system having a modulation transfer function (MTF) tuned to process light in a spectral range from red to green with greater resolution than light in a spectral range from blue to violet. The camera also includes an imager having pixel sensors arranged in a matrix and a color filter matrix including multiple color filter elements, each corresponding to a pixel sensor of the imager. The color filter matrix includes red filter elements and yellow filter elements and the number of yellow filter elements is greater than the number of red filter elements.
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
87.
SECURE SYSTEM THAT INCLUDES DRIVING RELATED SYSTEMS
A driving related system that may include an integrated circuit (IC) that may include IC conductors and a test unit; a PCB that may include PCB conductors; intermediate conductors for coupling the IC conductors to the PCB conductors; wherein the test unit is configured to: electrically test a continuity of a first conductive path that comprises a first group of intermediate conductors, a first group of IC conductors and a first group of PCB conductors; and generate a continuity fault indication when detecting a discontinuity of the first conductive path; and wherein the driving related system is configured to perform a safety measure, in response to a generation of one or more continuity fault indications.
B60L 3/00 - Electric devices on electrically-propelled vehicles for safety purposes; Monitoring operating variables, e.g. speed, deceleration or energy consumption
B60L 3/04 - Cutting-off the power supply under fault conditions
B60L 15/10 - Methods, circuits or devices for controlling the propulsion of electrically-propelled vehicles, e.g. their traction-motor speed, to achieve a desired performance; Adaptation of control equipment on electrically-propelled vehicles for remote actuation from a stationary place, from alternative parts of the vehicle or from alternative vehicles of the same vehicle train for automatic control superimposed on human control to limit the acceleration of the vehicle, e.g. to prevent excessive motor current
B60W 50/02 - Ensuring safety in case of control system failures, e.g. by diagnosing, circumventing or fixing failures
B60W 50/029 - Adapting to failures or work around with other constraints, e.g. circumvention by avoiding use of failed parts
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
G01R 31/02 - Testing of electric apparatus, lines, or components for short-circuits, discontinuities, leakage, or incorrect line connection
88.
SYSTEMS AND METHODS FOR AUTONOMOUS VEHICLE NAVIGATION
Systems and methods are provided for autonomous vehicle navigation. The systems and methods may map a lane mark, may map a directional arrow, selectively harvest road information based on data quality, map road segment free spaces, map traffic lights and determine traffic light relevancy, and map traffic lights and associated traffic light cycle times.
Systems and techniques for vehicle environment modeling with a camera are described herein. A time-ordered sequence of images representative of a road surface may be obtained. An image from this sequence is a current image. A data set may then be provided to an artificial neural network (ANN) to produce a three-dimensional structure of a scene. Here, the data set includes a portion of the sequence of images that includes the current image, motion of the sensor from which the images were obtained, and an epipole. The road surface is then modeled using the three-dimensional structure of the scene.
Systems and methods may selectively collect information from a host vehicle. In one example, a method may include causing collection of navigational information associated with an environment traversed by the host vehicle; storing the collected navigational information; determining, based on an output of at least one navigational sensor, a location of the host vehicle; transmitting the determined location of the host vehicle to a server; receiving, from the server and in response to the transmitted determined location, a request for transmission of a selected subset of the navigational information collected by the host vehicle; and transmitting the selected subset of the navigational information to the server.
A system for navigating a host vehicle may include at least one processing device. The at least one processing device may be programmed to receive, from an image capture device, at least one image representative of an environment of the host vehicle. The at least one processing device may also be programmed to analyze the at least one image to identify an object in the environment of the host vehicle. The at least one processing device may also be programmed to determine a location of the host vehicle. The at least one processing device may also be programmed to receive map information associated with the determined location of the host vehicle, wherein the map information includes elevation information associated with the environment of the host vehicle. The at least one processing device may also be programmed to determine a distance from the host vehicle to the object based on at least the elevation information. The at least one processing device may further be programmed to determine a navigational action for the host vehicle based on the determined distance.
G01C 21/00 - Navigation; Navigational instruments not provided for in groups
G01C 21/28 - Navigation; Navigational instruments not provided for in groups specially adapted for navigation in a road network with correlation of data from several navigational instruments
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G05D 1/02 - Control of position or course in two dimensions
An autonomous system may selectively displace human driver control of a host vehicle. The system may receive an image representative of an environment of the host vehicle and detect an obstacle in the environment of the host vehicle based on analysis of the image. The system may monitor a driver input to a throttle, brake, and/or steering control associated with the host vehicle. The system may determine whether the driver input would result in the host vehicle navigating within a proximity buffer relative to the obstacle. If the driver input would not result in the host vehicle navigating within the proximity buffer, the system may allow the driver input to cause a corresponding change in one or more host vehicle motion control systems. If the driver input would result in the host vehicle navigating within the proximity buffer, the system may prevent the driver input from causing the corresponding change.
B60W 30/095 - Predicting travel path or likelihood of collision
B60W 10/06 - Conjoint control of vehicle sub-units of different type or different function including control of propulsion units including control of combustion engines
B60W 10/18 - Conjoint control of vehicle sub-units of different type or different function including control of braking systems
B60W 10/20 - Conjoint control of vehicle sub-units of different type or different function including control of steering systems
B60W 50/12 - Limiting control by the driver depending on vehicle state, e.g. interlocking means for the control input for preventing unsafe operation
B60W 50/08 - Interaction between the driver and the control system
B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
B60T 8/86 - Arrangements for adjusting wheel-braking force to meet varying vehicular or ground-surface conditions, e.g. limiting or varying distribution of braking force responsive to a speed condition, e.g. acceleration or deceleration wherein the brakes are automatically applied in accordance with a speed condition and having means for overriding the automatic braking device when a skid condition occurs
B60W 30/09 - Taking automatic action to avoid collision, e.g. braking and steering
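The proximity-buffer test in the driver-override abstract above can be sketched by simulating the gap to the obstacle under the driver's requested input over a short horizon. The constant-acceleration model, parameter names, and horizon are illustrative assumptions, not the claimed method:

```python
def permit_driver_input(gap_m, closing_speed, input_accel, horizon_s, buffer_m):
    """Decide whether a driver input may pass through to the vehicle's
    motion control systems. Simulate the gap to the detected obstacle under
    the driver's requested longitudinal acceleration; block the input if
    the gap would ever fall inside the proximity buffer.
    """
    dt, t = 0.1, 0.0
    v = closing_speed  # m/s, positive when closing on the obstacle
    g = gap_m
    while t < horizon_s:
        g -= v * dt
        v += input_accel * dt
        if g <= buffer_m:
            return False  # input would breach the buffer: displace the driver
        t += dt
    return True  # input leaves the buffer intact: pass it through
```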
93.
SYSTEMS AND METHODS FOR ANONYMIZING NAVIGATION INFORMATION
Systems and methods are provided for anonymizing navigation data and generating an autonomous vehicle road navigation model with the anonymized data. A navigation system may receive data relating to a road section from a vehicle. The system may determine one or more motion representations associated with the vehicle and one or more road characteristics associated with the road section. The system may assemble navigation information relative to a first portion and relative to a second portion of the road section. The first and second portions may be spatially separated by a third portion. The system may transmit the navigation information relating to the first and second portions and forego transmitting information relating to the third portion. A server may receive the transmitted navigation information and assemble an autonomous vehicle road navigation model. The server may transmit the navigation model to one or more vehicles for use in autonomous navigation.
There may be provided a non-uniform Benes network that may include a first Benes network portion that has a first number (k) of first inputs and k first outputs; a second Benes network portion that has a second number (j) of second inputs and j second outputs; wherein j is smaller than k; and a set of multiplexers that are coupled between a set of switches of an intermediate layer of the first Benes network portion and a first layer of the second Benes network portion.
H03K 19/173 - Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using elementary logic circuits as components
G06F 15/173 - Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star or snowflake
H03M 13/29 - Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
The present disclosure relates to systems and methods for host vehicle navigation. Disclosed systems and methods may navigate the host vehicle based on surroundings of the host vehicle, including pedestrians present in the host vehicle's environment, or based on passengers within the host vehicle. For example, the systems and methods may navigate the host vehicle based on sensed activities of passengers, automated negotiation with pedestrians, sensed pedestrian eye contact, a sensed facing direction of a pedestrian, based on a movement direction and a speed of a pedestrian, based on a sensed pedestrian in a vicinity of a crosswalk, or based on a sensed number of pedestrians.
B60W 40/08 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to drivers or passengers
G01C 21/36 - Input/output arrangements for on-board computers
G05D 1/02 - Control of position or course in two dimensions
96.
FUSION FRAMEWORK OF NAVIGATION INFORMATION FOR AUTONOMOUS NAVIGATION
The present disclosure relates to systems and methods for navigating vehicles. In one implementation, at least one processing device may receive a first output from a first data source and a second output from a second data source; identify a representation of a target object in the first output; determine whether a characteristic of the target object triggers at least one navigational constraint; if the at least one navigational constraint is not triggered by the characteristic of the target object, verify the identification of the representation of the target object based on a combination of the first output and the second output; if the at least one navigational constraint is triggered by the characteristic of the target object, verify the identification of the representation of the target object based on the first output; and in response to the verification, cause at least one navigational change to the vehicle.
An imaging system is provided for a vehicle. In one implementation, the imaging system includes an imaging module, a first camera coupled to the imaging module, a second camera coupled to the imaging module, and a mounting assembly configured to attach the imaging module to the vehicle such that the first and second cameras face outward with respect to the vehicle. The first camera has a first field of view and a first optical axis, and the second camera has a second field of view and a second optical axis. The first optical axis crosses the second optical axis in at least one crossing point of a crossing plane. The first camera is focused a first horizontal distance beyond the crossing point of the crossing plane and the second camera is focused a second horizontal distance beyond the crossing point of the crossing plane.
G01C 11/02 - Picture-taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
G01C 11/06 - Interpretation of pictures by comparison of two or more pictures of the same area
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
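The crossing of the two optical axes in the imaging-system abstract above can be illustrated with a top-down (2D) ray intersection: each optical axis is modeled as a ray from its camera's position along its viewing direction, and their intersection gives the crossing point beyond which each camera is focused. The 2D simplification and the function name are assumptions for illustration only.

```python
# Illustrative sketch: intersect two 2D rays p + t*d (top-down view of the
# optical axes of two outward-facing, toed-in cameras).

def ray_intersection(p1, d1, p2, d2):
    """Return the intersection point of rays p1 + t1*d1 and p2 + t2*d2,
    or None if the axes are parallel (no crossing point)."""
    # Solve p1 + t1*d1 = p2 + t2*d2 as a 2x2 linear system in (t1, t2).
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        return None
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two cameras angled toward each other: axes cross at (0, 1) ahead of the vehicle.
crossing = ray_intersection((-1.0, 0.0), (1.0, 1.0), (1.0, 0.0), (-1.0, 1.0))
```

Each camera's focus distance would then be set some horizontal distance beyond this crossing point, as the abstract describes.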
98.
DETECTION AND CLASSIFICATION SYSTEMS AND METHODS FOR AUTONOMOUS VEHICLE NAVIGATION
The present disclosure relates to systems and methods for road edge detection and mapping, for vehicle wheel identification and navigation based thereon, and for classification of objects as moving or non-moving. Such systems and methods may include the use of trained systems, such as one or more neural networks. Further, autonomous vehicle systems may incorporate aspects of one or more of the disclosed systems and methods.
A system for identifying features of a roadway traversed by a host vehicle may include at least one processor programmed to: receive a plurality of images representative of an environment of the host vehicle; recognize in the plurality of images a presence of a road feature associated with the roadway; determine a location of a first point associated with the road feature relative to a curve representative of a path of travel of the host vehicle; determine a location of a second point associated with the road feature relative to the curve; and cause transmission to a server remotely located from the host vehicle of a representation of a series of points associated with locations along the road feature. The second point may be spaced apart from the first point, and the series of points may include at least the first point and the second point.
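Determining a feature point's location "relative to a curve representative of a path of travel," as described above, can be sketched by expressing the point in a path-aligned frame. The sketch below simplifies the curve to a locally straight ray with a given heading; the simplification and all names are illustrative assumptions, not the disclosed method.

```python
import math

def point_relative_to_path(point, path_origin, path_heading):
    """Express a road-feature point in a path-aligned frame: longitudinal
    distance along the (locally straight) travel path and lateral offset
    from it. Rotation by -path_heading maps world axes onto path axes."""
    px, py = point[0] - path_origin[0], point[1] - path_origin[1]
    c, s = math.cos(path_heading), math.sin(path_heading)
    longitudinal = px * c + py * s
    lateral = -px * s + py * c
    return longitudinal, lateral

# A series of spaced-apart points along a road feature (e.g., a road edge),
# each located relative to the host vehicle's path, ready for transmission.
series = [point_relative_to_path(p, (0.0, 0.0), 0.0)
          for p in [(5.0, 2.1), (15.0, 2.0)]]
```

The first and second points of the abstract would be two such spaced-apart entries in the transmitted series.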
A method and a system for error correction. The method may include opening a selected row of a memory bank out of multiple memory banks of a dynamic memory module; and while the selected row is open: (i) receiving selected data sub-blocks that are targeted to be written to the selected row, (ii) calculating selected error correction code sub-blocks that are related to the selected data sub-blocks, (iii) caching the selected error correction code sub-blocks in a cache memory that differs from the dynamic memory module, and (iv) writing, to the selected row, the selected data sub-blocks.
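The per-open-row flow described above can be sketched as follows: while a DRAM row is open, each incoming data sub-block is written to the row while its error correction code sub-block is computed and cached outside the memory module, deferring the ECC writes. The XOR-parity "ECC" and all names are toy stand-ins for illustration, not the disclosed code or data layout.

```python
# Illustrative sketch of caching ECC sub-blocks while a memory row is open.

def ecc(data_block):
    """Toy stand-in for an error-correction code: a single XOR parity byte."""
    parity = 0
    for b in data_block:
        parity ^= b
    return parity

def write_while_row_open(row, incoming_blocks, ecc_cache, row_id):
    """While `row` is open: write each data sub-block to the row and cache
    its ECC sub-block in `ecc_cache`, which models a cache memory separate
    from the dynamic memory module."""
    for i, block in enumerate(incoming_blocks):
        row[i] = block                        # write data sub-block to the open row
        ecc_cache[(row_id, i)] = ecc(block)   # compute and cache its ECC sub-block
    return row, ecc_cache
```

Deferring the ECC writes this way avoids interleaving extra column accesses with the data writes while the row is open; the cached ECC sub-blocks can be flushed to memory later.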