Modifying behavior of autonomous vehicles based on sensor blind spots and limitations
First Claim
1. A method comprising:
generating, for each given sensor of a plurality of sensors for detecting objects in a vehicle's environment, a 3D model of the given sensor's field of view;
receiving weather information, the weather information including one or more of reports, radar information, forecasts and real-time measurements concerning actual or expected weather conditions in the vehicle's environment;
adjusting one or more characteristics of the plurality of 3D models based on the received weather information to account for an impact of the actual or expected weather conditions on a range of the field of view for one or more of the plurality of sensors;
after the adjusting, aggregating, by one or more processors, the plurality of 3D models to generate a comprehensive 3D model, wherein the comprehensive 3D model indicates an extent of an aggregated field of view for the plurality of sensors;
combining the comprehensive 3D model with detailed map information using probability data of the detailed map information indicating a probability of detecting objects at various locations in the detailed map information from various possible locations of the vehicle to produce a combined model annotated with information identifying a first portion of the environment as occupied by an object, a second portion of the environment as unoccupied by an object, and a third portion of the environment as unobserved by any of the plurality of sensors; and
using the combined model to maneuver the vehicle.
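The weather-adjustment step recited above can be sketched in code. The attenuation factors, function name, and the idea of scaling a single nominal range are illustrative assumptions, not values or structures from the claim itself, which speaks more generally of adjusting "one or more characteristics" of the 3D models.

```python
# Hypothetical factors describing how weather shortens a sensor's detection
# range; the specific values are assumptions for illustration only.
WEATHER_RANGE_FACTOR = {"clear": 1.0, "rain": 0.7, "snow": 0.5, "fog": 0.4}

def adjusted_fov_range(nominal_range_m: float, condition: str) -> float:
    """Scale a sensor's nominal field-of-view range for a weather condition.

    Unknown conditions leave the nominal range unchanged.
    """
    return nominal_range_m * WEATHER_RANGE_FACTOR.get(condition, 1.0)

# Example: a laser sensor with a 100 m nominal range, reduced in snow.
snow_range = adjusted_fov_range(100.0, "snow")
```

A real implementation would likely adjust the full 3D geometry of each model (e.g., shrinking a laser's point cloud envelope) rather than a single scalar range.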
Abstract
Models can be generated of a vehicle's view of its environment and used to maneuver the vehicle. This view need not include what objects or features the vehicle is actually seeing, but rather those areas that the vehicle would be able to observe using its sensors if the sensors were completely unoccluded. For example, for each of a plurality of sensors of the object detection component, a computer may generate an individual 3D model of that sensor's field of view. Weather information is received and used to adjust one or more of the models. After this adjusting, the models may be aggregated into a comprehensive 3D model. The comprehensive model may be combined with detailed map information indicating the probability of detecting objects at different locations. The model of the vehicle's environment may be computed based on the combined comprehensive 3D model and detailed map information.
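The aggregation step described in the abstract can be pictured by modeling each sensor's field of view as a set of observable 3D grid cells and taking their union. The voxel representation and the toy sensor shapes below are assumptions chosen for illustration; the patent does not prescribe a particular data structure.

```python
def aggregate_models(sensor_models):
    """Union per-sensor field-of-view models (each a set of voxel
    coordinates) into one comprehensive 3D model of observable space."""
    comprehensive = set()
    for model in sensor_models:
        comprehensive |= model
    return comprehensive

# Toy models: a wide short-range sensor and a narrow long-range sensor,
# both at z = 0 for simplicity.
wide = {(x, y, 0) for x in range(-2, 3) for y in range(-2, 3)}
narrow = {(x, 0, 0) for x in range(0, 10)}
comprehensive = aggregate_models([wide, narrow])
```

The comprehensive model then indicates the extent of the aggregated field of view: a cell is observable if any one sensor can observe it.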
20 Claims
1. A method comprising:
generating, for each given sensor of a plurality of sensors for detecting objects in a vehicle's environment, a 3D model of the given sensor's field of view;
receiving weather information, the weather information including one or more of reports, radar information, forecasts and real-time measurements concerning actual or expected weather conditions in the vehicle's environment;
adjusting one or more characteristics of the plurality of 3D models based on the received weather information to account for an impact of the actual or expected weather conditions on a range of the field of view for one or more of the plurality of sensors;
after the adjusting, aggregating, by one or more processors, the plurality of 3D models to generate a comprehensive 3D model, wherein the comprehensive 3D model indicates an extent of an aggregated field of view for the plurality of sensors;
combining the comprehensive 3D model with detailed map information using probability data of the detailed map information indicating a probability of detecting objects at various locations in the detailed map information from various possible locations of the vehicle to produce a combined model annotated with information identifying a first portion of the environment as occupied by an object, a second portion of the environment as unoccupied by an object, and a third portion of the environment as unobserved by any of the plurality of sensors; and
using the combined model to maneuver the vehicle.

View Dependent Claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
13. A system comprising:
a non-transitory storage medium storing detailed map information including probability data indicating a probability of detecting objects at various locations in the detailed map information from various possible locations of a vehicle; and
one or more processors configured to:
generate, for each given sensor of a plurality of sensors for detecting objects in a vehicle's environment, a 3D model of the given sensor's field of view;
receive weather information, the weather information including one or more of reports, radar information, forecasts and real-time measurements concerning actual or expected weather conditions in the vehicle's environment;
adjust one or more characteristics of the plurality of 3D models based on the received weather information to account for an impact of the actual or expected weather conditions on a range of the field of view for one or more of the plurality of sensors;
after the adjusting, aggregate the plurality of 3D models to generate a comprehensive 3D model, wherein the comprehensive 3D model indicates an extent of an aggregated field of view for the plurality of sensors;
combine the comprehensive 3D model with the detailed map information using the probability data to produce a combined model annotated with information identifying a first portion of the environment as occupied by an object, a second portion of the environment as unoccupied by an object, and a third portion of the environment as unobserved by any of the plurality of sensors; and
use the combined model with detailed map information to maneuver the vehicle.

View Dependent Claims: 14, 15, 16, 17, 18
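The combining step that the system claim recites can be sketched as a per-cell labeling rule. The function name, the keyed-by-cell dictionaries, and the 0.5 probability threshold are all illustrative assumptions; the claim specifies only that probability data from the map is used to produce the three annotations.

```python
def annotate_cell(cell, comprehensive_fov, detections, detect_prob,
                  threshold=0.5):
    """Label a map cell as 'occupied', 'unoccupied', or 'unobserved'.

    A cell counts as observed only if it lies within the aggregated field
    of view AND the map's probability of detecting an object there, from
    the vehicle's current location, meets the (assumed) threshold.
    """
    observable = (cell in comprehensive_fov
                  and detect_prob.get(cell, 0.0) >= threshold)
    if not observable:
        return "unobserved"
    return "occupied" if cell in detections else "unoccupied"
```

Applying this rule over every cell of the map yields the three annotated portions of the environment the claim describes; the vehicle can then treat unobserved regions conservatively when maneuvering.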
19. A tangible, non-transitory computer-readable storage medium on which computer readable instructions of a program are stored, the instructions, when executed by one or more processors, cause the one or more processors to perform a method, the method comprising:
generating, for each given sensor of a plurality of sensors for detecting objects in a vehicle's environment, a 3D model of the given sensor's field of view;
receiving weather information, the weather information including one or more of reports, radar information, forecasts and real-time measurements concerning actual or expected weather conditions in the vehicle's environment;
adjusting one or more characteristics of the plurality of 3D models based on the received weather information to account for an impact of the actual or expected weather conditions on a range of the field of view for one or more of the plurality of sensors;
after the adjusting, aggregating the plurality of 3D models to generate a comprehensive 3D model, wherein the comprehensive 3D model indicates an extent of an aggregated field of view for the plurality of sensors;
combining the comprehensive 3D model with detailed map information using probability data of the detailed map information indicating a probability of detecting objects at various locations in the detailed map information from various possible locations of the vehicle to produce a combined model annotated with information identifying a first portion of the environment as occupied by an object, a second portion of the environment as unoccupied by an object, and a third portion of the environment as unobserved by any of the plurality of sensors; and
using the combined model to maneuver the vehicle.

View Dependent Claims: 20
Specification