MODIFYING BEHAVIOR OF AUTONOMOUS VEHICLES BASED ON SENSOR BLIND SPOTS AND LIMITATIONS
First Claim
1. A method comprising:
generating, for each given sensor of a plurality of sensors for detecting objects in a vehicle's environment, a 3D model of the given sensor's field of view;
aggregating, by one or more processors, the plurality of 3D models to generate a comprehensive model, wherein the comprehensive model indicates an extent of an aggregated field of view for the plurality of sensors;
combining the comprehensive model with map information corresponding to environmental data for the vehicle's environment obtained at a previous point in time, using probability data of the map information indicating a probability of detecting objects at various locations in the map information from various possible locations of the vehicle, to produce a combined model annotated with information identifying a first portion of the environment as occupied by an object, a second portion of the environment as unoccupied by an object, and a third portion of the environment as unobserved by any of the plurality of sensors; and
using the combined model to maneuver the vehicle.
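The claimed steps can be sketched on a simple 2D occupancy grid. Everything below (the grid size, disc-shaped fields of view, the 0.5 probability threshold, and all variable names) is an assumption for illustration, not the patented implementation:

```python
import numpy as np

GRID = (20, 20)

def sensor_fov(center, radius):
    """Boolean mask of grid cells a sensor can observe (a simple disc here)."""
    ys, xs = np.ogrid[:GRID[0], :GRID[1]]
    return (ys - center[0]) ** 2 + (xs - center[1]) ** 2 <= radius ** 2

# Step 1: generate a model of each sensor's field of view.
fovs = [sensor_fov((10, 10), 6), sensor_fov((10, 14), 5)]

# Step 2: aggregate into a comprehensive model (union of the fields of view).
comprehensive = np.logical_or.reduce(fovs)

# Step 3: combine with map probability data. `detections` marks cells where an
# object was actually sensed; `p_detect` is the map's prior probability of
# detecting an object at each cell (both assumed inputs for this sketch).
detections = np.zeros(GRID, dtype=bool)
detections[8, 12] = True
p_detect = np.full(GRID, 0.9)

OCCUPIED, UNOCCUPIED, UNOBSERVED = 2, 1, 0
combined = np.full(GRID, UNOBSERVED, dtype=int)
combined[comprehensive & ~detections & (p_detect > 0.5)] = UNOCCUPIED
combined[comprehensive & detections] = OCCUPIED

# Step 4: a planner would then treat UNOBSERVED cells as unknown territory
# when choosing a maneuver, rather than assuming they are free.
```

The three-way annotation is the point of the claim: cells outside every sensor's field of view stay UNOBSERVED instead of being conflated with UNOCCUPIED.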
Abstract
Models can be generated of a vehicle's view of its environment and used to maneuver the vehicle. This view need not capture what objects or features the vehicle actually sees, but rather the areas the vehicle would be able to observe with its sensors if those sensors were completely unoccluded. For example, for each of a plurality of sensors of the object detection component, a computer may generate an individual 3D model of that sensor's field of view. Weather information is received and used to adjust one or more of the models. After this adjustment, the models may be aggregated into a comprehensive 3D model. The comprehensive model may be combined with detailed map information indicating the probability of detecting objects at different locations. The model of the vehicle's environment may then be computed from the combined comprehensive 3D model and detailed map information.
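The weather-adjustment step described in the abstract can be sketched as shrinking each sensor's effective range before aggregation. The scaling factors, grid size, and sensor placements below are invented for illustration:

```python
import numpy as np

def adjusted_range(nominal_range, weather):
    """Return a sensor's effective range under the reported weather."""
    factors = {"clear": 1.0, "rain": 0.7, "fog": 0.4}  # assumed values
    return nominal_range * factors.get(weather, 1.0)

def fov_mask(shape, center, radius):
    """Boolean mask of grid cells within a disc-shaped field of view."""
    ys, xs = np.ogrid[:shape[0], :shape[1]]
    return (ys - center[0]) ** 2 + (xs - center[1]) ** 2 <= radius ** 2

shape = (30, 30)
sensors = [((15, 15), 10), ((15, 20), 10)]  # (grid position, nominal range)

# Adjust each sensor's range for the weather, then aggregate the models.
foggy = np.logical_or.reduce(
    [fov_mask(shape, pos, adjusted_range(rng, "fog")) for pos, rng in sensors])
clear = np.logical_or.reduce(
    [fov_mask(shape, pos, adjusted_range(rng, "clear")) for pos, rng in sensors])

# The aggregated field of view covers far fewer cells in fog than in clear
# air, which a planner could use to drive more conservatively.
```

Adjusting each model before aggregation, as the abstract describes, keeps the comprehensive model honest about what the sensors can currently see rather than what they could see in ideal conditions.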
20 Claims
1. A method comprising:
generating, for each given sensor of a plurality of sensors for detecting objects in a vehicle's environment, a 3D model of the given sensor's field of view;
aggregating, by one or more processors, the plurality of 3D models to generate a comprehensive model, wherein the comprehensive model indicates an extent of an aggregated field of view for the plurality of sensors;
combining the comprehensive model with map information corresponding to environmental data for the vehicle's environment obtained at a previous point in time, using probability data of the map information indicating a probability of detecting objects at various locations in the map information from various possible locations of the vehicle, to produce a combined model annotated with information identifying a first portion of the environment as occupied by an object, a second portion of the environment as unoccupied by an object, and a third portion of the environment as unobserved by any of the plurality of sensors; and
using the combined model to maneuver the vehicle.
(Dependent claims 2–12)
13. A system comprising:
one or more processors configured to:
generate, for each given sensor of a plurality of sensors for detecting objects in a vehicle's environment, a 3D model of the given sensor's field of view;
aggregate, by one or more processors, the plurality of 3D models to generate a comprehensive model, wherein the comprehensive model indicates an extent of an aggregated field of view for the plurality of sensors;
combine the comprehensive model with map information corresponding to environmental data for the vehicle's environment obtained at a previous point in time, using probability data of the map information indicating a probability of detecting objects at various locations in the map information from various possible locations of the vehicle, to produce a combined model annotated with information identifying a first portion of the environment as occupied by an object, a second portion of the environment as unoccupied by an object, and a third portion of the environment as unobserved by any of the plurality of sensors; and
use the combined model to maneuver the vehicle.
(Dependent claims 14–19)
20. A non-transitory, computer readable recording medium on which instructions are stored, the instructions, when executed by one or more processors, causing the one or more processors to perform a method, the method comprising:
generating, for each given sensor of a plurality of sensors for detecting objects in a vehicle's environment, a 3D model of the given sensor's field of view;
aggregating the plurality of 3D models to generate a comprehensive model, wherein the comprehensive model indicates an extent of an aggregated field of view for the plurality of sensors;
combining the comprehensive model with map information corresponding to environmental data for the vehicle's environment obtained at a previous point in time, using probability data of the map information indicating a probability of detecting objects at various locations in the map information from various possible locations of the vehicle, to produce a combined model annotated with information identifying a first portion of the environment as occupied by an object, a second portion of the environment as unoccupied by an object, and a third portion of the environment as unobserved by any of the plurality of sensors; and
using the combined model to maneuver the vehicle.
Specification