Dynamically adjustable situational awareness interface for control of unmanned vehicles
Abstract
An apparatus includes an image collection module that monitors at least one parameter to dynamically regulate an amount of data and resolution to be allocated to at least one object in a scene collected from an image data set. A situational awareness interface (SAI) renders a 3-D video of the scene to an operator based on the amount of data and resolution allocated from the image data set by the image collection module and receives operator commands for an unmanned vehicle (UV) that interacts with the scene.
18 Claims
1. An apparatus, comprising:

an image collection module that monitors at least one parameter to dynamically regulate an amount of data and resolution to be allocated to an area of a scene collected from an image data set, the image collection module including an object identifier having a classifier to determine object types detected in the area of the scene based on probabilities associated with a frequency band emitted from the object;

a situational awareness interface (SAI) to render a 3-D video of the scene to an operator based on the amount of data and resolution allocated from the image data set by the image collection module and to receive operator commands for an unmanned vehicle (UV) that interacts with the scene, the SAI receiving feedback from the operator to allocate resolution bandwidth to an object within the area of the scene; and

a bandwidth detector that, based on the feedback from the operator, renders the object in the scene at a first resolution and other objects in the scene at a second resolution where the first resolution is higher than the second resolution.

View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9)
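The two-tier rendering performed by the claimed bandwidth detector, where an operator-selected object is kept at a first (higher) resolution while the rest of the scene is held at a second (lower) resolution, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the patented implementation: the function names, the bounding-box form of the operator's selection, and the fixed downsampling factor are all hypothetical.

```python
import numpy as np

def downsample(tile: np.ndarray, factor: int) -> np.ndarray:
    """Reduce a tile's resolution by subsampling every `factor`-th pixel,
    then nearest-neighbor upsample back to the original size so tiles
    can be recomposed into one frame."""
    h, w = tile.shape[:2]
    small = tile[::factor, ::factor]
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)[:h, :w]

def render_scene(frame: np.ndarray, selected_box, lo_factor: int = 4) -> np.ndarray:
    """Keep the operator-selected region at full (first) resolution and
    subsample everything else to a lower (second) resolution."""
    y0, y1, x0, x1 = selected_box
    out = downsample(frame, lo_factor)       # second, lower resolution
    out[y0:y1, x0:x1] = frame[y0:y1, x0:x1]  # first, higher resolution
    return out
```

In a real link the low-resolution background would also be transmitted at its reduced size; the upsampling here only stands in for display-side recomposition.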
10. A system, comprising:

a first sensor that generates an electro-optical (EO) image data set characterizing a scene;

a second sensor that generates a Laser Illuminated Detection and Ranging (LIDAR) image data set characterizing the scene;

an image collection module that dynamically regulates an amount of data and resolution to be allocated to at least one object within an area of a scene from the EO image data set and the LIDAR image data set based on at least one parameter to generate a fused image data set to provide a 3-D video of the scene, the image collection module including an object identifier having a classifier to determine object types detected in the area of the scene based on probabilities associated with a frequency band emitted from the object;

a situational awareness interface (SAI) to render the 3-D video of the scene from the fused image data set to an operator and to receive operator commands for an unmanned vehicle (UV) that interacts with the scene, the SAI receiving feedback from the operator to allocate resolution bandwidth to an object within the area of the scene; and

a bandwidth detector that, based on the feedback from the operator, renders the object in the scene at a first resolution and other objects in the scene at a second resolution where the first resolution is higher than the second resolution.

View Dependent Claims (11, 12, 13, 14, 16)
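The EO/LIDAR fusion recited in claim 10 can be sketched as below, assuming the two image data sets are already co-registered on a common pixel grid (a real system would instead project LIDAR returns through calibrated sensor models). The names `fuse_eo_lidar` and `to_point_cloud` and the (R, G, B, depth) layout are illustrative assumptions, not from the patent.

```python
import numpy as np

def fuse_eo_lidar(eo_rgb: np.ndarray, lidar_depth: np.ndarray) -> np.ndarray:
    """Fuse a co-registered EO image (H x W x 3) with a LIDAR depth map
    (H x W) into a single H x W x 4 fused data set (R, G, B, depth)."""
    assert eo_rgb.shape[:2] == lidar_depth.shape
    return np.dstack([eo_rgb.astype(np.float32),
                      lidar_depth.astype(np.float32)])

def to_point_cloud(fused: np.ndarray) -> np.ndarray:
    """Flatten the fused set into an N x 6 array of
    (x, y, depth, r, g, b) points from which a 3-D video frame
    of the scene could be rendered."""
    h, w, _ = fused.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.column_stack([xs.ravel(), ys.ravel(),
                            fused[..., 3].ravel(),
                            fused[..., 0].ravel(),
                            fused[..., 1].ravel(),
                            fused[..., 2].ravel()])
```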
15. The system of claim 10, wherein the classifier determines the object types based on probabilities associated with a shape of the object.
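The frequency-band classification recited in claims 1, 10, and 17 can be sketched as a lookup of per-type emission likelihoods. The likelihood table, object types, and band names below are entirely hypothetical; the patent does not disclose these values.

```python
# Hypothetical per-type likelihoods of an emission in each frequency band.
BAND_LIKELIHOODS = {
    "vehicle":    {"thermal": 0.7, "radio": 0.2, "visible": 0.1},
    "pedestrian": {"thermal": 0.5, "radio": 0.0, "visible": 0.5},
    "building":   {"thermal": 0.2, "radio": 0.1, "visible": 0.7},
}

def classify(band: str) -> tuple[str, float]:
    """Return the most probable object type for the frequency band
    emitted from the object, with its normalized probability."""
    scores = {t: p.get(band, 0.0) for t, p in BAND_LIKELIHOODS.items()}
    total = sum(scores.values()) or 1.0
    best = max(scores, key=scores.get)
    return best, scores[best] / total
```

Claim 15's shape-based variant would follow the same pattern with shape features in place of the emission band.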
17. A method, comprising:

receiving image data sets, via a controller, from at least two sensors;

fusing the image data sets, via the controller, to generate a 3-D scene for an operator of an unmanned vehicle (UV) based on the image data sets;

determining, via the controller, an available bandwidth to render the scene at an interface for the operator;

adjusting, via the controller, the resolution of an area in the scene based on the available bandwidth;

receiving feedback from the operator to allocate resolution bandwidth to an object within the area of the scene;

rendering, based on the feedback from the operator, the object in the scene at a resolution higher than a resolution of other objects in the scene; and

classifying objects in the scene to determine object types based on probabilities associated with a frequency band emitted from the object.

View Dependent Claims (18)
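The bandwidth-determining and resolution-adjusting steps of claim 17 can be sketched as choosing the least downsampling that still fits the available link budget. The factor ladder and byte accounting are illustrative assumptions, not the patented method.

```python
def choose_factor(bytes_per_frame_budget: int, frame_bytes: int,
                  factors=(1, 2, 4, 8)) -> int:
    """Pick the smallest downsampling factor whose transmitted frame size
    fits the available bandwidth; a factor of f cuts pixel count,
    and hence bytes, by roughly f * f."""
    for f in factors:
        if frame_bytes // (f * f) <= bytes_per_frame_budget:
            return f
    return factors[-1]  # budget too small even at coarsest resolution
```

For example, a 16 kB frame over a 1 kB-per-frame budget would be sent at factor 4, while an ample budget leaves the frame at full resolution (factor 1).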
Specification