REAL-TIME SYSTEM FOR MULTI-MODAL 3D GEOSPATIAL MAPPING, OBJECT RECOGNITION, SCENE ANNOTATION AND ANALYTICS
First Claim
1. A navigation-capable vehicle, comprising:
one or more processors, and, in communication with the one or more processors;
a two-dimensional image sensor;
a three-dimensional image sensor;
one or more sensors to determine motion, location, and orientation of the navigation-capable vehicle; and
one or more non-transitory machine accessible storage media comprising instructions to cause the navigation-capable vehicle to:
temporally and spatially align sensor data received from the two-dimensional sensor, the three-dimensional sensor, and the one or more motion, location, and orientation sensors;
generate a map representation of a real world environment in a frame of reference of the navigation-capable vehicle based on the temporally and spatially aligned sensor data;
recognize a plurality of visual features in the map representation using one or more computer vision algorithms; and
annotate one or more of the visual features in accordance with domain-specific business logic.
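The "temporally and spatially align" step recited above can be read as interpolating the vehicle's pose at each image timestamp and transforming 3-D sensor points into the vehicle's frame of reference. The sketch below is an illustrative reading only, not language from the patent; every function, variable, and calibration name is hypothetical.

```python
import numpy as np

def interpolate_pose(t, times, positions):
    """Temporal alignment: linearly interpolate the vehicle position at
    timestamp t from timestamped motion/location sensor samples."""
    i = int(np.clip(np.searchsorted(times, t), 1, len(times) - 1))
    t0, t1 = times[i - 1], times[i]
    w = (t - t0) / (t1 - t0)
    return (1 - w) * positions[i - 1] + w * positions[i]

def align_to_vehicle_frame(points_sensor, rotation, translation):
    """Spatial alignment: transform 3-D points from the sensor frame into
    the vehicle frame using a fixed extrinsic calibration (R, t)."""
    return points_sensor @ rotation.T + translation
```

A usage example: a depth frame stamped at t = 0.5 s between two pose samples would be placed at the interpolated position before its points are rotated and translated into the vehicle frame.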
Abstract
A multi-sensor, multi-modal data collection, analysis, recognition, and visualization platform can be embodied in a navigation-capable vehicle. The platform provides an automated tool that can integrate multi-modal sensor data including two-dimensional image data, three-dimensional image data, and motion, location, or orientation data, and create a visual representation of the integrated sensor data, in a live operational environment. An illustrative platform architecture incorporates modular domain-specific business analytics “plug-ins” to provide real-time annotation of the visual representation with domain-specific markups.
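One way the abstract's modular "plug-in" annotation architecture could look in practice is a registry of domain-specific callbacks. This is an illustrative sketch under that assumption, not the patent's implementation; the registry, the "road-maintenance" domain, and all feature labels are hypothetical.

```python
# Hypothetical plug-in registry: each domain-specific business analytics
# module registers a callback that marks up recognized visual features.
ANNOTATORS = {}

def register_annotator(domain):
    """Decorator that registers a domain-specific annotation plug-in."""
    def wrap(fn):
        ANNOTATORS[domain] = fn
        return fn
    return wrap

@register_annotator("road-maintenance")
def flag_potholes(feature):
    # Example business logic: mark recognized potholes for repair scheduling.
    if feature.get("label") == "pothole":
        return {**feature, "annotation": "schedule-repair"}
    return feature

def annotate(features, domain):
    """Run every recognized feature through the active domain plug-in."""
    return [ANNOTATORS[domain](f) for f in features]
```

Because plug-ins are looked up by domain at run time, swapping business logic (road maintenance, utility inspection, and so on) would not require changes to the sensing or recognition pipeline.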
44 Claims
1. A navigation-capable vehicle, comprising:
one or more processors, and, in communication with the one or more processors;
a two-dimensional image sensor;
a three-dimensional image sensor;
one or more sensors to determine motion, location, and orientation of the navigation-capable vehicle; and
one or more non-transitory machine accessible storage media comprising instructions to cause the navigation-capable vehicle to:
temporally and spatially align sensor data received from the two-dimensional sensor, the three-dimensional sensor, and the one or more motion, location, and orientation sensors;
generate a map representation of a real world environment in a frame of reference of the navigation-capable vehicle based on the temporally and spatially aligned sensor data;
recognize a plurality of visual features in the map representation using one or more computer vision algorithms; and
annotate one or more of the visual features in accordance with domain-specific business logic.
View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10)
11. A multi-sensor data collection, analysis, recognition, and visualization platform comprising instructions embodied in one or more non-transitory computer readable storage media and executable by one or more processors to cause a navigation-capable vehicle to:
receive sensor data from a plurality of sensors comprising a two-dimensional image sensor, a three-dimensional image sensor, and one or more sensors to determine motion, location, and orientation of the navigation-capable vehicle;
temporally and spatially align the sensor data received from the two-dimensional sensor, the three-dimensional sensor, and the one or more motion, location, and orientation sensors;
generate a map representation of the real world surroundings of the navigation-capable vehicle based on the temporally and spatially aligned sensor data;
recognize a plurality of visual features in the map representation by executing one or more computer vision algorithms;
annotate one or more of the visual features in accordance with domain-specific business logic; and
present a visualization of the annotated visual features on the navigation-capable vehicle.
View Dependent Claims (12, 13, 14, 15)
16. A system for multi-sensor data collection, analysis, recognition, and visualization by a navigation-capable vehicle, the system comprising one or more computing devices configured to:
temporally and spatially align data received from a two-dimensional sensor, a three-dimensional sensor, and one or more motion, location, and orientation sensors;
generate a map representation of the real world surroundings of the navigation-capable vehicle based on the temporally and spatially aligned sensor data;
recognize a plurality of visual features in the map representation by executing one or more computer vision algorithms;
estimate a navigation path for the navigation-capable vehicle;
annotate one or more of the visual features in accordance with domain-specific business logic; and
present a visualization of the annotated visual features on the navigation-capable vehicle.
View Dependent Claims (17, 18, 19, 20)
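Claim 16 adds a step absent from claims 1 and 11: estimating a navigation path for the vehicle. A minimal sketch of path estimation, assuming a constant-velocity dead-reckoning model (the patent does not specify a model, and all names below are hypothetical):

```python
import math

def estimate_path(position, heading_rad, speed, dt, steps):
    """Extrapolate a navigation path from the vehicle's current 2-D
    position, heading, and speed, assuming constant velocity.
    Returns a list of (x, y) waypoints, one per time step dt."""
    path = []
    x, y = position
    for _ in range(steps):
        x += speed * dt * math.cos(heading_rad)
        y += speed * dt * math.sin(heading_rad)
        path.append((x, y))
    return path
```

In a fuller system the estimated path could be intersected with the annotated map so that only features near the vehicle's expected route are visualized.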
21. A mobile computing device including one or more processors, and, in communication with the one or more processors, one or more image sensors and one or more non-transitory machine accessible storage media,
wherein the one or more image sensors are configured to obtain multi-dimensional image data including at least one of two-dimensional image data and three-dimensional image data, and
wherein the one or more non-transitory machine accessible storage media comprise instructions to cause the mobile computing device to perform recognition of a plurality of visual features based on a map representation of a geo-spatial area of real world surroundings of the mobile computing device generated based on temporal and spatial alignment of the multi-dimensional image data, and
wherein the recognition of the plurality of visual features includes recognition of larger-scale objects, recognition of smaller-scale objects by performing context-free object identification and contextual object identification, and recognition of a complex object comprising a plurality of the smaller-scale objects.
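Claims 21, 29, and 37 all recite a three-tier recognition hierarchy: larger-scale objects first, then smaller-scale objects via both context-free and contextual identification, then complex objects assembled from smaller-scale parts. The sketch below illustrates one possible reading of that hierarchy; it is not the patented algorithm, and the rule tables and labels (building, window, facade) are hypothetical.

```python
# Contextual identification: refine a context-free label using the
# enclosing larger-scale object (hypothetical rule table).
CONTEXT_RULES = {
    ("building", "rectangle"): "window",
}

# A complex object is recognized from a set of smaller-scale parts.
COMPLEX_OBJECTS = {
    "facade": {"window", "door"},
}

def recognize(features):
    """Three-tier recognition: larger-scale objects, smaller-scale
    objects (context-free then contextual), then complex objects."""
    large = [f for f in features if f["scale"] == "large"]
    small = []
    for f in (f for f in features if f["scale"] == "small"):
        label = f["label"]                         # context-free label
        label = CONTEXT_RULES.get((f.get("context"), label), label)
        small.append({**f, "label": label})
    labels = {f["label"] for f in small}
    complex_hits = [name for name, parts in COMPLEX_OBJECTS.items()
                    if parts <= labels]
    return large, small, complex_hits
```

For example, a small rectangle detected on a building would be contextually refined to "window", and a window plus a door would together be recognized as the complex object "facade".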
29. An object/scene recognition system comprising instructions embodied in one or more non-transitory computer readable storage media executable by one or more processors to cause a mobile computing device to perform recognition of a plurality of visual features based on a map representation of a geo-spatial area of real world surroundings of the mobile computing device generated based on temporal and spatial alignment of multi-dimensional image data obtained by one or more sensors,
wherein the multi-dimensional image data includes at least one of two-dimensional image data and three-dimensional image data, and
wherein the recognition of the plurality of visual features includes recognition of larger-scale objects, recognition of smaller-scale objects by performing context-free object identification and contextual object identification, and recognition of a complex object comprising a plurality of the smaller-scale objects.
37. An object/scene recognition method comprising, with one or more mobile computing devices:
recognizing a plurality of visual features based on a map representation of a geo-spatial area of real world surroundings of the one or more mobile computing devices generated based on temporal and spatial alignment of multi-dimensional image data obtained by one or more sensors,
wherein the multi-dimensional image data includes at least one of two-dimensional image data and three-dimensional image data,
wherein the recognizing of the plurality of visual features includes recognizing larger-scale objects, recognizing smaller-scale objects by performing context-free object identification and contextual object identification, and recognizing a complex object comprising a plurality of the smaller-scale objects.
View Dependent Claims (38, 39, 40, 41, 42, 43, 44)
Specification