Systems and methods of scene and action capture using imaging system incorporating 3D LIDAR
Abstract
The present invention pertains to systems and methods for the capture of information regarding scenes using single or multiple three-dimensional LADAR systems. Where multiple systems are included, those systems can be placed in different positions about the imaged scene such that each LADAR system provides different viewing perspectives and/or angles. In accordance with further embodiments, the single or multiple LADAR systems can include two-dimensional focal plane arrays, in addition to three-dimensional focal plane arrays, and associated light sources for obtaining three-dimensional information about a scene, including information regarding the contours of the objects within the scene. Processing of captured image information can be performed in real time, and processed scene information can include data frames that comprise three-dimensional and two-dimensional image data.
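The processed scene information described in the abstract, data frames that combine three-dimensional and two-dimensional image data, can be pictured as a simple container. The following is a minimal sketch; all field names, types, and shapes are assumptions for illustration, not taken from the patent:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FusedFrame:
    """One processed scene frame combining 2D and 3D sensor output.

    All fields are illustrative; the patent does not prescribe this layout.
    """
    timestamp: float     # capture time in seconds
    image: np.ndarray    # (H, W, 3) image from the 2D focal plane array
    points: np.ndarray   # (N, 3) geolocated point cloud from the 3D array
    colors: np.ndarray   # (N, 3) per-point color taken from the 2D image
    viewpoint: tuple     # absolute (x, y, z) location of the sensor
```

A stream of such frames, one per capture, would support the real-time processing the abstract describes, since each frame carries both the contour (point) data and the associated imagery.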
196 Citations
34 Claims

Claims
1. A method, comprising:
- obtaining a first 2D image of a first portion of a scene from a first view point;
- obtaining a first 3D image of the first portion of the scene from the first view point, wherein the first 3D image includes a first point cloud, and wherein the first 3D image is obtained by a 3D imaging system;
- determining an absolute location of the first view point;
- associating an absolute location with at least some of the points included in the first point cloud, wherein associating an absolute location with at least some of the points in the first point cloud includes correlating the at least some of the points in the first point cloud with an absolute location reference, wherein the at least some of the points in the first point cloud associated with the locations are geolocated, and wherein the absolute location reference is an Earth-centered reference or absolute location information is provided by a global positioning system (GPS) receiver;
- orthorectifying the first 2D image;
- associating at least some of the points in the first point cloud with at least some of the pixels in the first 2D image;
- fusing pixels in the first 2D image to points in the first point cloud to create a first fused image, wherein in the fused image each point in the first point cloud is associated with color information from the first 2D image.

View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 27, 28, 29, 30, 31, 32)
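The fusion step claimed above, associating point-cloud points with 2D pixels and attaching color information to each point, can be sketched as a pinhole projection. This is a minimal illustration under assumed conventions (points already expressed in the camera frame, a hypothetical intrinsics matrix `K`), not the patent's actual method:

```python
import numpy as np

def fuse(points_xyz, image_rgb, K):
    """Project 3D points into a 2D image and attach per-point color.

    points_xyz : (N, 3) points in the camera frame (Z forward, metres)
    image_rgb  : (H, W, 3) orthorectified 2D image
    K          : (3, 3) pinhole intrinsics (hypothetical calibration)

    Returns an (M, 6) fused array [x, y, z, r, g, b] for the points that
    project inside the image; points behind the camera are dropped.
    """
    h, w, _ = image_rgb.shape
    pts = points_xyz[points_xyz[:, 2] > 0]      # keep points in front of camera
    uvw = (K @ pts.T).T                         # homogeneous pixel coordinates
    uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = image_rgb[uv[ok, 1], uv[ok, 0]]    # sample color at each pixel
    return np.hstack([pts[ok], colors.astype(float)])
```

In this sketch each surviving point carries its 3D coordinates plus the RGB value of the pixel it projects onto, which matches the claim's requirement that each point in the fused image be associated with color information from the 2D image.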
17. A method, comprising:
- obtaining a first 2D image of a first portion of a scene;
- obtaining a first 3D image of the first portion of the scene, wherein the first 3D image includes a first point cloud, and wherein the first 3D image is obtained by a first flash LADAR system;
- determining an absolute location of a view point from which the first 3D image is taken;
- associating an absolute location with at least some of the points in the first point cloud, wherein the at least some of the points in the first point cloud are geolocated, and wherein the absolute location reference is an Earth-centered reference or absolute location information is provided by a global positioning system (GPS) receiver;
- orthorectifying the first 2D image;
- associating a location with at least some of the pixels in the first 2D image;
- fusing pixels in the first 2D image to points in the first point cloud to create a first fused image, wherein in the fused image each point in the first point cloud is associated with color information from the first 2D image.

View Dependent Claims (18, 19, 20, 21, 22, 23, 24, 25, 26, 33, 34)
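The Earth-centered absolute location reference recited in the claims corresponds to the standard Earth-centered, Earth-fixed (ECEF) coordinate system, and a GPS fix can be converted into it using the WGS-84 ellipsoid. A minimal sketch follows; the function name and interface are assumptions, not from the patent:

```python
import math

# WGS-84 ellipsoid constants
A = 6378137.0            # semi-major axis (metres)
F = 1 / 298.257223563    # flattening
E2 = F * (2 - F)         # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert a GPS fix (geodetic latitude, longitude, altitude) to
    Earth-centered, Earth-fixed (ECEF) coordinates in metres."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + alt_m) * math.sin(lat)
    return x, y, z
```

Geolocating the point cloud then amounts to converting the view point's GPS fix to ECEF and offsetting each point by its sensor-frame position, rotated into the Earth-fixed frame.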
Specification