3-D MODEL GENERATION
First Claim
1. A non-transitory computer-readable storage medium storing instructions that, when executed by at least one processor, cause a computing device to:
capture, using an image sensor, depth data of an object from a plurality of viewpoints;
capture, using a first camera, first image data of the object from each viewpoint of the plurality of viewpoints, a preregistered process aligning two-dimensional (2D) coordinates of the first image data for the first camera and three-dimensional (3D) coordinates of the depth data for the image sensor;
capture, using a second camera, second image data of the object from each viewpoint of the plurality of viewpoints, the second image data being a higher resolution relative to the first image data;
extract, using a feature extraction algorithm, first features from the first image data of each viewpoint captured by the first camera;
extract, using the feature extraction algorithm, second features from the second image data of each viewpoint captured by the second camera;
determine matching features between the first features and the second features;
determine, using a projective mapping algorithm, a first mapping between the first image data and the second image data for each viewpoint using the matching features, the first mapping providing 3D coordinates for the second features of the second image data captured by the second camera;
determine, for second image data of each viewpoint, matching second features between adjacent viewpoints, a first viewpoint having a first field of view at least partially overlapping a second field of view of an adjacent second viewpoint;
determine, for the second image data of each viewpoint, a second mapping between the second image data of adjacent viewpoints using a Euclidean mapping algorithm;
generate, using the depth data, a 3D point cloud for the object;
generate, using a mesh reconstruction algorithm, a triangular mesh of the object from the 3D point cloud; and
generate a 3D model of the object by projecting, based at least in part on the 3D coordinates for the second features from the first mapping, the second image data onto the triangular mesh for each viewpoint of the plurality of viewpoints using the second mapping.
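The first mapping recited above is a projective transform (homography) between the first-camera and second-camera images, estimated from the matched features. As a minimal illustrative sketch, and not the patent's actual implementation, such a mapping can be computed from matched feature coordinates with the direct linear transform (DLT); production code would typically add a robust estimator such as RANSAC (e.g., OpenCV's cv2.findHomography) to reject feature mismatches.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 projective mapping H with dst ~ H @ src via DLT.

    src, dst: (N, 2) arrays of matched feature coordinates, N >= 4,
    e.g. first-camera features (src) matched to second-camera features (dst).
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows)
    # H (flattened) is the null vector of A: the right singular vector
    # associated with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def apply_homography(H, pts):
    """Map (N, 2) points through H, returning (N, 2) points."""
    pts = np.asarray(pts, dtype=float)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

Once H is known, any second-image feature can be mapped into first-image coordinates, where the preregistered alignment with the depth data supplies its 3D coordinates.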
Abstract
Various embodiments provide for the generation of 3D models of objects. For example, depth data and color image data can be captured from viewpoints around an object using an image sensor. A camera having a higher resolution can simultaneously capture image data of the object. Features can be extracted from the images captured by the image sensor and by the camera, then compared to determine a mapping between the camera and the image sensor. Once the mapping between the camera and the image sensor is determined, a second mapping between adjacent viewpoints can be determined for each image around the object. In this example, each viewpoint overlaps with an adjacent viewpoint, and features extracted from two overlapping viewpoints are matched to determine their relative alignment. Accordingly, a 3D point cloud can be generated, and the images captured by the camera can be projected onto the surface of the 3D point cloud to generate the 3D model.
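The second mapping between adjacent viewpoints (the "Euclidean mapping" of claim 1) is a rigid transform, i.e., a rotation plus a translation. A minimal sketch, assuming the matched features of two overlapping viewpoints have already been lifted to 3D coordinates, is the Kabsch (orthogonal Procrustes) solution; this illustrates the alignment step and is not asserted to be the patent's method.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Find rotation R and translation t with dst ≈ src @ R.T + t.

    src, dst: (N, 3) arrays of matched 3D feature positions from two
    adjacent, partially overlapping viewpoints (Kabsch algorithm).
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_c = src - src.mean(axis=0)  # center both point sets
    dst_c = dst - dst.mean(axis=0)
    # Cross-covariance of the centered sets; its SVD yields the rotation.
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Correct for a possible reflection so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Chaining these pairwise transforms around the object places every viewpoint's image data in a common coordinate frame before projection onto the mesh.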
85 Citations
20 Claims
1. (Independent claim; full text set forth above under "First Claim.") - View Dependent Claims (2, 3, 4)
5. A computer-implemented method, comprising:
capturing, using an image sensor, depth information of an object;
capturing, using a first camera, a first image of the object, three-dimensional (3D) coordinates of the depth information being aligned with two-dimensional (2D) coordinates of the first image;
capturing, using a second camera, a second image of the object;
detecting first features in the first image captured by the first camera;
detecting second features in the second image captured by the second camera;
determining matching features between the first features and the second features; and
determining 3D coordinates for the second features of the second image based at least in part on the matching features between the first features and the second features. - View Dependent Claims (6, 7, 8, 9, 10, 11, 12)
13. A computing system, comprising:
a processor;
an image sensor;
a first camera;
a second camera; and
memory including instructions that, when executed by the processor, cause the computing system to:
capture, using the image sensor, depth information of an object;
capture, using the first camera, a first image of the object, three-dimensional (3D) coordinates of the depth information being aligned with two-dimensional (2D) coordinates of the first image;
capture, using the second camera, a second image of the object;
detect first features in the first image captured by the first camera;
detect second features in the second image captured by the second camera;
determine matching features between the first features and the second features; and
determine 3D coordinates for the second features of the second image based at least in part on the matching features between the first features and the second features. - View Dependent Claims (14, 15, 16, 17, 18, 19, 20)
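The projection step that closes claim 1, texturing the triangular mesh with the high-resolution second image data, amounts to computing image coordinates for each mesh vertex by projecting it into the second camera; those coordinates then serve as texture coordinates for the mesh triangles. A minimal sketch under the same assumed pinhole model (intrinsics are hypothetical):

```python
import numpy as np

def project_vertices(vertices, fx, fy, cx, cy):
    """Project (N, 3) mesh vertices (in camera coordinates, z > 0) to
    (N, 2) pixel coordinates in the second image; these act as texture
    coordinates when mapping the image onto the triangular mesh."""
    v = np.asarray(vertices, dtype=float)
    u = fx * v[:, 0] / v[:, 2] + cx   # perspective divide, then shift
    w = fy * v[:, 1] / v[:, 2] + cy   # to the principal point
    return np.stack([u, w], axis=1)
```

Applying this per viewpoint, with each viewpoint's image data first brought into the common frame by the second (Euclidean) mapping, yields the textured 3D model.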
Specification