Scan colorization with an uncalibrated camera
First Claim
1. A method of colorizing a three-dimensional (3D) point cloud, the method comprising:
receiving the 3D point cloud, the 3D point cloud acquired by a 3D imaging device positioned at a first location and including 3D coordinates and intensity data for a set of points representing surfaces of one or more objects;
receiving a two-dimensional (2D) color image of the one or more objects acquired by a camera positioned at an approximately known second location and an approximately known orientation, the camera having an approximately known set of intrinsic optical parameters;
creating a 2D intensity image of the 3D point cloud using the intensity data by projecting the 3D point cloud onto a 2D image area from a view point of the second location based on the set of intrinsic optical parameters, the second location, and the orientation of the camera, the 2D image area having a size that is same as a size of the 2D color image;
generating a set of refined camera parameters by matching the 2D intensity image and the 2D color image;
projecting the set of points of the 3D point cloud onto the 2D image area using the set of refined camera parameters;
creating a depth buffer for the 3D point cloud, the depth buffer having an image area of a size that is same as the size of the 2D color image and a pixel size that is same as a pixel size of the 2D color image, the depth buffer including depth data for the set of points in the 3D point cloud as the set of points are projected onto pixels of the depth buffer using the set of refined camera parameters;
determining a foreground depth for each respective pixel of the depth buffer by detecting a closest point among a subset of the set of points corresponding to the respective pixel; and
coloring the point cloud by, for each respective point of the set of points:
comparing a depth of the respective point with a foreground depth of a corresponding pixel in the depth buffer;
upon determining that the depth of the respective point is within a predetermined threshold value from the foreground depth of the corresponding pixel, assigning a color of a corresponding pixel in the 2D color image to the respective point; and
upon determining that the depth of the respective point differs from the foreground depth of the corresponding pixel by an amount greater than the predetermined threshold value, not assigning any color to the respective point.
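The depth-buffer, foreground-detection, and coloring steps above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: it assumes the points have already been projected with the refined camera parameters, and all function and parameter names (`colorize`, `depth_tol`, etc.) are hypothetical.

```python
import numpy as np

def colorize(points_uv, depths, color_img, depth_tol=0.05):
    """Assign colors to projected points using a foreground depth buffer.

    points_uv : (N, 2) int array of pixel coordinates (assumed already
                projected with the refined camera parameters)
    depths    : (N,) array of point depths along the camera axis
    color_img : (H, W, 3) color image
    depth_tol : the predetermined threshold separating foreground points
                from occluded ones
    Returns an (N, 3) array of colors; uncolored points stay NaN.
    """
    h, w, _ = color_img.shape
    u, v = points_uv[:, 0], points_uv[:, 1]
    in_view = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Depth buffer: the closest (foreground) depth seen at each pixel.
    zbuf = np.full((h, w), np.inf)
    np.minimum.at(zbuf, (v[in_view], u[in_view]), depths[in_view])

    colors = np.full((len(depths), 3), np.nan)
    # A point receives a color only if its depth is within depth_tol of
    # the foreground depth at its pixel; points farther behind are
    # occluded from the camera and are left uncolored.
    fg = in_view & (depths - zbuf[np.clip(v, 0, h - 1), np.clip(u, 0, w - 1)] <= depth_tol)
    colors[fg] = color_img[v[fg], u[fg]]
    return colors
```

The occlusion test is the point of the claim: two points can project to the same pixel, and only the one near the foreground depth may take that pixel's color.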
Abstract
A method of colorizing a 3D point cloud includes receiving the 3D point cloud, receiving a 2D color image acquired by a camera, creating a 2D intensity image of the 3D point cloud based on intrinsic and extrinsic parameters of the camera, generating a set of refined camera parameters by matching the 2D intensity image and the 2D color image, creating a depth buffer for the 3D point cloud using the set of refined camera parameters, determining a foreground depth for each respective pixel of the depth buffer, and coloring the point cloud by, for each respective point of the 3D point cloud: upon determining that the respective point is in the foreground, assigning a color of a corresponding pixel in the 2D color image to the respective point; and upon determining that the respective point is not in the foreground, not assigning any color to the respective point.
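The rendering step in the abstract (projecting the cloud into an intensity image the same size as the color photo, from the camera's approximately known pose) reduces, under a simple pinhole model, to applying the extrinsic and then intrinsic parameters. A sketch under that assumption; the pinhole model and all names here are illustrative, not taken from the patent:

```python
import numpy as np

def render_intensity_image(points, intensity, K, R, t, shape):
    """Render the point cloud's intensities into a 2D image seen from
    the camera's (approximately known) pose. Assumes a pinhole camera:
    K is the 3x3 intrinsic matrix, and (R, t) map world points into the
    camera frame. `shape` = (H, W) matches the 2D color image so the
    two can be compared pixel for pixel.
    """
    cam = points @ R.T + t                 # world -> camera coordinates
    front = cam[:, 2] > 0                  # discard points behind the camera
    uvw = cam[front] @ K.T                 # pinhole projection
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)

    h, w = shape
    img = np.zeros((h, w))
    u, v = uv[:, 0], uv[:, 1]
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Splat each intensity onto its pixel; a fuller renderer would also
    # resolve occlusion with a depth buffer, as the claims do later.
    img[v[ok], u[ok]] = intensity[front][ok]
    return img
```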
22 Claims
1. A method of colorizing a three-dimensional (3D) point cloud (set out in full above as the First Claim). Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11.
12. A method of camera calibration with respect to a three-dimensional (3D) imaging device, the method comprising:
receiving a 3D point cloud acquired by the 3D imaging device positioned at a first location and including 3D coordinates and intensity data for a set of points representing surfaces of one or more objects;
receiving a two-dimensional (2D) color image of the one or more objects acquired by a camera positioned at an approximately known second location and an approximately known orientation, the camera having an approximately known set of intrinsic optical parameters;
creating a 2D intensity image of the 3D point cloud using the intensity data by projecting the 3D point cloud onto a 2D image area from a view point of the second location based on the set of intrinsic optical parameters of the camera, the second location, and the orientation of the camera, the 2D image area having a size that is same as a size of the 2D color image; and
performing image matching between the 2D intensity image and the 2D color image to generate a set of refined camera parameters for projecting the 3D point cloud onto the 2D image area, wherein the set of refined camera parameters includes a translation parameter, a rotation parameter, and a scaling parameter.
Dependent claims: 13, 14.
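The image-matching step of claim 12 can be illustrated with phase correlation, a standard registration technique. This generic sketch (not the patent's specific matcher) recovers only the translation parameter; the rotation and scaling parameters would need an extension such as log-polar resampling.

```python
import numpy as np

def translation_offset(intensity_img, gray_img):
    """Estimate the (dx, dy) shift taking the rendered intensity image
    onto the grayscale color image by phase correlation."""
    F1 = np.fft.fft2(intensity_img)
    F2 = np.fft.fft2(gray_img)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real         # peak sits at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # Offsets past the midpoint wrap around to negative shifts.
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dx), int(dy)
```

In a refinement loop, the recovered offset would be folded back into the camera pose and the intensity image re-rendered until the two images align.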
15. A method of camera calibration with respect to a three-dimensional (3D) imaging device, the method comprising:
receiving a 3D point cloud acquired by the 3D imaging device, the 3D point cloud including 3D coordinates and intensity data for a set of points representing surfaces of one or more objects;
receiving a plurality of two-dimensional (2D) color images of the one or more objects acquired by a camera positioned at a plurality of corresponding locations and a plurality of corresponding orientations, the camera having an approximately known set of intrinsic optical parameters;
generating a plurality of sets of refined camera parameters by, for each respective 2D color image of the plurality of 2D color images:
creating a respective 2D intensity image of the 3D point cloud using the intensity data by projecting the 3D point cloud onto a 2D image area from a view point of the camera based on a respective location and respective orientation, and the set of intrinsic optical parameters of the camera, the 2D image area having a size that is same as a size of the 2D color image; and
performing image matching between the respective 2D intensity image and the respective 2D color image to generate a respective set of refined camera parameters for projecting the 3D point cloud onto the 2D image area; and
selecting one set of the refined camera parameters from the plurality of sets of refined camera parameters as a final set of refined camera parameters according to a predetermined criteria.
Dependent claims: 16, 17, 18.
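Claim 15's selection step needs only a scoring rule; the claim does not spell out the predetermined criteria, so this sketch assumes one plausible choice, normalized cross-correlation between each candidate's rendered intensity image and its color image. All names here are hypothetical.

```python
import numpy as np

def ncc_score(a, b):
    """Zero-mean normalized cross-correlation between two images,
    used here as the (assumed) predetermined matching criterion."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def select_final_parameters(refined_sets, intensity_renders, gray_images):
    """For each 2D color image, score how well the intensity image
    rendered with its refined parameters matches the (grayscale) image,
    and keep the parameter set with the highest score."""
    scores = [ncc_score(r, g) for r, g in zip(intensity_renders, gray_images)]
    return refined_sets[int(np.argmax(scores))]
```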
19. A method of colorizing a three-dimensional (3D) point cloud, the method comprising:
receiving the 3D point cloud, the 3D point cloud acquired by a 3D imaging device positioned at a first location and including 3D coordinates for a set of points representing surfaces of one or more objects;
receiving a two-dimensional (2D) color image of the one or more objects acquired by a camera positioned at a second location different from the first location and having an orientation;
receiving a set of camera parameters for projecting the 3D point cloud onto a 2D image area from a view point of the second location, the 2D image area having a size that is same as a size of the 2D color image;
projecting the set of points of the 3D point cloud onto the 2D image area using the set of camera parameters;
creating a depth buffer for the 3D point cloud, the depth buffer having an image area of a size that is same as the size of the 2D color image and a pixel size that is same as a pixel size of the 2D color image, the depth buffer including depth data for the set of points in the 3D point cloud as the set of points are projected onto pixels of the depth buffer using the set of camera parameters;
determining a foreground depth for each respective pixel of the depth buffer by detecting a closest point among a subset of the set of points corresponding to the respective pixel; and
coloring the point cloud by, for each respective point of the set of points:
comparing a depth of the respective point with a foreground depth of a corresponding pixel in the depth buffer;
upon determining that the depth of the respective point is within a predetermined threshold value from the foreground depth of the corresponding pixel, assigning a color of a corresponding pixel in the 2D color image to the respective point; and
upon determining that the depth of the respective point differs from the foreground depth of the corresponding pixel by an amount greater than the predetermined threshold value, not assigning any color to the respective point.
Dependent claims: 20, 21, 22.
Specification