Triangulation scanner and camera for augmented reality
First Claim
1. A method of combining a plurality of two-dimensional (2D) images into a three-dimensional (3D) image of an object, the method comprising steps of:
providing a six-degree of freedom triangulation scanner having a retroreflector and a camera;
providing a laser tracker device having a distance meter, the device operable to direct a beam of light in a direction and measure a distance from the device to the retroreflector, the device further configured to determine 3D coordinates of the retroreflector;
determining with the device a first 3D coordinates of the retroreflector at a first position;
forming a first 2D image with the camera at the first position;
determining with the device a second 3D coordinates of the retroreflector at a second position;
forming a second 2D image with the camera at the second position;
determining a first common feature point in the first 2D image and the second 2D image;
determining 3D coordinates of the first common feature point based at least in part on the first 3D coordinates and the second 3D coordinates;
generating a first composite 3D image from the first 2D image and the second 2D image based at least in part on the 3D coordinates of the first common feature point; and
storing in a memory the first composite 3D image.
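The step of determining 3D coordinates of the common feature point can be viewed as ray triangulation: each 2D observation, together with the tracker-measured camera position for that instance, defines a ray toward the feature, and the feature is placed at the closest approach of the two rays. Below is a minimal sketch (midpoint method, pure Python); the ray directions are assumed to be already derived from pixel coordinates and camera intrinsics, a detail the claim does not specify:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def normalize(a):
    n = math.sqrt(dot(a, a))
    return tuple(x / n for x in a)

def triangulate_midpoint(c1, d1, c2, d2):
    """Triangulate a feature observed along ray c1 + t1*d1 (first camera
    position) and ray c2 + t2*d2 (second camera position): return the
    midpoint of the shortest segment joining the two rays."""
    w0 = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # approaches 0 for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = add(c1, scale(d1, t1))    # closest point on ray 1
    p2 = add(c2, scale(d2, t2))    # closest point on ray 2
    return scale(add(p1, p2), 0.5)
```

With noise-free rays that actually intersect, the midpoint coincides with the true feature point; with real measurements the two rays skew slightly and the midpoint is a reasonable compromise.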
Abstract
A method and system of combining 2D images into a 3D image. The method includes providing a coordinate measurement device and a triangulation scanner having an integral camera associated therewith, the scanner being separate from the coordinate measurement device. In a first instance, the coordinate measurement device determines the position and orientation of the scanner and the integral camera captures a first 2D image. In a second instance, the scanner is moved, the coordinate measurement device determines the position and orientation of the scanner, and the integral camera captures a second 2D image. A common feature point in the first and second images is found and is used, together with the first and second images and the positions and orientations of the scanner in the first and second instances, to create the 3D image.
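Because the coordinate measurement device tracks both position and orientation (six degrees of freedom), a point expressed in the scanner camera's frame can be mapped into the device's fixed frame for each instance, which is what lets the two 2D images be related. A minimal sketch, assuming the pose is represented as a rotation matrix plus a translation (my representation; the abstract does not prescribe one):

```python
import math

def rot_z(theta):
    """3x3 rotation about the z axis by theta radians (row-major tuples)."""
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s, 0.0), (s, c, 0.0), (0.0, 0.0, 1.0))

def to_world(pose, p_cam):
    """Map a camera-frame point into the tracker (world) frame.
    pose = (R, t): the camera's rotation matrix and translation as
    measured by the tracker in that instance."""
    R, t = pose
    return tuple(sum(R[i][j] * p_cam[j] for j in range(3)) + t[i]
                 for i in range(3))
```

A fixed physical feature seen from two different poses should map to the same world coordinates, which is the consistency condition the composite 3D image relies on.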
19 Claims
1. A method of combining a plurality of two-dimensional (2D) images into a three-dimensional (3D) image of an object (set forth in full above as the First Claim). Dependent claims: 2, 3, 4, 5, 6, 7, 8.
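The claims locate each common feature point by matching its appearance between the two 2D images. The patent does not specify a matching algorithm; a toy sketch of one plausible approach (brute-force patch matching by sum of squared differences) is:

```python
def ssd(patch_a, patch_b):
    """Sum of squared differences between two equal-size grayscale patches."""
    return sum((a - b) ** 2 for ra, rb in zip(patch_a, patch_b)
               for a, b in zip(ra, rb))

def match_feature(img1, pt, img2, half=1):
    """Find the pixel in img2 whose (2*half+1)-square neighborhood best
    matches the neighborhood of pt in img1. Images are 2D lists of
    grayscale values; pt = (row, col). Brute force, for illustration only."""
    r0, c0 = pt

    def patch(img, r, c):
        return [row[c - half:c + half + 1] for row in img[r - half:r + half + 1]]

    ref = patch(img1, r0, c0)
    best, best_pt = None, None
    for r in range(half, len(img2) - half):
        for c in range(half, len(img2[0]) - half):
            score = ssd(ref, patch(img2, r, c))
            if best is None or score < best:
                best, best_pt = score, (r, c)
    return best_pt
```

Production systems would use a proper feature detector and descriptor rather than exhaustive patch comparison, but the output is the same kind of correspondence the claims require.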
9. A method of combining a plurality of two-dimensional (2D) images into a three-dimensional (3D) image of an object, the method comprising:
providing a six-degree of freedom triangulation scanner having a retroreflector and a camera;
providing a laser tracker device having a distance meter, the device operable to direct a beam of light in a direction and measure a distance from the device to the retroreflector, the device further configured to determine 3D coordinates of the retroreflector;
determining with the device a first 3D coordinates of the retroreflector at a first position;
forming a first 2D image with the camera at the first position;
determining with the device a second 3D coordinates of the retroreflector at a second position;
forming a second 2D image with the camera at the second position;
determining a first common feature point in the first 2D image and the second 2D image;
determining 3D coordinates of the first common feature point based at least in part on the first 3D coordinates and the second 3D coordinates;
generating a first composite 3D image from the first 2D image and the second 2D image based at least in part on the 3D coordinates of the first common feature point;
storing in a memory the first composite 3D image;
determining a plurality of 3D coordinates of the object with the triangulation scanner; and
determining a scale of the composite 3D image based at least partially on the plurality of 3D coordinates.
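Claim 9's final steps fix the scale of the composite 3D image using metric 3D coordinates measured directly by the triangulation scanner. The claim does not specify an estimator; one plausible sketch is a least-squares ratio between inter-point distances in the metric scanner data and the same distances in the unscaled image-derived reconstruction:

```python
import math

def distance(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def estimate_scale(scanner_pts, image_pts):
    """Least-squares scale s minimizing sum (s*d_img - d_scan)^2 over all
    point-pair distances. scanner_pts are metric coordinates from the
    triangulation scanner; image_pts are the same features in the
    (unscaled) composite 3D image."""
    d_scan, d_img = [], []
    n = len(scanner_pts)
    for i in range(n):
        for j in range(i + 1, n):
            d_scan.append(distance(scanner_pts[i], scanner_pts[j]))
            d_img.append(distance(image_pts[i], image_pts[j]))
    # closed-form least squares: s = sum(d_img*d_scan) / sum(d_img^2)
    return (sum(a * b for a, b in zip(d_img, d_scan))
            / sum(a * a for a in d_img))
```

Using pairwise distances makes the estimate independent of any rigid offset between the two coordinate frames; only the scale is recovered.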
10. A system for combining a plurality of two-dimensional (2D) images into a three-dimensional (3D) image of an object, the system comprising:
a six-degree of freedom triangulation scanner having a retroreflector and a camera;
a laser tracker device having a distance meter, the device operable to direct a beam of light in a direction and measure a distance from the device to the retroreflector, the device further operable to determine 3D coordinates of the retroreflector; and
one or more processors responsive to nontransitory executable computer instructions, the one or more processors electrically coupled to the triangulation scanner and the device, the nontransitory executable computer instructions comprising:
determining with the device a first 3D coordinates of the retroreflector at a first position;
forming a first 2D image with the camera at the first position;
determining with the device a second 3D coordinates of the retroreflector at a second position;
forming a second 2D image with the camera at the second position;
determining a first common feature point in the first 2D image and the second 2D image;
determining 3D coordinates of the first common feature point based at least in part on the first 3D coordinates and the second 3D coordinates;
creating a first composite 3D image from the first 2D image and the second 2D image based at least in part on the 3D coordinates of the first common feature point; and
storing in a memory the first composite 3D image.
Dependent claims: 11, 12, 13, 14, 15, 16, 17, 18, 19.
Specification