Triangulation scanner and camera for augmented reality
First Claim
1. A method of combining a plurality of two-dimensional (2D) images into a three-dimensional (3D) image of an object, the method comprising steps of:
providing a six-degree-of-freedom (six-DOF) triangulation scanner having a retroreflector and an integral augmented reality (AR) camera;
providing a coordinate measurement device having a device frame of reference, the device including a distance meter and a processor, the device operable to direct a beam of light in a direction and measure a first angle of rotation about a first axis and a second angle of rotation about a second axis, the distance meter operable to measure a distance from the device to the retroreflector based at least in part on a portion of the beam of light being reflected by the retroreflector and received by an optical detector and on a speed of light in air, the processor configured to determine, in the device frame of reference, 3D coordinates of the triangulation scanner;
determining with the device a first set of 3D coordinates of the triangulation scanner at a first position;
forming a first 2D image with the camera at the first position;
moving the triangulation scanner to a second position;
determining with the device a second set of 3D coordinates of the triangulation scanner at the second position;
forming a second 2D image with the camera at the second position;
determining a first cardinal point in common between the first and second 2D images, the first cardinal point having a first location on the first 2D image and a second location on the second 2D image;
determining 3D coordinates of the first cardinal point in a first frame of reference based at least in part on the first set of 3D coordinates, the second set of 3D coordinates, the first location, and the second location;
creating the 3D image as a first composite 3D image from the first 2D image and the second 2D image based at least in part on the first 2D image, the second 2D image, and the 3D coordinates of the first cardinal point in the first frame of reference; and
storing in a memory the first composite 3D image.
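The geometric core of the claim, recovering 3D coordinates of a cardinal point from its two pixel locations and the two scanner positions, can be illustrated as a two-ray triangulation. The following is a minimal pure-Python sketch, not the patent's implementation: the pinhole model, focal length, and helper names are assumptions, and the camera axes are taken as aligned with the device frame (a six-DOF scanner would additionally apply its measured orientation to each ray).

```python
def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def pixel_ray(cam_pos, pixel, f=1.0):
    """Ray from the camera center through pixel (u, v) of a pinhole
    camera looking along +z with focal length f (camera axes assumed
    aligned with the device frame for simplicity)."""
    return cam_pos, (pixel[0], pixel[1], f)

def triangulate(p1, d1, p2, d2):
    """Midpoint of the closest points between two 3D rays
    p1 + t1*d1 and p2 + t2*d2 (rays must not be parallel)."""
    w0 = (p1[0]-p2[0], p1[1]-p2[1], p1[2]-p2[2])
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a*c - b*b          # zero only for parallel rays
    t1 = (b*e - c*d) / denom
    t2 = (a*e - b*d) / denom
    q1 = tuple(p1[i] + t1*d1[i] for i in range(3))
    q2 = tuple(p2[i] + t2*d2[i] for i in range(3))
    return tuple((q1[i] + q2[i]) / 2.0 for i in range(3))

# Two scanner positions reported by the coordinate measurement device:
cam1, cam2 = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
# The same cardinal point seen at two pixel locations:
p1, d1 = pixel_ray(cam1, (0.1, 0.05))
p2, d2 = pixel_ray(cam2, (-0.4, 0.05))
point = triangulate(p1, d1, p2, d2)   # -> approximately (0.2, 0.1, 2.0)
```

With noisy measurements the two rays do not intersect exactly, which is why the midpoint of their closest points (rather than an exact intersection) is a common choice for this step.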
Abstract
A method of combining 2D images into a 3D image includes providing a coordinate measurement device and a triangulation scanner having an integral camera associated therewith, the scanner being separate from the coordinate measurement device. In a first instance, the coordinate measurement device determines the position and orientation of the scanner and the integral camera captures a first 2D image. In a second instance, the scanner is moved, the coordinate measurement device determines the position and orientation of the scanner, and the integral camera captures a second 2D image. A cardinal point common to the first and second images is found and is used, together with the first and second images and the positions and orientations of the scanner in the first and second instances, to create the 3D image.
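The coordinate measurement device locates the retroreflector from one absolute distance and two measured angles. A sketch of that conversion follows, assuming a time-of-flight distance meter and a conventional spherical-to-Cartesian mapping; the constant for the speed of light in air and the function names are illustrative assumptions, not the patent's implementation.

```python
import math

# Approximate speed of light in standard air (m/s); illustrative value
# (vacuum value divided by a refractive index of about 1.000293).
C_AIR = 299704645.0

def distance_from_round_trip(t_round_trip):
    """Absolute distance from the round-trip time of the reflected beam."""
    return 0.5 * C_AIR * t_round_trip

def device_frame_xyz(dist, zenith, azimuth):
    """3D coordinates in the device frame from one distance and the two
    measured angles of rotation, via a spherical-to-Cartesian mapping."""
    x = dist * math.sin(zenith) * math.cos(azimuth)
    y = dist * math.sin(zenith) * math.sin(azimuth)
    z = dist * math.cos(zenith)
    return (x, y, z)

d = distance_from_round_trip(20.0e-9)        # 20 ns round trip -> about 3 m
xyz = device_frame_xyz(d, math.pi / 2, 0.0)  # beam aimed along the device x axis
```

Repeating this measurement at the first and second scanner positions yields the two sets of 3D coordinates that the triangulation step depends on.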
20 Claims
1. A method of combining a plurality of two-dimensional (2D) images into a three-dimensional (3D) image of an object, the method comprising steps of:
providing a six-degree-of-freedom (six-DOF) triangulation scanner having a retroreflector and an integral augmented reality (AR) camera;
providing a coordinate measurement device having a device frame of reference, the device including a distance meter and a processor, the device operable to direct a beam of light in a direction and measure a first angle of rotation about a first axis and a second angle of rotation about a second axis, the distance meter operable to measure a distance from the device to the retroreflector based at least in part on a portion of the beam of light being reflected by the retroreflector and received by an optical detector and on a speed of light in air, the processor configured to determine, in the device frame of reference, 3D coordinates of the triangulation scanner;
determining with the device a first set of 3D coordinates of the triangulation scanner at a first position;
forming a first 2D image with the camera at the first position;
moving the triangulation scanner to a second position;
determining with the device a second set of 3D coordinates of the triangulation scanner at the second position;
forming a second 2D image with the camera at the second position;
determining a first cardinal point in common between the first and second 2D images, the first cardinal point having a first location on the first 2D image and a second location on the second 2D image;
determining 3D coordinates of the first cardinal point in a first frame of reference based at least in part on the first set of 3D coordinates, the second set of 3D coordinates, the first location, and the second location;
creating the 3D image as a first composite 3D image from the first 2D image and the second 2D image based at least in part on the first 2D image, the second 2D image, and the 3D coordinates of the first cardinal point in the first frame of reference; and
storing in a memory the first composite 3D image.
- Dependent Claims (3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13)
2. The method of claim 1, wherein the triangulation scanner comprises one of a laser line probe and a structured area scanner.
14. A system for combining a plurality of two-dimensional (2D) images into a three-dimensional (3D) image of an object, the system comprising:
a six-degree-of-freedom (six-DOF) triangulation scanner having a retroreflector and an integral augmented reality (AR) camera;
a coordinate measurement device having a device frame of reference, the device including a distance meter and a processor, the device operable to direct a beam of light in a direction and measure a first angle of rotation about a first axis and a second angle of rotation about a second axis, the distance meter operable to measure a distance from the device to the retroreflector based at least in part on a portion of the beam of light being reflected by the retroreflector and received by an optical detector and on a speed of light in air, the processor configured to determine, in the device frame of reference, 3D coordinates of the triangulation scanner;
one or more processors responsive to executable computer instructions, the one or more processors electrically coupled to the triangulation scanner and the device, the executable computer instructions comprising:
determining with the device a first set of 3D coordinates of the triangulation scanner at a first position;
forming a first 2D image with the camera at the first position;
moving the triangulation scanner to a second position;
determining with the device a second set of 3D coordinates of the triangulation scanner at the second position;
forming a second 2D image with the camera at the second position;
determining a first cardinal point in common between the first and second 2D images, the first cardinal point having a first location on the first 2D image and a second location on the second 2D image;
determining 3D coordinates of the first cardinal point in a first frame of reference based at least in part on the first set of 3D coordinates, the second set of 3D coordinates, the first location, and the second location;
creating the 3D image as a first composite 3D image from the first 2D image and the second 2D image based at least in part on the first 2D image, the second 2D image, and the 3D coordinates of the first cardinal point in the first frame of reference; and
storing in a memory the first composite 3D image.
- Dependent Claims (15, 16, 17, 18, 19, 20)
Specification