Depth Mapping Based on Pattern Matching and Stereoscopic Information
3 Assignments
0 Petitions
Abstract
A method for depth mapping includes projecting a pattern of optical radiation onto an object. A first image of the pattern on the object is captured using a first image sensor, and this image is processed to generate pattern-based depth data with respect to the object. A second image of the object is captured using a second image sensor, and the second image is processed together with another image to generate stereoscopic depth data with respect to the object. The pattern-based depth data is combined with the stereoscopic depth data to create a depth map of the object.
383 Citations
46 Claims
1. A method for depth mapping, comprising:
projecting a pattern of optical radiation onto an object;
capturing a first image of the pattern on the object using a first image sensor, and processing the first image to generate pattern-based depth data with respect to the object;
capturing a second image of the object using a second image sensor, and processing the second image together with another image to generate stereoscopic depth data with respect to the object; and
combining the pattern-based depth data with the stereoscopic depth data to create a depth map of the object.
Dependent claims: 2–12
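The claim leaves open how the two kinds of depth data are combined. One plausible reading, sketched below in Python with NumPy, is a confidence-weighted average in which each source fills gaps left by the other; the function name and weighting scheme are illustrative assumptions, not the patented method.

```python
import numpy as np

def fuse_depth(pattern_depth, stereo_depth, pattern_conf, stereo_conf):
    """Hypothetical fusion: confidence-weighted average of the two depth
    sources; where one source has no estimate (NaN), the other wins."""
    # zero out the confidence of missing (NaN) estimates
    pc = np.where(np.isnan(pattern_depth), 0.0, pattern_conf)
    sc = np.where(np.isnan(stereo_depth), 0.0, stereo_conf)
    pd = np.nan_to_num(pattern_depth)   # NaN -> 0, masked by pc above
    sd = np.nan_to_num(stereo_depth)
    total = pc + sc
    # pixels with no data from either source stay NaN in the output
    return np.where(total > 0,
                    (pc * pd + sc * sd) / np.maximum(total, 1e-12),
                    np.nan)
```

Where both sources agree the average changes little; where only one source has data, its value passes through unchanged.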
13. A method for depth mapping, comprising:
receiving at least one image of an object, captured by an image sensor, the image comprising multiple pixels;
processing the at least one image to generate depth data comprising multiple candidate depth coordinates for each of a plurality of the pixels;
applying a weighted voting process to the depth data in order to select one of the candidate depth coordinates at each pixel; and
outputting a depth map of the object comprising the selected one of the candidate depth coordinates at each pixel.
Dependent claims: 14–17
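Claim 13 does not define the voting scheme itself. A minimal sketch, under the assumption that a candidate's vote is its own confidence weight plus the weights of agreeing candidates at the four neighboring pixels:

```python
import numpy as np

def select_depth(candidates, weights, tol=0.1):
    """Illustrative weighted voting over per-pixel depth candidates.

    candidates, weights: arrays of shape (H, W, K) holding K candidate
    depth values per pixel and a confidence weight for each candidate.
    A candidate's vote is its own weight plus the weights of candidates
    at 4-connected neighbors that lie within `tol` of its depth value.
    """
    h, w, k = candidates.shape
    votes = weights.copy()
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        for y in range(h):
            for x in range(w):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    for i in range(k):
                        # add support from neighbor candidates near this depth
                        agree = np.abs(candidates[ny, nx]
                                       - candidates[y, x, i]) < tol
                        votes[y, x, i] += np.sum(weights[ny, nx] * agree)
    best = np.argmax(votes, axis=2)
    return np.take_along_axis(candidates, best[..., None], axis=2)[..., 0]
```

The neighborhood term lets a lower-weight candidate win when surrounding pixels corroborate its depth, which is one way a voting process can suppress isolated outliers.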
18. Apparatus for depth mapping, comprising:
an illumination subassembly, which is configured to project a pattern of optical radiation onto an object;
a first image sensor, which is configured to capture a first image of the pattern on the object;
at least a second image sensor, which is configured to capture at least a second image of the object; and
a processor, which is configured to process the first image to generate pattern-based depth data with respect to the object, to process a pair of images including at least the second image to generate stereoscopic depth data with respect to the object, and to combine the pattern-based depth data with the stereoscopic depth data to create a depth map of the object.
Dependent claims: 19–29
30. Apparatus for depth mapping, comprising:
at least one image sensor, which is configured to capture at least one image of an object, the image comprising multiple pixels; and
a processor, which is configured to process the at least one image to generate depth data comprising multiple candidate depth coordinates for each of a plurality of the pixels, to apply a weighted voting process to the depth data in order to select one of the candidate depth coordinates at each pixel, and to output a depth map of the object comprising the selected one of the candidate depth coordinates at each pixel.
Dependent claims: 31–34
35. A computer software product, comprising a computer-readable medium in which program instructions are stored, which instructions, when read by a processor, cause the processor to receive a first image of a pattern that has been projected onto an object and to receive at least a second image of the object, and to process the first image to generate pattern-based depth data with respect to the object, to process a pair of images including at least the second image to generate stereoscopic depth data with respect to the object, and to combine the pattern-based depth data with the stereoscopic depth data to create a depth map of the object.
36. A computer software product, comprising a computer-readable medium in which program instructions are stored, which instructions, when read by a processor, cause the processor to receive at least one image of an object, the image comprising multiple pixels, to process the at least one image to generate depth data comprising multiple candidate depth coordinates for each of a plurality of the pixels, to apply a weighted voting process to the depth data in order to select one of the candidate depth coordinates at each pixel, and to output a depth map of the object comprising the selected one of the candidate depth coordinates at each pixel.
37. A method for depth mapping, comprising:
capturing first and second images of an object using first and second image capture subassemblies, respectively;
comparing the first and second images in order to estimate a misalignment between the first and second image capture subassemblies;
processing the first and second images together while correcting for the misalignment so as to generate stereoscopic depth data with respect to the object; and
outputting a depth map comprising the stereoscopic depth data.
Dependent claims: 38–41
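How the misalignment is estimated and corrected is left to the specification. As a stand-in, the sketch below searches integer vertical shifts for the one minimizing the mean squared row difference between the two images, then applies that shift before stereo matching; the function names and the pure-vertical-shift model are simplifying assumptions.

```python
import numpy as np

def estimate_vertical_misalignment(img1, img2, max_shift=5):
    """Estimate a vertical (row) offset between two images by testing
    integer shifts of img2 and keeping the one that best matches img1.
    Returns the shift to apply to img2 (a crude calibration stand-in)."""
    h = img1.shape[0]
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(img2, s, axis=0)
        core = slice(max_shift, h - max_shift)  # skip wrapped border rows
        err = np.mean((img1[core] - shifted[core]) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

def correct_misalignment(img2, shift):
    """Apply the estimated row shift so the pair is row-aligned."""
    return np.roll(img2, shift, axis=0)
```

In a real system the misalignment would be a small rotation and translation refined from feature correspondences, but the search-and-correct structure is the same: estimate from the images themselves, then warp one image before computing stereoscopic disparity.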
42. Apparatus for depth mapping, comprising:
first and second image capture subassemblies, which are configured to capture respective first and second images of an object; and
a processor, which is configured to compare the first and second images in order to estimate a misalignment between the first and second image capture subassemblies, to process the first and second images together while correcting for the misalignment so as to generate stereoscopic depth data with respect to the object, and to output a depth map comprising the stereoscopic depth data.
Dependent claims: 43–46
Specification