Depth mapping based on pattern matching and stereoscopic information
Abstract
A method for depth mapping includes projecting a pattern of optical radiation onto an object. A first image of the pattern on the object is captured using a first image sensor, and this image is processed to generate pattern-based depth data with respect to the object. A second image of the object is captured using a second image sensor, and the second image is processed together with another image to generate stereoscopic depth data with respect to the object. The pattern-based depth data is combined with the stereoscopic depth data to create a depth map of the object.
167 Citations
21 Claims
1. A method for depth mapping, comprising:

projecting a pattern of optical radiation onto an object;

capturing a first image of the pattern on the object using a first image sensor, and processing the first image alone to generate pattern-based depth data with respect to the object;

capturing a second image of the object using a second image sensor, wherein the projected pattern does not appear in the second image, and processing the second image together with the first image to generate stereoscopic depth data with respect to the object; and

combining the pattern-based depth data with the stereoscopic depth data to create a depth map of the object,

wherein combining the pattern-based depth data with the stereoscopic depth data comprises computing respective measures of confidence associated with the pattern-based depth data and stereoscopic depth data, and selecting depth coordinates from among the pattern-based and stereoscopic depth data responsively to the respective measures of confidence.
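The selection step recited in claim 1 can be sketched as follows. This is a minimal illustration under assumed data layouts (nested lists of per-pixel depths and confidences), not the patented implementation:

```python
def fuse_depth(pattern_depth, pattern_conf, stereo_depth, stereo_conf):
    """Combine pattern-based and stereoscopic depth maps by selecting,
    at each pixel, the depth coordinate whose confidence is higher."""
    return [
        [pd if pc >= sc else sd
         for pd, pc, sd, sc in zip(pd_row, pc_row, sd_row, sc_row)]
        for pd_row, pc_row, sd_row, sc_row
        in zip(pattern_depth, pattern_conf, stereo_depth, stereo_conf)
    ]

# Toy 2x2 maps: the pattern-based value wins wherever its confidence
# exceeds the stereoscopic confidence, and vice versa.
pattern_depth = [[1.0, 2.0], [3.0, 4.0]]
pattern_conf  = [[0.9, 0.1], [0.8, 0.2]]
stereo_depth  = [[5.0, 6.0], [7.0, 8.0]]
stereo_conf   = [[0.5, 0.5], [0.5, 0.5]]
fused = fuse_depth(pattern_depth, pattern_conf, stereo_depth, stereo_conf)
# fused == [[1.0, 6.0], [3.0, 8.0]]
```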
2. A method for depth mapping, comprising:

projecting a pattern of optical radiation onto an object;

capturing a first image of the pattern on the object using a first image sensor, and processing the first image alone to generate pattern-based depth data with respect to the object;

capturing a second image of the object using a second image sensor, wherein the projected pattern does not appear in the second image, and processing the second image together with the first image to generate stereoscopic depth data with respect to the object; and

combining the pattern-based depth data with the stereoscopic depth data to create a depth map of the object,

wherein combining the pattern-based depth data with the stereoscopic depth data comprises defining multiple candidate depth coordinates for each of a plurality of pixels in the depth map, and selecting one of the candidate depth coordinates at each pixel for inclusion in the depth map. (Dependent claims: 3, 4)
5. A method for depth mapping, comprising:

receiving at least one image of an object, captured by an image sensor, the image comprising multiple pixels;

processing the at least one image to generate depth data comprising multiple candidate depth coordinates and respective measures of confidence associated with the candidate depth coordinates for each of a plurality of the pixels;

applying a weighted voting process to the depth data, wherein votes for the candidate depth coordinates are weighted responsively to the respective measures of confidence, in order to select one of the candidate depth coordinates at each pixel; and

outputting a depth map of the object comprising the selected one of the candidate depth coordinates at each pixel. (Dependent claims: 6, 7, 8)
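One way the weighted voting of claim 5 might be realized for a single pixel is to bin the candidate depths and let each candidate vote for its bin with a weight equal to its confidence. The bin width and the weighted-mean readout below are illustrative assumptions, not details taken from the claims:

```python
from collections import defaultdict

def vote_depth(candidates, bin_width=0.05):
    """Weighted vote among candidate depth coordinates for one pixel.

    candidates: list of (depth, confidence) pairs.  Each candidate votes
    for its depth bin with weight equal to its confidence; the winning
    bin's confidence-weighted mean depth is returned."""
    bins = defaultdict(lambda: [0.0, 0.0])  # bin -> [sum w, sum w*depth]
    for depth, conf in candidates:
        b = round(depth / bin_width)
        bins[b][0] += conf
        bins[b][1] += conf * depth
    weight, weighted_sum = max(bins.values(), key=lambda v: v[0])
    return weighted_sum / weight

# Two lower-confidence candidates near 1.0 m jointly outvote a single
# higher-confidence candidate at 2.0 m (0.4 + 0.5 > 0.6).
selected = vote_depth([(1.00, 0.4), (1.02, 0.5), (2.00, 0.6)])
```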
9. Apparatus for depth mapping, comprising:

an illumination subassembly, which is configured to project a pattern of optical radiation onto an object;

a first image sensor, which is configured to capture a first image of the pattern on the object;

at least a second image sensor, which is configured to capture at least a second image of the object, wherein the projected pattern does not appear in the second image; and

a processor, which is configured to process the first image alone to generate pattern-based depth data with respect to the object, to process a pair of images including the first image and the second image to generate stereoscopic depth data with respect to the object, and to combine the pattern-based depth data with the stereoscopic depth data to create a depth map of the object,

wherein the processor is configured to associate respective measures of confidence with the pattern-based depth data and stereoscopic depth data, and to select depth coordinates from among the pattern-based and stereoscopic depth data responsively to the respective measures of confidence.
10. Apparatus for depth mapping, comprising:

an illumination subassembly, which is configured to project a pattern of optical radiation onto an object;

a first image sensor, which is configured to capture a first image of the pattern on the object;

at least a second image sensor, which is configured to capture at least a second image of the object, wherein the projected pattern does not appear in the second image; and

a processor, which is configured to process the first image alone to generate pattern-based depth data with respect to the object, to process a pair of images including the first image and the second image to generate stereoscopic depth data with respect to the object, and to combine the pattern-based depth data with the stereoscopic depth data to create a depth map of the object,

wherein the processor is configured to define multiple candidate depth coordinates for each of a plurality of pixels in the depth map, and to select one of the candidate depth coordinates at each pixel for inclusion in the depth map. (Dependent claims: 11, 12)
13. Apparatus for depth mapping, comprising:

at least one image sensor, which is configured to capture at least one image of an object, the image comprising multiple pixels; and

a processor, which is configured to process the at least one image to generate depth data comprising multiple candidate depth coordinates and respective measures of confidence associated with the candidate depth coordinates for each of a plurality of the pixels, to apply a weighted voting process to the depth data, wherein votes for the candidate depth coordinates are weighted responsively to the respective measures of confidence, in order to select one of the candidate depth coordinates at each pixel, and to output a depth map of the object comprising the selected one of the candidate depth coordinates at each pixel. (Dependent claims: 14, 15, 16)
17. A computer software product, comprising a non-transitory computer-readable medium in which program instructions are stored, which instructions, when read by a processor, cause the processor to receive at least one image of an object, the image comprising multiple pixels, to process the at least one image to generate depth data comprising multiple candidate depth coordinates and respective measures of confidence associated with the candidate depth coordinates for each of a plurality of the pixels, to apply a weighted voting process to the depth data, wherein votes for the candidate depth coordinates are weighted responsively to the respective measures of confidence, in order to select one of the candidate depth coordinates at each pixel, and to output a depth map of the object comprising the selected one of the candidate depth coordinates at each pixel.
18. A method for depth mapping, comprising:

capturing first and second images of an object using first and second image capture subassemblies, respectively;

comparing the first and second images in order to estimate a misalignment between the first and second image capture subassemblies;

processing the first and second images together while correcting for the misalignment so as to generate stereoscopic depth data with respect to the object; and

outputting a depth map comprising the stereoscopic depth data,

wherein comparing the first and second images comprises selecting pixels in a first depth map responsively to the depth data, collecting statistics with respect to the selected pixels in subsequent images captured by the first and second image capture subassemblies, and applying the statistics in updating the estimate of the misalignment for use in creating a second, subsequent depth map.
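The calibration loop of claim 18 — selecting reliable pixels, collecting statistics over subsequent frames, and updating the misalignment estimate for the next depth map — might look like this in outline. The exponential-moving-average update and the residual-based statistics are illustrative assumptions, not details taken from the patent:

```python
class MisalignmentTracker:
    """Running estimate of a residual shift between two image capture
    subassemblies, refined from statistics collected frame to frame."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha     # smoothing factor for each update
        self.estimate = 0.0    # current misalignment estimate (pixels)

    def update(self, residuals):
        """residuals: disparity residuals measured at pixels selected
        from the previous depth map.  Their mean nudges the estimate,
        which is then applied when creating the next depth map."""
        if residuals:
            mean = sum(residuals) / len(residuals)
            self.estimate += self.alpha * mean
        return self.estimate

tracker = MisalignmentTracker(alpha=0.5)
est1 = tracker.update([2.0, 2.0, 2.0])  # first batch of statistics
est2 = tracker.update([1.0, 1.0])       # later frames refine the estimate
```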
19. A method for depth mapping, comprising:

capturing first and second images of an object using first and second image capture subassemblies, respectively;

comparing the first and second images in order to estimate a misalignment between the first and second image capture subassemblies;

processing the first and second images together while correcting for the misalignment so as to generate stereoscopic depth data with respect to the object; and

outputting a depth map comprising the stereoscopic depth data,

wherein comparing the first and second images comprises estimating a shift between the first and second images, and wherein correcting the misalignment comprises applying corrected shift values x_nom in generating the depth data, incorporating a correction dx_nom given by a formula.
20. Apparatus for depth mapping, comprising:

first and second image capture subassemblies, which are configured to capture respective first and second images of an object; and

a processor, which is configured to compare the first and second images in order to estimate a misalignment between the first and second image capture subassemblies, to process the first and second images together while correcting for the misalignment so as to generate stereoscopic depth data with respect to the object, and to output a depth map comprising the stereoscopic depth data,

wherein the processor is configured to select pixels in a first depth map responsively to the depth data, to collect statistics with respect to the selected pixels in subsequent images captured by the first and second image capture subassemblies, and to apply the statistics in updating the estimate of the misalignment for use in creating a second, subsequent depth map.
21. Apparatus for depth mapping, comprising:

first and second image capture subassemblies, which are configured to capture respective first and second images of an object; and

a processor, which is configured to compare the first and second images in order to estimate a misalignment between the first and second image capture subassemblies, to process the first and second images together while correcting for the misalignment so as to generate stereoscopic depth data with respect to the object, and to output a depth map comprising the stereoscopic depth data,

wherein the misalignment estimated by the processor comprises a shift between the first and second images, and wherein the processor is configured to apply corrected shift values x_nom in generating the depth data, incorporating a correction dx_nom given by a formula.
Specification