Optical 3D digitizer with enlarged non-ambiguity zone
Abstract
An optical 3D digitizer with an enlarged non-ambiguity zone is disclosed, comprising a structured light projector for projecting a fringe pattern over a target area, the fringe pattern having a shiftable position over the target area. First and second cameras having overlapping measurement fields are directed toward the target area and positioned with respect to the projector to define distinct triangulation planes therewith. The second camera has a larger non-ambiguity depth than the first camera. A computer evaluates a same set of camera-projector related functions from images captured by the cameras including the projected pattern at shifted positions, builds low depth resolution and degenerated 3D models from the camera-projector related functions evaluated with respect to the second and first cameras respectively, determines chromatic texture from the images, and builds a complete textured 3D model from data corresponding between the low depth resolution and degenerated 3D models within a tolerance range.
27 Claims
1. An optical 3D digitizer with an enlarged non-ambiguity zone, comprising:
at least one structured light projector for projecting a fringe pattern over a target area, the fringe pattern having a shiftable position over the target area;
a first camera directed toward the target area and positioned with respect to said at least one structured light projector to define a first triangulation plane therewith;
a second camera directed toward the target area and positioned with respect to said at least one structured light projector to define a second triangulation plane therewith, the second triangulation plane being distinct from the first triangulation plane, the first and second cameras having at least partially overlapping measurement fields, the second camera having a larger non-ambiguity depth than the first camera; and
a computer means connected to the cameras, for performing an image processing of images captured by the cameras, the image processing including evaluating a same set of camera-projector related functions from images including the pattern projected by said at least one structured light projector at shifted positions as captured by the cameras, building a low depth resolution 3D model from the camera-projector related functions evaluated with respect to the second camera, building a degenerated 3D model from the camera-projector related functions evaluated with respect to the first camera, determining chromatic texture from the images captured by the cameras, and building a complete textured 3D model from data corresponding between the low depth resolution and degenerated 3D models within a tolerance range. (Dependent claims 2-10 not shown.)
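The claim does not define the "camera-projector related functions"; in fringe-projection digitizers a standard instance is the wrapped phase recovered by N-step phase shifting of the projected fringe pattern. A minimal sketch of that estimator (NumPy; the function name and the cosine fringe model I_k = A + B·cos(φ + 2πk/N) are assumptions, not the patent's notation):

```python
import numpy as np

def wrapped_phase(frames):
    """Recover wrapped phase from N equally phase-shifted fringe images.

    frames: array of shape (N, H, W) holding the pattern captured at N
    shifted positions, each shifted by 2*pi/N. Assumes the fringe model
    I_k = A + B*cos(phi + 2*pi*k/N) at every pixel; returns phi in
    (-pi, pi]. This is the classic N-step phase-shifting estimator --
    the patent's actual camera-projector related functions may differ.
    """
    frames = np.asarray(frames, dtype=float)
    n = frames.shape[0]
    delta = 2 * np.pi * np.arange(n).reshape(n, 1, 1) / n
    num = np.sum(frames * np.sin(delta), axis=0)  # ~ -B*(N/2)*sin(phi)
    den = np.sum(frames * np.cos(delta), axis=0)  # ~  B*(N/2)*cos(phi)
    return -np.arctan2(num, den)
```

Because the phase is recovered modulo 2π, each camera's measurement is ambiguous over one fringe period — which is exactly why the claim pairs a fine-resolution camera with a second camera having a larger non-ambiguity depth.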
11. An optical 3D digitizing method, comprising:
controllably projecting a fringe pattern over a target area using at least one structured light projector, the fringe pattern having a shiftable position over the target area;
positioning a first camera directed toward the target area with respect to said at least one structured light projector to define a first triangulation plane therewith;
positioning a second camera directed toward the target area with respect to said at least one structured light projector to define a second triangulation plane therewith, the second triangulation plane being distinct from the first triangulation plane, the first and second cameras having at least partially overlapping measurement fields, the second camera having a larger non-ambiguity depth than the first camera; and
performing an image processing of images captured by the cameras, the image processing including evaluating a same set of camera-projector related functions from images including the pattern projected by said at least one structured light projector at shifted positions as captured by the cameras, building a low depth resolution 3D model from the camera-projector related functions evaluated with respect to the second camera, building a degenerated 3D model from the camera-projector related functions evaluated with respect to the first camera, determining chromatic texture from the images captured by the cameras, and building a complete textured 3D model from data corresponding between the low depth resolution and degenerated 3D models within a tolerance range. (Dependent claims 12-20 not shown.)
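Combining the two models — keeping degenerated (fine but ambiguous) data that "corresponds" to the low depth resolution (coarse but absolute) model "within a tolerance range" — amounts to fringe-order selection. A hedged sketch in the depth domain (the variable names and the modulo formulation are illustrative; the claim states the idea only in terms of the two 3D models):

```python
import numpy as np

def disambiguate(z_coarse, z_fine_wrapped, period, tol):
    """Resolve the ambiguity of a fine measurement with a coarse one.

    z_coarse: absolute depth per pixel from the camera with the large
    non-ambiguity depth (low depth resolution model).
    z_fine_wrapped: precise depth known only modulo `period`, the small
    non-ambiguity depth of the first camera (degenerated model).
    Returns the unwrapped fine depth where the two models agree within
    `tol`, and NaN where they do not (data outside the tolerance range
    is rejected). Illustrative formulation, not the patent's equations.
    """
    z_coarse = np.asarray(z_coarse, dtype=float)
    z_fine = np.asarray(z_fine_wrapped, dtype=float)
    order = np.round((z_coarse - z_fine) / period)  # integer fringe order
    z = z_fine + order * period                     # candidate absolute depth
    ok = np.abs(z - z_coarse) <= tol                # tolerance-range check
    return np.where(ok, z, np.nan)
```

The result keeps the depth resolution of the first camera over the enlarged non-ambiguity zone of the second — the stated purpose of the dual-camera arrangement.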
21. A computer apparatus for performing an image processing of images captured by first and second cameras, the second camera having a larger non-ambiguity depth than the first camera, comprising:
means for evaluating a same set of camera-projector related functions from images captured by the cameras, at least some of the images including a pattern projected at shifted positions;
means for building a low depth resolution 3D model from the camera-projector related functions evaluated with respect to the second camera;
means for building a degenerated 3D model from the camera-projector related functions evaluated with respect to the first camera;
means for determining chromatic texture from the images captured by the cameras; and
means for building a complete textured 3D model from data corresponding between the low depth resolution and degenerated 3D models within a tolerance range.
22. A computer readable medium having recorded thereon statements and instructions for execution by a computer to perform an image processing of images captured by first and second cameras directed toward a target area, the second camera having a larger non-ambiguity depth than the first camera, the image processing including evaluating a same set of camera-projector related functions from the images captured by the cameras, at least some of the images including a pattern projected at shifted positions, building a low depth resolution 3D model from the camera-projector related functions evaluated with respect to the second camera, building a degenerated 3D model from the camera-projector related functions evaluated with respect to the first camera, determining chromatic texture from the images captured by the cameras, and building a complete textured 3D model from data corresponding between the low depth resolution and degenerated 3D models within a tolerance range.
23. A computer program product, comprising a memory having computer readable code embodied therein, for execution by a CPU, for performing an image processing of images captured by first and second cameras directed toward a target area, the second camera having a larger non-ambiguity depth than the first camera, said code comprising:
code means for evaluating a same set of camera-projector related functions from the images captured by the cameras, at least some of the images including a pattern projected at shifted positions;
code means for building a low depth resolution 3D model from the camera-projector related functions evaluated with respect to the second camera;
code means for building a degenerated 3D model from the camera-projector related functions evaluated with respect to the first camera;
code means for determining chromatic texture from the images captured by the cameras; and
code means for building a complete textured 3D model from data corresponding between the low depth resolution and degenerated 3D models within a tolerance range.
24. A carrier wave embodying a computer data signal representing sequences of statements and instructions which, when executed by a processor, cause the processor to perform an image processing of images captured by first and second cameras directed toward a target area, the second camera having a larger non-ambiguity depth than the first camera, the statements and instructions comprising:
evaluating a same set of camera-projector related functions from the images captured by the cameras, at least some of the images including a pattern projected at shifted positions;
building a low depth resolution 3D model from the camera-projector related functions evaluated with respect to the second camera;
building a degenerated 3D model from the camera-projector related functions evaluated with respect to the first camera;
determining chromatic texture from the images captured by the cameras; and
building a complete textured 3D model from data corresponding between the low depth resolution and degenerated 3D models within a tolerance range.
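For "determining chromatic texture from the images captured by the cameras", one common approach — an assumption here, since the claims do not spell out the method — is to average the phase-shifted frames: over a full fringe period the sinusoidal term cancels, leaving a fringe-free color image that can texture the 3D model:

```python
import numpy as np

def chromatic_texture(frames):
    """Estimate a fringe-free color texture from phase-shifted captures.

    frames: (N, H, W, 3) color images of the scene under the fringe
    pattern at N evenly shifted positions. With the fringe modeled as
    an additive sinusoid, the cosine term sums to zero over one full
    period, so the per-pixel mean is the ambient + mean-illumination
    color. One common choice, not mandated by the patent.
    """
    frames = np.asarray(frames, dtype=float)
    return frames.mean(axis=0)
```

This reuses the very images already captured for phase evaluation, so texture and geometry are pixel-registered by construction.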
25. An optical 3D digitizing method, comprising:
controllably projecting a fringe pattern having a shiftable position over a target area;
capturing images obtained by high depth resolution sensing and low depth resolution sensing from respective measurement fields at least partially overlapping each other over the target area;
determining absolute pixel 3D positions in the images obtained by low depth resolution sensing and high depth resolution sensing as a function of relations depending on the fringe pattern in the captured images and correspondence between the absolute pixel 3D positions in the images;
extracting chromatic texture from the captured images; and
building a complete textured 3D model from the absolute pixel 3D positions and the chromatic texture.
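"Determining absolute pixel 3D positions" from the decoded fringe relations is, geometrically, triangulation: the absolute (unwrapped) phase at a camera pixel selects one projected light plane, and the pixel's viewing ray is intersected with that plane. A generic pinhole-style sketch (the calibration model and names are assumptions, not taken from the patent):

```python
import numpy as np

def triangulate_point(ray_origin, ray_dir, plane_point, plane_normal):
    """Absolute pixel 3D position by ray/plane intersection.

    ray_origin, ray_dir: the camera pixel's viewing ray (origin at the
    camera center, direction from calibration).
    plane_point, plane_normal: the projector light plane identified by
    the absolute phase decoded at that pixel.
    Returns the 3D intersection point. Illustrative geometry only; the
    patent's calibration and coordinate conventions are not specified
    in the claims.
    """
    ray_origin = np.asarray(ray_origin, dtype=float)
    ray_dir = np.asarray(ray_dir, dtype=float)
    plane_point = np.asarray(plane_point, dtype=float)
    plane_normal = np.asarray(plane_normal, dtype=float)
    # Solve (ray_origin + t*ray_dir - plane_point) . plane_normal = 0
    t = np.dot(plane_point - ray_origin, plane_normal) / np.dot(ray_dir, plane_normal)
    return ray_origin + t * ray_dir
```

The triangulation-plane language of the apparatus claims corresponds to this geometry: each camera-projector pair defines its own plane, and the two distinct planes give the two measurements being fused.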
26. An optical 3D digitizer with an enlarged non-ambiguity zone, comprising:
at least one structured light projector projecting a fringe pattern over a target area, the fringe pattern having a shiftable position over the target area;
a first camera directed toward the target area and positioned with respect to said at least one structured light projector to define a first triangulation plane therewith;
a second camera directed toward the target area and positioned with respect to said at least one structured light projector to define a second triangulation plane therewith, the second triangulation plane being distinct from the first triangulation plane, the first and second cameras having at least partially overlapping measurement fields, the second camera having a larger non-ambiguity depth than the first camera; and
a computing device, connected to the cameras, being programmatically configured to i) evaluate a same set of camera-projector related functions from images including the pattern projected by said at least one structured light projector at shifted positions as captured by the cameras, ii) build a low depth resolution 3D model from the camera-projector related functions evaluated with respect to the second camera, iii) build a degenerated 3D model from the camera-projector related functions evaluated with respect to the first camera, iv) determine chromatic texture from the images captured by the cameras, and v) build a complete textured 3D model from data corresponding between the low depth resolution and degenerated 3D models within a tolerance range.
27. A computing device for performing image processing of images captured by first and second cameras, the second camera having a larger non-ambiguity depth than the first camera, wherein the computing device includes a software program which when executed in the computing device i) evaluates a same set of camera-projector related functions from images captured by the cameras, at least some of the images including a pattern projected at shifted positions, ii) builds a low depth resolution 3D model from the camera-projector related functions evaluated with respect to the second camera, iii) builds a degenerated 3D model from the camera-projector related functions evaluated with respect to the first camera, iv) determines chromatic texture from the images captured by the cameras, and v) builds a complete textured 3D model from data corresponding between the low depth resolution and degenerated 3D models within a tolerance range.