Depth sensing camera systems and methods
First Claim
1. A method for deriving depth information associated with pixels of an image sensed by a digital camera, each pixel corresponding to a surface element of a subject scene, the method comprising the steps of:
(a) generating a first intensity map of the subject scene while substantially the entirety of the subject scene is illuminated by a first light source positioned in a first known geometrical relation to the digital camera;
(b) generating a second intensity map of the subject scene while substantially the entirety of the subject scene is illuminated by a second light source positioned in a second known geometrical relation to the digital camera;
(c) generating a third intensity map of the subject scene while substantially the entirety of the subject scene is illuminated by a third light source positioned in a third known geometrical relation to the digital camera;
(d) generating a fourth intensity map of the subject scene while substantially the entirety of the subject scene is illuminated by a fourth light source positioned in a fourth known geometrical relation to the digital camera; and
(e) processing the measured intensity values of corresponding pixels of said first, second, third and fourth intensity maps to derive information relating to the depth of the corresponding surface element of the subject scene, said processing being based upon an inverse square relation between distance and intensity.
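The patent specifies only that step (e) exploits the inverse-square relation between distance and intensity; it does not disclose a particular solver. As a minimal illustrative sketch (not the patented processing), the following assumes a frontal Lambertian patch on the camera's optical axis and uses just two of the four sources: taking the ratio of two measured intensities cancels the unknown albedo and source power, leaving a function of depth alone that can be inverted by bisection. All positions and constants are invented for the toy example.

```python
import math

# Toy geometry (illustrative, not from the patent): camera at the origin
# looking along +z; two of the four sources, offset in the x-y plane.
S1 = (0.5, 0.0)            # (x, y) offset of the first source
S2 = (0.1, 0.0)            # (x, y) offset of the second source
ALBEDO, POWER = 0.7, 1.0   # ALBEDO is unknown to the solver

def dist(z, s):
    """Distance from a scene element at (0, 0, z) to source s."""
    return math.sqrt(s[0] ** 2 + s[1] ** 2 + z ** 2)

def intensity(z, s):
    """Irradiance of a frontal Lambertian patch at depth z:
    I = albedo * P * cos(theta) / r**2, with cos(theta) = z / r here."""
    r = dist(z, s)
    return ALBEDO * POWER * z / r ** 3

def recover_depth(i1, i2, z_lo=0.1, z_hi=10.0, iters=60):
    """Invert the measured ratio I1/I2 by bisection.  The albedo/power
    factor cancels in the ratio, so only the inverse-square geometry
    of the two known source positions remains."""
    target = i1 / i2
    f = lambda z: (dist(z, S2) / dist(z, S1)) ** 3 - target
    for _ in range(iters):
        mid = 0.5 * (z_lo + z_hi)
        if f(z_lo) * f(mid) <= 0:
            z_hi = mid
        else:
            z_lo = mid
    return 0.5 * (z_lo + z_hi)

true_z = 2.0
i1, i2 = intensity(true_z, S1), intensity(true_z, S2)
print(round(recover_depth(i1, i2), 4))  # recovers the depth used above
```

The full method uses four sources because a general surface element carries four per-pixel unknowns (depth, two normal components, and albedo); the two-source sketch above works only because the normal is assumed known.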
Abstract
This invention relates to a method and apparatus for sensing a three-dimensional (depth and illuminance) image of a scene. It is based on the inverse-square law relating the incident brightness on an area illuminated by a point light source to its distance from that source. In the preferred embodiment of the invention the scene is sequentially illuminated by more than one point light source, each at a pre-calibrated location in the reference coordinate system. The resulting reflections from the field of view are sensed by a stationary digital camera that maps each scene element into a corresponding image pixel, providing a two-dimensional brightness map containing the photometric value of each image pixel for each specific illumination. Each pixel's photometric value depends on the illumination incident on the corresponding scene element, which is in turn determined by the element's inherent Lambertian reflectance coefficient at the illumination wavelength, the element's orientation relative to the coordinate system, and the element's illuminance as determined by the point-source brightness and the distance separating the point source from the scene element. Each brightness map differs from the next because of the differing point-source locations. By manipulating the brightness maps, the spatial location of each scene element relative to the fixed point sources is determined, yielding a depth image as well as a brightness image.
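The shading model the abstract describes can be sketched as a short forward simulation: one scene element's photometric value is its Lambertian reflectance times the cosine of the incidence angle times the source power, attenuated by the inverse square of the source-to-element distance. All numbers below are hypothetical; firing four pre-calibrated sources in turn yields four distinct values for the same element, i.e. four distinct brightness maps.

```python
import math

# Four hypothetical point-source locations, known in the camera frame.
SOURCES = [(0.3, 0.0, 0.0), (-0.3, 0.0, 0.0),
           (0.0, 0.3, 0.0), (0.0, -0.3, 0.0)]
POWER = 1.0  # assumed identical radiant power for every source

def pixel_value(element_pos, normal, albedo, src):
    """Lambertian shading with inverse-square falloff from a point source:
    value = albedo * P * max(0, cos(incidence)) / r**2."""
    d = [s - p for s, p in zip(src, element_pos)]
    r = math.sqrt(sum(c * c for c in d))
    cos_i = max(0.0, sum(n * c for n, c in zip(normal, d)) / r)
    return albedo * POWER * cos_i / r ** 2

# One scene element, imaged once per illumination:
element = (0.1, 0.05, 1.5)
normal = (0.0, 0.0, -1.0)   # facing back toward the camera
maps = [pixel_value(element, normal, 0.6, s) for s in SOURCES]
print(maps)  # four distinct values -> four distinct brightness maps
```

Because the element sits at different distances and angles from each source, every one of the four simulated values differs, which is exactly the per-pixel variation the processing stage exploits.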
152 Citations
13 Claims
1. A method for deriving depth information associated with pixels of an image sensed by a digital camera, each pixel corresponding to a surface element of a subject scene, the method comprising the steps of:
(a) generating a first intensity map of the subject scene while substantially the entirety of the subject scene is illuminated by a first light source positioned in a first known geometrical relation to the digital camera;
(b) generating a second intensity map of the subject scene while substantially the entirety of the subject scene is illuminated by a second light source positioned in a second known geometrical relation to the digital camera;
(c) generating a third intensity map of the subject scene while substantially the entirety of the subject scene is illuminated by a third light source positioned in a third known geometrical relation to the digital camera;
(d) generating a fourth intensity map of the subject scene while substantially the entirety of the subject scene is illuminated by a fourth light source positioned in a fourth known geometrical relation to the digital camera; and
(e) processing the measured intensity values of corresponding pixels of said first, second, third and fourth intensity maps to derive information relating to the depth of the corresponding surface element of the subject scene, said processing being based upon an inverse square relation between distance and intensity.
- View Dependent Claims (2, 3, 4, 5, 6, 7)
8. A depth sensing camera system for generating three-dimensional information about an object scene, the system comprising:
(a) a digital camera having a field of view including the object scene;
(b) a first light source positioned in a first known geometrical relation to said digital camera for illuminating substantially the entirety of the object scene;
(c) a second light source positioned in a second known geometrical relation to said digital camera for illuminating substantially the entirety of the object scene;
(d) a third light source positioned in a third known geometrical relation to said digital camera for illuminating substantially the entirety of the object scene;
(e) a fourth light source positioned in a fourth known geometrical relation to said digital camera for illuminating substantially the entirety of the object scene;
(f) a control system for sequentially activating each of said light sources such that said digital camera generates a first image of the object scene illuminated by said first light source, a second image of the object scene illuminated by said second light source, a third image of the object scene illuminated by said third light source, and a fourth image of the object scene illuminated by said fourth light source; and
(g) a processor configured to process said first, second, third and fourth images based upon an inverse square relation between distance and intensity to derive depth information relating to at least some points of the object scene.
- View Dependent Claims (9, 10, 11, 12, 13)
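The control system of element (f) amounts to a fire-one-source, grab-one-image loop. The sketch below uses invented stand-in classes (the patent does not define this interface) to show the sequencing: exactly one source is lit per exposure, producing the four images that the processor of element (g) consumes.

```python
# Stand-in hardware stubs; names and interfaces are hypothetical.
class Light:
    def __init__(self, pos):
        self.pos, self.lit = pos, False
    def on(self):
        self.lit = True
    def off(self):
        self.lit = False

class Camera:
    def grab(self, lights):
        # Stand-in for a real exposure: record which source was active.
        return [l.pos for l in lights if l.lit]

def capture_sequence(camera, lights):
    """Activate each source in turn and grab one image per source,
    yielding the four single-illuminant images the processor consumes."""
    images = []
    for light in lights:
        light.on()
        images.append(camera.grab(lights))
        light.off()
    return images

lights = [Light((0.3, 0, 0)), Light((-0.3, 0, 0)),
          Light((0, 0.3, 0)), Light((0, -0.3, 0))]
print(capture_sequence(Camera(), lights))
```

Each captured "image" here lists the single active source, confirming the mutual exclusion that the sequential-activation clause requires.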
Specification