Method and device for stereoscopic vision
First Claim
1. A stereoscopic vision device, comprising:
a first near infrared (NIR) image capture device and a second NIR image capture device each configured to capture a NIR light pattern formed in an environment of the devices by a light source configured to emit near infrared (NIR) light at a multiplicity of directions, each of the first NIR image capture device and the second NIR image capture device comprising a filter configured to transmit at least 75 percent of the NIR light in a NIR wavelength of at least 850 nanometers, and to transmit at least part of and at most 50 percent of visible light; and
a processor configured to:
determine depth information of the environment from a first image captured by the first NIR image capture device and a second image captured by the second NIR image capture device, based on the NIR light pattern depicted in the first image and the second image, wherein the depth information is determined by comparing locations of points in the NIR light pattern as captured in the first image and in the second image; and
determine color information of the environment based on the transmitted visible light, wherein the filter is configured to transmit between 10 and 40 percent of the visible light at a wavelength range of 400-650 nanometers.
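The depth determination recited above, comparing the locations of the same NIR pattern point in the first and second images, corresponds (for rectified stereo cameras) to the standard disparity-to-depth relation Z = f·B/d. The following is a minimal illustrative sketch, not part of the patent; the focal length and baseline values are assumed for the example:

```python
def depth_from_disparity(x_left, x_right, focal_px=600.0, baseline_m=0.08):
    """Depth (meters) of one matched pattern point from its pixel
    locations in the left and right images, assuming rectified cameras.

    focal_px and baseline_m are illustrative camera parameters, not
    values taken from the patent.
    """
    d = x_left - x_right  # horizontal disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity; points may be mismatched")
    return focal_px * baseline_m / d
```

For example, a pattern point at x = 320 in the first image and x = 300 in the second gives a disparity of 20 pixels and, under the assumed parameters, a depth of 2.4 meters.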
Abstract
A stereoscopic vision device, a method and a robot, the stereoscopic vision device comprising: a first image capture device and a second image capture device configured to capture a light pattern formed in an environment of the device by a light source configured to emit NIR light at a multiplicity of directions, each of the first image capture device and the second image capture device comprising a filter configured to transmit at least 75 percent of the NIR light, and to transmit at most 50 percent of visible light; and a processor configured to: determine depth information of the environment from a first image captured by the first image capture device and a second image captured by the second image capture device, based on the light pattern as depicted in the first image and the second image; and determine color information of the environment from the first image and the second image.
15 Claims
1. A stereoscopic vision device, comprising:
a first near infrared (NIR) image capture device and a second NIR image capture device each configured to capture a NIR light pattern formed in an environment of the devices by a light source configured to emit near infrared (NIR) light at a multiplicity of directions, each of the first NIR image capture device and the second NIR image capture device comprising a filter configured to transmit at least 75 percent of the NIR light in a NIR wavelength of at least 850 nanometers, and to transmit at least part of and at most 50 percent of visible light; and
a processor configured to:
determine depth information of the environment from a first image captured by the first NIR image capture device and a second image captured by the second NIR image capture device, based on the NIR light pattern depicted in the first image and the second image, wherein the depth information is determined by comparing locations of points in the NIR light pattern as captured in the first image and in the second image; and
determine color information of the environment based on the transmitted visible light, wherein the filter is configured to transmit between 10 and 40 percent of the visible light at a wavelength range of 400-650 nanometers.
(Dependent claims: 2, 3, 4, 5, 6, 7.)
8. A method for obtaining stereoscopic data of an environment, comprising:
emitting near infrared (NIR) light at a multiplicity of directions, thus forming a NIR light pattern in the environment;
obtaining a first image captured by a first NIR image capture device and a second image captured by a second NIR image capture device, the first image and the second image each depicting at least a part of the NIR light pattern, wherein each of the first NIR image capture device and the second NIR image capture device capture the first image and the second image, respectively, through a filter configured to transmit at least 75 percent of the NIR light, and to transmit at least part of and at most 50 percent of visible light;
identifying the NIR light pattern depicted in each of the first image and the second image;
registering the first image and the second image in accordance with the NIR light pattern;
determining depth information for each light point comprised in the NIR light pattern depicted in the registered first and second images, wherein the depth information is determined by comparing locations of points in the NIR light pattern as depicted in the first image and in the second image; and
determining color information of the environment based on the transmitted visible light, wherein the filter is configured to transmit between 10 and 40 percent of the visible light at a wavelength range of 400-650 nanometers.
(Dependent claims: 9, 10, 11, 12.)
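The per-point depth step of the method, applied after the two images are registered so that the same pattern point has the same index in both images, can be sketched as follows. This is an illustrative reduction for rectified cameras, not the patent's implementation; the focal length and baseline are assumed values:

```python
def stereo_depths(points_left, points_right, focal_px=600.0, baseline_m=0.08):
    """Depth (meters) for each NIR pattern point matched across the two
    registered images.

    points_left, points_right: lists of (x, y) pixel locations, already
    registered so index i refers to the same physical light point.
    focal_px and baseline_m are illustrative, assumed camera parameters.
    """
    depths = []
    for (xl, _yl), (xr, _yr) in zip(points_left, points_right):
        d = xl - xr  # disparity in pixels
        # Non-positive disparity indicates a mismatched point pair.
        depths.append(focal_px * baseline_m / d if d > 0 else float("nan"))
    return depths
```

With the assumed parameters, two matched points with disparities of 20 and 30 pixels yield depths of 2.4 m and 1.6 m, respectively.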
13. A robot comprising:
a light source configured to emit a pattern of near infrared (NIR) light at a multiplicity of directions, thus forming a NIR light pattern in an environment of the robot;
a first NIR image capture device and a second NIR image capture device, each comprising a filter configured to transmit at least 75 percent of the NIR light, and to transmit at least part of and at most 50 percent of visible light, the first NIR image capture device and the second NIR image capture device configured to each capture at least a part of the NIR light pattern, wherein the filter is configured to transmit between 10 and 40 percent of the visible light at a wavelength range of 400-650 nanometers; and
a processor configured to:
determine depth information of the environment from a first image captured by the first NIR image capture device and a second image captured by the second NIR image capture device, based on the at least part of the NIR light pattern as depicted in each of the first image and the second image, wherein the depth information is determined by comparing locations of points in the NIR light pattern as depicted in the first image and in the second image; and
determine color information of at least one object from the first image and the second image based on the transmitted visible light;
a steering mechanism for changing a position of the robot in accordance with the at least one object; and
a motor for activating the steering mechanism.
(Dependent claims: 14, 15.)
Specification