Locating object using stereo vision
Abstract
One embodiment of the invention determines position of an object with respect to an original location. A first relative position of the first object with respect to a second object in a first scene is computed using a current first image of the first object and a second image of the second object provided by at least first and second image sensors. The current first image in the first scene is matched with a next first image of the first object in a second scene. The second scene contains a third object having a third image. A second relative position of the third object with respect to the second object in the second scene is computed using the third image and the next first image.
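The chaining the abstract describes can be sketched numerically. Everything below is illustrative (the positions, variable names, and the propagation step are assumptions, not the patent's implementation): relative positions are modeled as 3D vectors, and the second (reference) object's position is carried into the second scene through the re-identified first object.

```python
import numpy as np

# Hypothetical 3D positions recovered by stereo triangulation; all values
# are illustrative, not from the patent.
p1_scene1 = np.array([1.0, 0.0, 5.0])   # first object, first scene
p2_scene1 = np.array([0.0, 0.0, 4.0])   # second (reference) object, first scene

# First relative position: first object with respect to the second object.
r12 = p1_scene1 - p2_scene1

p1_scene2 = np.array([1.5, 0.0, 5.5])   # first object, matched in the second scene
p3_scene2 = np.array([3.0, 1.0, 6.0])   # third object, second scene

# The reference (second) object need not be visible in the second scene; its
# position is carried forward through the matched first object.
p2_est = p1_scene2 - r12

# Second relative position: third object with respect to the second object.
r32 = p3_scene2 - p2_est
```

The design point is that the original reference survives across scenes even after it leaves the field of view, because each newly matched object inherits a position expressed relative to it.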
60 Citations
20 Claims
1. A method comprising:
computing a first relative position of a first object with respect to a second object in a first scene using a current first image of the first object and a second image of the second object provided by at least first and second image sensors;
matching the current first image in the first scene with a next first image of the first object in a second scene, the second scene containing a third object having a third image; and
computing a second relative position of the third object with respect to the second object in the second scene using the third image and the next first image.
2. The method of claim 1 further comprising:
computing a third relative position of the third object with respect to the first object using the first and second relative positions.
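Modeling a relative position "A with respect to B" as the vector p_A − p_B (an assumption for illustration; the patent does not prescribe a representation), the third relative position follows from the first two by subtraction:

```python
import numpy as np

# Illustrative values; "A with respect to B" is modeled as p_A - p_B.
r12 = np.array([1.0, 0.0, 1.0])   # first relative position: first object w.r.t. second
r32 = np.array([2.5, 1.0, 1.5])   # second relative position: third object w.r.t. second

# Third relative position: third object w.r.t. the first object.
r31 = r32 - r12                   # (p3 - p2) - (p1 - p2) = p3 - p1
```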
3. The method of claim 1 wherein computing the first relative position comprises:
determining a first correspondence between the current first image and the second image; and
computing a current first depth and a second depth of the first and second objects, respectively, using the first correspondence.
4. The method of claim 1 wherein matching comprises:
determining a current region around the current first image in the first scene;
determining a plurality of candidate objects in the second scene;
determining a plurality of candidate regions around the candidate images of candidate objects;
computing similarity measures between the current region and the candidate regions; and
selecting the next first image to have the highest similarity measure.
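The matching steps above can be sketched with normalized cross-correlation as the similarity measure. The claim does not fix a particular measure; NCC, the region size, and all names here are assumptions:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized image regions."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def select_next_first_image(current_region, candidate_regions):
    """Return the index of the candidate region most similar to the current one."""
    scores = [ncc(current_region, c) for c in candidate_regions]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
current = rng.random((8, 8))                        # region around the current first image
candidates = [rng.random((8, 8)),
              current + 0.05 * rng.random((8, 8)),  # near-duplicate of the current region
              rng.random((8, 8))]
best = select_next_first_image(current, candidates)
```

With these inputs the near-duplicate candidate scores close to 1 while the unrelated regions score near 0, so index 1 is selected.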
5. The method of claim 1 wherein computing the second relative position comprises:
determining a second correspondence between the third image and the next first image; and
computing a next first depth and a third depth of the first and third objects, respectively, using the second correspondence.
6. The method of claim 1 further comprises:
refining the first and second relative positions using at least one of a rigidity constraint and a trajectory constraint imposed on the first, second, and third objects.
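One simple way to impose a rigidity constraint is to adjust the two relative-position estimates so the implied first-to-third separation matches a known rigid distance. This is only a sketch: the patent does not specify the refinement algorithm, and the even split of the correction is an assumption.

```python
import numpy as np

def refine_with_rigidity(r12, r32, rigid_dist_13):
    """Adjust the first and second relative positions so that the implied
    distance between the first and third objects equals rigid_dist_13."""
    r31 = r32 - r12                    # implied third-object-w.r.t.-first vector
    d = np.linalg.norm(r31)
    if d == 0.0:
        return r12, r32                # degenerate case: nothing to scale
    correction = 0.5 * (rigid_dist_13 / d - 1.0) * r31
    # Split the correction evenly between the two estimates.
    return r12 - correction, r32 + correction

r12, r32 = refine_with_rigidity(np.array([1.0, 0.0, 0.0]),
                                np.array([3.0, 0.0, 0.0]),
                                rigid_dist_13=4.0)
```

A trajectory constraint could be imposed analogously, e.g. by penalizing estimates that deviate from a smooth motion model between scenes.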
7. The method of claim 3 wherein determining the first correspondence comprises:
pre-processing left and right first scene images of the first scene, the left and right first scene images being provided by first and second image sensors;
extracting a left current first image and a right current first image from the left and right first scene images, the left and right current first images corresponding to the current first image of the first object;
extracting a left second image and a right second image from the left and right first scene images, the left and right second images corresponding to the second image of the second object; and
matching the left current first image and the left second image to the right current first image and the right second image, respectively.
8. The method of claim 5 wherein determining the second correspondence comprises:
pre-processing left and right second scene images of the second scene, the left and right second scene images being provided by first and second image sensors;
extracting a left next first image and a right next first image from the left and right second scene images, the left and right next first images corresponding to the next first image of the first object;
extracting a left third image and a right third image from the left and right second scene images, the left and right third images corresponding to the third image of the third object; and
matching the left next first image and the left third image to the right next first image and the right third image, respectively.
9. The method of claim 7 wherein computing the current first depth and the second depth comprises:
determining at least one of a left first horizontal position, a right first horizontal position, a left first vertical position, and a right first vertical position of the left current first image and the right current first image;
determining at least one of a left second horizontal position, a right second horizontal position, a left second vertical position, and a right second vertical position of the left second image and the right second image;
calculating the current first depth using a sensor distance between the first and second image sensors, focal lengths of the image sensors, and the at least one of the left first horizontal position, the right first horizontal position, the left first vertical position, and the right first vertical position; and
calculating the second depth using the sensor distance, the focal lengths of the image sensors, and the at least one of the left second horizontal position, the right second horizontal position, the left second vertical position, and the right second vertical position.
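For rectified cameras with equal focal lengths, the depth calculations above reduce to standard stereo triangulation, Z = f·B / (x_left − x_right). This is a sketch under those assumptions; the numeric values are illustrative:

```python
def stereo_depth(x_left, x_right, sensor_distance, focal_length):
    """Depth from horizontal disparity, assuming rectified cameras with
    equal focal lengths: Z = f * B / (x_left - x_right)."""
    disparity = x_left - x_right
    if disparity <= 0.0:
        raise ValueError("expected positive disparity for a visible object")
    return focal_length * sensor_distance / disparity

# Baseline 0.1 m, focal length 700 px, disparity 14 px -> depth of 5 m.
z = stereo_depth(x_left=360.0, x_right=346.0, sensor_distance=0.1, focal_length=700.0)
```

The same function computes each of the depths in the claim (current first, second, next first, third), fed with the horizontal positions of the corresponding left and right images.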
10. The method of claim 8 wherein computing the next first depth and the third depth comprises:
determining at least one of a left first horizontal position, a right first horizontal position, a left first vertical position, and a right first vertical position of the left next first image and the right next first image;
determining at least one of a left third horizontal position, a right third horizontal position, a left third vertical position, and a right third vertical position of the left third image and the right third image;
calculating the next first depth using a sensor distance between the first and second image sensors, focal lengths of the image sensors, and the at least one of the left first horizontal position, the right first horizontal position, the left first vertical position, and the right first vertical position; and
calculating the third depth using the sensor distance, the focal lengths of the image sensors, and the at least one of the left third horizontal position, the right third horizontal position, the left third vertical position, and the right third vertical position.
20. The article of manufacture of claim 8 wherein the data causing the machine to perform computing the next first depth and the third depth comprises data that, when accessed by the machine, causes the machine to perform operations comprising:
determining at least one of a left first horizontal position, a right first horizontal position, a left first vertical position, and a right first vertical position of the left next first image and the right next first image;
determining at least one of a left third horizontal position, a right third horizontal position, a left third vertical position, and a right third vertical position of the left third image and the right third image;
calculating the next first depth using a sensor distance between the first and second image sensors, focal lengths of the image sensors, and the at least one of the left first horizontal position, the right first horizontal position, the left first vertical position, and the right first vertical position; and
calculating the third depth using the sensor distance, the focal lengths of the image sensors, and the at least one of the left third horizontal position, the right third horizontal position, the left third vertical position, and the right third vertical position.
11. An article of manufacture comprising:
a machine-accessible medium including data that, when accessed by a machine, causes the machine to perform operations comprising:
computing a first relative position of a first object with respect to a second object in a first scene using a current first image of the first object and a second image of the second object provided by at least first and second image sensors;
matching the current first image in the first scene with a next first image of the first object in a second scene, the second scene containing a third object having a third image; and
computing a second relative position of the third object with respect to the second object in the second scene using the third image and the next first image.
12. The article of manufacture of claim 11 wherein the data further comprises data that causes the machine to perform operations comprising:
computing a third relative position of the third object with respect to the first object using the first and second relative positions.
13. The article of manufacture of claim 11 wherein the data causing the machine to perform computing the first relative position comprises data that, when accessed by the machine, causes the machine to perform operations comprising:
determining a first correspondence between the current first image and the second image; and
computing a current first depth and a second depth of the first and second objects, respectively, using the first correspondence.
14. The article of manufacture of claim 11 wherein the data causing the machine to perform matching comprises data that, when accessed by the machine, causes the machine to perform operations comprising:
determining a current region around the current first image in the first scene;
determining a plurality of candidate objects in the second scene;
determining a plurality of candidate regions around the candidate images of candidate objects;
computing similarity measures between the current region and the candidate regions; and
selecting the next first image to have the highest similarity measure.
15. The article of manufacture of claim 11 wherein the data causing the machine to perform computing the second relative position comprises data that, when accessed by the machine, causes the machine to perform operations comprising:
determining a second correspondence between the third image and the next first image; and
computing a next first depth and a third depth of the first and third objects, respectively, using the second correspondence.
16. The article of manufacture of claim 11 wherein the data further comprises data that causes the machine to perform operations comprising:
refining the first and second relative positions using at least one of a rigidity constraint and a trajectory constraint imposed on the first, second, and third objects.
17. The article of manufacture of claim 13 wherein the data causing the machine to perform determining the first correspondence comprises data that, when accessed by the machine, causes the machine to perform operations comprising:
pre-processing left and right first scene images of the first scene, the left and right first scene images being provided by first and second image sensors;
extracting a left current first image and a right current first image from the left and right first scene images, the left and right current first images corresponding to the current first image of the first object;
extracting a left second image and a right second image from the left and right first scene images, the left and right second images corresponding to the second image of the second object; and
matching the left current first image and the left second image to the right current first image and the right second image, respectively.
18. The article of manufacture of claim 15 wherein the data causing the machine to perform determining the second correspondence comprises data that, when accessed by the machine, causes the machine to perform operations comprising:
pre-processing left and right second scene images of the second scene, the left and right second scene images being provided by first and second image sensors;
extracting a left next first image and a right next first image from the left and right second scene images, the left and right next first images corresponding to the next first image of the first object;
extracting a left third image and a right third image from the left and right second scene images, the left and right third images corresponding to the third image of the third object; and
matching the left next first image and the left third image to the right next first image and the right third image, respectively.
19. The article of manufacture of claim 17 wherein the data causing the machine to perform computing the current first depth and the second depth comprises data that, when accessed by the machine, causes the machine to perform operations comprising:
determining at least one of a left first horizontal position, a right first horizontal position, a left first vertical position, and a right first vertical position of the left current first image and the right current first image;
determining at least one of a left second horizontal position, a right second horizontal position, a left second vertical position, and a right second vertical position of the left second image and the right second image;
calculating the current first depth using a sensor distance between the first and second image sensors, focal lengths of the image sensors, and the at least one of the left first horizontal position, the right first horizontal position, the left first vertical position, and the right first vertical position; and
calculating the second depth using the sensor distance, the focal lengths of the image sensors, and the at least one of the left second horizontal position, the right second horizontal position, the left second vertical position, and the right second vertical position.
Specification