Machine vision stereo matching
Abstract
Corresponding points in at least two different images of a scene are matched using a shortest path analysis. An edge point in a continuous edge segment is selected in one image, and potentially matching edge points in other images are arranged as a layered network. Weightings are applied to the potential matches, and the best match is identified on the basis of a shortest path analysis of the network. In one preferred arrangement, epipolar lines are used. For each edge point to be matched, the position of the corresponding epipolar line is calculated using matrices defining parameters associated with the capturing of the image. The epipolar line is used to identify points in the other image as potential matches to the selected point in the one image. Further constraints are applied to the identified points to find the one point in the other image that most likely matches the selected point.
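The epipolar step described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: it assumes the capture matrices of the two views have already been reduced to a 3x3 fundamental matrix `F`, so that the epipolar line for a pixel x in one image is l = Fx in the other; the function names and the pixel tolerance are hypothetical.

```python
import numpy as np

def epipolar_line(F, point):
    """Epipolar line l = F x in the second image for a pixel in the first.

    F is an assumed 3x3 fundamental matrix derived from the camera
    matrices of the two views; point is a pixel (u, v).
    """
    x = np.array([point[0], point[1], 1.0])  # homogeneous coordinates
    return F @ x  # line coefficients (a, b, c): a*u + b*v + c = 0

def point_on_line(line, point, tol=1.0):
    """True if the pixel lies within tol pixels of the line.

    Candidate matches are the edge points for which this test passes.
    """
    a, b, c = line
    u, v = point
    dist = abs(a * u + b * v + c) / np.hypot(a, b)
    return dist <= tol
```

For a rectified horizontal stereo rig, F reduces to a skew form and every epipolar line is the horizontal scanline of the selected point, so the test above degenerates to comparing row coordinates.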
19 Claims
1. A method of matching corresponding features in at least two different images representing a scene, the method comprising:
(a) identifying in each image plural edge points which together form a continuous edge segment in the image;
(b) identifying potentially matching edge points in said at least two images by application of predetermined matching constraints to the plural edge points identified in each image;
(c) constructing for each edge segment in one image a layered network of potentially matching edge segments in the or each other image, said layered network identifying for each edge point in the continuous edge segments of said one image potentially matching edge points in the or each other image; and
(d) identifying the best match between segments on the basis of a shortest path analysis of the layered network.
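Steps (c) and (d) of claim 1 amount to a shortest path through a layered network: one layer per edge point of the segment, one node per candidate match. A minimal dynamic-programming sketch, assuming user-supplied cost functions (the function names and the additive cost model are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def best_match_path(layers, node_cost, trans_cost):
    """Shortest path through a layered network of candidate matches.

    layers: list of lists; layers[i] holds the candidate matching points
    for the i-th edge point of the segment (one layer per edge point).
    node_cost(c): weighting of a candidate (lower = likelier match).
    trans_cost(a, b): cost of pairing candidates in consecutive layers,
    e.g. penalising disparity jumps along the segment.
    Returns one chosen candidate per layer and the minimal total cost.
    """
    # cost[j]: cheapest path so far ending at candidate j of the current layer
    cost = [node_cost(c) for c in layers[0]]
    back = []  # back-pointers for path recovery
    for i in range(1, len(layers)):
        new_cost, ptr = [], []
        for c in layers[i]:
            steps = [cost[j] + trans_cost(p, c) for j, p in enumerate(layers[i - 1])]
            j = int(np.argmin(steps))
            new_cost.append(steps[j] + node_cost(c))
            ptr.append(j)
        back.append(ptr)
        cost = new_cost
    # backtrack the minimal-cost path through the layers
    j = int(np.argmin(cost))
    path = [j]
    for ptr in reversed(back):
        j = ptr[j]
        path.append(j)
    path.reverse()
    return [layers[i][k] for i, k in enumerate(path)], min(cost)
```

With two layers of scalar candidates `[[0, 5], [1, 9]]`, zero node cost and `trans_cost = abs(a - b)`, the cheapest path picks 0 then 1 at total cost 1, preferring consistent disparities along the segment.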
2. A method as claimed in claim 1, wherein, for each edge point in the one image, potentially matching edge points are identified as those edge points lying along a corresponding epipolar band in the or each other image.
3. A method as claimed in claim 2, wherein potentially matching edge points are identified as those edge points lying along a corresponding epipolar line in the or each other image.
4. A method as claimed in claim 3, further comprising: identifying in a third image a first epipolar line corresponding to the edge point in the one image; identifying a second epipolar line corresponding to the epipolar line in the said other image; and identifying the point at which the first and second epipolar lines intersect as a potentially matching edge point.
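In homogeneous coordinates the intersection of two image lines is their cross product, which gives a compact sketch of the claim 4 step of intersecting the first and second epipolar lines in the third image (the function name and tolerance are illustrative):

```python
import numpy as np

def line_intersection(l1, l2):
    """Intersection of two image lines given in homogeneous form (a, b, c).

    The intersection point of two lines is their cross product in
    homogeneous coordinates; used here for the point where the two
    epipolar lines cross in the third image.
    """
    p = np.cross(l1, l2)
    if abs(p[2]) < 1e-12:
        return None  # lines are parallel: no finite intersection
    return p[:2] / p[2]  # back to pixel coordinates (u, v)
```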
5. A method as claimed in any one of claims 2 to 4, wherein further matching constraints are applied to the points identified as potential matches in order to facilitate the identification of potential matches in the or each other image to the point in the one image.
6. A method as claimed in claim 5, wherein the further constraints include comparing at least one of the local edge orientation, the intensity normal and the surrounding color of the edge points in the one image and the potential matches in the or each other image.
7. A method as claimed in claim 6, wherein the comparison of the surrounding color comprises comparing the color vector of each edge point in the one image with the color vector of potentially matching points in the other image.
8. A method as claimed in claim 7, wherein the color vector is determined by applying an orientation mask to image points in the vicinity of the point in each image under comparison.
9. A method as claimed in claim 6, wherein the intensity normal is calculated from a gradient operator.
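Claim 9's intensity normal can be computed with any gradient operator; a 3x3 Sobel operator is one common choice, used here purely for illustration:

```python
import numpy as np

# 3x3 Sobel kernels -- one common choice of gradient operator
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def intensity_normal(patch):
    """Unit vector along the intensity gradient at the centre of a 3x3 patch.

    The direction of steepest intensity change is normal to the edge, so
    comparing these vectors between candidate points implements one of the
    claim 6 constraints.
    """
    gx = float(np.sum(SOBEL_X * patch))
    gy = float(np.sum(SOBEL_Y * patch))
    mag = np.hypot(gx, gy)
    if mag == 0:
        return np.zeros(2)  # flat patch: no defined normal
    return np.array([gx, gy]) / mag
```

For a vertical step edge the normal points horizontally, as expected.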
10. A method as claimed in claim 1, wherein images are captured by camera, each image having an associated camera matrix defining parameters related to the capturing of the image.
11. A method as claimed in claim 10, wherein the camera matrix is calculated for each captured image after an initial calibration of the camera.
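Claims 10 and 11 refer to a camera matrix holding parameters related to the capture. In the standard pinhole model such a matrix is composed from the intrinsics K obtained in the initial calibration and the camera pose R, t at capture time; this composition is a textbook sketch, not necessarily the patent's exact formulation:

```python
import numpy as np

def camera_matrix(K, R, t):
    """Compose a 3x4 camera (projection) matrix P = K [R | t].

    K: 3x3 intrinsics from an initial calibration; R, t: the camera's
    rotation and translation for this capture. A world point X given as
    a homogeneous 4-vector projects to the pixel x ~ P X.
    """
    return K @ np.hstack([R, t.reshape(3, 1)])
```

Pairs of such matrices are what the abstract's epipolar-line computation is derived from.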
12. A method as claimed in claim 1, wherein the shortest path analysis applies to each potential matching edge point a weighting representing the likelihood of the potential point matching the selected point and selects as the best match to the selected point the point which results in a minimal weighting.
13. A method as claimed in claim 12, wherein the weighting that is applied is dependent on the result of the application of said predetermined matching constraints to the plural edge points identified in each image.
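Claims 12 and 13 require a weighting that depends on the matching constraints. One simple possibility, offered only as an assumption, is a weighted sum of the constraint differences named in claim 6, with lower values meaning a likelier match so that minimising path cost selects the best match:

```python
def match_weight(orient_diff, normal_diff, color_diff,
                 w_orient=1.0, w_normal=1.0, w_color=1.0):
    """Combine constraint comparisons into a single candidate weighting.

    orient_diff, normal_diff, color_diff: dissimilarities in local edge
    orientation, intensity normal, and surrounding colour between the
    selected point and a candidate. The linear combination and the unit
    weights are illustrative; the patent only requires that the weighting
    depend on the predetermined matching constraints.
    """
    return w_orient * orient_diff + w_normal * normal_diff + w_color * color_diff
```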
14. A system (40) for matching corresponding features in at least two different images representing a scene, the system comprising:
(a) means (41) for identifying in each image plural edge points which together form a continuous edge segment in the image;
(b) means (42) for identifying potentially matching edge points in said at least two images by application of predetermined matching constraints to the plural edge points identified in each image;
(c) means (43) for constructing for each edge segment in one image a layered network of potentially matching edge segments in the or each other image, said layered network identifying for each edge point in the continuous edge segments of one image potentially matching edge points in the or each other image; and
(d) means (44) for identifying the best match between segments on the basis of a shortest path analysis of the layered network.
15. A system (40) as claimed in claim 14, further comprising means (45) for identifying in each other image epipolar lines corresponding to each edge point in the one image.
16. A system as claimed in claim 15, further comprising means (46) for comparing at least one of the local edge orientation, the intensity normal and the surrounding color of the edge points in the one image and the potential matches in the or each other image.
17. A system (40) as claimed in any one of claims 14 to 16, further comprising a camera (47) for capturing said images and matrix defining means (48) for defining a matrix defining parameters relating to the capturing of the image.
18. A system (40) as claimed in claim 14, wherein the shortest path analysis applies to each potential matching edge point a weighting representing the likelihood of the potential point matching the selected point and selects as the best match to the selected point the point which results in a minimal weighting.
19. A system (40) as claimed in claim 18, wherein the weighting that is applied is dependent on the result of the application of said predetermined matching constraints to the plural edge points identified in each image.
Specification