
Context and epsilon stereo constrained correspondence matching

  • US 8,773,513 B2
  • Filed: 07/01/2011
  • Issued: 07/08/2014
  • Est. Priority Date: 07/01/2011
  • Status: Active Grant
First Claim

1. A method of matching first pixels in a first image of a 3D scene to corresponding second pixels in a second image of the same 3D scene, said method comprising:

    (a) obtaining said first image and said second image, wherein said first image is a first multi-perspective image and said second image is a second multi-perspective image;

    (b) defining an index of matched pixel pairs;

    (c) identifying a plurality of target pixels in said first image to be matched to pixels in said second image;

    (d) for each identified target pixel:

    (i) determining its potential corresponding pixel in said second image;

    (ii) determining a vertical parallax in the second image for the identified target pixel, said vertical parallax being distinct from any horizontal parallax;

    (iii) determining the minimum distance from said potential corresponding pixel to said vertical parallax, and

    (iv) IF said minimum distance is not greater than a predefined maximum distance, THEN deeming said potential corresponding pixel to be a true match for said identified target pixel and adding the pixel pair comprised of said potential corresponding pixel and said identified target pixel to said index of matched pixel pairs, ELSE deeming said potential corresponding pixel to not be a match for said identified target pixel and omitting said target pixel and said potential corresponding pixel from the index of matched pairs,

    wherein in step (c), said plurality of target pixels are edge pixels identified by application of an edge detection algorithm, and

    step (c) further includes applying a feature based correspondence matching algorithm to said first and second images to render a collection of feature point pairs, each feature point pair including a first feature point in said first image and a corresponding second feature point in said second image; and

    step (i) includes:

    (I) identifying N first feature points nearest to a current target pixel, wherein N is a fixed, predefined number;

    (II) defining a rigid transform T(.) for the current target pixel using the identified N first feature points;

    (III) fitting the rigid transform to the corresponding N second feature points in the second image, identifying an edge pixel in said second image that is nearest to an expected position relative to the N second feature points as determined from the fitted rigid transform, the identified nearest edge pixel T(p) being said potential corresponding pixel.
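
The claim above amounts to a concrete matching loop: precompute matched feature-point pairs and edge pixels in both images, then for each edge pixel in the first image predict its location in the second image with a locally fitted rigid transform, snap that prediction to the nearest edge pixel, and accept the pair only if the candidate lies close enough to the target pixel's vertical-parallax locus. The Python sketch below is a minimal, non-authoritative reading of that loop, not the patent's reference implementation. It assumes NumPy coordinate arrays, feature pairs produced beforehand by any feature-based matcher (SIFT-style), and a caller-supplied vertical_parallax_fn as a stand-in for the camera-model-dependent vertical-parallax computation that the claim leaves unspecified.

```python
import numpy as np

def estimate_rigid_transform(src_pts, dst_pts):
    """Least-squares rigid (rotation + translation) transform mapping
    src_pts -> dst_pts, both (N, 2) arrays (standard Kabsch/Procrustes fit)."""
    src_c = src_pts.mean(axis=0)
    dst_c = dst_pts.mean(axis=0)
    H = (src_pts - src_c).T @ (dst_pts - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def match_edge_pixels(edge_px_1, edge_px_2, feat_1, feat_2,
                      vertical_parallax_fn, n_neighbors=6, max_dist=1.5):
    """Hedged sketch of the claimed matching loop.

    edge_px_1, edge_px_2 : (M, 2) edge-pixel coordinates in images 1 and 2
    feat_1, feat_2       : (K, 2) matched feature-point pairs from a
                           feature-based correspondence step (e.g. SIFT)
    vertical_parallax_fn : callable(target_px) -> (P, 2) points sampling the
                           vertical-parallax locus in image 2; this is an
                           assumption standing in for the camera-model detail
                           the claim does not spell out
    """
    matches = []  # index of matched pixel pairs
    for p in edge_px_1:
        # (I) N first feature points nearest to the current target pixel
        idx = np.argsort(np.linalg.norm(feat_1 - p, axis=1))[:n_neighbors]
        # (II) rigid transform T(.) fitted from those neighbours to their
        # corresponding second feature points
        R, t = estimate_rigid_transform(feat_1[idx], feat_2[idx])
        # (III) expected position T(p), then nearest edge pixel in image 2
        expected = R @ p + t
        cand = edge_px_2[np.argmin(np.linalg.norm(edge_px_2 - expected, axis=1))]
        # (ii)-(iii) minimum distance from the candidate to the vertical parallax
        parallax = vertical_parallax_fn(p)
        d_min = np.min(np.linalg.norm(parallax - cand, axis=1))
        # (iv) accept the pair only if that distance is within the threshold
        if d_min <= max_dist:
            matches.append((tuple(p), tuple(cand)))
    return matches
```

In this reading, max_dist plays the role of the claim's predefined maximum distance (the epsilon of the epsilon stereo constraint) and n_neighbors is the fixed N of step (I); both values here are illustrative, not taken from the patent.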
