Method for coordinating multiple fields of view in a multi-camera machine vision system
Abstract
A method is provided for use in a multi-camera machine vision system wherein each of a plurality of cameras simultaneously acquires an image of a different portion of an object of interest. The invention makes it possible to precisely coordinate the fields of view of the plurality of cameras so that accurate measurements can be performed across multiple fields of view, even in the presence of image distortion within each field of view. The method includes the steps of, at calibration-time, fixing the plurality of cameras with respect to a substantially rigid, dimensionally-stable substrate including a plurality of calibration targets, each having a reference feature. For each camera, an image of a calibration target is acquired to provide a plurality of acquired calibration target images. Then a distortion-correction map is generated for each acquired calibration target image. At run-time, for each camera, an image is acquired, at least two of the images including a portion of the object, to provide a plurality of partial object images. These partial object images are then transformed by a distortion-correction map to provide a plurality of corrected partial object images. Next, relative displacement information is used to determine the relative displacement of a first point in a first corrected partial object image with respect to a second point in a second corrected partial object image. A combined map can be generated that both corrects image distortion and transforms local camera coordinates into global coordinates.
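The calibration-time/run-time pipeline described in the abstract can be illustrated with a minimal numpy sketch. This is not the patent's implementation: the distortion model (a per-camera affine map fitted by least squares), the grid geometry, the reference-feature offset, and all function names are illustrative assumptions.

```python
import numpy as np

def fit_affine(src, dst):
    # Least-squares affine map taking observed points src (N,2) to
    # corrected points dst (N,2): dst ~ src @ A.T + t.
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # (N,3) design matrix
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3,2) solution
    return M[:2].T, M[2]                         # A (2,2), t (2,)

def correct(pt, A, t):
    # Apply a distortion-correction map to an image point.
    return pt @ A.T + t

# --- Calibration-time (hypothetical data) ---
# Each camera images its own calibration target; the target's true
# grid geometry is known from the substrate.
true_grid = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)

# Camera 0's distortion modeled as a small rotation + scale + shift:
theta = 0.05
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
obs0 = true_grid @ R.T * 1.02 + np.array([3.0, 1.0])
A0, t0 = fit_affine(obs0, true_grid)   # distortion-correction map, cam 0

# Camera 1's distortion modeled as scale + shift:
obs1 = true_grid * 0.98 + np.array([-2.0, 4.0])
A1, t1 = fit_affine(obs1, true_grid)   # distortion-correction map, cam 1

# Relative displacement information: camera 1's reference feature lies
# 100 units to the right of camera 0's (known from the substrate).
ref_offset = np.array([100.0, 0.0])

# --- Run-time ---
# A feature point seen by each camera, in raw (distorted) coordinates.
p0 = correct(obs0[3], A0, t0)                # grid corner (1,1), cam 0
p1 = correct(obs1[0], A1, t1) + ref_offset   # grid origin, into cam-0 frame
displacement = p1 - p0                       # spans two fields of view
```

The key point is that neither camera ever sees both features; the substrate's known reference-feature displacement stitches the two corrected local frames together.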
20 Claims
1. A method for precisely coordinating multiple fields of view of a plurality of fixed cameras so as to facilitate determining the distance between features each disposed within a different field of view, the method being especially useful when there is image distortion in each field of view, the method comprising the steps of:
at calibration-time,
fixing a substantially rigid dimensionally-stable substrate including a plurality of calibration targets each having a reference feature, such that a calibration target is within the field of view of each said camera;
acquiring relative displacement information regarding at least the linear displacement of each said reference feature with respect to at least one other said reference feature;
for each camera, acquiring an image of a calibration target to provide a plurality of acquired calibration target images;
generating an image distortion-correction map for each acquired calibration target image; and
at run-time,
for each camera, acquiring an image of a portion of said object to provide a plurality of partial object images;
applying said image distortion-correction map to each partial object image to provide a plurality of corrected partial object images; and
using said relative displacement information to determine the relative displacement of a first point in a first corrected partial object image with respect to a second point in a second corrected partial object image.
Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12.
13. A method for precisely coordinating multiple fields of view of a plurality of fixed cameras so as to facilitate determining the distance between features each disposed within a different field of view, the method being especially useful when there is image distortion in each field of view, the method comprising the steps of:
at calibration-time,
fixing a plurality of cameras with respect to a calibration plate including a plurality of calibration targets;
for each camera, generating an image distortion-correction map;
for each camera, generating a transformation of camera coordinates into global coordinates; and
at run-time,
for each camera, acquiring an image of a portion of said object to provide a plurality of partial object images;
applying said image distortion-correction map to each partial object image to provide a plurality of corrected partial object images; and
using said transformation of camera coordinates into global coordinates to determine the relative displacement of a first feature point in a first corrected partial object image with respect to a second feature point in a second corrected partial object image.
Dependent claims: 14, 15, 16.
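Claim 13 replaces reference-feature offsets with an explicit per-camera transformation into a shared global frame. A minimal sketch of that final step, with hypothetical rigid transforms (rotation R, translation T) standing in for whatever each camera's calibration-time transform would be:

```python
import numpy as np

def to_global(pt_local, R, T):
    # Map a corrected local camera coordinate into global coordinates.
    return R @ pt_local + T

# Hypothetical calibration results: camera 0 is aligned with the global
# frame; camera 1 is mounted rotated 90 degrees and offset on the plate.
Rs = {0: np.eye(2),
      1: np.array([[0.0, -1.0], [1.0, 0.0]])}
Ts = {0: np.array([0.0, 0.0]),
      1: np.array([50.0, 10.0])}

# Run-time: one corrected feature point from each camera's image.
p_a = to_global(np.array([2.0, 3.0]), Rs[0], Ts[0])  # feature in cam 0
p_b = to_global(np.array([1.0, 1.0]), Rs[1], Ts[1])  # feature in cam 1

# Once both points live in global coordinates, relative displacement
# across fields of view is ordinary vector subtraction.
displacement = p_b - p_a
```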
17. A method for precisely coordinating multiple fields of view of a plurality of fixed cameras so as to facilitate determining the distance between features each disposed within a different field of view, the method being especially useful when there is image distortion in each field of view, the method comprising the steps of:
at calibration-time,
fixing a plurality of cameras with respect to a calibration plate including a plurality of calibration targets;
for each camera, generating an image distortion-correction map using said plurality of calibration targets;
for each camera, generating a transformation of local camera coordinates into global coordinates using said plurality of calibration targets;
combining said image distortion-correction map and said transformation of local camera coordinates into global coordinates to provide a corrected local-to-global map for determining the distance between features each disposed within a different field of view, even when there is image distortion in each field of view; and
at run-time,
for each camera, acquiring an image of a portion of said object to provide a plurality of partial object images; and
applying said corrected local-to-global map to determine the relative displacement of a first feature point in a first partial object image with respect to a second feature point in a second partial object image.
Dependent claims: 18, 19, 20.
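The combining step of claim 17 is straightforward when both maps are affine: composing the correction map with the local-to-global transform yields a single corrected local-to-global map applied in one step at run-time. The maps and values below are illustrative assumptions, not from the patent.

```python
import numpy as np

def compose(Ag, tg, Ac, tc):
    # Compose global(correct(p)): first apply the distortion-correction
    # map (Ac, tc), then the local-to-global transform (Ag, tg).
    # global(correct(p)) = Ag @ (Ac @ p + tc) + tg = (Ag@Ac) @ p + (Ag@tc + tg)
    return Ag @ Ac, Ag @ tc + tg

# Hypothetical calibration-time results for one camera:
Ac = np.array([[1.02, 0.0], [0.0, 1.02]])   # distortion correction
tc = np.array([-3.0, -1.0])
Ag = np.eye(2)                               # local -> global
tg = np.array([100.0, 0.0])

A, t = compose(Ag, tg, Ac, tc)               # corrected local-to-global map

# Run-time: one raw image point, mapped both ways.
raw = np.array([5.0, 2.0])
one_step = A @ raw + t                       # combined map
two_step = Ag @ (Ac @ raw + tc) + tg         # correction, then global
```

The design point is that the run-time cost per point drops to a single affine (or, more generally, a single precomputed lookup) rather than two sequential map applications.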
Specification