Method and a system for calibrating an image capture device
Abstract
A system (1) for calibrating an image capture device (2) mounted on a motor vehicle for offset from an ideal position and an ideal angular orientation while the vehicle is moving. An image (12) of an object stationary relative to the motor vehicle, which is capable of being tracked through a plurality of successively captured image frames (14a-14d), is selected in a captured image frame (10) as a target image. The locations (12b) at which the target image should appear in the respective successively captured image frames (14a-14d) are predicted, the actual location (12a) of the target image in the respective successively captured image frames (14a-14d) is compared with the respective predicted locations (12b), and calibration values for the camera (2) are determined from the results of the comparison in the event that the actual (12a) and predicted (12b) locations of the target image in the respective image frames (14a-14d) do not coincide.
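As a rough illustration of the comparison step summarised above, the mean displacement between the predicted and actual pixel locations of the target image can be converted into an approximate angular misalignment using a pinhole camera model (offset angle ≈ atan(pixel displacement / focal length in pixels)). This is a minimal sketch, not the patented method itself; the function name, the (u, v) coordinate convention, and the focal-length parameter are assumptions for illustration.

```python
import math

def angular_offset_deg(predicted, actual, focal_px):
    """Estimate approximate yaw/pitch misalignment (in degrees) from the
    mean displacement between predicted and actual pixel locations of a
    tracked target across several frames.

    predicted, actual: lists of (u, v) pixel coordinates, one per frame.
    focal_px: focal length in pixels (simple pinhole model assumption).
    """
    n = len(predicted)
    # Mean horizontal and vertical displacement in pixels.
    du = sum(a[0] - p[0] for p, a in zip(predicted, actual)) / n
    dv = sum(a[1] - p[1] for p, a in zip(predicted, actual)) / n
    # Small-angle pinhole conversion: angle = atan(displacement / focal).
    yaw = math.degrees(math.atan2(du, focal_px))
    pitch = math.degrees(math.atan2(dv, focal_px))
    return yaw, pitch
```

If both angles are (near) zero, the actual and predicted locations coincide and no calibration correction is needed.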
14 Claims
1. A method for calibrating an image capture device mounted on a vehicle for offset from at least one of an ideal position and an ideal angular orientation about a focal axis of the image capture device while the vehicle is moving, the method comprising:

selecting an image frame captured by the image capture device as a reference image frame when image data thereof in a time domain in at least one portion of the image frame is indicative of a high spatial frequency content greater than a predefined high spatial frequency content, wherein the image capture device is non-stereoscopic;

selecting, as a target image, an image of a stationary object in an environment external of the vehicle in the reference image frame suitable as a target object;

predicting a location at which an image of the target object should appear in a subsequently captured image frame in response to at least one parameter of the moving vehicle;

comparing the actual location of the image of the target object in the subsequently captured image frame with the predicted location thereof, wherein the image frame and the subsequently captured image frame are both captured by the same image capture device;

determining from the comparison of the actual and the predicted locations of the images of the target object whether the image capture device is offset from the at least one of the ideal position and the ideal angular orientation; and

determining calibration data from the difference between the actual and the predicted locations of the image of the target object for applying a change in the field of view of the image capture device to image frames later captured by the image capture device for correcting the later captured image frames for the offset of the image capture device from the at least one of the ideal position and the ideal angular orientation when the image capture device is determined to be offset therefrom,

wherein the at least one parameter of the moving vehicle is a steering angle input,

wherein the captured image frames are initially corrected for at least one selected from a group consisting of perspective distortion and fisheye distortion,

wherein the target image is selected from an area of predefined size forming a target image area in the reference image frame,

wherein the location at which the image area corresponding to the target image area should appear in each of the subsequently captured image frames is predicted, and the actual image of the area corresponding to the target image area at the predicted location in each of the subsequently captured image frames is compared with the target image area for determining the difference between the actual location of the image of the target object and the predicted location thereof,

wherein the actual image of the area corresponding to the target image area at the predicted location in each of the subsequently captured image frames is compared with the target image area by comparing image data in a frequency domain representative of the actual image of the area corresponding to the target image area at the predicted locations with image data in a frequency domain representative of the target image area,

wherein the image data in the frequency domain representative of the actual image of the area corresponding to the target image area at the predicted location in each of the subsequently captured image frames is compared with the image data in the frequency domain representative of the target image area by a cross-correlation process to produce a joint power spectrum of the actual image of the area at the corresponding predicted location and the target image area, and

wherein the location of a peak of maximum peak value in a correlation plane of each joint power spectrum is used for determining the difference between the actual location of the image of the target object and the predicted location thereof for the corresponding subsequently captured image frame.

- View Dependent Claims (2, 3, 4, 5, 6, 8, 9, 13, 14)
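The frequency-domain cross-correlation recited in the closing limitations of claim 1 can be sketched with a standard FFT-based correlation: the joint power spectrum of the two equal-size patches is formed, and the location of the maximum peak in the resulting correlation plane gives the displacement between the actual and predicted target locations. This is a minimal NumPy sketch; normalising the joint power spectrum (turning plain cross-correlation into phase correlation for a sharper peak) is an implementation choice not fixed by the claim.

```python
import numpy as np

def correlation_peak_offset(actual, template):
    """Cross-correlate two equal-size grayscale patches in the frequency
    domain and return the (dy, dx) displacement of the correlation peak,
    i.e. how far `actual` is shifted relative to `template`."""
    F_a = np.fft.fft2(actual)
    F_t = np.fft.fft2(template)
    jps = F_a * np.conj(F_t)          # joint power spectrum
    jps /= np.abs(jps) + 1e-12        # normalise -> phase correlation
    corr = np.fft.ifft2(jps).real     # correlation plane
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap circular shifts larger than half the patch size.
    h, w = actual.shape
    dy = py - h if py > h // 2 else py
    dx = px - w if px > w // 2 else px
    return dy, dx
```

A zero offset means the target appeared exactly where predicted; a non-zero offset feeds the calibration-data determination.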
7. A method for calibrating an image capture device mounted on a vehicle for offset from at least one of an ideal position and an ideal angular orientation about a focal axis of the image capture device while the vehicle is moving, the method comprising:

selecting an image frame captured by the image capture device as a reference image frame when image data thereof in a time domain in at least one portion of the image frame is indicative of a high spatial frequency content greater than a predefined high spatial frequency content, wherein the image capture device is non-stereoscopic;

selecting, as a target image, an image of a stationary object in an environment external of the vehicle in the reference image frame suitable as a target object;

predicting a location at which an image of the target object should appear in a subsequently captured image frame in response to at least one parameter of the moving vehicle;

comparing the actual location of the image of the target object in the subsequently captured image frame with the predicted location thereof, wherein the image frame and the subsequently captured image frame are both captured by the same image capture device;

determining from the comparison of the actual and the predicted locations of the images of the target object whether the image capture device is offset from the at least one of the ideal position and the ideal angular orientation; and

determining calibration data from the difference between the actual and the predicted locations of the image of the target object for applying a change in the field of view of the image capture device to image frames later captured by the image capture device for correcting the later captured image frames for the offset of the image capture device from the at least one of the ideal position and the ideal angular orientation when the image capture device is determined to be offset therefrom,

wherein the at least one parameter of the moving vehicle is a steering angle input,

wherein the captured image frames are initially corrected for at least one selected from a group consisting of perspective distortion and fisheye distortion,

wherein the target image is selected from an area of predefined size forming a target image area in the reference image frame, and

wherein the location at which the image area corresponding to the target image area should appear in each of the subsequently captured image frames is predicted, and the actual image of the area corresponding to the target image area at the predicted location in each of the subsequently captured image frames is compared with the target image area for determining the difference between the actual location of the image of the target object and the predicted location thereof,

wherein the actual image of the area corresponding to the target image area at the predicted locations in each of the subsequently captured image frames is compared with the target image area by a template matching process,

wherein the actual image of the area corresponding to the target image area at the predicted location in each of the subsequently captured image frames is compared with the target image area by comparing image data in a frequency domain representative of the actual image of the area corresponding to the target image area at the predicted locations with image data in a frequency domain representative of the target image area,

wherein the image data in the frequency domain representative of the actual image of the area corresponding to the target image area at the predicted location in each of the subsequently captured image frames is compared with the image data in the frequency domain representative of the target image area by a cross-correlation process to produce a joint power spectrum of the actual image of the area at the corresponding predicted location and the target image area, and

wherein the location of a peak of maximum peak value in a correlation plane of each joint power spectrum is used for determining the difference between the actual location of the image of the target object and the predicted location thereof for the corresponding subsequently captured image frame.
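Claim 7 additionally recites a template matching process. A plain spatial-domain normalised cross-correlation is one common form of template matching; the sketch below is illustrative only, since the claim does not specify the matching score, and the brute-force sliding-window loop is written for clarity rather than speed.

```python
import numpy as np

def match_template(frame, template):
    """Find the best match of `template` inside grayscale `frame` by
    normalised cross-correlation (NCC). Returns ((row, col), score),
    where score is in [-1, 1] and 1.0 is a perfect match."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for y in range(frame.shape[0] - th + 1):
        for x in range(frame.shape[1] - tw + 1):
            w = frame[y:y + th, x:x + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```

In the claimed setting, the search would be restricted to a small window around the predicted location, and the offset of the best match from that prediction would drive the calibration.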
10. A system for calibrating an image capture device mounted on a vehicle for offset from at least one of an ideal position and an ideal angular orientation while the vehicle is moving, the system comprising:

a microprocessor programmed to perform operations comprising:

selecting an image frame captured by the image capture device as a reference image frame when image data thereof in a time domain in at least one portion of the image frame is indicative of a high spatial frequency content greater than a predefined high spatial frequency content, wherein the image capture device is non-stereoscopic;

selecting, as a target image, an image of a stationary object in an environment external of the vehicle in the reference image frame suitable as a target object;

predicting a location at which an image of the target object should appear in a subsequently captured image frame in response to at least one parameter of the moving vehicle;

comparing the actual location of the image of the target object in the subsequently captured image frame with the predicted location thereof, wherein the image frame and the subsequently captured image frame are both captured by the same image capture device;

determining from the comparison of the actual and the predicted locations of the images of the target object whether the image capture device is offset from the at least one of the ideal position and the ideal angular orientation; and

determining calibration data from the difference between the actual and the predicted locations of the images of the target object for applying a change in the field of view of the image capture device to image frames later captured by the image capture device for correcting the later captured image frames for the offset of the image capture device from the at least one of the ideal position and the ideal angular orientation when the image capture device is determined to be offset therefrom,

wherein the at least one parameter of the moving vehicle is a steering angle input,

wherein the captured image frames are initially corrected for at least one selected from a group consisting of perspective distortion and fisheye distortion,

wherein the target image is selected from an area of predefined size forming a target image area in the reference image frame,

wherein the location at which the image area corresponding to the target image area should appear in each of the subsequently captured image frames is predicted, and the actual image of the area corresponding to the target image area at the predicted location in each of the subsequently captured image frames is compared with the target image area for determining the difference between the actual location of the image of the target object and the predicted location thereof,

wherein the image data in the frequency domain representative of the actual image of the area corresponding to the target image area at the predicted location in each of the subsequently captured image frames is compared with the image data in the frequency domain representative of the target image area by a cross-correlation process to produce a joint power spectrum of the actual image of the area at the corresponding predicted location and the target image area, and

wherein the location of a peak of maximum peak value in a correlation plane of each joint power spectrum is used for determining the difference between the actual location of the image of the target object and the predicted location thereof for the corresponding subsequently captured image frame.

- View Dependent Claims (11, 12)
Specification