System for Calibrating a Vision System
Abstract
A system for calibrating a vision system of an object. For example, a system for calibrating a surround view system of a vehicle. The system may be configured to capture, via a first and second image sensor, a first and second image of a plurality of objects. Upon capturing such information, the system may be configured to determine a position and orientation of the first and the second image sensor relative to the plurality of objects based on the first and the second captured image. Further, the system may be configured to determine a relative position and orientation between the first and the second image sensor based on a result of the determining a position and orientation of the first and the second image sensor. Also, the system may be configured to determine intrinsic and extrinsic parameters of the first and the second image sensor.
20 Claims
1. Logic for calibrating a vision system of an object, configured to:
capture, via a first image sensor of the vision system, a first image of a plurality of objects;
capture, via a second image sensor of the vision system, a second image of the plurality of objects, where the plurality of objects is located in an overlapping region of a field of view of the first image sensor and a field of view of the second image sensor;
determine a position and orientation of the first and the second image sensor relative to the plurality of objects based on the first and the second captured image, respectively;
determine a relative position and orientation between the first and the second image sensor based on a result of the determining a position and orientation of the first and the second image sensor;
determine extrinsic parameters of the first and the second image sensor in a reference frame based on results of the determining a relative position and orientation between the first and the second image sensor; and
determine intrinsic parameters of the first and the second image sensor based on the first and the second captured image, respectively.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13)
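The claim does not prescribe a particular algorithm for these steps. As an illustrative sketch only: once each sensor's pose relative to the shared objects is known (e.g. from a PnP-style solver), the relative pose between the two sensors, and the second sensor's extrinsics in a common reference frame, follow from composing homogeneous transforms. All function and variable names below are hypothetical, not taken from the patent:

```python
import numpy as np

def pose_to_T(R, t):
    """Pack a 3x3 rotation and a translation into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_pose(T_cam1_obj, T_cam2_obj):
    """Pose of camera 2 expressed in camera 1's frame, derived from each
    camera's pose relative to the same plurality of objects."""
    return T_cam1_obj @ np.linalg.inv(T_cam2_obj)

def extrinsics_in_reference(T_ref_cam1, T_cam1_cam2):
    """Extrinsics of camera 2 in the reference frame, given camera 1's
    extrinsics and the camera-1-to-camera-2 relative pose."""
    return T_ref_cam1 @ T_cam1_cam2
```

Composing `relative_pose(T1, T2)` with camera 2's object-relative pose recovers camera 1's pose, which is the consistency property the claimed determination relies on.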
14. A method for calibrating a vision system of an object, comprising:
capturing, via a first image sensor of the vision system, a first image of a plurality of objects;
capturing, via a second image sensor of the vision system, a second image of the plurality of objects, where the plurality of objects is located in an overlapping region of a field of view of the first image sensor and a field of view of the second image sensor;
determining a position and orientation of the first and the second image sensor relative to the plurality of objects based on the first and the second captured image, respectively;
determining a relative position and orientation between the first and the second image sensor based on a result of the determining a position and orientation of the first and the second image sensor; and
determining a relative position and orientation between a first pair image sensor and a second pair image sensor based on a result of the determining a relative position and orientation between the first and the second image sensor for the first pair and the second pair, respectively, wherein the first pair of the plurality of pairs of image sensors includes the first pair image sensor and an intermediate image sensor, and wherein the second pair of the plurality of pairs of image sensors includes the intermediate image sensor and the second pair image sensor.
- View Dependent Claims (15, 16)
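The distinguishing step of this claim is chaining pairwise results through an intermediate sensor: if the relative pose between the first-pair sensor and the intermediate sensor is known, and likewise between the intermediate sensor and the second-pair sensor, the first-to-second relative pose is their composition. A minimal sketch, with hypothetical names (the claim itself names no math):

```python
import numpy as np

def chain_relative_poses(T_first_mid, T_mid_second):
    """Relative pose between the first-pair and second-pair sensors,
    composed through the intermediate sensor the two pairs share."""
    return T_first_mid @ T_mid_second
```

This is how sensors without an overlapping field of view (e.g. front and rear cameras of a surround-view rig) can still be related: each is calibrated against a neighbor it does overlap, and the transforms are chained.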
17. A vehicle vision system of a vehicle, comprising:
a plurality of image sensors each comprising an electro-optical component to capture images comprising a non-linear distortion, wherein the image sensors are located at different locations on a vehicle hosting the vehicle vision system;
an image processing unit configured to process image data captured by the plurality of image sensors; and
calibration logic, wherein the calibration logic is coupled to the image processing unit, and wherein the calibration logic is configured to:
identify, via a first image sensor of the vehicle vision system, a first image of a plurality of objects;
identify, via a second image sensor of the vehicle vision system, a second image of the plurality of objects, where the plurality of objects is located in an overlapping region of a field of view of the first image sensor and a field of view of the second image sensor;
determine a position and orientation of the first and the second image sensor relative to the plurality of objects based on the first and the second captured image, respectively;
determine a relative position and orientation between the first and the second image sensor based on a result of the determining a position and orientation of the first and the second image sensor;
determine extrinsic parameters of the first and the second image sensor in a reference frame based on results of the determining a relative position and orientation between the first and the second image sensor; and
determine intrinsic parameters of the first and the second image sensor based on the first and the second captured image, respectively.
- View Dependent Claims (18, 19, 20)
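The claim recites electro-optics that introduce a non-linear distortion, which the intrinsic parameters must account for. The claim fixes no model; one common choice (an assumption here, not claim language) is a pinhole projection with polynomial radial distortion, whose forward direction can be sketched as follows, with all names hypothetical:

```python
import numpy as np

def project_with_distortion(K, k1, k2, points_cam):
    """Project Nx3 points (camera frame) through a pinhole model with
    radial distortion: r' = r * (1 + k1*r^2 + k2*r^4)."""
    x = points_cam[:, 0] / points_cam[:, 2]
    y = points_cam[:, 1] / points_cam[:, 2]
    r2 = x**2 + y**2
    scale = 1.0 + k1 * r2 + k2 * r2**2   # non-linear radial term
    xd, yd = x * scale, y * scale
    u = K[0, 0] * xd + K[0, 2]           # focal length fx, principal point cx
    v = K[1, 1] * yd + K[1, 2]           # focal length fy, principal point cy
    return np.stack([u, v], axis=1)
```

Intrinsic calibration then amounts to fitting K, k1, and k2 so that the projections of the plurality of objects match their observed image positions (e.g. by minimizing reprojection error).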
Specification