Calibrating camera offsets to facilitate object position determination using triangulation
Abstract
A touch system includes a reference frame, and at least two cameras having fields of view that overlap within the reference frame. The position of an object relative to the reference frame is determined from captured images of the object based on triangulation. The fields of view of the at least two cameras are rotated with respect to the coordinate system of the reference frame to define offset angles. The touch system is calibrated by: capturing an image of the object using each of the at least two cameras at at least one location within the reference frame; and for each location: determining the position of the object within each image, the position of the object within each image being represented by an angle φ, the angle being equal to the angle formed between an extremity of the field of view extending beyond the reference frame and a line extending from the camera that intersects the object within the image; and mathematically calculating the offset angles of the at least two cameras based on the angle determined for each image and the position of the at least two cameras relative to the coordinate system assigned to the reference frame.
33 Claims
1. A method of determining the position of an object relative to a rectangular reference frame from captured images of the object based on multiple triangulation results, the captured images being taken by at least two pair of cameras at the corners of said reference frame having fields of view encompassing said reference frame, each of said cameras having an offset angle resulting in an extremity of the field of view thereof extending beyond a boundary of said reference frame, said method comprising the steps of:
capturing an image of the object using each camera of said at least two pair at at least one location within said reference frame;
for each location:
determining the position of the object within each captured image and for each captured image placing the determined position into a coordinate system corresponding to that of said reference frame, wherein the determined position of the object within each image is represented by an angle φ, said angle being equal to the angle formed between the extremity of the field of view extending beyond the reference frame boundary and a line extending from the camera that intersects the object within the image; and
processing the determined positions to determine the position of the object at each location and the offset angle of said at least one camera, wherein during said processing each said angle φ is converted to an angle ω, said angle ω being represented by:

ω = α − δ

where:
δ is the camera offset angle; and
α is equal to the angle φ with the camera offset angle removed and referenced to the y-axis of the reference frame coordinate system, and wherein each said angle ω is fitted to the equation:

where:
xcam and ycam are the rectangular coordinates of the camera; and
xi and yi are the rectangular coordinates of the object, thereby to yield the rectangular position (xi, yi) and the camera offset angle. - View Dependent Claims (2, 3)
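The fitted equation in this claim was rendered as an image in the original patent and did not survive extraction, so it is left elided above. Given that ω is referenced to the y-axis of the reference frame coordinate system and that the fit is stated in terms of the camera coordinates (xcam, ycam) and object coordinates (xi, yi), one form consistent with the surrounding definitions (an assumption, not the verbatim claim text) is:

$$\tan\omega = \frac{x_i - x_{cam}}{y_i - y_{cam}}$$

Fitting each measured ω to such a relation over the captured locations is what allows the rectangular position (xi, yi) and the offset angle δ to be recovered together.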
4. A method of determining the position of an object relative to a reference frame from captured images of the object based on multiple triangulation results, the captured images being taken by at least two pair of cameras having fields of view encompassing the reference frame, an extremity of the field of view of each camera encompassing a boundary of said reference frame, at least one of said cameras being offset causing the extremity of the field of view thereof to extend beyond said boundary, the offset defining an offset angle, said method comprising the steps of:
determining the position of the object within each image, the position of the object within each image being represented by an angle, said angle being equal to the angle formed between the extremity of the field of view of the camera that acquired the image and a line extending from that camera that intersects the object within the image;
determining the offset angle for each offset camera;
for each offset camera subtracting the offset angle from the angle representing the position of the object within the image taken by said offset camera to calibrate the angle; and
for each pair of cameras using the calibrated angles to calculate the position of the object with respect to the reference frame using triangulation. - View Dependent Claims (5, 10, 11)
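The calibrate-then-triangulate flow of this claim can be sketched in code. The sketch below assumes a specific angle convention (angles measured from the +y axis of the reference frame, matching how the claims reference α and ω) and hypothetical camera positions, object location, and offset values; it illustrates the claimed steps, not the patent's actual implementation.

```python
import math

def triangulate(cam1, w1, cam2, w2):
    """Intersect the sight rays of two cameras. Each camera at (x, y)
    sees the object along a ray at angle w measured from the +y axis,
    so its direction vector is (sin w, cos w). Solving
    cam1 + t1*d1 = cam2 + t2*d2 for t1 gives the object position."""
    d1x, d1y = math.sin(w1), math.cos(w1)
    d2x, d2y = math.sin(w2), math.cos(w2)
    det = d2x * d1y - d1x * d2y          # zero if the rays are parallel
    bx, by = cam2[0] - cam1[0], cam2[1] - cam1[1]
    t1 = (d2x * by - d2y * bx) / det
    return (cam1[0] + t1 * d1x, cam1[1] + t1 * d1y)

# Hypothetical setup: two cameras on one edge of a 100x100 frame, each
# with a small (in practice unknown) offset angle delta.
cam1, cam2 = (0.0, 0.0), (100.0, 0.0)
delta1, delta2 = math.radians(2.0), math.radians(-1.5)
obj = (30.0, 40.0)

# Simulated raw measurements: the true angle plus each camera's offset.
m1 = math.atan2(obj[0] - cam1[0], obj[1] - cam1[1]) + delta1
m2 = math.atan2(obj[0] - cam2[0], obj[1] - cam2[1]) + delta2

# Claimed method: subtract the offset angle to calibrate, then triangulate.
x, y = triangulate(cam1, m1 - delta1, cam2, m2 - delta2)
```

Triangulating with the raw angles m1 and m2 instead would place the object away from (30, 40); subtracting the offsets first is what makes the per-pair results agree.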
6. In a touch system including at least two pair of cameras and a processor to process images acquired by said at least two pair of cameras, where the position of an object that is within the fields of view of said cameras relative to a reference frame is determined by triangulating object position data in images acquired by the cameras of each pair, a method of calibrating the touch system comprising the steps of:
determining an offset angle of each camera relative to the reference frame, said offset angle representing the degree by which the field of view of the camera extends beyond said reference frame;
for each camera, using the offset angle to calibrate the object position data developed from the image acquired by that camera; and
using the calibrated object position data during triangulation for each pair of cameras to determine the position of said object relative to said reference frame. - View Dependent Claims (12, 13)
7. In a touch system including a reference frame, and at least two pair of cameras having fields of view that encompass said reference frame, wherein the position of an object relative to the reference frame is determined from captured images of the object based on multiple triangulation results, and wherein the fields of view of at least some of said cameras are rotated with respect to the coordinate system of said reference frame to define offset angles, a method of calibrating said touch system comprising the steps of:
capturing an image of the object using each camera of said at least two pair at at least one location within said reference frame; and
for each location:
determining the position of the object within each captured image, the position of the object within each captured image being represented by an angle φ, said angle being equal to the angle formed between an extremity of the field of view of the camera that acquired the image extending beyond the reference frame and a line extending from that camera that intersects the object within the image; and
mathematically calculating the offset angles of the cameras having rotated fields of view based on the angle determined for each image and the position of the cameras relative to the coordinate system assigned to said reference frame. - View Dependent Claims (8, 14, 15, 16, 17)
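A minimal sketch of the offset-angle calculation in this claim, under two simplifying assumptions: the calibration touch locations are known in the reference frame's coordinate system, and angles are measured from the +y axis. The patent's procedure fits object positions and offsets simultaneously; with known touch points, each camera's offset reduces to the average residual between measured and geometrically expected angles. All positions and offset values below are hypothetical.

```python
import math

def calibrate_offsets(cams, touches, measured):
    """Estimate each camera's offset angle delta.
    cams:     list of (x, y) camera positions
    touches:  list of (x, y) known calibration touch locations
    measured: measured[k][c] = angle recorded by camera c for touch k,
              referenced to the +y axis and still containing the offset
    For each camera, the offset estimate is the mean difference between
    the measured angle and the geometrically expected angle."""
    offsets = []
    for c, (xc, yc) in enumerate(cams):
        residuals = [
            measured[k][c] - math.atan2(xi - xc, yi - yc)
            for k, (xi, yi) in enumerate(touches)
        ]
        offsets.append(sum(residuals) / len(residuals))
    return offsets

# Hypothetical example: two cameras with small offsets, two touch points.
cams = [(0.0, 0.0), (100.0, 0.0)]
true_deltas = [math.radians(1.2), math.radians(-0.8)]
touches = [(30.0, 40.0), (60.0, 25.0)]
measured = [
    [math.atan2(xi - xc, yi - yc) + d
     for (xc, yc), d in zip(cams, true_deltas)]
    for (xi, yi) in touches
]
deltas = calibrate_offsets(cams, touches, measured)
```

Averaging over several touch points is a design choice that suppresses per-measurement noise; with noise-free simulated angles, as here, the true offsets are recovered exactly.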
9. A touch system comprising:
a generally rectangular reference frame surrounding a touch surface, one corner of the reference frame defining the origin of a coordinate system assigned to said touch surface;
a camera adjacent each corner of the reference frame, each camera being aimed towards said touch surface and capturing images of said touch surface within the field of view thereof, fields of view of said cameras overlapping within said reference frame, the fields of view of said cameras being offset with respect to said reference frame; and
a processor processing the captured images and generating object position data when an object appears in images, said processor determining the position of said object relative to said origin in rectangular coordinates using said object position data based on multiple triangulation results, wherein said processor further executes a calibration routine to determine offset angles of said cameras, said offset angles being used by said processor to adjust said object position data thereby to align said multiple triangulation results prior to said position determination. - View Dependent Claims (18)
19. A touch system comprising:
a substantially rectangular touch surface;
imaging devices mounted adjacent at least three corners of said touch surface to define at least two triangulation pair of imaging devices, each imaging device having a field of view looking across said touch surface, said imaging devices being oriented to capture overlapping images of said touch surface; and
at least one processing device processing captured images to determine the position of at least one pointer appearing in the captured images based on multiple triangulation results, the fields of view of said imaging devices being calibrated by said at least one processing device to determine offset angles of said imaging devices prior to determining the position of the at least one pointer thereby to align said multiple triangulation results. - View Dependent Claims (20, 21, 22, 27, 28)
23. A user input system comprising:
at least two pair of imaging devices having overlapping fields of view oriented to capture images of a region of interest in which at least one pointer can be positioned; and
at least one processing device processing pointer data extracted from the captured images acquired by the imaging devices using triangulation to yield a triangulation result for each pair of imaging devices thereby to determine the position of said at least one pointer within said region of interest, said at least one processing device adjusting the pointer data prior to processing by determining offset angles of said imaging devices to compensate for fields of view of said imaging devices that extend beyond the periphery of said region of interest thereby to align the triangulation results. - View Dependent Claims (24, 25, 26, 29, 30, 31, 32, 33)
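The "align the triangulation results" language in this claim can be illustrated as follows: once offset angles are removed, every pair of imaging devices triangulates to (up to measurement noise) the same point, so the per-pair results can be combined. Averaging the pairwise results, as below, is an illustrative combination rule rather than the patent's stated one, and the angle convention (measured from the +y axis) and coordinates are assumptions.

```python
import math
from itertools import combinations

def intersect(c1, w1, c2, w2):
    """Meet point of two sight rays; angles w measured from the +y axis,
    so each ray direction is (sin w, cos w)."""
    d1x, d1y = math.sin(w1), math.cos(w1)
    d2x, d2y = math.sin(w2), math.cos(w2)
    det = d2x * d1y - d1x * d2y
    t1 = (d2x * (c2[1] - c1[1]) - d2y * (c2[0] - c1[0])) / det
    return (c1[0] + t1 * d1x, c1[1] + t1 * d1y)

def position_from_pairs(cams, omegas):
    """Triangulate with every camera pair and average the results.
    After offset calibration the per-pair results coincide ("align");
    averaging then suppresses residual measurement noise."""
    pts = [intersect(cams[i], omegas[i], cams[j], omegas[j])
           for i, j in combinations(range(len(cams)), 2)]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

# Hypothetical: three corner cameras of a 100x100 region of interest,
# with already-calibrated (offset-free) angles to a pointer at (30, 40).
cams = [(0.0, 0.0), (100.0, 0.0), (100.0, 100.0)]
obj = (30.0, 40.0)
omegas = [math.atan2(obj[0] - xc, obj[1] - yc) for xc, yc in cams]
x, y = position_from_pairs(cams, omegas)
```

With three cameras there are three triangulation pairs; if the offsets had not been removed, the three pairwise results would disagree, which is exactly the misalignment the claimed adjustment compensates for.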
Specification