Hybrid precision tracking
First Claim
1. A hybrid through-the-lens tracking method, comprising:
reading current incoming camera position, orientation, and adjustable field-of-view/zooming and focusing lens information of an actual scene camera, wherein the lens information includes lens position information;
deriving current lens parameters of the actual scene camera using the lens position information;
predicting the 2D locations of actual markers in a live action scene using (a) the current lens parameters, (b) the current incoming camera position and orientation information and (c) the lens position information;
detecting the 2D locations of the actual markers in the live action scene;
the detecting including derivative detecting to find sharp edges of the actual markers, center detecting to find possible centers of circles, and center voting to detect actual centers of the circles;
calculating angular corrections for a virtual camera from the current incoming camera position and orientation information, the predicted 2D locations, and the detected 2D locations; and
using the angular corrections, correcting the orientation of the current incoming camera orientation information of the actual scene camera to match the 2D locations of the actual markers.
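The prediction step above amounts to projecting each surveyed 3D marker position through a camera model built from the incoming pose and the lens parameters derived from lens position data. The claim does not specify an implementation; the following is a minimal sketch assuming an ideal pinhole model with no lens distortion, and all names are hypothetical:

```python
def predict_marker_2d(marker_world, cam_pos, cam_rot, focal_px, principal_pt):
    """Project a 3D marker into the image with a simple pinhole model.

    marker_world : (x, y, z) marker position in world coordinates
    cam_pos      : (x, y, z) camera position in world coordinates
    cam_rot      : 3x3 row-major world-to-camera rotation matrix
    focal_px     : focal length in pixels (derived from lens position data)
    principal_pt : (cx, cy) principal point in pixels
    """
    # Translate the marker into the camera frame, then rotate.
    t = [m - c for m, c in zip(marker_world, cam_pos)]
    p = [sum(cam_rot[i][j] * t[j] for j in range(3)) for i in range(3)]
    # Pinhole projection: divide by depth, scale by focal length.
    x = focal_px * p[0] / p[2] + principal_pt[0]
    y = focal_px * p[1] / p[2] + principal_pt[1]
    return (x, y)
```

The detected 2D locations would then be compared against these predictions; since the incoming position is trusted, the residual is attributed to orientation error and expressed as the angular corrections recited above.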
1 Assignment
0 Petitions
Abstract
Disclosed herein are through-the-lens tracking systems and methods that enable sub-pixel-accurate camera tracking suitable for real-time set extensions. That is, through-the-lens tracking can turn an existing lower-precision camera tracking and compositing system into a real-time VFX system capable of sub-pixel-accurate real-time camera tracking. With this enhanced tracking accuracy, the virtual cameras can be used to register and render real-time set extensions for both interior and exterior locations.
34 Citations
31 Claims
1. A hybrid through-the-lens tracking method, comprising:
reading current incoming camera position, orientation, and adjustable field-of-view/zooming and focusing lens information of an actual scene camera, wherein the lens information includes lens position information;
deriving current lens parameters of the actual scene camera using the lens position information;
predicting the 2D locations of actual markers in a live action scene using (a) the current lens parameters, (b) the current incoming camera position and orientation information and (c) the lens position information;
detecting the 2D locations of the actual markers in the live action scene;
the detecting including derivative detecting to find sharp edges of the actual markers, center detecting to find possible centers of circles, and center voting to detect actual centers of the circles;
calculating angular corrections for a virtual camera from the current incoming camera position and orientation information, the predicted 2D locations, and the detected 2D locations; and
using the angular corrections, correcting the orientation of the current incoming camera orientation information of the actual scene camera to match the 2D locations of the actual markers.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17)
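The detecting steps recited here (derivative detecting for sharp edges, center detecting for possible circle centers, center voting for actual centers) resemble a gradient-based, Hough-style circle vote: each edge pixel casts votes for candidate centers along its gradient direction, and the most-voted cell is taken as a circle center. The claim gives no implementation; this is a rough sketch under that assumption, with hypothetical names, and a real detector would also scan over radii and refine centers to sub-pixel precision:

```python
from collections import Counter
import math

def vote_circle_centers(edge_points, radius):
    """Hough-style center voting for circles of a known radius.

    edge_points : list of (x, y, gx, gy) tuples — an edge pixel's position
                  and its image gradient (from the derivative-detecting step)
    radius      : assumed circle radius in pixels
    """
    votes = Counter()
    for x, y, gx, gy in edge_points:
        norm = math.hypot(gx, gy)
        if norm == 0:
            continue  # flat region, no edge direction
        # The gradient points across the edge, so a center lies one
        # radius away along it; step both ways since the marker may be
        # darker or lighter than its background.
        for s in (+1, -1):
            cx = round(x + s * radius * gx / norm)
            cy = round(y + s * radius * gy / norm)
            votes[(cx, cy)] += 1
    # The accumulator cell with the most votes is the detected center.
    return votes.most_common(1)[0][0] if votes else None
```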
18. A hybrid through-the-lens tracking method, comprising:
-
reading current incoming camera position, orientation, and adjustable field-of-view/zooming and focusing lens information of an actual scene camera, wherein the lens information includes lens position information; the current incoming camera position, orientation, and adjustable field-of-view/zooming and focusing lens information being real time information; the lens position information being on a per frame basis; deriving current lens parameters of the actual scene camera using the lens position information; the deriving including using a look-up table that correlates lens position information with optical parameters of the lens of the actual scene camera; predicting the 2D locations of actual markers in a live action scene using (a) the current lens parameters, (b) the current incoming camera position and orientation information and (c) the lens position information; detecting the 2D locations of the actual markers in the live action scene; calculating angular corrections for a virtual camera from the current incoming camera position and orientation information, the predicted 2D locations and the detected 2D locations; and using the angular corrections, correcting the orientation of the current incoming camera orientation information of the actual scene camera to match the 2D locations of the actual markers. - View Dependent Claims (19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31)
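Claim 18's look-up table correlates lens position information (e.g., zoom and focus encoder readings) with optical parameters measured during lens calibration. The claim does not say how intermediate positions are handled; one plausible realization, sketched here with hypothetical names, is linear interpolation between calibrated samples:

```python
from bisect import bisect_left

def lens_params_from_lut(encoder_pos, lut):
    """Interpolate an optical parameter from a lens-calibration LUT.

    lut : list of (encoder_position, focal_length_mm) pairs, sorted by
          encoder position — a hypothetical per-lens calibration table.
    """
    positions = [p for p, _ in lut]
    i = bisect_left(positions, encoder_pos)
    # Clamp to the table's ends rather than extrapolating.
    if i == 0:
        return lut[0][1]
    if i == len(lut):
        return lut[-1][1]
    # Linear interpolation between the two bracketing samples.
    (p0, f0), (p1, f1) = lut[i - 1], lut[i]
    t = (encoder_pos - p0) / (p1 - p0)
    return f0 + t * (f1 - f0)
```

Because the lens position arrives on a per-frame basis, this lookup can run once per frame to keep the virtual camera's field of view synchronized with the physical zoom.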
Specification