Efficient And Accurate 3D Object Tracking
Abstract
A method of tracking an object in an input image stream, the method comprising iteratively applying the steps of: (a) rendering a three-dimensional object model according to a previously predicted state vector from a previous tracking loop or the state vector from an initialisation step; (b) extracting a series of point features from the rendered object; (c) localising corresponding point features in the input image stream; (d) deriving a new state vector from the point feature locations in the input image stream.
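The loop described in the abstract can be sketched in miniature. This is a toy 1-D illustration, not the patented implementation: the "state vector" is a single scalar offset, "rendering" shifts the model points by the state, "localisation" matches each rendered point to the nearest observed point, and the new state is the least-squares (mean-residual) fit. All function names are illustrative assumptions.

```python
# Toy illustration of the claimed loop: render -> localise -> update state.
# State is a scalar translation; in the patent it is a full pose vector.

def render(model_pts, state):
    # (a) project the model according to the current state (a pure shift here)
    return [p + state for p in model_pts]

def localise(rendered_pts, observed_pts):
    # (c) for each rendered feature, find the closest point in the image
    return [min(observed_pts, key=lambda o: abs(o - r)) for r in rendered_pts]

def solve_state(model_pts, matched_pts):
    # (d) least-squares state for a pure translation: the mean residual
    return sum(m - p for m, p in zip(matched_pts, model_pts)) / len(model_pts)

def track(model_pts, frames, init_state):
    state = init_state                          # from the initialisation step
    for observed in frames:                     # one loop iteration per frame
        rendered = render(model_pts, state)     # steps (a) + (b)
        matched = localise(rendered, observed)  # step (c)
        state = solve_state(model_pts, matched) # step (d)
    return state

model = [0.0, 1.0, 2.0]
frames = [[3.0, 4.0, 5.0]]                # object shifted by +3
print(track(model, frames, init_state=2.6))  # → 3.0
```

With a reasonable initial state, one iteration already recovers the true offset; a poor initialisation needs several frames to converge, which is why the claim makes the initialisation step explicit.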
56 Citations
16 Claims
1. A method of tracking an object in an input image stream, said method comprising iteratively applying the steps of:
(a) rendering a three-dimensional object model according to a previously predicted state vector from a previous tracking loop or a state vector from an initialisation step;
(b) extracting a series of point features from said rendered object;
(c) localising corresponding point features in said input image stream;
(d) deriving a new state vector from said localised point features in the input image stream.
(Dependent claims: 2, 3, 4, 5, 6, 7, 12, 13)
8. A method of tracking an object in an input image stream, the method comprising the steps of:
(i) creating a three-dimensional model of said object to be tracked;
(ii) localising initial feature points in an initial image of said input image stream;
(iii) calculating an initial state vector indicative of said object location within said input image stream, wherein said initial state vector is calculated by minimising the square error between said initial localised feature points and corresponding initial feature points of said three-dimensional model projected into the image plane;
(a) rendering a three-dimensional object model, wherein said object model accords with either said predicted state vector calculated in step (d) of a previous iteration or said initial state vector calculated in step (iii), wherein said rendering includes calculating a mask for said input image stream to distinguish between background and foreground pixels;
(b) calculating a predefined number of point features from said rendered object, wherein said predefined number of locations having the highest edginess are selected as features from said rendered image of the previous iteration for the following localisation step;
(c) localising corresponding point features in said input image stream;
(d) calculating a new state vector from said localised point features in said input image stream; and
(e) iteratively performing steps (a) through (d) to provide at each iteration an updated state vector from said localised point features.
(Dependent claims: 9, 10, 11)
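Step (b) of claim 8 selects the predefined number of locations with the highest "edginess" as point features. The sketch below approximates edginess by a horizontal finite-difference gradient on a toy row-major grayscale image; the claim does not fix a particular edge measure, so that choice, and the function name, are assumptions.

```python
# Sketch of claim 8, step (b): keep the n locations with the strongest edges
# in the rendered image as the features to localise in the next frame.

def top_edgy_points(image, n):
    """Return the n (row, col) locations with the largest |horizontal gradient|."""
    scores = []
    for r, row in enumerate(image):
        for c in range(len(row) - 1):
            edginess = abs(row[c + 1] - row[c])  # simple finite difference
            scores.append((edginess, (r, c)))
    scores.sort(reverse=True)                    # strongest edges first
    return [loc for _, loc in scores[:n]]

img = [
    [0, 0, 9, 9],   # strong edge between columns 1 and 2
    [0, 0, 0, 0],   # flat row: no edges
]
print(top_edgy_points(img, 1))  # → [(0, 1)]
```

Selecting features on the rendered image, rather than the camera image, is what ties the feature budget to the model's visible geometry at the predicted pose.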
14. A system for tracking an object in an input image stream, the system comprising a processor adapted to receive an input image stream, said processor being further adapted to perform the steps of:
(i) creating a three-dimensional model of said object to be tracked;
(ii) localising initial feature points in an initial image of said input image stream;
(iii) calculating an initial state vector indicative of said object location within said input image stream, wherein said initial state vector is calculated by minimising the square error between said initial localised feature points and corresponding initial feature points of said three-dimensional model projected into the image plane;
(a) rendering a three-dimensional object model, wherein said object model accords with either said predicted state vector calculated in step (d) of a previous iteration or said initial state vector calculated in step (iii), wherein said rendering includes calculating a mask for said input image stream to distinguish between background and foreground pixels;
(b) calculating a predefined number of point features from said rendered object, wherein said predefined number of locations having the highest edginess are selected as features from said rendered image of the previous iteration for the following localisation step;
(c) localising corresponding point features in said input image stream;
(d) calculating a new state vector from said localised point features in said input image stream; and
(e) iteratively performing steps (a) through (d) to provide at each iteration an updated state vector from said localised point features.
(Dependent claims: 15, 16)
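Step (a) of claims 8 and 14 requires the rendering to produce a mask separating foreground (object) pixels from background. A minimal sketch, assuming a bounding-box rasterisation of the projected model points; the actual rendering-based silhouette computation, and all names here, are illustrative rather than the patent's method.

```python
# Sketch of the step-(a) mask: 1 where the rendered object lands, else 0, so
# later feature localisation can ignore background pixels.

def foreground_mask(width, height, projected_pts):
    """Binary mask from the bounding box of the projected model points."""
    xs = [x for x, _ in projected_pts]
    ys = [y for _, y in projected_pts]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    return [[1 if x0 <= x <= x1 and y0 <= y <= y1 else 0
             for x in range(width)]
            for y in range(height)]

mask = foreground_mask(5, 3, [(1, 0), (3, 1)])
print(mask)  # → [[0, 1, 1, 1, 0], [0, 1, 1, 1, 0], [0, 0, 0, 0, 0]]
```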
Specification