Method and apparatus for replacing target zones in a video sequence
Abstract
The method replaces a target billboard in the frames of a video sequence and is usable while the billboard undergoes changes of position in the scene. Assuming that the speed of change is constant, a global transformation from a reference image stored in memory is predicted. A modified image is generated by applying the transformation to the reference image, and the prediction is adjusted by a global analysis of the image. The representation of the billboard is then recovered from its color, and its boundaries are extracted by segmentation. Finally, the representation of the billboard is replaced with the representation of the model, after the model has been warped by a transformation.
14 Claims
1. A method for replacing a representation of a target zone with a pattern in successive images of a video sequence, suitable for use while the target zone has a position with respect to a background scene which changes during the video sequence, comprising, for each of successive images of the video sequence, the steps of:
(a) assuming that said position changes at constant speed, determining a global transformation predicted from a stored reference image, generating a modified image by applying the global transformation to the reference image, and adjusting the predicted transformation through a global analysis of the image;
(b) recognizing said representation of the target zone from a colour thereof and extracting boundaries thereof by segmentation; and
(c) verifying a coherence of the recognition, subjecting said pattern to a transformation responsive to the representation recognized at step (b) and substituting the representation of the target zone by the transformed pattern.
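Step (a)'s constant-speed assumption can be sketched as linear extrapolation of the global transform's parameters. The 2x3 affine parameterization and all names below are illustrative assumptions; the claim does not fix a parameterization:

```python
# Sketch of step (a)'s constant-speed prediction, assuming the global
# transformation is parameterized as a 2x3 affine matrix (an assumption;
# the claims do not specify one). Each parameter is extrapolated linearly
# from its last observed per-frame change.

def predict_transform(prev, velocity):
    """Predict the next global transform from the previous one and the
    per-parameter change observed between the last two frames."""
    return [[p + v for p, v in zip(rp, rv)] for rp, rv in zip(prev, velocity)]

def update_velocity(prev, adjusted):
    """After the global analysis adjusts the prediction, record the new
    per-frame change for the next prediction."""
    return [[a - p for p, a in zip(rp, ra)] for rp, ra in zip(prev, adjusted)]

# Example: the target drifts 1 pixel right and 2 pixels down per frame.
prev = [[1.0, 0.0, 10.0], [0.0, 1.0, 20.0]]
vel = [[0.0, 0.0, 1.0], [0.0, 0.0, 2.0]]
pred = predict_transform(prev, vel)   # translation becomes (11, 22)
```

Per the claim, this prediction is then refined by a global analysis of the current image before the velocity is updated.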
2. A method according to claim 1, further comprising local tracking steps of:
(d1) computing a predicted local transform;
(d2) refining an estimation of interest points in the target zone using a 2-D correlation around the interest points; and
computing a geometric transform relating positions of the interest points in the reference image to their position in the current image of the video sequence.
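The geometric transform relating interest-point positions can be computed, for example, as a least-squares affine fit over the matched points (an illustrative choice; the claim does not name the transform family):

```python
# Least-squares estimation of a 2-D affine transform from interest-point
# correspondences, as one possible realization of the step above.
import numpy as np

def fit_affine(ref_pts, cur_pts):
    """2x3 affine matrix mapping reference-image points onto their
    positions in the current image; both inputs are (N, 2), N >= 3."""
    ref = np.asarray(ref_pts, dtype=float)
    cur = np.asarray(cur_pts, dtype=float)
    A = np.hstack([ref, np.ones((len(ref), 1))])   # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, cur, rcond=None)    # solve A @ M ~= cur
    return M.T                                      # shape (2, 3)

def apply_affine(M, pts):
    """Apply a 2x3 affine matrix to an (N, 2) array of points."""
    pts = np.asarray(pts, dtype=float)
    return pts @ M[:, :2].T + M[:, 2]
```

An over-determined set of points makes the fit robust to small localization errors in individual interest points.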
3. A method according to claim 2, wherein step (d1) makes use of Kalman filtering on successive images.
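A minimal sketch of such Kalman filtering on a single transform parameter, using a constant-velocity state model; the noise covariances `q` and `r` are illustrative assumptions, as the claim does not specify the filter design:

```python
# Constant-velocity Kalman filter for one transform parameter.
# State is [value, velocity]; only the value is observed each frame.
import numpy as np

def kalman_cv(measurements, q=1e-4, r=1e-2):
    """Filter a sequence of noisy parameter measurements."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])               # observation: value only
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    filtered = []
    for z in measurements:
        x = F @ x                             # predict
        P = F @ P @ F.T + q * np.eye(2)
        S = H @ P @ H.T + r                   # innovation covariance
        K = P @ H.T / S                       # Kalman gain
        x = x + (K * (z - H @ x)).ravel()     # update with the measurement
        P = (np.eye(2) - K @ H) @ P
        filtered.append(float(x[0]))
    return filtered
```

Because the dynamics model matches the constant-speed assumption of step (a), the filter tracks a uniformly drifting parameter with little lag.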
4. A method according to claim 2, wherein step (d2) is performed using a correlation of neighborhoods of several representations of the reference image at different scales.
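Refinement by 2-D correlation around an interest point might look like the following single-scale search; the multi-scale variant of claim 4 would repeat it on downsampled copies of both images. Names and window handling are illustrative:

```python
# Exhaustive normalized cross-correlation of a reference patch against a
# neighborhood of the current image, returning the best (dy, dx) offset.
import numpy as np

def best_offset(ref_patch, cur_img, center, radius):
    """Refine an interest point by 2-D correlation around `center`."""
    h, w = ref_patch.shape
    cy, cx = center
    best, best_off = -np.inf, (0, 0)
    rp = ref_patch - ref_patch.mean()
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = cy + dy, cx + dx
            if y < 0 or x < 0:
                continue
            win = cur_img[y:y + h, x:x + w]
            if win.shape != (h, w):
                continue                      # window falls off the image
            wv = win - win.mean()
            denom = np.sqrt((rp ** 2).sum() * (wv ** 2).sum())
            score = (rp * wv).sum() / denom if denom else -np.inf
            if score > best:
                best, best_off = score, (dy, dx)
    return best_off
```

A coarse-to-fine wrapper would run this on 2x-downsampled copies first, then refine the resulting offset at full resolution with a small radius.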
5. A method according to claim 2, comprising estimating the position and appearance of the representation of the target zone in each current image during step (c).
6. The method of claim 5, wherein, when quality of information obtained by pattern recognition is unsatisfactory, estimation is made using the last correct estimation provided by pattern recognition and data provided by local tracking.
7. The method of claim 6, wherein estimation is made using information provided by pattern recognition after it is found that quality of information obtained by said pattern recognition is satisfactory.
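Claims 5-7 amount to a quality-gated choice between the two estimates; a minimal sketch, where the names and the additive motion model are assumptions:

```python
# Quality-gated fusion of the pattern-recognition estimate with the
# locally tracked motion, as suggested by claims 6 and 7.

def select_estimate(recognized, quality, last_good, tracked_delta, threshold):
    """Keep the pattern-recognition estimate when its quality is
    satisfactory; otherwise propagate the last correct estimate with
    the motion reported by local tracking."""
    if quality >= threshold:
        return list(recognized)
    return [g + d for g, d in zip(last_good, tracked_delta)]
```

The gate prevents a momentary recognition failure (for example, an occlusion) from corrupting the substitution, while letting recognition take over again once its quality recovers.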
8. A method according to claim 1, wherein a specific color is assigned to the target zone and identification thereof in the current image is from said specific color and its shape.
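Identification from a specific color can be sketched as per-channel thresholding followed by a crude boundary extraction; the tolerance handling and bounding-box simplification are illustrative assumptions:

```python
# Colour-keyed detection of the target zone (claim 8 sketch): pixels within
# a per-channel tolerance of the assigned key colour, then a bounding box
# standing in for the segmentation-based boundary extraction.
import numpy as np

def target_mask(img_rgb, key_color, tol):
    """Boolean mask of pixels matching the key colour within tol per channel."""
    diff = np.abs(img_rgb.astype(int) - np.asarray(key_color, dtype=int))
    return (diff <= tol).all(axis=-1)

def bounding_box(mask):
    """Corners (ymin, xmin, ymax, xmax) of the detected zone, or None."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())
```

A shape test (claim 9's rectangle, up to the perspective distortion) applied to the mask would then reject spurious colour matches elsewhere in the scene.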
9. A method according to claim 8, wherein the target zone is of rectangular shape when not distorted.
10. A method according to claim 9, wherein said rectangular shape has a format of 4×3 or 16×9 and the pattern originates from a video camera or a V.C.R.
11. The method of claim 1, wherein the stored reference image used for predicting said global transformation is periodically refreshed by storing the current image of the video sequence responsive to any one of the following situations:
when the current image is a first image to be processed in the video sequence;
when the current image becomes different from the reference image in excess of a predetermined amount;
when the reference image is older than a predetermined time.
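The three refresh situations reduce to a simple predicate; the threshold names and the scalar difference measure are illustrative assumptions:

```python
# Reference-image refresh decision sketch for claim 11. `difference` stands
# in for whatever image-difference measure claim 12 contemplates (motion,
# focal-length change, occluding obstacles).

def should_refresh(frame_index, frame_age, difference, max_age, max_diff):
    """True when any of the three listed refresh situations holds."""
    return (frame_index == 0            # first image to be processed
            or difference > max_diff    # drifted too far from the reference
            or frame_age > max_age)     # reference older than allowed
```

Refreshing keeps the global-analysis adjustment of step (a) well conditioned, since the prediction is always made from a reasonably recent reference.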
12. The method of claim 11, wherein said predetermined amount relates to motion or change in focal length exceeding a predefined threshold or the presence of occluding obstacles.
13. The method of claim 11, comprising re-sampling the reference image after computation of a predicted transformation by:
applying the predicted transformation in order to obtain a deformed and shifted image;
roughly estimating a translation from the reference image to the current image, using a correlation process;
re-sampling the reference image using the available rough translation estimate rather than the prediction; and
evaluating the transformation required for passage from the reference image to the current image using several iterations based on a gradient approach.
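The final gradient-based iteration can be sketched as a Gauss-Newton loop, reduced here to a pure translation with whole-pixel resampling for brevity; the claim's general transformation would need more parameters, and all names are illustrative:

```python
# Iterative gradient-based estimation of the translation taking `ref`
# onto `cur` (claim 13 sketch). Each iteration resamples the reference
# with the current estimate and solves the normal equations for a
# correction (dy, dx) from the image gradients and the residual.
import numpy as np

def refine_translation(ref, cur, iters=20):
    ty, tx = 0.0, 0.0
    for _ in range(iters):
        # Resample the reference with the current estimate (the claim's
        # "deformed and shifted image", reduced to an integer shift).
        shifted = np.roll(np.roll(ref, int(round(ty)), 0), int(round(tx)), 1)
        gy, gx = np.gradient(shifted)
        err = shifted - cur                  # residual explained by (dy, dx)
        A = np.array([[(gy * gy).sum(), (gy * gx).sum()],
                      [(gy * gx).sum(), (gx * gx).sum()]])
        b = np.array([(gy * err).sum(), (gx * err).sum()])
        dy, dx = np.linalg.solve(A, b)
        ty, tx = ty + dy, tx + dx
        if abs(dy) < 1e-3 and abs(dx) < 1e-3:
            break
    return ty, tx
```

The rough correlation-based translation of the preceding step would seed `ty, tx`, so the gradient iterations only have to recover the residual sub-pixel motion.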
14. An apparatus for replacing a representation of a target zone with a representation of a pattern in successive images of a video sequence, suitable for use while the target zone has a position with respect to a background scene which changes during the video sequence, comprising a plurality of micro-computers programmed respectively for:
(a) assuming that said position changes at constant speed, determining a global transformation predicted from a stored reference image, generating a modified image by applying the global transformation to the reference image, and adjusting the predicted transformation through a global analysis of the image;
(b) recognizing said representation of the target zone from a colour thereof and extracting boundaries thereof by segmentation; and
(c) verifying a coherence of the recognition, subjecting said pattern to a transformation responsive to the representation recognized at step (b) and substituting the representation of the target zone by the transformed pattern.
Specification