Visualizing video within existing still images
First Claim
1. A computing device comprising:
- one or more processing units; and
- one or more computer-readable storage media comprising computer-executable instructions which, when executed by the one or more processing units, cause the computing device to:
obtain a sample of a video image feed;
select one or more pre-existing still images whose image scope comprises at least a portion of an image scope of the obtained sample of the video image feed, the selecting being informed by location metadata associated with both the video image feed and the still images;
derive an average image comprising those elements that remain static throughout the sample of the video image feed;
derive a motion mask identifying areas in which elements move throughout the sample of the video image feed;
identify image features in the average image that are along the motion mask, having corresponding image features in the selected one or more still images; and
derive transformation parameters to transform and align the video image feed such that the identified image features in the sample of the video image feed, after transformation and alignment, have a visual size and visual appearance equivalent to that of the corresponding image features in the selected one or more still images, and can be equivalently overlaid over the corresponding image features in the selected one or more still images.
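The final step of the claim — deriving transformation parameters that make features in the video sample line up in size and position with corresponding features in the still image — can be illustrated as a least-squares fit of a similarity transform (uniform scale, rotation, translation) to matched feature points. This is only a sketch of one plausible approach, not the patent's actual method; the point pairs below are fabricated for illustration, whereas a real system would obtain them by matching features between the average image and the still image.

```python
import numpy as np

def fit_similarity_transform(src_pts, dst_pts):
    """Least-squares fit of a similarity transform (uniform scale, rotation,
    translation) mapping src_pts onto dst_pts.

    Each point maps as:  x' = a*x - b*y + tx,  y' = b*x + a*y + ty,
    which is linear in the parameters (a, b, tx, ty).
    Returns the 2x3 transformation matrix [[a, -b, tx], [b, a, ty]].
    """
    src = np.asarray(src_pts, dtype=np.float64)
    dst = np.asarray(dst_pts, dtype=np.float64)
    n = len(src)
    A = np.zeros((2 * n, 4))
    rhs = np.zeros(2 * n)
    A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)])
    rhs[0::2] = dst[:, 0]
    rhs[1::2] = dst[:, 1]
    a_, b_, tx, ty = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return np.array([[a_, -b_, tx], [b_, a_, ty]])

# Illustrative matched features: video-frame coordinates vs. still-image
# coordinates, where the still image is at twice the scale and shifted.
video_pts = [(0, 0), (10, 0), (10, 10), (0, 10)]
still_pts = [(5, 7), (25, 7), (25, 27), (5, 27)]
M = fit_similarity_transform(video_pts, still_pts)
print(np.round(M, 6))  # scale 2, no rotation, translation (5, 7)
```

With the parameters recovered, every frame of the feed can be mapped into still-image coordinates so that the shared features overlay equivalently, as the claim requires.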
Abstract
Video from a video camera can be integrated into a still image, with which it shares common elements, to provide greater context and understandability. Pre-processing can derive transformation parameters for transforming and aligning the video to be integrated into the still image in a visually fluid manner. The transformation parameters can then be utilized to transform and align the video in real-time and display it within the still image. Pre-processing can comprise stabilization of video, if the video camera is moveable, and can comprise identification of areas of motion and of static elements. Transformation parameters can be derived by fitting the static elements of the video to portions of one or more existing images. Display of the video in real-time in the still image can include display of the entire transformed and aligned video image, or of only selected sections, to provide for a smoother visual integration.
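The pre-processing the abstract describes — separating static elements from areas of motion — can be sketched in plain NumPy: averaging the sampled frames recovers the static background, and thresholding per-pixel variation yields a motion mask. The toy frame data and the threshold value below are illustrative assumptions, not details from the patent.

```python
import numpy as np

def derive_average_and_motion_mask(frames, motion_threshold=10.0):
    """Split a video sample into a static 'average image' and a motion mask.

    frames: array of shape (n_frames, height, width) of grayscale intensities.
    Returns (average_image, motion_mask), where motion_mask is True wherever
    pixel intensity varies enough across the sample to indicate movement.
    """
    stack = np.asarray(frames, dtype=np.float64)
    average_image = stack.mean(axis=0)          # static elements survive averaging
    variation = stack.std(axis=0)               # moving elements vary frame to frame
    motion_mask = variation > motion_threshold  # illustrative threshold
    return average_image, motion_mask

# Toy sample: a static 8x8 scene with a bright blob moving along the top row.
rng = np.random.default_rng(0)
scene = rng.uniform(50, 60, size=(8, 8))
frames = np.stack([scene.copy() for _ in range(4)])
for t in range(4):
    frames[t, 0, t] = 255.0  # the "moving" element

avg, mask = derive_average_and_motion_mask(frames)
print(mask[0, :4])   # motion detected along the blob's path
print(mask[4, 4])    # static region: False
```

The average image then supplies stable features for fitting against the still image, while the motion mask marks the regions where live video content is expected.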
33 Citations
20 Claims
1. A computing device comprising:
one or more processing units; and
one or more computer-readable storage media comprising computer-executable instructions which, when executed by the one or more processing units, cause the computing device to:
obtain a sample of a video image feed;
select one or more pre-existing still images whose image scope comprises at least a portion of an image scope of the obtained sample of the video image feed, the selecting being informed by location metadata associated with both the video image feed and the still images;
derive an average image comprising those elements that remain static throughout the sample of the video image feed;
derive a motion mask identifying areas in which elements move throughout the sample of the video image feed;
identify image features in the average image that are along the motion mask, having corresponding image features in the selected one or more still images; and
derive transformation parameters to transform and align the video image feed such that the identified image features in the sample of the video image feed, after transformation and alignment, have a visual size and visual appearance equivalent to that of the corresponding image features in the selected one or more still images, and can be equivalently overlaid over the corresponding image features in the selected one or more still images.
- View Dependent Claims (2, 3, 4, 5, 6, 7)
8. A computing device comprising:
one or more processing units; and
one or more computer-readable storage media comprising computer-executable instructions which, when executed by the one or more processing units, cause the computing device to:
transform and align a video image feed, while it is being received, utilizing transformation parameters; and
generate a continuously updated amalgamated image comprising at least a portion of an image scope of the transformed and aligned video image feed overlaid over at least one pre-existing still image, which was selected based on location metadata associated with both the video image feed and the at least one pre-existing still image, such that the portion of the image scope of the transformed and aligned video image feed is equivalent to an image scope of that portion of the at least one pre-existing still image over which the transformed and aligned video image feed is overlaid, wherein the transformed and aligned video image feed comprises image features having a visual size and appearance equivalent to that of corresponding image features in the at least one pre-existing still image;
wherein the transformation parameters are derived based on image features in an average image that are along a motion mask, the image features having corresponding image features in the at least one pre-existing still image, wherein the average image comprises those elements that remain static throughout a sample of the video image feed, and wherein further the motion mask identifies areas in which elements move throughout the sample of the video image feed.
- View Dependent Claims (9, 10, 11, 12, 19)
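Claim 8's runtime step — applying the pre-computed transformation to each incoming frame and compositing it over the still image as a continuously updated amalgamated image — can be sketched as a per-frame overlay. The 2x3 matrix, coordinates, and nearest-sample warping below are illustrative simplifications; a production implementation would interpolate, handle colour channels, and, per the dependent claims, may restrict compositing to the motion-mask region for smoother integration.

```python
import numpy as np

def amalgamate(still, frame, M, motion_mask=None):
    """Overlay a transformed video frame onto a pre-existing still image.

    M is a 2x3 affine matrix mapping frame coordinates (x, y) into
    still-image coordinates. If motion_mask is given, only masked (moving)
    pixels of the frame are composited; otherwise the whole transformed
    frame is overlaid. Nearest-sample warping for simplicity.
    """
    out = still.astype(np.float64).copy()
    A = np.vstack([M, [0.0, 0.0, 1.0]])   # lift to 3x3 for inversion
    Ainv = np.linalg.inv(A)
    h_s, w_s = out.shape
    h_f, w_f = frame.shape
    for ys in range(h_s):
        for xs in range(w_s):
            # Map each still-image pixel back into frame coordinates.
            xf, yf, _ = Ainv @ np.array([xs, ys, 1.0])
            xi, yi = int(np.floor(xf)), int(np.floor(yf))
            if 0 <= xi < w_f and 0 <= yi < h_f:
                if motion_mask is None or motion_mask[yi, xi]:
                    out[ys, xs] = frame[yi, xi]
    return out

# Illustrative 2x upscale placing a 4x4 frame into an 8x8 still image.
still = np.zeros((8, 8))
frame = np.full((4, 4), 200.0)
M = np.array([[2.0, 0.0, 0.0], [0.0, 2.0, 0.0]])  # scale 2, no offset
result = amalgamate(still, frame, M)
print(result[0, 0], result[6, 6])  # → 200.0 200.0
```

Repeating this per frame as the feed arrives yields the continuously updated amalgamated image the claim recites.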
13. A method for generating an amalgamated image comprising a video image feed visually integrated into a pre-existing still image, the method comprising the steps of:
obtaining a sample of the video image feed;
selecting the pre-existing still image based upon its image scope comprising at least a portion of an image scope of the obtained sample of the video image feed, the selecting being informed by location metadata associated with both the video image feed and the still image;
deriving an average image comprising those elements that remain static throughout the sample of the video image feed;
deriving a motion mask identifying areas in which elements move throughout the sample of the video image feed;
identifying image features in the average image that are along the motion mask, having corresponding image features in the pre-existing still image; and
deriving transformation parameters to transform and align the video image feed such that the identified image features in the sample of the video image feed, after transformation and alignment, have a visual size and appearance equivalent to that of the corresponding image features in the pre-existing still image and can be equivalently overlaid over the corresponding image features in the selected still image.
- View Dependent Claims (14, 15, 16, 17, 18, 20)
Specification