Generating a motion attention model
Abstract
Systems and methods to generate a motion attention model of a video data sequence are described. In one aspect, a motion saliency map B is generated to precisely indicate motion attention areas for each frame in the video data sequence. The motion saliency maps are each based on intensity I, spatial coherence Cs, and temporal coherence Ct values. These values are computed from each block or pixel of motion fields extracted from the video data sequence. Brightness values of detected motion attention areas in each frame are accumulated to generate, with respect to time, the motion attention model.
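The abstract does not fix how the intensity I, spatial coherence Cs, and temporal coherence Ct brightness maps are computed from the motion fields. The sketch below is one plausible reading, using motion-vector magnitude for intensity and entropy of motion direction for the two coherence measures; the `phase_entropy` helper, window size, and histogram bin count are illustrative assumptions, not the claimed method.

```python
import numpy as np

def phase_entropy(angles, bins=8):
    """Normalized entropy of motion-vector directions: 0 = fully coherent,
    1 = directions spread evenly over all bins. (Illustrative choice.)"""
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(bins))

def coherence_maps(flows, win=2):
    """flows: (T, H, W, 2) motion fields, one (dy, dx) vector per block MBij.
    Returns, for the last frame: intensity I, spatial coherence Cs, and
    temporal coherence Ct maps, each a brightness value in [0, 1]."""
    flow = flows[-1]
    mag = np.hypot(flow[..., 0], flow[..., 1])
    I = mag / max(float(mag.max()), 1e-9)           # motion intensity map
    ang = np.arctan2(flows[..., 0], flows[..., 1])  # (T, H, W) directions
    T, H, W = ang.shape
    Cs = np.zeros((H, W))
    Ct = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            # spatial coherence: direction consistency in a spatial window
            nbhd = ang[-1, max(i - win, 0):i + win + 1,
                           max(j - win, 0):j + win + 1]
            Cs[i, j] = 1.0 - phase_entropy(nbhd.ravel())
            # temporal coherence: direction consistency across frames
            Ct[i, j] = 1.0 - phase_entropy(ang[:, i, j])
    return I, Cs, Ct
```

A uniform motion field (e.g., a steady pan) yields maximal coherence on both maps, while randomly oriented block motion drives Cs and Ct toward zero.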
17 Claims
1. A method implemented at least in part by a computing device for generating a motion attention model of a video data sequence, the method comprising:

generating, by the computing device, a motion saliency map B to precisely indicate motion attention areas for each frame in the video data sequence, the motion saliency map being based on intensity I, spatial coherence Cs, and temporal coherence Ct values from each location of a block MBij in motion fields extracted from the video data sequence; and

accumulating brightness associated with each of the intensity I, the spatial coherence Cs, and the temporal coherence Ct values of detected motion attention areas to generate, with respect to time, a motion attention model for the video data sequence, wherein:

the intensity I values are motion intensity values each represented on a motion intensity map by a brightness associated with each location of the block MBij;

the spatial coherence Cs values are each represented on a spatial coherence map by a brightness associated with each location of the block MBij;

the temporal coherence Ct values are each represented on a temporal coherence map by a brightness associated with each location of the block MBij; and

a combination of these values is represented on the motion saliency map B by a brightness of the detected motion attention areas associated with each location of the block MBij.

Dependent claims: 2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15.
4. A computing device for generating a motion attention model of a video data sequence, the computing device comprising:
a motion attention modeling module for:

generating a motion saliency map B to precisely indicate motion attention areas for each frame in the video data sequence, the motion saliency map being based on intensity I, spatial coherence Cs, and temporal coherence Ct values from each location of a block MBij in motion fields extracted from the video data sequence, wherein the saliency map B is calculated according to B = I × Ct × (1 − I × Cs); and

accumulating brightness represented by the intensity I, the spatial coherence Cs, and the temporal coherence Ct values of detected motion attention areas to generate, with respect to time, a motion attention model for the video data sequence.
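The combination recited in claim 4 can be written directly. The snippet below is a minimal sketch of that single formula, with the surrounding detection and accumulation steps omitted:

```python
import numpy as np

def saliency_map(I, Cs, Ct):
    """Motion saliency per block, B = I * Ct * (1 - I * Cs), as recited
    in claim 4. Inputs are per-block brightness values (or maps) in [0, 1]."""
    return I * Ct * (1.0 - I * Cs)
```

One way to read the formula: the (1 − I × Cs) factor suppresses blocks whose strong motion is also spatially coherent (such as a global camera pan), while the Ct factor rewards motion that is consistent over time.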
16. A computer-readable medium storing executable instructions that, when executed by one or more processors, perform a method comprising:

generating a motion saliency map B to precisely indicate motion attention areas for each frame in a video data sequence, the motion saliency map being based on intensity I values represented by a brightness associated with each location on a motion intensity map, spatial coherence Cs values represented by a brightness associated with each location on a spatial coherence map, and temporal coherence Ct values represented by a brightness associated with each location on a temporal coherence map, in motion fields extracted from the video data sequence;

calculating the motion intensity I values, the spatial coherence Cs values, and the temporal coherence Ct values from each location of blocks in the motion fields extracted from the video data sequence; and

accumulating brightness from the motion intensity map, the spatial coherence map, and the temporal coherence map of detected motion attention areas to generate, with respect to time, a motion attention model for the video data sequence.

Dependent claim: 17.
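Claim 16's final step accumulates brightness of detected motion attention areas frame by frame to produce a value over time. How areas are "detected" is not specified in the claim; the sketch below uses a simple relative threshold on each frame's saliency map as a stand-in for that detection step.

```python
import numpy as np

def motion_attention_curve(saliency_maps, threshold=0.5):
    """For each frame's saliency map B, sum the brightness of detected
    motion attention areas (blocks at or above a fraction of the frame's
    peak saliency -- an assumed detection rule) and normalize by map size,
    yielding one attention value per frame: the model over time."""
    curve = []
    for B in saliency_maps:
        attended = B[B >= threshold * B.max()] if B.max() > 0 else B
        curve.append(float(attended.sum()) / B.size)
    return curve
```

Peaks in the resulting curve mark frames with concentrated, salient motion, which is what makes the model useful for downstream tasks such as video summarization.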
Specification