Systems and methods for generating a motion attention model
Abstract
Systems and methods to generate a motion attention model of a video data sequence are described. In one aspect, a motion saliency map B is generated to precisely indicate motion attention areas for each frame in the video data sequence. Each motion saliency map is based on intensity I, spatial coherence Cs, and temporal coherence Ct values computed at each block or pixel of motion fields extracted from the video data sequence. Brightness values of detected motion attention areas in each frame are accumulated to generate, with respect to time, the motion attention model.
84 Citations
Claims (27)
1. A method for generating a motion attention model of a video data sequence, the method comprising:

generating a motion saliency map B to precisely indicate motion attention areas for each frame in the video data sequence, the motion saliency map being based on intensity I, spatial coherence Cs, and temporal coherence Ct values from each location of a block MBij in motion fields extracted from the video data sequence;

accumulating brightness of detected motion attention areas to generate, with respect to time, a motion attention model for the video data sequence; and

wherein I is a normalized magnitude of a motion vector that is calculated according to

I = sqrt(dx_i,j^2 + dy_i,j^2) / MaxMag,

wherein (dx_i,j, dy_i,j) denotes the two components of the motion vector in the motion field, and MaxMag is the maximum magnitude of motion vectors;

wherein the motion attention model M_motion is computed as

M_motion = ( Σ_{r ∈ Λ} Σ_{q ∈ Ω_r} B_q ) / N_MB,

B_q being brightness of a block in the motion saliency map, Λ being the set of detected motion attention areas, Ω_r denoting a set of blocks in each detected motion attention area, and N_MB being a number of blocks in a motion field; and wherein the M_motion value for each frame in the video data sequence represents a continuous motion attention curve with respect to time. (Dependent claims: 5, 6, 7.)
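The intensity inductor of claim 1 is just the motion-vector magnitude normalized by the largest magnitude in the field. A minimal NumPy sketch (the function name `intensity_map` and the zero-motion guard are illustrative additions, not from the patent):

```python
import numpy as np

def intensity_map(dx, dy):
    """Normalized motion-vector magnitude I for each block MBij.

    dx, dy: 2-D arrays holding the two components of the motion
    vector at each block of the motion field.
    Returns I = sqrt(dx^2 + dy^2) / MaxMag, so 0 <= I <= 1.
    """
    mag = np.sqrt(dx ** 2 + dy ** 2)
    max_mag = mag.max()
    if max_mag == 0:          # static frame: no motion anywhere
        return np.zeros_like(mag)
    return mag / max_mag

# Toy 2x2 motion field: one moving block, three static blocks
dx = np.array([[3.0, 0.0], [0.0, 0.0]])
dy = np.array([[4.0, 0.0], [0.0, 0.0]])
I = intensity_map(dx, dy)   # magnitudes [[5,0],[0,0]] -> I = [[1,0],[0,0]]
```

Normalizing by MaxMag makes I comparable across frames with very different absolute motion.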
2. A method for generating a motion attention model of a video data sequence, the method comprising:

generating a motion saliency map B to precisely indicate motion attention areas for each frame in the video data sequence, the motion saliency map being based on intensity I, spatial coherence Cs, and temporal coherence Ct values from each location of a block MBij in motion fields extracted from the video data sequence;

accumulating brightness of detected motion attention areas to generate, with respect to time, a motion attention model for the video data sequence; and

wherein Cs is calculated with respect to a spatial window w as follows:

Cs_i,j = − Σ_{t=1}^{n} p_s(t) log p_s(t),

wherein SHw_i,j(t) is a spatial phase histogram whose probability distribution function is p_s(t), and n is a number of histogram bins.
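Claim 2's spatial coherence is the entropy of the distribution of motion-vector phases (angles) inside a w x w window: uniform directions give low entropy, scattered directions give high entropy. A hedged sketch, assuming the window is centered on the block and that bins span [-pi, pi] (the helper name and boundary handling are illustrative):

```python
import numpy as np

def spatial_coherence(dx, dy, i, j, w=5, n=8):
    """Cs for block (i, j): entropy of the phase histogram SHw
    computed over a w x w spatial window around the block.

    n is the number of histogram bins; p_s is the probability
    distribution of SHw. Coherent motion -> Cs near 0;
    scattered directions -> larger Cs.
    """
    h = w // 2
    win_dx = dx[max(0, i - h):i + h + 1, max(0, j - h):j + h + 1]
    win_dy = dy[max(0, i - h):i + h + 1, max(0, j - h):j + h + 1]
    phase = np.arctan2(win_dy, win_dx)               # angles in [-pi, pi]
    hist, _ = np.histogram(phase, bins=n, range=(-np.pi, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]                                     # 0 * log 0 := 0
    return float(-np.sum(p * np.log(p)))

# Perfectly coherent window: every vector points the same way -> Cs = 0
dx = np.ones((5, 5))
dy = np.zeros((5, 5))
cs = spatial_coherence(dx, dy, 2, 2)
```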
3. A method for generating a motion attention model of a video data sequence, the method comprising:

generating a motion saliency map B to precisely indicate motion attention areas for each frame in the video data sequence, the motion saliency map being based on intensity I, spatial coherence Cs, and temporal coherence Ct values from each location of a block MBij in motion fields extracted from the video data sequence;

accumulating brightness of detected motion attention areas to generate, with respect to time, a motion attention model for the video data sequence; and

wherein Ct is calculated with respect to a sliding window of size L frames along the time t axis as:

Ct_i,j = − Σ_{t=1}^{n} p_t(t) log p_t(t),

wherein THL_i,j(t) is a temporal phase histogram whose probability distribution function is p_t(t), and n is a number of histogram bins.
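Temporal coherence mirrors claim 2's entropy measure, but the phase histogram is gathered for one block over a sliding window of L frames rather than over a spatial neighborhood. A sketch under the same assumptions as above (array layout `(T, H, W)` is an illustrative choice):

```python
import numpy as np

def temporal_coherence(dx_seq, dy_seq, i, j, t, L=5, n=8):
    """Ct for block (i, j) at frame t: entropy of the phase
    histogram THL gathered over a sliding window of L frames
    along the time axis.

    dx_seq, dy_seq: arrays of shape (T, H, W) holding the motion
    field components for every frame.
    """
    t0 = max(0, t - L + 1)
    phase = np.arctan2(dy_seq[t0:t + 1, i, j], dx_seq[t0:t + 1, i, j])
    hist, _ = np.histogram(phase, bins=n, range=(-np.pi, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]                                     # 0 * log 0 := 0
    return float(-np.sum(p * np.log(p)))

# Block (0, 0) moves in a constant direction for 6 frames -> Ct = 0
dx_seq = np.zeros((6, 2, 2))
dx_seq[:, 0, 0] = 1.0
dy_seq = np.zeros((6, 2, 2))
ct = temporal_coherence(dx_seq, dy_seq, 0, 0, t=5)
```

A block whose direction flips every frame would instead yield a multi-bin histogram and a positive Ct.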
4. A method for generating a motion attention model of a video data sequence, the method comprising:

generating a motion saliency map B to precisely indicate motion attention areas for each frame in the video data sequence, the motion saliency map being based on intensity I, spatial coherence Cs, and temporal coherence Ct values from each location of a block MBij in motion fields extracted from the video data sequence;

accumulating brightness of detected motion attention areas to generate, with respect to time, a motion attention model for the video data sequence; and

wherein the method further comprises generating the motion saliency map B according to

B = I × Ct × (1 − I × Cs).
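The combination in claim 4 and the accumulation of claim 1 are both elementwise array operations. A minimal sketch; the thresholded `mask` argument stands in for the detected attention areas (the union of the block sets Ω_r over Λ), since the claims do not specify the detection step itself:

```python
import numpy as np

def saliency_map(I, Cs, Ct):
    """Combine the three inductors: B = I * Ct * (1 - I * Cs)."""
    return I * Ct * (1.0 - I * Cs)

def motion_attention(B, mask):
    """M_motion: accumulated brightness of detected attention areas,
    normalized by the number of blocks N_MB in the motion field.

    mask: boolean map of the detected areas (union of the block
    sets Omega_r over all areas in Lambda).
    """
    return float(B[mask].sum() / B.size)

# One intense, temporally coherent, spatially coherent block out of two
I = np.array([[1.0, 0.0]])
Cs = np.zeros((1, 2))
Ct = np.ones((1, 2))
B = saliency_map(I, Cs, Ct)          # [[1.0, 0.0]]
m = motion_attention(B, B > 0.5)     # 1.0 summed over 2 blocks -> 0.5
```

Note the (1 − I × Cs) factor suppresses regions that are bright but spatially coherent, e.g. global camera pans, so that only genuinely attention-grabbing motion survives.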
8. A computer-readable medium for generating a motion attention model of a video data sequence, the computer-readable medium comprising computer-program instructions executable by a processor for:

generating a motion saliency map B to precisely indicate motion attention areas for each frame in the video data sequence, the motion saliency map being based on intensity I, spatial coherence Cs, and temporal coherence Ct values from each location of a block MBij in motion fields extracted from the video data sequence; and

accumulating brightness of detected motion attention areas to generate, with respect to time, a motion attention model for the video data sequence.
9. A computer-readable medium for generating a motion attention model of a video data sequence, the computer-readable medium comprising computer-program instructions executable by a processor for:

extracting a motion field between a current frame and a next frame of the video data sequence;

determining, at each location of a block MBij, intensity I, spatial coherence Cs, and temporal coherence Ct values from the motion field;

integrating intensity I, spatial coherence Cs, and temporal coherence Ct to generate a motion saliency map B, the motion saliency map precisely indicating motion attention areas in the motion field; and

accumulating brightness of detected motion attention areas to indicate a motion attention degree for the current frame. (Dependent claims: 10, 11, 12, 13, 14, 15, 16, 17, 18.)
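Claim 9's first step, extracting a motion field between consecutive frames, is left unspecified; in MPEG-style video the block motion vectors are typically found by block matching. A hedged sketch of exhaustive block matching (the claims do not prescribe this particular estimator, and the block and search sizes are illustrative):

```python
import numpy as np

def block_matching_flow(cur, nxt, block=8, search=4):
    """Estimate a motion field between the current and next frame:
    for each block x block tile of `cur`, find the displacement
    (within +/- search pixels) that minimizes the sum of absolute
    differences (SAD) in `nxt`.

    Returns dx, dy arrays with one motion vector per block MBij.
    """
    H, W = cur.shape
    bh, bw = H // block, W // block
    dx = np.zeros((bh, bw))
    dy = np.zeros((bh, bw))
    for bi in range(bh):
        for bj in range(bw):
            y, x = bi * block, bj * block
            tile = cur[y:y + block, x:x + block].astype(np.float64)
            best, best_sad = (0, 0), np.inf
            for sy in range(-search, search + 1):
                for sx in range(-search, search + 1):
                    yy, xx = y + sy, x + sx
                    if yy < 0 or xx < 0 or yy + block > H or xx + block > W:
                        continue  # candidate falls outside the frame
                    cand = nxt[yy:yy + block, xx:xx + block].astype(np.float64)
                    sad = np.abs(tile - cand).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (sx, sy)
            dx[bi, bj], dy[bi, bj] = best
    return dx, dy

# Synthetic test: shift the whole scene right 2 px and down 1 px
rng = np.random.default_rng(0)
cur = rng.random((32, 32))
nxt = np.roll(cur, (1, 2), axis=(0, 1))
dx, dy = block_matching_flow(cur, nxt)   # interior blocks: dx=2, dy=1
```

In practice the motion field would come directly from the compressed bitstream's motion vectors or from a dense optical-flow estimator; the exhaustive search here only illustrates the data the later claims consume.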
19. A computing device for generating a motion attention model of a video data sequence that includes multiple frames, the computing device comprising:

a processor; and

a memory coupled to the processor, the memory comprising computer-program instructions executable by the processor for, for each frame in the video data sequence:

(a) extracting a motion field between a current frame and a next frame of the video data sequence;

(b) determining, for each block MBij represented by the motion field, intensity I, spatial coherence Cs, and temporal coherence Ct values to generate a motion saliency map B, the motion saliency map precisely indicating motion attention areas for each frame in the video data sequence; and

(c) accumulating brightness of detected motion attention areas to generate the motion attention model with respect to time. (Dependent claims: 20, 21, 22, 23, 24, 25, 26, 27.)
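Steps (a)-(c) of claim 19, repeated for every frame, yield the continuous motion attention curve of claim 1. A compact end-to-end sketch over precomputed inductor stacks; the brightness threshold used to "detect" attention areas is a hypothetical stand-in, since the claims leave the detection rule open:

```python
import numpy as np

def motion_attention_curve(I_seq, Cs_seq, Ct_seq, thresh=0.5):
    """Per-frame motion attention curve M_motion(t).

    I_seq, Cs_seq, Ct_seq: arrays of shape (T, H, W) holding the
    three inductor values for every block of every frame. For each
    frame the saliency map B = I * Ct * (1 - I * Cs) is formed,
    blocks brighter than `thresh` are treated as the detected
    attention areas, and their brightness is accumulated and
    normalized by the block count N_MB.
    """
    B = I_seq * Ct_seq * (1.0 - I_seq * Cs_seq)
    n_mb = B.shape[1] * B.shape[2]
    mask = B > thresh                         # hypothetical detection rule
    return (B * mask).sum(axis=(1, 2)) / n_mb

# 3 frames of uniformly intense, temporally coherent motion
I_seq = np.ones((3, 2, 2))
Cs_seq = np.zeros_like(I_seq)
Ct_seq = np.ones_like(I_seq)
curve = motion_attention_curve(I_seq, Cs_seq, Ct_seq)   # one value per frame
```

Peaks of this curve mark the frames most likely to attract a viewer's attention, which is what downstream video summarization keys on.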
Specification