BEHAVIOR AND PATTERN ANALYSIS USING MULTIPLE CATEGORY LEARNING
First Claim
1. A video processing system comprising a configuration to:
- receive first training video samples from a plurality of video sensing devices, the first training video samples comprising substantially similar subject matter;
- generate a first training probability density function using features extracted from the first training video samples;
- receive second training video samples from the plurality of video sensing devices, the second training video samples comprising insubstantially similar subject matter; and
- generate a second training probability density function using features extracted from the second training video samples.
Abstract
A video processing system is configured to receive training video samples from a plurality of video sensing devices. The training video samples are sets of paired video samples, and these pairs can contain either substantially similar subject matter or different subject matter. In the training phase, the system samples a pool of patches from the videos and selects the patches with the greatest saliency. Saliency is represented by the conditional probability density function of the similar-subject pairs and the conditional probability density function of the different-subject pairs. During the testing phase, the system applies the patches selected in the training phase and returns the matched subjects.
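The saliency scoring described in the abstract can be sketched as follows: the conditional probability density of a patch's feature distance is estimated separately from similar-subject and different-subject training pairs, and the ratio of the two densities ranks the patch. The Gaussian kernel density estimator, the synthetic distance samples, and all names below are illustrative assumptions, not the patent's actual implementation.

```python
import numpy as np

def kde(samples, x, bw=0.05):
    """Simple Gaussian kernel density estimate of p(x) from 1-D samples."""
    z = (x - samples) / bw
    return float(np.mean(np.exp(-0.5 * z**2)) / (bw * np.sqrt(2.0 * np.pi)))

rng = np.random.default_rng(0)
# Feature distances for one candidate patch: similar-subject pairs cluster
# at small distances, different-subject pairs at larger ones (synthetic data).
dist_similar = rng.normal(0.2, 0.05, 500)
dist_different = rng.normal(0.8, 0.15, 500)

def saliency(d):
    """Log ratio of the two conditional PDFs; large |value| = discriminative patch."""
    return np.log(kde(dist_similar, d) / kde(dist_different, d))

# Small distances are far more likely under the similar-subject PDF,
# large distances under the different-subject PDF.
print(saliency(0.2) > 0)   # True
print(saliency(0.7) < 0)   # True
```

A patch whose two conditional densities are well separated produces saliency scores of large magnitude, which is one plausible reading of "patches with more saliency" in the abstract.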
20 Claims
1. A video processing system comprising a configuration to:
- receive first training video samples from a plurality of video sensing devices, the first training video samples comprising substantially similar subject matter;
- generate a first training probability density function using features extracted from the first training video samples;
- receive second training video samples from the plurality of video sensing devices, the second training video samples comprising insubstantially similar subject matter; and
- generate a second training probability density function using features extracted from the second training video samples.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11)
12. A video processing system comprising a configuration to:
- receive first training video samples, the first training video samples captured by a plurality of video sensing devices, each video sensing device representing a different view of a field of view, each first training video sample comprising a first video sequence and a second video sequence, the first video sequence and the second video sequence comprising substantially similar subject matter captured by a single video sensing device of the plurality of video sensing devices;
- identify a plurality of sub-images in each frame of the first video sequence and the second video sequence, each sub-image in the first video sequence having a corresponding sub-image in the second video sequence;
- extract features from each of the sub-images; and
- generate a first training probability density function for each sub-image and corresponding sub-image as a function of the extracted features; and
- wherein the video processing system further comprises a configuration to:
- receive second training video samples, the second training video samples captured by the plurality of video sensing devices, each second training video sample comprising a first video sequence and a second video sequence, the first video sequence and the second video sequence of the second training video sample comprising insubstantially similar subject matter captured by a single video sensing device of the plurality of video sensing devices;
- identify a plurality of sub-images in each frame of the first and second video sequences of the second training video sample, each sub-image in the first video sequence of the second training video sample having a corresponding sub-image in the second video sequence of the second training video sample;
- extract features from each of the sub-images of the second training video sample; and
- generate a second training probability density function for each sub-image and corresponding sub-image as a function of the extracted features of the second training video samples.
- View Dependent Claims (13, 14, 15, 16, 17)
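The per-frame, per-sub-image accumulation recited in claim 12 can be sketched as follows: for each frame of a paired video sequence, corresponding sub-images are compared and a distance is collected per sub-image location; each location's collected distances then parameterize its own training PDF. The Gaussian (mean, std) parameterization, the mean-intensity feature, and all names are illustrative assumptions.

```python
import numpy as np

def grid_features(frame, grid=2):
    """Mean intensity of each sub-image in a grid x grid split of the frame."""
    h, w = frame.shape
    sh, sw = h // grid, w // grid
    return np.array([frame[r*sh:(r+1)*sh, c*sw:(c+1)*sw].mean()
                     for r in range(grid) for c in range(grid)])

def per_subimage_pdfs(seq_a, seq_b, grid=2):
    """Fit a (mean, std) Gaussian PDF of feature distance per sub-image location."""
    dists = np.array([np.abs(grid_features(fa, grid) - grid_features(fb, grid))
                      for fa, fb in zip(seq_a, seq_b)])  # shape: frames x sub-images
    return dists.mean(axis=0), dists.std(axis=0)

rng = np.random.default_rng(2)
seq_a = [rng.random((32, 32)) for _ in range(10)]
# Same subject captured by the same sensing device: each frame only mildly perturbed.
seq_b = [f + rng.normal(0, 0.01, f.shape) for f in seq_a]

means, stds = per_subimage_pdfs(seq_a, seq_b)
print(means.shape)                  # one PDF per sub-image location: (4,)
print(bool(np.all(means < 0.05)))  # similar pairs -> small mean distances: True
```

Each sub-image location ends up with its own first training PDF; repeating the same accumulation over dissimilar-subject pairs would yield the per-location second training PDF.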
18. An image processing system comprising a configuration to:
- receive first training images from a plurality of video sensing devices, the first training images comprising substantially similar subject matter;
- identify a plurality of sub-images in each first training image;
- generate a first training probability density function using features extracted from the first training images;
- receive second training images from the plurality of video sensing devices, the second training images comprising insubstantially similar subject matter;
- identify a plurality of sub-images in each second training image, each sub-image in the second training image having a corresponding sub-image in the first training images; and
- generate a second training probability density function using features extracted from the second training images;
- wherein the first probability density function is generated by estimating the distance between features for each sub-image of the similar subject; and
- wherein the second probability density function is generated by estimating the distance between features for each sub-image of the insubstantially similar subject.
- View Dependent Claims (19, 20)
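The distance estimation recited in claim 18 can be sketched as follows: each training image is split into sub-images, a feature is extracted from each, and the feature distance between each sub-image and its counterpart in the paired image is measured; similar-subject and dissimilar-subject pairs then feed the two training PDFs. The grid size, the intensity-histogram feature, and the Euclidean distance are illustrative assumptions, not the claimed feature set.

```python
import numpy as np

def sub_images(frame, grid=4):
    """Split a square frame into a grid x grid list of sub-images."""
    h, w = frame.shape
    sh, sw = h // grid, w // grid
    return [frame[r*sh:(r+1)*sh, c*sw:(c+1)*sw]
            for r in range(grid) for c in range(grid)]

def feature(patch, bins=8):
    """Normalized intensity histogram as a simple feature vector."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def pair_distances(frame_a, frame_b, grid=4):
    """Euclidean feature distance for each sub-image and its corresponding sub-image."""
    return [float(np.linalg.norm(feature(p) - feature(q)))
            for p, q in zip(sub_images(frame_a, grid), sub_images(frame_b, grid))]

rng = np.random.default_rng(1)
base = rng.random((64, 64))
similar = np.clip(base + rng.normal(0, 0.02, base.shape), 0, 1)  # same subject, mild noise
different = rng.random((64, 64))                                  # unrelated subject

d_sim = pair_distances(base, similar)
d_diff = pair_distances(base, different)

# Similar-subject pairs yield systematically smaller per-sub-image distances,
# which is what separates the first training PDF from the second.
print(np.mean(d_sim) < np.mean(d_diff))  # True
```

The two resulting distance populations are exactly the inputs a density estimator would need to produce the first and second training probability density functions of the claim.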
Specification