VIDEO SEGMENTATION USING STATISTICAL PIXEL MODELING
First Claim
1. Circuitry adapted to perform a two-pass method of video segmentation for differentiating between foreground and background portions of video, the method comprising the steps of:
obtaining a frame sequence from an input video stream;
executing a first-pass method for each frame of the frame sequence, the first-pass method comprising the steps of:
aligning the frame with a scene model; and
updating a background statistical model; and
finalizing the background statistical model;
executing a second-pass method for each frame of the frame sequence, the second-pass method comprising the steps of:
labeling each region of the frame; and
performing spatial/temporal filtering of the regions of the frame.
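The two-pass structure of this claim can be sketched as follows: the first pass accumulates a per-pixel mean and standard deviation (the finalized background statistical model), and the second pass labels every pixel against that model. This is a minimal illustration, not the patented implementation: scene-model alignment and spatial/temporal filtering are omitted, and the k-sigma test with its threshold is an assumption the claim itself does not fix.

```python
from statistics import mean, pstdev

def two_pass_segment(frames, k=2.5):
    """Sketch of the two-pass method: pass 1 accumulates per-pixel
    statistics and finalizes the background model (mean and standard
    deviation per pixel); pass 2 labels every pixel of every frame
    against that finalized model (1 = foreground, 0 = background)."""
    h, w = len(frames[0]), len(frames[0][0])
    # Pass 1: build and finalize the background statistical model.
    bg_mean = [[0.0] * w for _ in range(h)]
    bg_std = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            samples = [f[y][x] for f in frames]
            bg_mean[y][x] = mean(samples)
            bg_std[y][x] = pstdev(samples)
    # Pass 2: k-sigma test against the model (the 1.0 floor on the
    # standard deviation guards pixels that never varied in pass 1).
    masks = []
    for f in frames:
        masks.append([[1 if abs(f[y][x] - bg_mean[y][x]) > k * max(bg_std[y][x], 1.0)
                       else 0
                       for x in range(w)]
                      for y in range(h)])
    return masks
```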
Abstract
A method for segmenting video data into foreground and background (324) portions utilizes statistical modeling of the pixels. A statistical model of the background is built for each pixel, and each pixel in an incoming video frame is compared (326) with the background statistical model for that pixel. Pixels are determined to be foreground or background based on the comparisons. The method for segmenting video data may be further incorporated into a method for implementing an intelligent video surveillance system. The method for segmenting video data may be implemented in hardware.
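The per-pixel comparison described in the abstract reduces to a statistical distance test. A minimal sketch, assuming the model stores a mean and standard deviation per pixel and a k-sigma threshold (the abstract does not fix the specific test or threshold):

```python
def is_foreground(value, bg_mean, bg_std, k=3.0):
    """Label one pixel against its background statistical model.

    A pixel is declared foreground when it deviates from the model
    mean by more than k standard deviations; the floor on bg_std
    guards static pixels whose observed variance is zero."""
    return abs(value - bg_mean) > k * max(bg_std, 1.0)
```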
76 Claims
1. Circuitry adapted to perform a two-pass method of video segmentation for differentiating between foreground and background portions of video, the method comprising the steps of:
obtaining a frame sequence from an input video stream;
executing a first-pass method for each frame of the frame sequence, the first-pass method comprising the steps of:
aligning the frame with a scene model; and
updating a background statistical model; and
finalizing the background statistical model;
executing a second-pass method for each frame of the frame sequence, the second-pass method comprising the steps of:
labeling each region of the frame; and
performing spatial/temporal filtering of the regions of the frame.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 66
14. Circuitry adapted to perform a one-pass method of video segmentation for differentiating between foreground and background portions of video, the method comprising the steps of:
obtaining a frame sequence from a video stream; and
for each frame in the frame sequence, performing the following steps:
aligning the frame with a scene model;
building a background statistical model;
labeling the regions of the frame; and
performing spatial/temporal filtering.

Dependent claims: 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32
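Claim 14's one-pass variant builds the background statistical model while the frames are being labeled, rather than in a separate pass. A minimal sketch using an exponentially weighted running mean and variance per pixel; the learning rate, threshold, and initial variance are illustrative assumptions, and alignment and filtering are again omitted:

```python
def one_pass_segment(frames, alpha=0.05, k=2.5, init_std=20.0):
    """One-pass sketch: label each pixel against the current running
    background model, and fold background-labeled samples back into
    the model as frames arrive (1 = foreground, 0 = background)."""
    h, w = len(frames[0]), len(frames[0][0])
    # Initialize the model from the first frame with a wide variance.
    bg_mean = [[float(frames[0][y][x]) for x in range(w)] for y in range(h)]
    bg_var = [[init_std ** 2] * w for _ in range(h)]
    masks = []
    for f in frames:
        mask = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                d = f[y][x] - bg_mean[y][x]
                if d * d > (k * k) * bg_var[y][x]:
                    mask[y][x] = 1  # foreground: model left unchanged
                else:
                    # Background sample: update the running model.
                    bg_mean[y][x] += alpha * d
                    bg_var[y][x] = (1 - alpha) * bg_var[y][x] + alpha * d * d
        masks.append(mask)
    return masks
```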
33. Circuitry adapted to perform a one-pass method of video segmentation for differentiating between foreground and background portions of video, the method comprising the steps of:
obtaining a frame sequence from a video stream; and
for each frame in the frame sequence, performing the following steps:
aligning the frame with a scene model;
building a background statistical model and a secondary statistical model;
labeling the regions of the frame; and
performing spatial/temporal filtering.

Dependent claims: 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52
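Claim 33 adds a secondary statistical model alongside the background model. One common role for such a secondary model, assumed here purely for illustration since this excerpt does not spell out the mechanism, is to track a persistent new value at a pixel (e.g. a newly parked car) and promote it to the background once it has proven stable:

```python
def step(value, bg, sec, tol=15.0, promote_after=30):
    """Per-pixel sketch of a background + secondary model pair.
    bg and sec are [mean, sample_count] pairs; sec may be None.
    Returns (label, bg, sec) where label is 1 for foreground.
    Tolerance and promotion count are hypothetical values."""
    if abs(value - bg[0]) <= tol:
        # Matches the background model: update it, label background.
        bg[0] += (value - bg[0]) / (bg[1] + 1)
        bg[1] += 1
        return 0, bg, sec
    if sec is not None and abs(value - sec[0]) <= tol:
        # Matches the secondary model: update it, and promote it to
        # background once it has absorbed enough consistent samples.
        sec[0] += (value - sec[0]) / (sec[1] + 1)
        sec[1] += 1
        if sec[1] >= promote_after:
            return 1, sec, None
        return 1, bg, sec
    # Matches neither model: start a fresh secondary model.
    return 1, bg, [float(value), 1]
```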
53. Circuitry adapted to perform a two-pass method of video segmentation for differentiating between foreground and background portions of video, the method comprising the steps of:
obtaining a frame sequence from an input video stream;
executing a first-pass method for each frame of the frame sequence, the first-pass method comprising the steps of:
aligning the frame with a scene model; and
updating a background statistical model, the background statistical model comprising values corresponding to regions of frames of the frame sequence and variances for the regions;
finalizing the background statistical model; and
executing a second-pass method for each frame of the frame sequence, the second-pass method comprising the steps of:
labeling each region of the frame; and
performing spatial/temporal filtering of the regions of the frame.
54. Circuitry adapted to perform a one-pass method of video segmentation for differentiating between foreground and background portions of video, the method comprising the steps of:
obtaining a frame sequence from a video stream; and
for each frame in the frame sequence, performing the following steps:
aligning the frame with a scene model;
building a background statistical model, the background statistical model comprising values corresponding to regions of frames of the frame sequence and variances for the regions;
labeling the regions of the frame; and
performing spatial/temporal filtering.
55. A one-pass method of video segmentation, for differentiating between foreground and background portions of video, comprising the steps of:
obtaining a real-time video stream; and
for each frame in the real-time frame stream, performing the following steps:
labeling pixels in the frame;
performing spatial/temporal filtering;
updating a background statistical model, after the pixels are labeled; and
building and/or updating at least one foreground statistical model, after the pixels are labeled.

Dependent claims: 56, 57, 58, 59, 60, 61, 62, 63, 64, 65
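Claim 55 fixes an ordering: pixels are labeled first, and both the background model and at least one foreground statistical model are updated only afterwards. A per-frame sketch with per-pixel running-mean models; the tolerance and learning rate are assumptions, and the spatial/temporal filtering step is noted but not implemented:

```python
def process_frame(frame, bg_mean, fg_mean, tol=20.0, alpha=0.05):
    """Label-then-update ordering: label every pixel against the
    models as they stood before this frame, then update the
    background model (background pixels) and build/update a
    foreground model (foreground pixels). fg_mean entries start
    as None until the first foreground sample arrives."""
    h, w = len(frame), len(frame[0])
    # Step 1: label pixels using the pre-frame background model.
    mask = [[1 if abs(frame[y][x] - bg_mean[y][x]) > tol else 0
             for x in range(w)] for y in range(h)]
    # (Spatial/temporal filtering of `mask` would happen here.)
    # Step 2: only after labeling, update the statistical models.
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                if fg_mean[y][x] is None:
                    fg_mean[y][x] = float(frame[y][x])  # build fg model
                else:
                    fg_mean[y][x] += alpha * (frame[y][x] - fg_mean[y][x])
            else:
                bg_mean[y][x] += alpha * (frame[y][x] - bg_mean[y][x])
    return mask
```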
67. Circuitry adapted to perform a one-pass method of video segmentation for differentiating between foreground and background portions of video, the method comprising the steps of:
obtaining a real-time video stream; and
for each frame in the real-time frame stream, performing the following steps:
labeling pixels in the frame;
performing spatial/temporal filtering;
updating a background statistical model, after the pixels are labeled; and
building and/or updating at least one foreground statistical model, after the pixels are labeled.

Dependent claims: 68, 69, 70, 71, 72, 73, 74, 75, 76
Specification