Video tripwire
Abstract
A method for implementing a video tripwire includes steps of calibrating a sensing device to determine sensing device parameters for use by the system; initializing the system, including entering at least one virtual tripwire; obtaining data from the sensing device; analyzing the data obtained from the sensing device to determine if the at least one virtual tripwire has been crossed; and triggering a response to a virtual tripwire crossing.
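The claims recite no particular implementation of the crossing test. As a rough illustration only, the "has the virtual tripwire been crossed" step could be realized as a side-change test against the tripwire's supporting line; every function name and data shape below is hypothetical, not taken from the patent:

```python
def side(tripwire, point):
    """Sign of a point relative to the tripwire's supporting line (+1, 0, -1)."""
    (ax, ay), (bx, by) = tripwire
    px, py = point
    v = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
    return (v > 0) - (v < 0)

def analyze(tripwire, prev_point, point):
    """A crossing is flagged when a tracked point changes sides of the line."""
    return side(tripwire, prev_point) * side(tripwire, point) < 0

def video_tripwire(track, tripwire):
    """track: successive positions of one detected object (e.g. centroids
    produced by segmentation of the sensing device's video output).
    Yields the indices of frames at which the object crossed the tripwire."""
    for i in range(1, len(track)):
        if analyze(tripwire, track[i - 1], track[i]):
            yield i
```

This sketch reduces each detected object to a tracked point and ignores the tripwire's finite extent; a real system would test the crossing point against the segment itself and work on full segmented objects.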
159 Citations
37 Claims
1. A video tripwire system comprising:
a sensing device producing video output; and
a computer system, including a user interface, for performing calibration and for gathering and processing data based on video output received from the sensing device, the user interface comprising input means and output means, wherein the computer system displays processed data, and wherein the computer system includes software permitting a user to enter at least one virtual tripwire. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10)
2. The video tripwire system of claim 1, further comprising:
means for transmitting the video output from the sensing device;
a communication medium on which the video output is transmitted by the means for transmitting; and
means for receiving the video output from the communication medium.
3. The video tripwire system of claim 2, wherein the communication medium is a cable.
4. The video tripwire system of claim 2, wherein the communication medium includes a communication network.
5. The video tripwire system of claim 1, wherein said output means includes at least one of means for communicating a visible alarm and means for communicating an audible alarm.
6. The video tripwire system of claim 1, wherein said output means includes a visual display device.
7. The video tripwire system of claim 6, wherein said visual display device is capable of displaying at least one of video, one or more snapshots, and alphanumeric information.
8. The video tripwire system of claim 1, further comprising:
at least one memory device for storing at least one of video data and alphanumeric data.
9. The video tripwire system of claim 1, wherein said sensing device comprises at least one of a video camera, an infrared camera, a sonographic device, and a thermal imaging device.
10. The video tripwire system of claim 1, further comprising:
at least one additional sensing device producing video output, wherein said computer system further receives and processes the video output of the at least one additional sensing device.
11. A method of implementing a video tripwire system comprising the steps of:
calibrating a sensing device to determine sensing device parameters for use by the system;
initializing the system, including entering at least one virtual tripwire;
obtaining data from the sensing device;
analyzing the data obtained from the sensing device to determine if the at least one virtual tripwire has been crossed; and
triggering a response to a virtual tripwire crossing. - View Dependent Claims (12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37)
12. The method of claim 11, wherein said step of calibrating comprises the steps of:
entering parameters manually, by a user;
generating visual feedback to the user; and
permitting the user to re-enter the parameters if the appearance of the visual feedback is not acceptable to the user.
13. The method of claim 11, wherein said step of calibrating comprises the steps of:
having a person move through a field of view of the sensing device;
segmenting out the moving person;
using the size of the person in different regions of the field of view to determine parameters;
providing visual feedback to a user; and
allowing for adjustment of the parameters if the appearance of the visual feedback is not acceptable to the user.
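Claims 13 through 16 leave the calibration bookkeeping open. One hypothetical way to turn the walkthrough into per-region size parameters is to bucket the segmented person's bounding-box height by image region; the grid layout and all names below are assumptions for illustration, not part of the claims:

```python
from collections import defaultdict

def calibrate_by_walkthrough(detections, frame_h, frame_w, rows=3, cols=3):
    """detections: (cx, cy, box_height) samples of a person segmented while
    moving through the field of view. Returns the average person height per
    grid region of the image, a stand-in for the claimed 'parameters'."""
    sizes = defaultdict(list)
    for cx, cy, h in detections:
        region = (min(int(cy * rows / frame_h), rows - 1),
                  min(int(cx * cols / frame_w), cols - 1))
        sizes[region].append(h)
    return {r: sum(v) / len(v) for r, v in sizes.items()}
```

The returned per-region averages could then be rendered back to the user as the claimed visual feedback, with re-entry or re-calibration if the result looks wrong.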
14. The method of claim 13, wherein said step of allowing for adjustment comprises the step of permitting the user to adjust the parameters manually.
15. The method of claim 13, wherein said step of allowing for adjustment comprises the step of re-starting the step of calibrating.
16. The method of claim 13, wherein said step of allowing for adjustment comprises the step of permitting the user to choose between either adjusting the parameters manually or re-starting the step of calibrating.
17. The method of claim 13, wherein said step of segmenting comprises the steps of:
performing pixel-level background modeling;
performing foreground detection and tracking; and
analyzing foreground objects.
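Claim 17's first two steps (pixel-level background modeling, foreground detection) admit many implementations; a minimal toy sketch, assuming a per-pixel exponential running average as the background model (pure-Python lists, all names hypothetical):

```python
def update_background(background, frame, alpha=0.05):
    """Pixel-level background model: per-pixel exponential running average."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

def foreground_mask(background, frame, threshold=30):
    """Foreground detection: pixels differing from the model by more than a
    threshold are marked as foreground."""
    return [[abs(f - b) > threshold for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]
```

The remaining steps, tracking and analyzing foreground objects, would group the mask into connected components and associate them across frames; that machinery is omitted here.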
18. The method of claim 11, wherein said step of calibrating comprises the steps of:
gathering video information over a period of time using the sensing device;
segmenting out objects from the video information;
analyzing the segmented-out objects to determine the average size of a person in various regions of a video image corresponding to a field of view of the sensing device; and
using the average sizes of a person in the various regions to determine parameters.
19. The method of claim 18, wherein said step of segmenting comprises the steps of:
performing pixel-level background modeling;
performing foreground detection and tracking; and
analyzing foreground objects.
20. The method of claim 18, wherein said step of analyzing comprises the steps of:
determining insalient regions of the video image; and
forming histograms of foreground objects detected in the non-insalient regions of the video image.
21. The method of claim 20, wherein the determination of the average size of a person in a particular region of the video image is made only if a number of foreground objects detected in that region exceeds a predetermined number.
22. The method of claim 20, wherein a highest peak in a histogram is taken to correspond to a single person.
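Claims 20 through 22 describe forming size histograms per region, requiring a minimum number of detections (claim 21), and taking the highest peak as a single person (claim 22). A hypothetical sketch of that estimate, with assumed bin width and threshold values:

```python
from collections import Counter

def person_size_from_histogram(object_sizes, bin_width=10, min_samples=5):
    """Bin the sizes of foreground objects detected in one image region and
    take the highest histogram peak as the single-person size. Returns None
    when too few objects were observed in the region."""
    if len(object_sizes) < min_samples:
        return None
    bins = Counter(s // bin_width for s in object_sizes)
    peak_bin = max(bins, key=bins.get)
    return peak_bin * bin_width + bin_width / 2  # centre of the peak bin
```

The rationale behind taking the highest peak: single persons are the most common foreground object, so pairs and groups contribute smaller secondary peaks at larger sizes.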
23. The method of claim 18, further comprising the step of:
entering, by the user, one or more time windows to be used for calibrating.
24. The method of claim 11, wherein said step of initializing the system further includes the step of selecting at least one logging option.
25. The method of claim 11, wherein said step of analyzing comprises the step of determining if a detected object overlaps the at least one virtual tripwire.
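Claim 25's overlap test can be sketched, for illustration only, by sampling points along the tripwire segment and checking them against a detected object's bounding box; the sampling approach and all names here are assumptions, not the claimed method:

```python
def bbox_overlaps_tripwire(bbox, tripwire, steps=100):
    """Does a detected object's bounding box overlap the virtual tripwire?
    bbox is (xmin, ymin, xmax, ymax); tripwire is ((x1, y1), (x2, y2)).
    Samples points along the tripwire segment; exact segment-rectangle
    clipping (e.g. Liang-Barsky) would replace this in a real system."""
    (x1, y1), (x2, y2) = tripwire
    xmin, ymin, xmax, ymax = bbox
    for i in range(steps + 1):
        t = i / steps
        x, y = x1 + t * (x2 - x1), y1 + t * (y2 - y1)
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return True
    return False
```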
26. The method of claim 25, wherein said step of calibrating comprises a step of:
performing object segmentation; and
wherein said detected object is detected based on said step of performing object segmentation.
27. The method of claim 26, wherein said step of performing object segmentation comprises the steps of:
performing pixel-level background modeling;
performing foreground detection and tracking; and
analyzing foreground objects.
28. The method of claim 25, wherein said step of analyzing further comprises the step of:
performing object segmentation, wherein said detected object is detected based on said step of performing object segmentation.
29. The method of claim 28, wherein said step of performing object segmentation comprises the steps of:
performing pixel-level background modeling;
performing foreground detection and tracking; and
analyzing foreground objects.
30. The method of claim 25, wherein the step of analyzing further comprises the step of:
if said step of determining if a detected object overlaps the at least one virtual tripwire returns a positive result, determining if a direction of crossing matches a direction of crossing entered by a user.
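Claim 30's direction test is unspecified in the claim itself; one hypothetical realization compares the sign of the object's motion relative to the tripwire's direction vector (a 2-D cross product), so that the two possible crossing directions map to +1 and -1:

```python
def crossing_direction(tripwire, prev_point, point):
    """Sign of the object's motion relative to the tripwire: +1 for one
    direction across the wire, -1 for the reverse, 0 if parallel."""
    (ax, ay), (bx, by) = tripwire
    wx, wy = bx - ax, by - ay                                    # wire vector
    mx, my = point[0] - prev_point[0], point[1] - prev_point[1]  # motion vector
    v = wx * my - wy * mx
    return (v > 0) - (v < 0)

def direction_matches(tripwire, prev_point, point, required):
    """After a crossing is detected, compare the computed direction with the
    direction the user entered (+1 or -1)."""
    return crossing_direction(tripwire, prev_point, point) == required
```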
31. The method of claim 30, wherein the step of analyzing further comprises the step of:
if said step of determining if a direction of crossing matches a direction of crossing entered by a user returns a positive result, making at least one additional inquiry as to the nature of the detected object.
32. The method of claim 25, wherein the step of analyzing further comprises the step of:
if said step of determining if a detected object overlaps the at least one virtual tripwire returns a positive result, making at least one additional inquiry as to the nature of the detected object.
33. The method of claim 11, wherein said step of triggering a response comprises at least one of:
activating an audio alarm;
activating a visual alarm;
taking a snapshot; and
recording video.
34. A method of tailgating detection including the method of claim 11 and further comprising the steps of:
detecting that a person is entering an area of interest;
beginning surveillance of the area of interest in response to said detecting step;
determining if the number of people entering the area of interest is greater than a permissible number; and
if the determining step returns a positive result, triggering a response.
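Claim 34's counting logic, sketched hypothetically with timestamped entry events and an assumed surveillance window (the window length, event representation, and names are illustrative assumptions, not claimed features):

```python
def detect_tailgating(entry_events, permissible=1, window=5.0):
    """entry_events: timestamps at which a person is detected entering the
    area of interest. A detected entry begins surveillance; entries within
    `window` seconds are counted, and a response is triggered when the count
    exceeds the permissible number. Returns the triggering start times."""
    alarms = []
    events = sorted(entry_events)
    i = 0
    while i < len(events):
        start = events[i]
        group = [t for t in events if start <= t < start + window]
        if len(group) > permissible:
            alarms.append(start)
        i += len(group)
    return alarms
```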
35. A computer-readable medium containing software implementing the method of claim 11.
36. A computer system executing software implementing the method of claim 11.
37. A video tripwire system comprising:
a sensing device providing output data; and
a computer system receiving the output data and comprising:
a user interface;
at least one processor; and
a computer-readable medium containing software implementing the method of claim 11.