VIDEO DATA STORAGE, SEARCH, AND RETRIEVAL USING META-DATA AND ATTRIBUTE DATA IN A VIDEO SURVEILLANCE SYSTEM
4 Assignments
0 Petitions
Abstract
One embodiment is a method of storing video data from a video surveillance system having one or more cameras. Video data is captured from one or more surveillance cameras. Meta-data is automatically generated by performing video analysis on the captured video data from the surveillance cameras. A human operator may manually enter additional meta-data. Attribute data and associated weights, representing information about the relevance of the meta-data, are received. The video data is stored in a hierarchical video storage area; the meta-data, indexed by date and time stamp to the video data, is stored in a meta-data storage area; and the attribute data is stored in an attribute storage area. One or more alerts may be issued based on past and present meta-data. The video data is secured by encrypting it and storing it remotely, and audit trails record who viewed the video data and when.
215 Citations
44 Claims
1-22. (canceled)
23. A method of storing video data, associated meta-data, and associated attribute weights from a video surveillance system, the method comprising:

capturing video data from one or more surveillance cameras;
generating meta-data by performing video analysis on the video data from the surveillance cameras, the meta-data representing events detected in the video data;
determining attribute weights, representing information about the relevance of the meta-data;
generating intersections of two or more subsets of the meta-data to generate intersection meta-data;
determining attribute weights associated with the intersection meta-data by multiplying the attribute weights for each subset of meta-data;
generating unions of two or more subsets of the meta-data to generate union meta-data;
determining attribute weights associated with the union meta-data by adding the attribute weights for each subset of meta-data and subtracting a multiple of the attribute weights of each subset of meta-data;
changing the attribute weights based on external events by computing future attribute weights from past attribute weights by composing past attribute weights with external event weights;
storing the video data in a video storage area;
storing the meta-data, indexed by date and time stamp to the video data, in a meta-data storage area; and
storing the attribute weights in an attribute storage area,

wherein the attribute weight for the intersection meta-data is calculated using the equation

W(M1 ∩ M2) = W(M1) · W(M2),

wherein the attribute weight for the union meta-data is calculated using the equation

W(M1 ∪ M2) = W(M1) + W(M2) − W(M1) · W(M2), and

wherein M1 and M2 are two subsets of meta-data, W(M1) is the attribute weight associated with subset M1, W(M2) is the attribute weight associated with subset M2, W(M1 ∩ M2) is the calculated attribute weight associated with the intersection meta-data of subsets M1 and M2, and W(M1 ∪ M2) is the calculated attribute weight associated with the union meta-data of subsets M1 and M2.
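The two weight-combination rules recited in claim 23 are the product and inclusion-exclusion rules familiar from independent probabilities. A minimal sketch (function names are illustrative, not from the patent):

```python
def intersection_weight(w1: float, w2: float) -> float:
    """W(M1 ∩ M2) = W(M1) · W(M2): both subsets must be relevant at once."""
    return w1 * w2

def union_weight(w1: float, w2: float) -> float:
    """W(M1 ∪ M2) = W(M1) + W(M2) − W(M1) · W(M2): inclusion-exclusion,
    so the combined weight never exceeds 1 when the inputs lie in [0, 1]."""
    return w1 + w2 - w1 * w2
```

With weights in [0, 1] these behave like probabilities of independent events: intersection_weight(0.6, 0.5) gives about 0.3, and union_weight(0.6, 0.5) gives about 0.8.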
24. The method of claim 23, wherein the future attribute weights are computed according to an equation where wj are future attribute weights, wi are past attribute weights, and en are external event weights.
25. The method of claim 23, further comprising:
-
receiving video tips from one or more anonymous sources, the video tips being short video clips captured by citizens; generating tip meta-data based on the video tips, the tip meta-data representing events detected in the video tips; and determining tip attribute weights for the tip meta-data, representing information about the relevance of the tip meta-data.
26. The method of claim 23, further comprising:

providing additional meta-data generated by a human operator; and
storing the additional human-generated meta-data, indexed to the video data by date and time stamp, in the meta-data storage area.
27. The method of claim 23, further comprising:

retrieving historical meta-data from the meta-data storage area;
evaluating a set of rules based on the historical meta-data and the generated meta-data; and
performing one or more actions based on the evaluation of the set of rules.
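Claim 27's retrieve-evaluate-act loop can be sketched as a small rules engine over past and present meta-data. All names below are illustrative assumptions, not the patent's implementation:

```python
from dataclasses import dataclass

@dataclass
class MetaEvent:
    timestamp: float   # seconds since epoch; indexes back into the video store
    event_type: str    # e.g. "motion", "loitering"
    weight: float      # associated attribute weight

def evaluate_rules(historical, current, rules):
    """Run every (predicate, action) rule over the combined meta-data and
    return the actions whose predicate fires."""
    events = historical + current
    return [action for predicate, action in rules if predicate(events)]

# Illustrative rule: act when high-weight loitering events recur.
loiter_rule = (
    lambda evs: sum(1 for e in evs
                    if e.event_type == "loitering" and e.weight > 0.5) >= 2,
    "notify_operator",
)
```

Because the predicate sees historical and freshly generated meta-data together, a rule can fire on a pattern that spans past and present observations.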
28. The method of claim 23, wherein the video storage area is a hierarchical storage module that archives the video data based at least on the meta-data and attribute weights associated with the video data.
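One way claim 28's hierarchical archiving could work is to map each clip's attribute weight to a storage tier. The thresholds and tier names below are illustrative assumptions:

```python
def storage_tier(weight: float) -> str:
    """Map an attribute weight to a storage tier: high-relevance video stays
    on fast storage, low-relevance video migrates toward the archive."""
    if weight >= 0.8:
        return "online"    # immediately searchable, e.g. local disk array
    if weight >= 0.4:
        return "nearline"  # slower, cheaper storage
    return "archive"       # off-site or tape
```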
29. The method of claim 23, further comprising:
storing access privileges for the video data, the meta-data, and the attribute weights.
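The stored access privileges of claim 29 amount to an access-control list keyed by data class, which also supports the audit trails mentioned in the abstract. The roles and data classes here are hypothetical:

```python
import time

# Access-control list: data class -> roles permitted to view it (illustrative).
ACL = {
    "video": {"operator", "investigator"},
    "meta-data": {"operator", "investigator", "analyst"},
    "attribute-weights": {"analyst"},
}

AUDIT_LOG = []  # (timestamp, role, data class, granted?) tuples

def may_view(role: str, data_class: str) -> bool:
    """True if the role holds a stored privilege for that data class."""
    return role in ACL.get(data_class, set())

def view(role: str, data_class: str) -> bool:
    """Check the privilege and record who attempted access, and when."""
    allowed = may_view(role, data_class)
    AUDIT_LOG.append((time.time(), role, data_class, allowed))
    return allowed
```

Logging denied attempts as well as granted ones keeps the audit trail complete.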
30. The method of claim 23, further comprising:
encrypting the captured video data before storing the video data.
31. The method of claim 23, wherein the video data is stored off-site.
32. A video surveillance system, comprising:

one or more surveillance cameras for capturing video data;
one or more video storage areas for storing video data;
a meta-data storage area for storing meta-data;
an attribute storage area for storing attribute weights; and
a processor coupled to the video storage areas, the meta-data storage area, and the attribute storage area, the processor adapted to execute program code to:

capture video data from the one or more surveillance cameras;
generate meta-data by performing video analysis on the video data from the surveillance cameras, the meta-data representing events detected in the video data;
determine attribute weights, representing information about the relevance of the meta-data;
generate intersections of two or more subsets of the meta-data to generate intersection meta-data;
determine attribute weights associated with the intersection meta-data by multiplying the attribute weights for each subset of meta-data;
generate unions of two or more subsets of the meta-data to generate union meta-data;
determine attribute weights associated with the union meta-data by adding the attribute weights for each subset of meta-data and subtracting a multiple of the attribute weights of each subset of meta-data;
change the attribute weights based on external events by computing future attribute weights from past attribute weights by composing past attribute weights with external event weights;
store the video data in a video storage area;
store the meta-data, indexed by date and time stamp to the video data, in a meta-data storage area; and
store the attribute weights in an attribute storage area,

wherein the attribute weight for the intersection meta-data is calculated using the equation

W(M1 ∩ M2) = W(M1) · W(M2),

wherein the attribute weight for the union meta-data is calculated using the equation

W(M1 ∪ M2) = W(M1) + W(M2) − W(M1) · W(M2), and

wherein M1 and M2 are two subsets of meta-data, W(M1) is the attribute weight associated with subset M1, W(M2) is the attribute weight associated with subset M2, W(M1 ∩ M2) is the calculated attribute weight associated with the intersection meta-data of subsets M1 and M2, and W(M1 ∪ M2) is the calculated attribute weight associated with the union meta-data of subsets M1 and M2.
33. The apparatus of claim 32, wherein the future attribute weights are computed according to an equation where wj are future attribute weights, wi are past attribute weights, and en are external event weights.
34. The apparatus of claim 32, wherein the processor further comprises program code to:

receive video tips from one or more sources, the video tips being short video clips captured by citizens;
generate tip meta-data based on the video tips;
determine tip attribute weights for the tip meta-data; and
store the video tips in the video storage areas.
35. The apparatus of claim 32, wherein the processor further comprises program code to:

provide additional meta-data generated by a human operator; and
store the additional human-generated meta-data, indexed to the video data by date and time stamp, in the meta-data storage area.
36. The apparatus of claim 32, wherein the processor further comprises program code to:

retrieve historical meta-data from the meta-data storage area;
evaluate a set of rules based on the historical meta-data and the generated meta-data; and
perform one or more actions based on the evaluation of the set of rules.
37. The apparatus of claim 32, further comprising:
a hierarchical video storage module adapted to archive the video data based at least on meta-data and attribute weights associated with the video data.
38. The apparatus of claim 32, further comprising:
a fiber optic line to an off-site location for archiving the video data off-site.
39. A method of searching and retrieving video data from a video surveillance system, the method comprising:

entering search criteria;
searching meta-data associated with the video data, the meta-data generated by one or more video detection components and indexed to the video data;
retrieving meta-data matching the search criteria from a meta-data module;
retrieving video data indexed by the meta-data from a video storage module; and
retrieving attribute weights associated with the meta-data, the attribute weights representing reliability of the meta-data,

wherein the attribute weight for the intersection meta-data of two subsets of meta-data is calculated using the equation

W(M1 ∩ M2) = W(M1) · W(M2),

wherein the attribute weight for the union meta-data of two subsets of meta-data is calculated using the equation

W(M1 ∪ M2) = W(M1) + W(M2) − W(M1) · W(M2),

wherein M1 and M2 are two subsets of meta-data, W(M1) is the attribute weight associated with subset M1, W(M2) is the attribute weight associated with subset M2, W(M1 ∩ M2) is the calculated attribute weight associated with the intersection meta-data of subsets M1 and M2, and W(M1 ∪ M2) is the calculated attribute weight associated with the union meta-data of subsets M1 and M2.
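The search flow of claim 39 — match meta-data against the criteria, then follow its date/time index back to the video and its attribute weight — can be sketched as follows. The store layouts (list of dicts, timestamp-keyed video map) are assumptions for illustration:

```python
def search(criteria, meta_store, video_store, attr_store):
    """Return (meta-data record, video clip, attribute weight) triples for
    every meta-data record matching the search criteria."""
    results = []
    for record in meta_store:
        if criteria(record):
            clip = video_store.get(record["timestamp"])  # indexed by date/time stamp
            weight = attr_store.get(record["id"], 0.0)   # reliability of this record
            results.append((record, clip, weight))
    return results
```

Because the meta-data is searched instead of the raw video, retrieval cost scales with the number of detected events, not the hours of footage; the weight lets the caller rank or filter matches by reliability.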
42. An apparatus for storing video data, associated meta-data, and associated attribute weights from a video surveillance system, the apparatus comprising:

means for capturing video data from one or more surveillance cameras;
means for generating meta-data by performing video analysis on the video data from the surveillance cameras, the meta-data representing events detected in the video data;
means for determining attribute weights, representing information about the relevance of the meta-data;
means for generating intersections of two or more subsets of the meta-data to generate intersection meta-data;
means for determining attribute weights associated with the intersection meta-data by multiplying the attribute weights for each subset of meta-data;
means for generating unions of two or more subsets of the meta-data to generate union meta-data;
means for determining attribute weights associated with the union meta-data by adding the attribute weights for each subset of meta-data and subtracting a multiple of the attribute weights of each subset of meta-data;
means for changing the attribute weights based on external events by computing future attribute weights from past attribute weights by composing past attribute weights with external event weights;
means for storing the video data in a video storage area;
means for storing the meta-data, indexed by date and time stamp to the video data, in a meta-data storage area; and
means for storing the attribute weights in an attribute storage area,

wherein the attribute weight for the intersection meta-data is calculated using the equation

W(M1 ∩ M2) = W(M1) · W(M2),

wherein the attribute weight for the union meta-data is calculated using the equation

W(M1 ∪ M2) = W(M1) + W(M2) − W(M1) · W(M2), and

wherein M1 and M2 are two subsets of meta-data, W(M1) is the attribute weight associated with subset M1, W(M2) is the attribute weight associated with subset M2, W(M1 ∩ M2) is the calculated attribute weight associated with the intersection meta-data of subsets M1 and M2, and W(M1 ∪ M2) is the calculated attribute weight associated with the union meta-data of subsets M1 and M2.
43. The apparatus of claim 42, wherein the future attribute weights are computed according to an equation where wj are future attribute weights, wi are past attribute weights, and en are external event weights.
44. The apparatus of claim 42, further comprising:

means for receiving video tips from one or more anonymous sources, the video tips being short video clips captured by citizens;
means for generating tip meta-data based on the video tips, the tip meta-data representing events detected in the video tips; and
means for determining tip attribute weights for the tip meta-data, representing information about the relevance of the tip meta-data.
Specification