Method and system for counting people using depth sensor
Abstract
A sensor system according to an embodiment of the invention may process depth data and visible light data for more accurate detection. Depth data assists where visible light images are susceptible to false positives. Visible light images (or video) may similarly enhance conclusions drawn from depth data alone. Detections may be object-based or defined within the context of a target object. Depending on the target object, the types of detections may vary to include motion and behavior. Applications of the described sensor system include motion-guided interfaces where users may interact with one or more systems through gestures. The sensor system described may also be applied to counting systems, surveillance systems, polling systems, retail store analytics, or the like.
19 Claims
1. A method, comprising:

obtaining a frame of depth data from a depth sensor, the depth sensor mounted to provide a top view of a scene;

discerning foreground objects from background objects from within the frame of depth data;

for a given foreground object, calculating expected relative head size, in pixels of the frame of depth data, of the given foreground object at a depth of the given foreground object, the expected relative head size calculated using a pre-determined head size, a width in pixel units of the frame of depth data, an angle the depth sensor covers, and a depth of the given foreground object determined using the frame of depth data obtained; and

determining if the given foreground object matches a single-scale reference model of a target object, the single-scale reference model of the target object determined based upon the expected relative head size at the depth of the foreground object, wherein the expected relative head size is calculated according to the following equation:

- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9)
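The claim recites computing an expected relative head size from a pre-determined physical head size, the frame width in pixels, the sensor's field-of-view angle, and the object's depth, but the equation itself is omitted from this listing. A minimal sketch under a standard pinhole-camera assumption (the function and parameter names are illustrative, not taken from the patent):

```python
import math

def expected_head_size_px(head_size_m: float,
                          frame_width_px: int,
                          fov_rad: float,
                          depth_m: float) -> float:
    """Estimate how many pixels wide a head of physical width head_size_m
    appears when imaged from depth_m by a sensor with horizontal
    field of view fov_rad and a frame frame_width_px pixels wide.

    Assumption: the scene width visible at a given depth follows the
    pinhole model, 2 * depth * tan(fov / 2)."""
    # Physical width of the scene visible at the object's depth.
    scene_width_m = 2.0 * depth_m * math.tan(fov_rad / 2.0)
    # Fraction of the scene the head occupies, scaled to pixels.
    return head_size_m * frame_width_px / scene_width_m

# Example: a 0.2 m head viewed from 2 m by a 60-degree, 640-px-wide sensor
# spans roughly 55 pixels; doubling the depth halves the expected size.
size_px = expected_head_size_px(0.2, 640, math.radians(60), 2.0)
```

This scaling is what makes a single-scale reference model workable: rather than matching head templates at many scales, the template is compared at the one size predicted for the object's measured depth.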
10. A non-transitory computer readable medium having program instructions stored thereon, the program instructions being executable by a processor and, when loaded and executed by the processor, causing the processor to:

obtain a frame of depth data from a depth sensor, the depth sensor mounted to provide a top view of a scene;

identify a given foreground object, from among multiple foreground objects and background objects, from within the frame of depth data;

for a given foreground object, calculate expected relative head size, in pixels of the frame of depth data, of the given foreground object at a depth of the given foreground object, the expected relative head size calculated using a pre-determined head size, a width in pixel units of the frame of depth data, an angle the depth sensor covers, and a depth of the given foreground object determined using the frame of depth data obtained;

determine if the given foreground object matches a single-scale reference model of a target object, the single-scale reference model of the target object determined based upon the expected relative head size at the depth of the foreground object;

apply a machine learning application to generate a classification determination of the foreground object; and

maintain a classification determination count, wherein the expected relative head size is calculated according to the following equation:
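Claim 10 extends the method with a classification step and a running count, which is what turns per-frame detections into a people-counting application. A minimal sketch of maintaining a classification determination count as a per-class tally (the patent does not specify the machine-learning model, so the class labels and helper name below are placeholders):

```python
from collections import Counter

def update_counts(counts: Counter, classifications: list) -> Counter:
    """Accumulate classification determinations into a per-class count.

    Each entry in `classifications` is the label a classifier assigned
    to one detected foreground object in the current frame."""
    counts.update(classifications)
    return counts

# Example: two objects classified as people and one as something else.
counts = Counter()
update_counts(counts, ["person", "person", "other"])
```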
11. A system, comprising:

a depth sensor configured to image depth of objects to acquire depth data;

a memory, in communication with the depth sensor, configured to store the depth data; and

a processor, in communication with the memory, configured to execute program instructions that cause the processor to:

obtain a frame of depth data from the depth sensor, the depth sensor mounted to provide a top view of a scene;

discern foreground objects from background objects from within the frame of depth data;

for a given foreground object, calculate expected relative head size, in pixels of the frame of depth data, of the given foreground object at a depth of the given foreground object, the expected relative head size calculated using a predetermined head size, a width in pixel units of the frame of depth data, an angle the depth sensor covers, and a depth of the given foreground object determined using the frame of depth data obtained; and

determine if the given foreground object matches a single-scale reference model of a target object, the single-scale reference model of the target object determined based upon the expected relative head size at the depth of the foreground object, wherein the expected relative head size is calculated according to the following equation:

- View Dependent Claims (12, 13, 14, 15, 16, 17, 18, 19)
Specification