Stereo camera intrusion detection system
0 Associated Cases - 0 Associated Defendants - 0 Accused Products - 4 Forward Citations - 0 Petitions - 3 Assignments
First Claim
1. An intrusion detection system comprising:
a first camera configured to generate first images of a monitored area;
a second camera configured to generate second images of the monitored area;
a detection device configured to compare the first images with a background image of the monitored area, the detection device marking differences between the first images and the background image as a potential intruder; and
a tracking device configured to evaluate each of the first images relative to each of the second images to determine three-dimensional characteristics associated with the potential intruder, the tracking device comprising a threshold comparator configured to allow a user of the intrusion detection system to designate at least one three-dimensional space within the monitored area, the at least one three-dimensional space being one of a threshold space and a null zone, the tracking device activating the indicator in response to one of motion into and motion toward the threshold space, and the tracking device not activating the indicator in response to differences between the first images and the background image in the null zone.

Abstract
A system and method is provided for an intrusion detection system. The intrusion detection system comprises a first camera configured to acquire first visual images of a monitored area and a second camera configured to acquire second visual images of the monitored area. The intrusion detection system also comprises a detection device configured to compare the first images with a background image of the monitored area. The detection device can mark differences between the first images and the background image as a potential intruder. The intrusion detection system further comprises a tracking device configured to evaluate each of the first images relative to each of the second images to determine three-dimensional characteristics associated with the potential intruder.
91 Citations
32 Claims
- 1. An intrusion detection system comprising:
a first camera configured to generate first images of a monitored area;
a second camera configured to generate second images of the monitored area;
a detection device configured to compare the first images with a background image of the monitored area, the detection device marking differences between the first images and the background image as a potential intruder; and
a tracking device configured to evaluate each of the first images relative to each of the second images to determine three-dimensional characteristics associated with the potential intruder, the tracking device comprising a threshold comparator configured to allow a user of the intrusion detection system to designate at least one three-dimensional space within the monitored area, the at least one three-dimensional space being one of a threshold space and a null zone, the tracking device activating the indicator in response to one of motion into and motion toward the threshold space, and the tracking device not activating the indicator in response to differences between the first images and the background image in the null zone.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 27, 28, 29)
- 11. A method for detecting intruders in a monitored area, the method comprising:
acquiring first images of the monitored area from a first camera and second images of the monitored area from a second camera;
generating a first background image of the monitored area associated with the first camera and a second background image of the monitored area associated with the second camera;
correlating first pixels associated with the first images with second pixels associated with the first background image of the monitored area, such that the first pixels are horizontally and vertically aligned with the second pixels;
correlating third pixels associated with the second images with fourth pixels associated with the second background image of the monitored area, such that the third pixels are horizontally and vertically aligned with the fourth pixels;
comparing the first images with the first background image of the monitored area and the second images with the second background image to determine the presence of a potential intruder;
determining three-dimensional characteristics of the potential intruder based on a relative comparison of the first images and the second images; and
activating an indicator upon the three-dimensional characteristics of the potential intruder exceeding at least one predetermined threshold.
- View Dependent Claims (12, 13, 14, 15, 16, 17, 18, 19, 20, 30, 31)
- 21. An intrusion detection system comprising:
means for simultaneously acquiring first images and second images of a monitored area;
means for continuously generating a background image of the monitored area;
means for generating a plurality of filtered first images corresponding to each of the first images and a plurality of filtered background images, each of the plurality of filtered first images having a different resolution;
means for detecting a potential intruder based on differences between each of the plurality of filtered first images and a corresponding one of the plurality of filtered background images having a same resolution;
means for determining three-dimensional characteristics of the potential intruder based on the first images and the second images; and
means for activating an indicator based on the three-dimensional characteristics of the potential intruder.
- View Dependent Claims (22, 23, 24, 25, 26, 32)
Specification
The present invention relates generally to intrusion detection systems, and more specifically to a stereo camera intrusion detection system.
In modern society and throughout recorded history, there has always been a demand for security measures. Such measures have been used to prevent theft and unauthorized access to sensitive materials and areas, and in a variety of other applications. One common security measure is the intrusion detection system. Typically, intrusion detection systems incorporate video surveillance, in which a human, such as security personnel or the police, monitors video feeds acquired by one or more video cameras situated around the perimeter of the facility to be protected. However, because potential security threats are isolated events amidst long, otherwise uneventful time spans, boredom can be a significant problem, resulting in lapses of security.
To mitigate the problem of boredom, some automated intrusion detection systems have been developed. Such automated systems can incorporate various computer vision algorithms to assist human monitoring. Typically, a change-detection algorithm identifies regions within the monitored area that may merit closer review by the monitoring human. However, such systems can be highly prone to registering false positives caused by environmental variation, for example distant background changes, wind-blown shrubbery, camera vibration, changing brightness from passing clouds, and moving light beams at night. The resulting high rate of false positives can fatigue even the most experienced human security monitors. To suppress false positives, some systems allow an operator to draw null zones, within which activity is prevented from tripping the alarm. Such a solution, however, creates an opportunity for false negatives, thus resulting in a lapse in security.
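As a concrete illustration of change detection with operator-drawn null zones, the following is a minimal background-subtraction sketch; the threshold value, array sizes, and function names are illustrative assumptions, not the implementation of any particular system:

```python
import numpy as np

def detect_changes(frame, background, threshold=25, null_mask=None):
    """Flag pixels that differ from the background by more than `threshold`.

    null_mask: boolean array, True where the operator has drawn a null zone,
    so that activity there can never trip the alarm.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    changed = diff > threshold
    if null_mask is not None:
        changed &= ~null_mask  # discard differences inside null zones
    return changed

# Toy 4x4 scene: one genuine change, one change inside a null zone.
bg = np.zeros((4, 4), dtype=np.uint8)
frame = bg.copy()
frame[0, 0] = 200                  # intruder-like change
frame[3, 3] = 200                  # wind-blown shrubbery in a null zone
null = np.zeros((4, 4), dtype=bool)
null[3, 3] = True
mask = detect_changes(frame, bg, null_mask=null)  # only (0, 0) remains flagged
```

Note the trade-off the paragraph points out: the masked corner can no longer produce a false positive, but a real intruder moving through that corner would also be ignored.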
In addition to the problems associated with boredom, typical automated intrusion detection systems suffer from a number of additional drawbacks. For example, camera-based intrusion detection systems typically include a camera mounted at a greatly elevated position looking down, so that the system can determine the location of an intruder based solely on the intruder's position on the ground within the field of view of the camera. Such an arrangement can be difficult to install and maintain, however, and can be expensive because it requires special mounting equipment and accessories. In addition, such systems may have a limited field of view, so an intruder may be able to see the system before being detected, giving the intruder an opportunity to take advantage of blind spots or to devise other counter-measures that defeat the automated intrusion detection system.
One embodiment of the present invention includes an intrusion detection system. The intrusion detection system comprises a first camera configured to acquire first images of a monitored area and a second camera configured to acquire second images of the monitored area. The intrusion detection system also comprises a detection device configured to compare the first images with a background image of the monitored area. The detection device can mark differences between the first images and the background image as a potential intruder. The intrusion detection system further comprises a tracking device configured to evaluate each of the first images relative to each of the second images to determine three-dimensional characteristics associated with the potential intruder.
Another embodiment of the present invention includes a method for detecting intruders in a monitored area. The method comprises acquiring first images of the monitored area from a first camera, acquiring second images of the monitored area from a second camera, and generating a background image of the monitored area. The method also comprises correlating first pixels associated with the first images with second pixels associated with the background image of the monitored area, such that the first pixels are horizontally and vertically aligned with the second pixels. The method also comprises comparing the first images and the background image of the monitored area to determine the presence of a potential intruder. The method further comprises determining three-dimensional characteristics of the potential intruder based on a relative comparison of the first images and the second images, and activating an indicator upon the three-dimensional characteristics of the potential intruder exceeding at least one predetermined threshold.
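The final step of the method above, activating an indicator only when the potential intruder's three-dimensional characteristics exceed a predetermined threshold, can be sketched as a simple containment-and-size test; the axis-aligned box representation and every name here are hypothetical:

```python
def should_alert(position, size, threshold_space, min_size):
    """Alert only if the tracked object lies inside the designated 3-D
    threshold space and its size is at least min_size (e.g. to ignore small
    animals). threshold_space is ((x0, x1), (y0, y1), (z0, z1)) in metres."""
    inside = all(lo <= p <= hi for p, (lo, hi) in zip(position, threshold_space))
    return inside and size >= min_size

# Hypothetical threshold space: a box 0-5 m in x and y, 0-2 m in z.
box = ((0.0, 5.0), (0.0, 5.0), (0.0, 2.0))
alert = should_alert((2.0, 3.0, 1.0), 0.6, box, min_size=0.5)  # inside, big enough
quiet = should_alert((9.0, 3.0, 1.0), 0.6, box, min_size=0.5)  # outside the box
```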
Another embodiment of the present invention includes an intrusion detection system. The intrusion detection system comprises means for simultaneously acquiring first images and second images of a monitored area. The intrusion detection system also comprises means for continuously generating a background image of the monitored area. The intrusion detection system also comprises means for detecting a potential intruder based on differences between the first images and the background image. The intrusion detection system further comprises means for determining three-dimensional characteristics of the potential intruder based on the first images and the second images, and means for activating an indicator based on the three-dimensional characteristics of the potential intruder.
The present invention relates generally to intrusion detection systems, and more specifically to a stereo camera intrusion detection system. A pair of stereo cameras each acquire concurrent images of the monitored area. The acquired images from one or both of the cameras can be compared with a background image. The background image can be generated from one of the cameras, or each camera's images can be compared with a separate background image. The background image can be continuously updated based on each of the acquired images to slowly account for subtle changes in the monitored area environment. In addition, the background image can be correlated with each of the acquired images, such that the pixels of each of the acquired images and the background image can be horizontally and vertically aligned.
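The continuous background update described above is commonly implemented as an exponential moving average; the learning rate below is an illustrative assumption, not a value from the specification:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Blend the newest frame into the running background so that gradual
    environment changes (e.g. shadows from the passing sun) are absorbed,
    while a fast-moving intruder still stands out against the background."""
    return (1.0 - alpha) * background + alpha * frame

bg = np.full((2, 2), 100.0)        # current background intensity
frame = np.full((2, 2), 120.0)     # scene has brightened slightly
bg = update_background(bg, frame)  # background drifts to 101.0
```

A small alpha makes the background adapt slowly, which is exactly the "slowly account for subtle changes" behavior described above.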
Upon detecting a difference in the acquired images and the background image, the pixels that are different can be outlined as a potential intruder. The acquired images from each of the cameras can be correlated to determine a parallax separation of the two-dimensional location of the potential intruder in the acquired image from one camera relative to the other. Upon determining the parallax separation, a three-dimensional location, size, and movement of the potential intruder can be determined. The location, size, and/or movement of the potential intruder can be compared with at least one predetermined threshold, and an indicator can be sounded upon the potential intruder exceeding the predetermined threshold.
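The conversion from parallax separation to a three-dimensional location follows standard stereo geometry (Z = f * B / d for a rectified camera pair); the specification does not give numbers, so the rig parameters below are purely illustrative:

```python
def depth_from_parallax(disparity_px, focal_length_px, baseline_m):
    """Distance to the object from the parallax (disparity) between the two
    cameras, via the pinhole stereo relation Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("a visible object must show positive parallax")
    return focal_length_px * baseline_m / disparity_px

# Illustrative rig: 700 px focal length, 10 cm camera baseline.
z = depth_from_parallax(20.0, 700.0, 0.10)  # 20 px of parallax -> 3.5 m away
```

Once depth is known for the same token in successive frames, its physical size and movement follow from the same geometry.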
The ability to detect the three-dimensional location of a potential intruder can provide for an optimal placement of the stereo image acquisition stage 12. As the stereo camera intrusion detection system 10 is able to detect the three-dimensional location of the potential intruder, it is not necessary to mount the stereo image acquisition stage 12 in an elevated location. Instead, the stereo image acquisition stage 12 could be mounted at approximately floor level and parallel to the floor. Such a mounting arrangement can be significantly less expensive than an elevated placement, and is far less conspicuous than an elevated placement. As such, potential intruders may not be able to detect the stereo image acquisition stage 12, and could thus be deprived of the opportunity to hide or perform defeating counter-measures.
The acquired images are output from the stereo image acquisition stage 12 to a token detector 18. The token detector 18 is configured to compare the acquired images from the first camera 14 and/or the second camera 16 with a background image. As will be described below in greater detail, the token detector 18 can determine the presence of a potential intruder based on the differences between the pixels associated with each of the acquired images from the first camera 14 and/or the second camera 16 and the pixels associated with the background image. In addition or alternatively, as will also be described below in greater detail, the token detector 18 can determine the presence of a potential intruder based on differences in texture between the acquired images from the first camera 14 and/or the second camera 16 and the background image.
The background image can be generated by a background image generator 20. The background image generator 20 is demonstrated in the example of
The background image generator 20 can be continuously generating the background image by periodically updating the background image with a plurality of pixels of the acquired images from the first camera 14 and/or the second camera 16. As such, gradual environment changes in the monitored area, such as shadows cast by the passing sun, can be incorporated into the background image. The stereo camera intrusion detection system 10 thus may not register a false positive based on the gradual environment changes. In addition, the background image generated by the background image generator 20 can be stabilized. The stabilized background image can be horizontally and vertically aligned with the acquired images from the first camera 14 and/or the second camera 16 to compensate for camera bounce, as will be described in greater detail in the example of
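The gradual, periodic update described above can be sketched as an exponential moving average over incoming frames. This is only an illustrative realization, not necessarily the patented implementation; the blending rate `alpha` and the list-of-rows image representation are assumptions for the sketch.

```python
def update_background(background, frame, alpha=0.02):
    """Blend a small fraction of each new frame into the background.

    `alpha` is the tunable update rate: too high and a loitering
    intruder is absorbed into the background (false negatives); too
    low and gradual changes such as moving shadows can produce
    false positives.
    """
    return [[(1.0 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

# A pixel that jumps from 100 to 160 drifts slowly toward 160 over frames.
bg = [[100.0] * 4 for _ in range(4)]
frame = [[160.0] * 4 for _ in range(4)]
for _ in range(10):
    bg = update_background(bg, frame)
```

After ten frames the background has moved only part of the way toward the new pixel value, which is the desired slow incorporation of environmental change.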
For each of the acquired images from each of the first camera 14 and the second camera 16, the token detector 18 can generate a difference image that demonstrates an absolute value of the pixel difference between the respective acquired image and the background image. The token detector 18 can then perform a pixel filling algorithm on the difference image, such that difference pixels that are close together on the difference image can be connected to demonstrate a candidate token. The candidate token could represent a potential intruder.
In an alternative embodiment, as demonstrated in the example of
The stereo camera intrusion detection system 10 also includes an image tracker 22. The image tracker 22 includes a location acquisition engine 24, a three-dimensional motion calculator 26, and a threshold comparator 28. The token detector 18 communicates the location of the candidate token to the location acquisition engine 24. In addition, the stereo image acquisition stage 12 can transmit the images obtained by one or both of the first camera 14 and the second camera 16 to the location acquisition engine 24. The location acquisition engine 24 is configured to determine a three-dimensional location and size associated with the potential intruder. For example, the location acquisition engine 24 can combine the images obtained from the first camera 14 with the images obtained from the second camera 16. The location acquisition engine 24 can then apply a correlation algorithm to the respective images obtained from the first and second cameras 14 and 16 to determine a relative two-dimensional location of the candidate token in the images obtained by the first camera 14 relative to the images obtained by the second camera 16. Thus, the location acquisition engine 24 can determine the three-dimensional location and size of the potential intruder based on a parallax separation of the potential intruder in the images obtained by the first camera 14 relative to the second camera 16.
In determining the parallax separation of the potential intruder, the location acquisition engine 24 can apply an image filtering algorithm to each of the images obtained by the first camera 14 and the second camera 16 to obtain first filtered images and second filtered images, respectively. The filtering algorithm could be, for example, a Sign of Laplacian of Gaussian (SLOG) filtering algorithm. In addition, the location acquisition engine 24 could apply multiple filtering algorithms to each image from the first camera 14 and the second camera 16, such that each filtered image could have a different resolution. The location acquisition engine 24 could then overlay the first filtered images onto the second filtered images and apply the correlation algorithm. The overlay could include overlaying the candidate token in the first filtered images over an approximate location of the potential intruder in the second filtered images, as communicated from the token detector 18. In the example of a stereo camera intrusion detection system 10 determining a candidate token on the images obtained by both the first camera 14 and the second camera 16, the location acquisition engine 24 may not apply the image filtering algorithm, but may simply overlay the difference image from the first camera 14 onto the difference image from the second camera 16 before applying the correlation algorithm.
As an example, the correlation algorithm could include an iterative pixel shift algorithm that shifts the first filtered image relative to the second filtered image by at least one pixel per shift and compares the first filtered images to the respective second filtered images at each respective shift. The comparison could include determining a correlation score for each shift. Upon determining a shift having a highest correlation score, the location acquisition engine 24 can determine the parallax separation of the potential intruder based on a number of pixels of offset between the first images and the second images. The location acquisition engine 24 could then convert the number of pixels of offset to a unit of measure in three-dimensional space to determine a three-dimensional location and size of the potential intruder.
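The iterative pixel shift and per-shift correlation scoring can be sketched as follows on small binary (filtered) images. The images, the score definition (fraction of overlapping pixels that agree), and the shift bounds are illustrative assumptions, not taken from the specification.

```python
def correlation_score(left, right, dx):
    """Fraction of overlapping pixels that agree when `right` is
    shifted `dx` pixels to the right relative to `left`."""
    w = len(left[0])
    agree = total = 0
    for lrow, rrow in zip(left, right):
        for x in range(max(0, dx), min(w, w + dx)):
            agree += lrow[x] == rrow[x - dx]
            total += 1
    return agree / total

def best_shift(left, right, max_shift=8):
    """Iterative pixel shift: try each horizontal offset and keep the
    one with the highest correlation score (the parallax in pixels)."""
    return max(range(-max_shift, max_shift + 1),
               key=lambda dx: correlation_score(left, right, dx))

# Toy binary texture rows; `right` is `left` offset by 3 pixels.
base = [0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0]
left = [base[:] for _ in range(4)]
right = [base[3:] + [0, 0, 0] for _ in range(4)]
disparity = best_shift(left, right)  # recovers the 3-pixel parallax
```

The returned pixel offset is what the location acquisition engine would then convert into a unit of measure in three-dimensional space.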
It is to be understood that, in determining a three-dimensional location and size of a potential intruder, the location acquisition engine 24 evaluates a given image from the first camera 14 relative to a respective image from the second camera 16 that is acquired at substantially the same time. As such, the location acquisition engine 24 outputs the three-dimensional location and size information associated with each frame of the acquired images from the first camera 14 and the second camera 16 to the three-dimensional motion calculator 26. The three-dimensional motion calculator 26 can track changes in the three-dimensional location and size of the potential intruder across multiple images, and thus multiple frames, of the first camera 14 and the second camera 16. The changes in the three-dimensional location and size of the potential intruder across the images of the first camera 14 and the second camera 16 can be determinative of three-dimensional motion associated with the potential intruder, such as direction and velocity of motion.
The location, size, and motion information associated with the potential intruder can be output to the threshold comparator 28, which is coupled to an indicator 30. The indicator 30 can, for example, be an audible alarm, a visual indicator, or any of a variety of other indication devices. In addition, the indicator 30 can be coupled to a network, such that the indicator 30 can be located at a facility that is remote from the monitored area, such as a police station. The threshold comparator 28 can be programmed with any of a variety of predetermined threshold conditions sufficient to signal the indicator 30 to an operator (e.g., security guard and/or police officer) of the stereo camera intrusion detection system 10. For example, upon a potential intruder being determined to be a size greater than, for example, the size of a small dog, the threshold comparator 28 could signal the indicator 30. The size threshold could be specific to height, and not just overall size. In such a way, the threshold comparator 28 can ensure that false positive conditions do not result from potential intruders that do not warrant attention by the given operators of the stereo camera intrusion detection system 10, such as birds or rabbits.
The threshold condition could also be indicative of a given velocity of the potential intruder, such that, for example, automobiles traveling at or above a certain speed can signal the indicator 30. In addition, an operator of the stereo camera intrusion detection system 10 can designate three-dimensional portions of the monitored area as threshold zones or null zones. Thus, the threshold comparator 28 can signal the indicator 30 upon the potential intruder moving into or toward a threshold zone. Likewise, the threshold comparator 28 could disable the indicator 30 upon detecting a potential intruder in a null zone, such that detection can be disabled in three-dimensional portions of the monitored area that are not of particular interest for security monitoring. As such, false positive conditions resulting from certain environmental changes, such as swaying branches, can be mitigated. It is to be understood that the threshold comparator 28 can also be programmed to apply any of a number of thresholds, as well as thresholds to be applied in combination. For example, the threshold comparator 28 can be programmed to signal the indicator 30 only when a potential intruder is both a certain predetermined size and is moving at a certain predetermined velocity.
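A combined threshold check of the kind described above can be sketched as follows. The token fields, zone representation (axis-aligned three-dimensional boxes), and the default size threshold are hypothetical choices for illustration only.

```python
def should_alarm(token, threshold_zones, null_zones,
                 min_size=0.5, min_speed=None):
    """Illustrative threshold comparator.

    token: dict with 3-D 'position' (x, y, z), 'size' in meters, and
    'velocity' in m/s. Zones are boxes ((xmin, ymin, zmin),
    (xmax, ymax, zmax)). min_size=0.5 stands in for "larger than a
    small dog"; these values are assumptions, not from the text.
    """
    def inside(point, box):
        lo, hi = box
        return all(l <= p <= h for p, l, h in zip(point, lo, hi))

    # Differences confined to a null zone never trigger the indicator.
    if any(inside(token['position'], z) for z in null_zones):
        return False
    if token['size'] < min_size:          # e.g. birds, rabbits
        return False
    if min_speed is not None and token['velocity'] < min_speed:
        return False
    # Alarm on presence inside (or motion into) a threshold zone.
    return any(inside(token['position'], z) for z in threshold_zones)

tz = [((0, 0, 0), (10, 10, 10))]      # protected region
nz = [((20, 0, 0), (30, 10, 10))]     # swaying branches, ignored
person = {'position': (5, 1, 2), 'size': 1.7, 'velocity': 0.5}
bird = {'position': (5, 1, 2), 'size': 0.2, 'velocity': 3.0}
branch = {'position': (25, 5, 5), 'size': 1.0, 'velocity': 0.5}
```

Here the person alarms, the bird is filtered by the size threshold, and movement inside the null zone is ignored, matching the false-positive mitigation described above.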
It is to be understood that the stereo camera intrusion detection system 10 is not intended to be limited to the example demonstrated in
The background image generator 50 also includes a background image updater 56. The background image updater 56 periodically receives inputs from the camera 52 to update the acquired background image 54 to account for gradual changes occurring within the monitored area. For example, the background image updater 56 can periodically add a plurality of pixels from the images acquired by the camera 52 to the acquired background image 54. It is to be understood that compromises can be made in determining the speed at which the acquired background image 54 is updated. For example, if the acquired background image 54 is updated too rapidly, then a potential intruder could become part of the acquired background image 54, thus resulting in a possible false negative result. Conversely, if the acquired background image 54 is updated too slowly, then the gradual changes in the environment of the monitored area could result in the generation of candidate tokens, thus resulting in a possible false positive result. As such, the background image updater 56 can be programmable as to the number of pixels added to the acquired background image 54, as well as how often the pixels are added to the acquired background image 54.
As described above regarding the stereo camera intrusion detection system 10 in the example of
To account for camera bounce, the background image generator 50 includes a background image stabilizer 58. The background image stabilizer 58 is configured to vertically and horizontally align the acquired images of the camera 52 relative to the acquired background image 54, such that the acquired background image 54 is stabilized. The background image stabilizer 58 receives each acquired image from the camera 52 and the acquired background image 54 as inputs. The acquired background image 54 is input to a background Sign of Laplacian of Gaussian (SLOG) filter 60 and the acquired images from the camera 52 are each input to an image SLOG filter 62. It is to be understood that the background SLOG filter 60 and the image SLOG filter 62 are not limited to SLOG filters, but could be any of a variety of bandpass image filters. The background SLOG filter 60 and the image SLOG filter 62 operate to convert the respective images into filtered images, such that the filtered images highlight texture contrasts. It is also to be understood that the acquired background image 54 may only be input to the background SLOG filter 60 upon every update by the background image updater 56.
The first filtered image 102, a second filtered image 104, and a third filtered image 106 each have varying degrees of resolution in demonstrating binary texture contrasts of the camera image 100. The first filtered image 102 is a low-resolution filtered representation of the camera image 100, the second filtered image 104 is a medium-resolution filtered representation of the camera image 100, and the third filtered image 106 is a high-resolution filtered representation of the camera image 100. A given SLOG filter, such as the background SLOG filter 60 and the image SLOG filter 62 in the example of
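A sign-of-Laplacian filter that binarizes texture contrast can be sketched as below. This is a deliberately simplified stand-in for a true SLOG filter: the Gaussian smoothing step is approximated by box-averaging over a `scale`-sized neighborhood, with a larger scale giving a lower-resolution filtered image, in the spirit of the high-, medium-, and low-resolution representations above. The kernel and scale handling are assumptions for illustration.

```python
def slog_filter(image, scale=1):
    """Sign of a simplified Laplacian of Gaussian: 1 where the local
    Laplacian of the smoothed image is positive, 0 otherwise."""
    h, w = len(image), len(image[0])

    def smooth(y, x):
        # Box average as a crude stand-in for Gaussian smoothing.
        ys = range(max(0, y - scale), min(h, y + scale + 1))
        xs = range(max(0, x - scale), min(w, x + scale + 1))
        vals = [image[j][i] for j in ys for i in xs]
        return sum(vals) / len(vals)

    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # 4-neighbor Laplacian of the smoothed image (edge-clamped).
            lap = (smooth(min(y + 1, h - 1), x) + smooth(max(y - 1, 0), x)
                   + smooth(y, min(x + 1, w - 1)) + smooth(y, max(x - 1, 0))
                   - 4 * smooth(y, x))
            row.append(1 if lap > 0 else 0)
        out.append(row)
    return out

# A bright spot produces a negative Laplacian at its peak and a
# positive ring beside it; the output is a binary texture pattern.
img = [[0] * 5 for _ in range(5)]
img[2][2] = 10
f_hi = slog_filter(img, scale=0)   # higher-resolution texture
f_lo = slog_filter(img, scale=2)   # lower-resolution texture
```

Because the output is binary, two filtered images can be compared pixel-for-pixel by simple agreement counts, which is what makes the correlation scoring cheap.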
Referring back to
At every pixel shift, the filtered image correlator 64 can determine a correlation score. The correlation score can be representative of how well the filtered acquired background image is aligned with the filtered acquired visual image based on how many of the binary texture pattern pixels agree. The shifting can be in both the vertical and horizontal directions, and shifting can occur across the entire image or across a portion of the image, such that positive and negative pixel shift bounds can be set in both the vertical and horizontal directions. The pixel shift resulting in the highest correlation score can be representative of the appropriate alignment between the filtered acquired background image and the filtered acquired visual image. It is to be understood that the filtered image correlator 64 can perform the correlation for each filtered acquired visual image associated with each frame of the camera 52, relative to the filtered acquired background image.
Upon determining the appropriate correlation between the filtered acquired background image and the filtered acquired visual image, the filtered image correlator 64 communicates the number of pixels shifted to achieve correlation to an image shifter 66. The image shifter 66 receives the acquired background image 54 and shifts the acquired background image 54 by the number of pixels communicated to it by the filtered image correlator 64. The shifted acquired background image is then output from the image shifter 66 to a token detector, such as the token detector 18 demonstrated in the example of
It is to be understood that the background image generator 50 is not limited to the example of
The stereo camera intrusion detection system 150 includes a token detector 158. The token detector 158 includes an image comparator 160, an image filler 162, and a token locater 164. The visual images acquired by the camera 154 are output to the image comparator 160. The image comparator 160 is configured to compare the acquired images from the first camera 154 with a background image generated from a background image generator 166. The background image generator 166 can be substantially similar to the background image generator 50 as described in the example of
The image comparator 160 applies an absolute value pixel difference algorithm to generate a difference image. The pixel differences can be based on texture, brightness, and color contrast. The difference image thus demonstrates substantially all the pixels that are different between each of the acquired images from the camera 154 and the background image. It is to be understood that the image comparator 160 thus generates a difference image for each of the images corresponding to each of the frames output from the camera 154.
The difference image alone, however, may not be able to accurately portray an absolute value pixel image of a potential intruder. For example, an intruder wearing black may sneak in front of a dark background surface in the monitored area. The image comparator 160 may be able to distinguish the intruder's hands, face, and shoes in applying the absolute value pixel difference algorithm, but there may be parts of the intruder's body in the acquired images that are indistinguishable by the image comparator 160 from the background image. As such, the image comparator 160 outputs the difference image to the image filler 162.
The image filler 162 applies a pixel filling algorithm to the difference image, such that it can be determined whether a candidate token exists. The pixel filling algorithm connects pixels that are close together in the difference image, such that connected pixels can take shape for determination of the presence of a candidate token. For example, the image filling algorithm could begin with a horizontal fill, such as left-right on the difference image, such that pixels on a horizontal line that are within a certain predefined pixel distance from each other can be connected first. The predefined pixel distance can be tuned in such a way as to prevent nuisance fills that could result in false positive results. The image filling algorithm could then apply a similar operation in the vertical direction on the difference image. As a result, closely-grouped disjointed pixels can be filled in to account for inaccuracies in detecting the absolute value pixel difference that can result. Thus, in the above example of the camouflaged intruder, the intruder can still be found, as the intruder's hands, face, and shoes can be filled in to form a two-dimensional pixelated “blob” on the difference image.
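The two-pass fill described above can be sketched as follows; the gap distance `max_gap` is the tunable predefined pixel distance, and the value chosen here is arbitrary.

```python
def fill_gaps(difference, max_gap=2):
    """Pixel filling: connect difference pixels that are close
    together, first along rows (horizontal pass), then along
    columns (vertical pass)."""
    def fill_line(line):
        line = line[:]
        on = [i for i, v in enumerate(line) if v]
        for a, b in zip(on, on[1:]):
            if b - a - 1 <= max_gap:       # gap small enough: connect
                for i in range(a + 1, b):
                    line[i] = 1
        return line

    rows = [fill_line(row) for row in difference]
    cols = [fill_line([rows[y][x] for y in range(len(rows))])
            for x in range(len(rows[0]))]
    # Transpose the column-filled result back to row order.
    return [[cols[x][y] for x in range(len(cols))]
            for y in range(len(rows))]

# Four disjoint pixels (e.g. hands and shoes) merge into one "blob".
diff = [[1, 0, 0, 1, 0],
        [0, 0, 0, 0, 0],
        [1, 0, 0, 1, 0]]
filled = fill_gaps(diff, max_gap=2)
```

The filled-in group can then be checked against shape thresholds to decide whether it constitutes a candidate token.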
The filled-in difference image is output from the image filler 162 to the token locater 164. The filled-in pixel group on the filled-in difference image can be examined by the token locater 164. If the filled-in pixel group, such as the two-dimensional pixelated “blob” of the intruder, exceeds predefined shape thresholds, the token locater 164 can mark the filled-in pixel group as a candidate token. The token locater 164 thus determines the pixel coordinates pertaining to the candidate token location on the filled-in difference image and communicates the two-dimensional pixel location information of the candidate token, as a signal TKN_LOC, to a location acquisition engine 168. The candidate token, as described above in the example of
The acquired images of each of the cameras 154 and 156 are also output to the location acquisition engine 168. The location acquisition engine 168, similar to that described above in the example of
The three overlaid filtered image pairs are output from the image overlay combiner 202 to an iterative pixel shifter 204. In the example of
For each pixel shift, a high-resolution correlation score calculator 206, a medium-resolution correlation score calculator 208, and a low-resolution correlation score calculator 210 calculate a correlation score for each of the respective filtered image pairs. Because each of the filtered image pairs has a different resolution relative to the others, the correlation scores for each of the respective filtered image pairs may be different, despite the pixel shifting of each of the filtered image pairs being the same. For example, concurrently shifting each of the filtered images R_HIGH, R_MED, and R_LOW relative to the respective filtered images L_HIGH, L_MED, and L_LOW by one pixel in the +X direction could yield a separate correlation score for each of the filtered image pairs. It is to be understood that, in the example of
To account for separate correlation scores, the high-resolution correlation score calculator 206, the medium-resolution correlation score calculator 208, and the low-resolution correlation score calculator 210 each output the respective correlation scores to an aggregate correlation calculator 212. The aggregate correlation calculator 212 can determine an aggregate correlation score based on the separate respective resolution correlation scores. The aggregate correlation calculator 212 can be programmed to determine the aggregate correlation score in any of a variety of ways. As an example, the aggregate correlation calculator 212 can add the correlation scores, can average the correlation scores, or can apply a weight factor to individual correlation scores before adding or averaging the correlation scores. As such, the aggregate correlation score can be determined for each pixel shift in any way suitable for the correlation determination.
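The aggregation step can be sketched as a weighted average of the per-resolution scores; the weights below are an illustrative choice (plain addition or averaging, as the text notes, are equally valid).

```python
def aggregate_score(scores, weights=None):
    """Combine per-resolution correlation scores into one aggregate
    score for a given pixel shift."""
    if weights is None:
        weights = [1.0] * len(scores)   # unweighted: simple average
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Simple average of high-/medium-/low-resolution scores.
avg = aggregate_score([0.9, 0.6, 0.3])
# Trust the high-resolution pair most (hypothetical weights).
weighted = aggregate_score([0.9, 0.6, 0.3], weights=[3.0, 2.0, 1.0])
```

The aggregate score is computed once per pixel shift, and the shift with the peak aggregate score is taken as the optimal correlation.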
The aggregate correlation score is output from the aggregate correlation calculator to a correlation score peak detector 214. The correlation score peak detector 214 compares the aggregate correlation scores for each shift of the filtered image pairs and determines which shift is the optimal shift for correlation. Upon determining the shift that corresponds to the best correlation for the filtered image pairs, the correlation score peak detector 214 outputs the number of pixels of offset of the filtered image pair for optimal correlation.
It is to be understood that the image correlator 200 is not limited to the example of
Referring back to
The determination of range of the potential intruder thus directly corresponds to a three-dimensional location of the potential intruder relative to the stereo acquisition stage 152. Upon determining the three-dimensional location of the potential intruder, the dimension conversion engine 176 can determine a size of the potential intruder. For example, upon determining the three-dimensional location of the potential intruder, a number of pixels of dimension of the candidate token in the vertical and horizontal directions can be converted to the unit of measure used in determining the range. As such, a candidate token that is only a couple of pixels wide and is determined to be two meters away from stereo image acquisition stage 152 could be a mouse. A candidate token that is the same number of pixels wide and is determined to be hundreds of meters away could be an automobile. Therefore, the three-dimensional location and size of the potential intruder is determined based on a parallax separation of the two-dimensional location of the potential intruder (i.e., the candidate token) in an image acquired by the first camera 154 relative to the two-dimensional location of the potential intruder in the image acquired by the second camera 156. The three-dimensional location and size data can be output from the dimension conversion engine 176 to a three-dimensional motion calculator and/or a threshold comparator, as described above regarding the example of
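The conversion from pixel offset to range, and from pixel extent to physical size, can be sketched with the standard pinhole stereo relations. The focal length and baseline below are assumed calibration values, not taken from the specification.

```python
def pixels_to_range(disparity_px, focal_px, baseline_m):
    """Standard stereo relation Z = f * B / d: convert the parallax
    separation (pixels of offset at optimal correlation) to range."""
    return focal_px * baseline_m / disparity_px

def token_size(extent_px, range_m, focal_px):
    """Pinhole model s = n * Z / f: convert a candidate token's pixel
    extent to a size in meters at the computed range."""
    return extent_px * range_m / focal_px

# Hypothetical calibration: 1000 px focal length, 0.3 m baseline.
rng = pixels_to_range(30, 1000.0, 0.3)     # 30 px disparity -> 10 m
size = token_size(180, rng, 1000.0)        # 180 px tall at 10 m -> 1.8 m
```

This is why the same pixel width can correspond to a mouse at two meters or an automobile at hundreds of meters: the range scales the physical size assigned to the token.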
The first image comparator 260 and the second image comparator 266 are each configured to compare the acquired images from the first camera 254 and the second camera 256, respectively, with a background image generated from a first background image generator 272 and a second background image generator 274. Each of the first background image generator 272 and the second background image generator 274 can be substantially similar to the background image generator 50 as described in the example of
Similar to the image comparator 160 in the example of
The location acquisition engine 276 includes an image correlator 278 and a dimension conversion engine 280. The image correlator 278 receives the filled-in difference images from the respective first image filler 262 and second image filler 268 as inputs, as well as the two-dimensional pixel location information of the respective candidate tokens from the respective token locaters 264 and 270. The image correlator 278 overlays the filled-in difference images at the pixel locations of the respective candidate tokens, as communicated by the respective token locaters 264 and 270. The image correlator 278 then applies a pixel shifting algorithm to determine the optimal correlation of the pair of filled-in difference images, similar to the image correlator 200 described in the above example of
The number of pixels of offset between the filled-in difference images for optimal correlation is output from the image correlator 278 to the dimension conversion engine 280. The dimension conversion engine 280 examines the pixel offset in the correlation of the filled-in difference image pair and converts the pixel offset into a unit of measure that corresponds to an amount of range that the potential intruder is away from the stereo image acquisition stage 252. The range can then be used to determine a three-dimensional location and size of the potential intruder, similar to as described above in the example of
The token detector 308 includes a first image SLOG filter 310, a first filtered image comparator 312, and a first token locater 314. The token detector 308 also includes a second image SLOG filter 316, a second filtered image comparator 318, and a second token locater 320. The visual images acquired by the first camera 304 are output to the first image SLOG filter 310, and the visual images acquired by the second camera 306 are output to the second image SLOG filter 316. The first image SLOG filter 310 and the second image SLOG filter 316 can each generate one or more filtered images of the respective acquired images from the first camera 304 and the second camera 306. For example, each of the first image SLOG filter 310 and the second image SLOG filter 316 can generate a high-resolution filtered image, a medium-resolution filtered image, and a low-resolution filtered image, such as described above in the example of
A first background image generator 322 generates a background image based on the first camera 304 and outputs the background image to a first background SLOG filter 324. The first background SLOG filter 324 can generate a number of filtered images of the background image equal to the number of filtered images generated by the first image SLOG filter 310. For example, the first background SLOG filter 324 can generate a high-resolution filtered background image, a medium-resolution filtered background image, and a low-resolution filtered background image, with each resolution corresponding to a resolution of the filtered images generated by the first image SLOG filter 310. In a likewise manner, a second background image generator 326 generates a background image based on the second camera 306 and outputs the background image to a second background SLOG filter 328. The second background SLOG filter 328 can generate a number of filtered images of the background image equal to the number of filtered images generated by the second image SLOG filter 316. It is to be understood that each of the first background image generator 322 and the second background image generator 326 can be substantially similar to the background image generator 50 as described in the example of
The first filtered image comparator 312 and the second filtered image comparator 318 are each configured to compare the filtered images generated by the first image SLOG filter 310 and the second image SLOG filter 316, respectively, with the filtered background images generated by the first background SLOG filter 324 and the second background SLOG filter 328, respectively. To obtain a more accurate comparison, it is to be understood that each of the filtered images of each respective resolution can be concurrently compared. Similar to the image comparator 160 in the example of
The difference images are output from the filtered image comparator 312 and the filtered image comparator 318 to the respective token locaters 314 and 320. In addition, the difference images are also output from the filtered image comparator 312 and the filtered image comparator 318 to a location acquisition engine 330. Similar to as described above in the example of
The location acquisition engine 330 includes an image correlator 332 and a dimension conversion engine 334. The image correlator 332 receives the difference images from the respective filtered image comparator 312 and the filtered image comparator 318 as inputs, as well as the two-dimensional pixel location information of the respective candidate tokens from the respective token locaters 314 and 320. The image correlator 332 overlays the difference images at the pixel locations of the respective candidate tokens, as communicated by the respective token locaters 314 and 320. The image correlator 332 then applies a pixel shifting algorithm to determine the optimal correlation of the pair of difference images, similar to the image correlator 200 described in the above example of
The number of pixels of offset between the difference images for optimal correlation is output from the image correlator 332 to the dimension conversion engine 334. The dimension conversion engine 334 examines the pixel offset in the correlation of the difference image pair and converts the pixel offset into a unit of measure that corresponds to an amount of range that the potential intruder is away from the stereo image acquisition stage 302. The range can then be used to determine a three-dimensional location and size of the potential intruder, similar to as described above in the example of
The visual images acquired by the first camera 354 are output to the first image SLOG filter 360, and the visual images acquired by the second camera 356 are output to the second image SLOG filter 366. The first image SLOG filter 360 and the second image SLOG filter 366 can each generate one or more filtered images of the respective acquired images from the first camera 354 and the second camera 356. For example, each of the first image SLOG filter 360 and the second image SLOG filter 366 can generate a high-resolution filtered image, a medium-resolution filtered image, and a low-resolution filtered image, such as described above in the example of
A first background image generator 372 generates a background image based on the first camera 354 and outputs the background image to a first background SLOG filter 374. In a likewise manner, a second background image generator 376 generates a background image based on the second camera 356 and outputs the background image to a second background SLOG filter 378. It is to be understood that each of the first background image generator 372 and the second background image generator 376 can be substantially similar to the background image generator 50 as described in the example of
The first filtered image comparator 362 and the second filtered image comparator 368 are each configured to compare the filtered images generated by the first image SLOG filter 360 and the second image SLOG filter 366, respectively, with the filtered background images generated by the first background SLOG filter 374 and the second background SLOG filter 378, respectively. Similar to the image comparator 160 in the example of
The difference images are output from the filtered image comparator 362 and the filtered image comparator 368 to the respective texture difference locaters 364 and 370. Similar to as described above in the example of
The first background image generator 372 can be further configured to receive the two-dimensional location information of the texture differences from the first texture difference locater 364 to rapidly update the first background image. Likewise, the second background image generator 376 can be further configured to receive the two-dimensional location information of the texture differences from the second texture difference locater 370 to rapidly update the second background image. For example, the first background image generator 372 and the second background image generator 376 can each receive the respective two-dimensional locations of the texture differences at a background image updater 56 in the example of
As described above, a single marked difference on a filtered difference image could correspond to several pixels of an image acquired by a respective one of the cameras 354 and 356. Therefore, it is to be understood that the respective one of the background image generators 372 and 376 can translate the low-resolution two-dimensional location information into an actual pixel location on an image acquired from the respective one of the cameras 354 and 356. As such, the first background image generator 372 can update all pixels of the background image with an image acquired by the first camera 354 except for the pixels corresponding to the two-dimensional location of the texture differences. Likewise, the second background image generator 376 can update all pixels of the background image with an image acquired by the second camera 356 except for the pixels corresponding to the two-dimensional location of the texture differences. It is to be understood that the rapid update of the background images based on the filtered difference images can work in addition to, or instead of, the gradual updates to the acquired background image 54 by the background image updater 56, as described in the example of
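The masked rapid update can be sketched as follows: every background pixel is replaced from the new frame except those flagged as texture differences. The set-of-coordinates mask representation is an assumption for the sketch.

```python
def rapid_update(background, frame, exclude):
    """Replace the background with the new frame everywhere except
    the pixels flagged as texture differences, so a potential
    intruder is not absorbed into the background.

    exclude: set of (y, x) pixel coordinates to leave untouched.
    """
    return [[background[y][x] if (y, x) in exclude else frame[y][x]
             for x in range(len(frame[0]))]
            for y in range(len(frame))]

# The flagged pixel keeps its old background value; all others update.
bg = [[0, 0], [0, 0]]
frame = [[5, 5], [5, 5]]
updated = rapid_update(bg, frame, exclude={(0, 1)})
```

As noted above, this rapid masked update can run alongside, or instead of, the gradual blending performed by the background image updater.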
In addition to the components described above, the token detector 358 also includes a first image comparator 380, a first image filler 382, and a first token locater 384. The token detector 358 further includes a second image comparator 386, a second image filler 388, and a second token locater 390. Upon rapidly updating the respective first and second background images, the acquired image from the first camera 354, as well as the first updated background image, is input to the first image comparator 380. Likewise, the acquired image from the second camera 356, as well as the second updated background image, is input to the second image comparator 386.
Similar to the image comparator 160 in the example of
The first image filler 382 and the second image filler 388 can each apply a pixel filling algorithm to the respective difference images, such that it can be determined whether a candidate token exists on each of the respective difference images. The filled-in difference images are output from the image filler 382 and the image filler 388 to the respective token locaters 384 and 390. The filled-in difference images can be examined by the respective token locaters 384 and 390 to determine the presence of a candidate token on each of the filled-in difference images. In addition, the filled-in difference images can be output from the image filler 382 and the image filler 388 to a location acquisition engine (not shown), similarly to what is described above regarding the example of
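As one possible realization only, a pixel filling algorithm of the kind described can group connected difference pixels into candidate tokens. The flood-fill approach, the minimum-size criterion, and the function name below are illustrative assumptions rather than the specification's own algorithm:

```python
from collections import deque  # queue for the flood fill


def fill_candidate_tokens(diff, min_pixels=4):
    """Flood-fill 4-connected difference pixels in a binary difference
    image; regions of at least `min_pixels` are kept as candidate tokens."""
    h, w = len(diff), len(diff[0])
    seen = [[False] * w for _ in range(h)]
    tokens = []
    for y in range(h):
        for x in range(w):
            if diff[y][x] and not seen[y][x]:
                region, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    region.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and diff[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(region) >= min_pixels:  # discard isolated noise pixels
                    tokens.append(region)
    return tokens
```

In this sketch, a token locater would then examine each returned region to decide whether it constitutes a candidate token.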
It is to be understood that the stereo camera intrusion detection system 350 is not limited to the example of
In view of the foregoing structural and functional features described above, a methodology in accordance with various aspects of the present invention will be better appreciated with reference to
At 408, pixels of the acquired images are compared with pixels of the background image to determine the presence of a potential intruder. The acquired images could be both the first and second acquired images, each compared with a respective background image. Alternatively, just the first acquired images can be compared with a single background image. In addition, the acquired images and the respective background images can be filtered, such that the filtered acquired images are compared with the respective filtered background images.
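A minimal sketch of the pixel comparison at 408, assuming grayscale intensity images and an arbitrary illustrative threshold value (both assumptions, not dictated by the method):

```python
def difference_image(frame, background, threshold=16):
    """Mark pixels whose absolute intensity difference from the background
    exceeds `threshold` as potential-intruder pixels (1), else 0."""
    return [
        [1 if abs(f - b) > threshold else 0
         for f, b in zip(frow, brow)]
        for frow, brow in zip(frame, background)
    ]
```

The same comparison could be applied to filtered versions of the acquired and background images, per the alternative described above.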
At 410, three-dimensional characteristics associated with the potential intruder are determined. The determination can be based on correlating filtered versions of the acquired images based on the comparison of the acquired images and the background image. The correlation can be based on generating correlation scores at each shift of a pixel shift algorithm. The correlation can also occur between two separate difference images. The amount of pixel offset between the first images and the second images can be translated to a three-dimensional location and size of the potential intruder.
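One conventional way to realize the correlation at 410 is a sum-of-absolute-differences score at each candidate pixel shift, with the winning offset (disparity) converted to range through the standard stereo relation. The function names, the one-dimensional search, and the focal-length/baseline parameters below are illustrative assumptions:

```python
def best_shift(patch, search_row, max_shift):
    """Score each candidate pixel shift of `patch` along `search_row` by
    the sum of absolute differences; the lowest score wins. The returned
    shift is the pixel offset (disparity) between the two images."""
    best, best_score = 0, float("inf")
    n = len(patch)
    for shift in range(max_shift + 1):
        segment = search_row[shift:shift + n]
        if len(segment) < n:
            break
        score = sum(abs(a - b) for a, b in zip(patch, segment))
        if score < best_score:
            best, best_score = shift, score
    return best


def range_from_disparity(disparity, focal_px, baseline_m):
    """Standard stereo relation: range = focal_length * baseline / disparity.
    Larger pixel offsets correspond to closer objects."""
    return focal_px * baseline_m / disparity if disparity else float("inf")
```

Under these assumptions, the same relation also scales the apparent pixel extent of the potential intruder into a physical size at the computed range.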
At 412, an indicator is activated upon the three-dimensional characteristics of the potential intruder exceeding at least one threshold. The threshold could correspond to the size and/or location of the potential intruder. In addition, the location of the potential intruder can be tracked across several of the first and second images of the first and second cameras, such that a three-dimensional direction of motion and a velocity can be determined. Thus, another threshold can be a velocity threshold, or motion toward a predefined three-dimensional space.
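The tracking and thresholding at 412 can be sketched as follows, where the frame rate, the particular thresholds, and the function names are all assumed for illustration:

```python
def velocity(track, fps):
    """Estimate 3-D velocity from the two most recent tracked positions,
    given the camera frame rate `fps`."""
    (x0, y0, z0), (x1, y1, z1) = track[-2], track[-1]
    dt = 1.0 / fps
    return ((x1 - x0) / dt, (y1 - y0) / dt, (z1 - z0) / dt)


def should_alarm(track, fps, size, min_size, speed_limit):
    """One possible threshold set: activate the indicator when the tracked
    object is at least `min_size` and moving faster than `speed_limit`."""
    if size < min_size:
        return False
    vx, vy, vz = velocity(track, fps)
    speed = (vx * vx + vy * vy + vz * vz) ** 0.5
    return speed > speed_limit
```

A further threshold of the kind described, such as motion toward a designated three-dimensional space, could be tested against the same velocity vector.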
What have been described above are examples of the present invention. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the present invention, but one of ordinary skill in the art will recognize that many further combinations and permutations of the present invention are possible. Accordingly, the present invention is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims.