Video safety curtain
Abstract
A three-dimensional (3-D) machine-vision safety solution comprising a method and apparatus for performing high-integrity, high-efficiency machine vision. The machine-vision safety solution converts two-dimensional video pixel data into 3-D point data that is used to characterize specific 3-D objects, their orientation, and other characteristics of any object, to provide a video safety "curtain." The apparatus includes an image acquisition device arranged to view a target scene stereoscopically and pass the resulting multiple video output signals to a computer for further processing. The multiple video output signals are connected to the input of a video processor adapted to accept the video signals. Video images from each camera are then synchronously sampled, captured, and stored in a memory associated with a general-purpose processor. The digitized image, in the form of pixel information, can then be stored, manipulated, and otherwise processed in accordance with the capabilities of the vision system. The machine vision safety solution method and apparatus involves two phases of operation: training and run-time.
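As a rough sketch of the training/run-time split described above — every function name, the fake stereo reconstruction, and the data shapes are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def stereo_to_points(left, right):
    """Stand-in for stereo reconstruction: converts a synchronized pair
    of 2-D pixel arrays into an (N, 3) array of 3-D points.  A real
    system would triangulate matched features between the two views;
    here we fabricate a trivial mapping purely so the pipeline runs."""
    ys, xs = np.nonzero(np.asarray(left))
    depth = np.asarray(right)[ys, xs].astype(float)
    return np.column_stack([xs.astype(float), ys.astype(float), depth])

def train(frame_pairs):
    """Training phase: build the 3-D reference model from stereo frames."""
    return np.vstack([stereo_to_points(l, r) for l, r in frame_pairs])

def run_time(frame_pairs, reference_model, compare):
    """Run-time phase: reconstruct each successive scene and hand both
    point sets to a comparison routine (e.g. the intruder test of the
    claims)."""
    for l, r in frame_pairs:
        yield compare(reference_model, stereo_to_points(l, r))
```

The `compare` callback is deliberately left open; the dependent claims below describe several candidates (subtraction, shortest-distance thresholding).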
17 Claims
1. A method of implementing a machine vision system to compare a model of a 3-D reference target in a viewed scene to a runtime scene, said method comprising:
storing information related to said model of said 3-D reference target, said model including a set of 3-D points related to said 3-D reference target;
acquiring information related to said runtime scene;
processing said information related to said runtime scene to form stereoscopic information including a set of 3-D points related to said runtime scene;
comparing said set of 3-D points related to said 3-D reference target with said set of 3-D points related to said runtime scene; and
defining any 3-D entity in said runtime scene other than said 3-D reference target as an intruder;
wherein the step of storing information related to said model of said 3-D reference target involves generating said set of 3-D points in the form of a first set of 3-D objects related to said 3-D reference target using a first clustering algorithm, and/or the step of acquiring information related to said runtime scene involves generating said set of 3-D points in the form of a second set of 3-D objects related to said runtime scene using a second clustering algorithm.
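The wherein clause above leaves the clustering algorithm unspecified. As one hedged illustration of grouping a 3-D point set into discrete "3-D objects", a naive connected-components clustering (the `radius` parameter and all names are assumptions, not the patent's method) could be:

```python
import numpy as np

def cluster_points(points, radius=0.1):
    """Group 3-D points into objects: two points belong to the same
    object when a chain of points spaced <= radius connects them."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    labels = -np.ones(n, dtype=int)   # -1 means "not yet assigned"
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        # flood-fill outward from the unlabeled seed point
        stack, labels[i] = [i], current
        while stack:
            j = stack.pop()
            near = np.linalg.norm(points - points[j], axis=1) <= radius
            for k in np.nonzero(near)[0]:
                if labels[k] == -1:
                    labels[k] = current
                    stack.append(k)
        current += 1
    # each "3-D object" is the subset of points sharing a label
    return [points[labels == c] for c in range(current)]
```

Any clustering method with this grouping behavior (e.g. density-based schemes) would satisfy the same role; the claim language covers the step, not a particular algorithm.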
2. The method of claim 1 in which said step of storing information related to said model of said 3-D reference target further comprises the steps of:
collecting a plurality of images of said 3-D reference target during a training phase; and
processing said plurality of images for stereoscopic information to develop said set of 3-D points corresponding to the 3-D reference target.
3. The method of claim 1 in which said step of acquiring information related to said runtime scene further comprises the step of:
collecting a plurality of successive images of said runtime scene in a runtime phase, where said runtime scene contains at least said 3-D reference target.
4. The method of claim 1 further comprising the step of:
subtracting the information related to said model of said 3-D reference target from the information related to said runtime scene to reduce information prior to said step of processing said information.
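One way to realize the subtraction step of claim 4 is a tolerance-based point-set difference; this is only a sketch, and NumPy plus the `tolerance` parameter are assumptions, not the patent's stated implementation:

```python
import numpy as np

def subtract_reference(runtime_points, reference_points, tolerance=0.05):
    """Sketch of claim 4: remove runtime 3-D points that lie within
    `tolerance` of some reference-model point, so downstream processing
    only sees what changed in the scene."""
    run = np.asarray(runtime_points, dtype=float)
    ref = np.asarray(reference_points, dtype=float)
    # pairwise distances: rows index runtime points, columns reference points
    dists = np.linalg.norm(run[:, None, :] - ref[None, :, :], axis=2)
    return run[dists.min(axis=1) > tolerance]
```

Pruning the point set this way before the comparison step reduces the data volume exactly as the claim describes.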
5. The method of claim 1 in which the step of comparing further comprises the step of:
calculating a 3-D distance from said 3-D reference target to each intruder.
6. The method of claim 1 further including the step of generating an output corresponding to a 3-D position of any said intruder relative to said 3-D reference target.
7. The method of claim 6 in which said step of generating said output corresponding to said 3-D position of any said intruder further comprises the steps of:
calculating a 3-D distance between each 3-D point of said 3-D reference target and each 3-D point of said intruder to create a set of distances including a shortest distance; and
determining whether said shortest distance is less than a predetermined threshold distance.
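The shortest-distance test of claims 6 and 7 reduces to a pairwise-distance computation followed by a threshold comparison; a minimal sketch (the function name, NumPy usage, and threshold semantics are assumptions) is:

```python
import numpy as np

def shortest_distance_alarm(reference_points, intruder_points, threshold):
    """Sketch of claims 6-7: compute the 3-D distance between every
    reference-target point and every intruder point, then test whether
    the shortest such distance is below the predetermined threshold."""
    ref = np.asarray(reference_points, dtype=float)
    intr = np.asarray(intruder_points, dtype=float)
    # full pairwise distance matrix, shape (len(ref), len(intr))
    dists = np.linalg.norm(ref[:, None, :] - intr[None, :, :], axis=2)
    shortest = dists.min()
    return shortest, bool(shortest < threshold)
```

A production system would likely use a spatial index rather than the full distance matrix, but the claimed computation is the same: a set of distances containing a shortest one, compared to a threshold.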
8. The method of claim 1 in which said step of storing information related to said model of said 3-D reference target further comprises the steps of:
focusing a stereoscopic camera on said viewed scene;
collecting a substantially synchronous plurality of frames of video of said viewed scene;
digitizing said plurality of frames to create a set of digitized frames forming said information related to said model.
9. The method of claim 1 in which said step of acquiring information related to said runtime scene further comprises the steps of:
focusing a stereoscopic camera on said runtime scene;
collecting a substantially synchronous plurality of frames of video of said runtime scene;
digitizing said plurality of frames to create a set of digitized frames forming said information related to said runtime scene.
10. The method of claim 9 further comprising the steps of:
storing said set of digitized frames in a memory; and
repeating said collecting, digitizing and storing steps for each of a plurality of runtime scenes.
11. A method of implementing a machine vision system to detect an intruder in a viewed scene, said method comprising the steps of:
developing a 3-D reference model of said viewed scene, said reference model including a set of 3-D reference points;
acquiring a runtime version of said viewed scene, said runtime version including a set of 3-D runtime points;
comparing said set of 3-D reference points to said set of 3-D runtime points to determine a difference between said set of 3-D reference points and said set of 3-D runtime points; and
obtaining a position of any said intruder in said viewed scene as a function of said difference between said set of 3-D reference points and said set of 3-D runtime points;
wherein the step of developing said 3-D reference model involves generating said set of 3-D reference points in the form of a first set of 3-D objects using a first clustering algorithm, and/or the step of acquiring a runtime version involves generating said set of 3-D runtime points in the form of a second set of 3-D objects using a second clustering algorithm.
12. The method of claim 11 in which said step of developing said 3-D reference model further comprises the steps of:
collecting a plurality of images of said viewed scene during a training phase;
processing said plurality of images for stereoscopic information about any entity within the viewed scene to develop said set of 3-D reference points.
13. The method of claim 11 in which said step of acquiring said runtime version of said viewed scene further comprises the steps of:
collecting a plurality of images of said viewed scene in a runtime phase;
processing said plurality of images for stereoscopic information about any entity within the viewed scene to determine said set of 3-D runtime points.
14. The method of claim 11 further comprising the step of:
subtracting said set of 3-D reference points from said set of 3-D runtime points to reduce information prior to said step of comparing.
15. The method of claim 11 further including the step of generating an output corresponding to a 3-D position of any said intruder relative to said 3-D reference model.
16. The method of claim 15 in which said step of generating said output corresponding to said 3-D position of any said intruder further comprises the steps of:
calculating a 3-D distance between each 3-D point of said 3-D reference model and each 3-D point of said intruder to create a set of distances including a shortest distance; and
determining whether said shortest distance is less than a predetermined threshold distance.
17. A machine vision apparatus to detect an intruder in a viewed scene, comprising:
an image acquisition device;
a processor including means for developing a 3-D reference model of said viewed scene including a set of 3-D reference points;
means for acquiring a runtime version of said viewed scene including a set of 3-D runtime points; and
means for comparing said set of 3-D reference points to said set of 3-D runtime points to determine a difference between said set of 3-D reference points and said set of 3-D runtime points;
said apparatus further comprising at least one of: means for generating said set of 3-D reference points in the form of a first set of 3-D objects using a first clustering algorithm, and means for generating said set of 3-D runtime points in the form of a second set of 3-D objects using a second clustering algorithm.
Specification