COGNITIVE MODEL FOR A MACHINE-LEARNING ENGINE IN A VIDEO ANALYSIS SYSTEM
First Claim
1. A method for processing data generated from a sequence of video frames, the method comprising:
- receiving, as a trajectory for a first object, a series of primitive events associated with a path of the first object depicted in the sequence of video frames as the first object moves through a scene, wherein each primitive event includes at least an object type and a set of one or more kinematic variables associated with the first object;
after receiving the trajectory for the first object, receiving a first vector representation generated for the first object, wherein the first vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and wherein the streams describe actions of at least the first object depicted in the sequence of video frames;
exciting one or more nodes of a perceptual associative memory using the trajectory and the first vector representation;
identifying, based on the one or more excited nodes, a percept;
copying the percept to a workspace;
in response to copying the percept to the workspace, selecting a codelet, wherein the codelet includes an executable sequence of instructions; and
invoking execution of the codelet.
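The claimed method reads as a pipeline: primitive events and a vector representation excite nodes of a perceptual associative memory, the excited nodes yield a percept, the percept is copied to a workspace, and a codelet is selected and invoked in response. The Python sketch below illustrates that flow; every class and function name, and the excitation and selection rules, are illustrative assumptions rather than anything specified in the patent.

```python
from dataclasses import dataclass

@dataclass
class PrimitiveEvent:
    """One primitive event on the object's path: an object type plus kinematic variables."""
    object_type: str
    kinematics: dict

@dataclass
class Node:
    label: str
    activation: float = 0.0

class PerceptualAssociativeMemory:
    """Toy associative memory: a node is excited when the trajectory mentions its label."""
    def __init__(self, nodes):
        self.nodes = nodes

    def excite(self, trajectory, vector):
        excited = []
        for node in self.nodes:
            if any(ev.object_type == node.label for ev in trajectory):
                node.activation += sum(vector)  # crude stand-in excitation rule
                excited.append(node)
        return excited

def process(trajectory, vector, memory, codelets, workspace):
    excited = memory.excite(trajectory, vector)
    # Identify the percept as the most strongly activated node.
    percept = max(excited, key=lambda n: n.activation).label
    workspace.append(percept)        # copy the percept to the workspace
    codelet = codelets[percept]      # in response, select a codelet
    return codelet(workspace)        # invoke its executable instructions

# Usage: a two-event "car" trajectory and a two-element vector representation.
trajectory = [PrimitiveEvent("car", {"velocity": 12.0}),
              PrimitiveEvent("car", {"velocity": 13.5})]
memory = PerceptualAssociativeMemory([Node("car"), Node("person")])
codelets = {"car": lambda ws: f"handled {ws[-1]}"}
workspace = []
result = process(trajectory, [0.2, 0.8], memory, codelets, workspace)  # -> "handled car"
```

Here the codelet is a plain callable keyed by percept label, which keeps the "executable sequence of instructions" element as simple as possible.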
6 Assignments
0 Petitions
Abstract
A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or the absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames, together with a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
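The abstract's vector representation is built from two symbol streams. One simple, hypothetical way to realize such an encoding is to concatenate normalized symbol histograms of the two streams; the function below is a sketch under that assumption, not the patent's actual construction, and the symbol alphabets are invented for illustration.

```python
from collections import Counter

def vector_representation(event_symbols, phase_symbols, event_alphabet, phase_alphabet):
    """Concatenate the normalized symbol histograms of the two streams into one vector."""
    def histogram(stream, alphabet):
        counts = Counter(stream)
        total = max(len(stream), 1)  # avoid division by zero on an empty stream
        return [counts[symbol] / total for symbol in alphabet]
    return histogram(event_symbols, event_alphabet) + histogram(phase_symbols, phase_alphabet)

# Usage: three primitive-event symbols and two phase-space symbols.
vec = vector_representation(
    ["appear", "move", "move"], ["slow", "fast"],
    ["appear", "move", "stop"], ["slow", "fast"],
)  # -> [1/3, 2/3, 0.0, 0.5, 0.5]
```

A fixed alphabet per stream keeps the vector length constant across objects, which is what downstream components like an associative memory would need.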
88 Citations
27 Claims
1. A method for processing data generated from a sequence of video frames, the method comprising:
- receiving, as a trajectory for a first object, a series of primitive events associated with a path of the first object depicted in the sequence of video frames as the first object moves through a scene, wherein each primitive event includes at least an object type and a set of one or more kinematic variables associated with the first object;
- after receiving the trajectory for the first object, receiving a first vector representation generated for the first object, wherein the first vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and wherein the streams describe actions of at least the first object depicted in the sequence of video frames;
- exciting one or more nodes of a perceptual associative memory using the trajectory and the first vector representation;
- identifying, based on the one or more excited nodes, a percept;
- copying the percept to a workspace;
- in response to copying the percept to the workspace, selecting a codelet, wherein the codelet includes an executable sequence of instructions; and
- invoking execution of the codelet.
(Dependent claims: 2-12)
13. A computer-readable storage medium containing a program which, when executed on a processor, performs an operation for processing data generated from a sequence of video frames, the operation comprising:
- receiving, as a trajectory for a first object, a series of primitive events associated with a path of the first object depicted in the sequence of video frames as the first object moves through a scene, wherein each primitive event includes at least an object type and a set of one or more kinematic variables associated with the first object;
- after receiving the trajectory for the first object, receiving a first vector representation generated for the first object, wherein the first vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and wherein the streams describe actions of at least the first object depicted in the sequence of video frames;
- exciting one or more nodes of a perceptual associative memory using the trajectory and the first vector representation;
- identifying, based on the one or more excited nodes, a percept;
- copying the percept to a workspace;
- in response to copying the percept to the workspace, selecting a codelet, wherein the codelet includes an executable sequence of instructions; and
- invoking execution of the codelet.
(Dependent claims: 14-22)
23. A system, comprising:
- a video input source;
- a processor; and
- a memory storing a machine learning engine, wherein the machine learning engine is configured to:
  - receive, as a trajectory for a first object, a series of primitive events associated with a path of the first object depicted in a sequence of video frames as the first object moves through a scene, wherein each primitive event includes at least an object type and a set of one or more kinematic variables associated with the first object;
  - after receiving the trajectory for the first object, receive a first vector representation generated for the first object, wherein the first vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and wherein the streams describe actions of at least the first object depicted in the sequence of video frames;
  - excite one or more nodes of a perceptual associative memory using the trajectory and the first vector representation;
  - identify, based on the one or more excited nodes, a percept;
  - copy the percept to a workspace;
  - in response to copying the percept to the workspace, select a codelet, wherein the codelet includes an executable sequence of instructions; and
  - invoke execution of the codelet.
(Dependent claims: 24-27)
Specification