Adaptive artificial vision method and system
Abstract
The adaptive artificial vision method comprises the following steps: (a) defining successive couples of timesteps (t−1, t; t, t+1; …) synchronized by a clock (101), (b) comparing two successive images (It−1, It; It, It+1; …) from an input device (102, 103) at each couple of synchronized timesteps (t−1, t; t, t+1; …) spaced by a predetermined time delay τ0 for obtaining a delta image Δt which is the result of the computation of the distance between each pixel of the two successive images (It−1, It; It, It+1; …) in view of characterizing movements of objects, (c) extracting features from the delta image Δt for obtaining a potential dynamic patch Pt which is compared with dynamic patches previously recorded in a repertory which is progressively constructed in real time from an initially void repertory, (d) selecting the closest dynamic patch Di in the repertory or, if no sufficiently close dynamic patch exists, adding the potential dynamic patch Pt to the repertory, thereby obtaining and storing a dynamic patch Di from the comparison of two successive images (It−1, It; It, It+1; …) at each couple of synchronized timesteps (t−1, t; t, t+1; …), and (e) temporally integrating stored dynamic patches Di of the repertory in order to detect and store stable sets of active dynamic patches representing a characterization of a reoccurring movement or event which is observed. A process of static pattern recognition may then be efficiently used.
16 Claims
1. An adaptive artificial vision method comprising the following steps:
- (a) defining successive couples of synchronized timesteps (t−1, t; t, t+1; …) such that the time difference τ between two synchronized timesteps (t−1, t; t, t+1; …) of a couple of synchronized timesteps is equal to a predetermined time delay τ0,
- (b) comparing two successive images (It−1, It; It, It+1; …) at each couple of synchronized timesteps (t−1, t; t, t+1; …) spaced by said predetermined time delay τ0 for obtaining a delta image Δt which is the result of the computation of the distance between each pixel of said two successive images, in view of characterizing movements of objects between said two successive images,
- (c) extracting features from said delta image Δt for obtaining a potential dynamic patch Pt which is compared with dynamic patches previously recorded in a first repertory Rd which is progressively constructed in real time from an initially void repertory,
- (d) selecting the closest dynamic patch Di in this first repertory Rd or, if no sufficiently close dynamic patch exists, adding the potential dynamic patch Pt to the first repertory Rd, thereby obtaining and storing a dynamic patch Di from the comparison of two successive images (It−1, It; It, It+1; …) at each couple of synchronized timesteps (t−1, t; t, t+1; …), and
- (e) temporally integrating stored dynamic patches Di of the first repertory Rd in order to detect and store stable sets of active dynamic patches representing a characterization of a reoccurring movement or event which is observed.

Dependent claims: 2, 3, 4, 6, 7, 8, 9, 10, 11, 12, 13, 14
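Steps (b)-(d) of claim 1 can be illustrated with a minimal sketch. The claim does not fix the pixel-distance metric, the patch representation, or the "sufficiently close" criterion; the sketch below assumes grayscale images as NumPy arrays, a per-pixel absolute difference for the delta image, Euclidean distance between flattened patches, and a hypothetical closeness threshold.

```python
import numpy as np

def delta_image(img_prev, img_next):
    # Step (b): per-pixel distance between two successive images
    # taken one predetermined time delay tau_0 apart (assumed metric:
    # absolute difference of intensities).
    return np.abs(img_next.astype(int) - img_prev.astype(int))

def match_or_add(patch, repertory, threshold=10.0):
    # Steps (c)-(d): select the closest dynamic patch Di in the
    # repertory Rd, or add the potential patch Pt when nothing is
    # sufficiently close.  The threshold is a hypothetical parameter.
    # Returns the index of the selected (or newly stored) patch.
    best_i, best_d = None, None
    for i, known in enumerate(repertory):
        d = np.linalg.norm(patch - known)
        if best_d is None or d < best_d:
            best_i, best_d = i, d
    if best_d is not None and best_d <= threshold:
        return best_i
    repertory.append(patch)  # Rd grows from an initially void repertory
    return len(repertory) - 1
```

A repertory starts empty; the first potential patch is always stored, and later patches either reuse an existing entry or extend the repertory.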
5. An adaptive artificial vision method comprising the following steps:
- (a) defining successive couples of synchronized timesteps (t−1, t; t, t+1; …) such that the time difference τ between two synchronized timesteps (t−1, t; t, t+1; …) of a couple of synchronized timesteps is equal to a predetermined time delay τ0,
- (b) comparing two successive images (It−1, It; It, It+1; …) at each couple of synchronized timesteps (t−1, t; t, t+1; …) spaced by said predetermined time delay τ0 for obtaining a delta image Δt which is the result of the computation of the distance between each pixel of said two successive images, in view of characterizing movements of objects between said two successive images,
- (c) extracting features from said delta image Δt for obtaining a potential dynamic patch Pt which is compared with dynamic patches previously recorded in a first repertory Rd which is progressively constructed in real time from an initially void repertory,
- (d) selecting the closest dynamic patch Di in this first repertory Rd or, if no sufficiently close dynamic patch exists, adding the potential dynamic patch Pt to the first repertory Rd, thereby obtaining and storing a dynamic patch Di from the comparison of two successive images (It−1, It; It, It+1; …) at each couple of synchronized timesteps (t−1, t; t, t+1; …), and
- (e) temporally integrating stored dynamic patches Di of the first repertory Rd in order to detect and store stable sets of active dynamic patches representing a characterization of a reoccurring movement or event which is observed,
wherein, when stable sets of active dynamic patches representing a characterization of a reoccurring movement have been detected, the center of the movement is identified and static patches which are at a predetermined distance d from the movement center and are obtained by a process of static pattern recognition are analyzed to constitute, at a given timestep, a set of active static patches Si which are stored in a second repertory Rs, and
wherein the process of static pattern recognition and production of static patches is initiated at the same time as the process of dynamic movement recognition and production of dynamic patches and, when stable sets of active dynamic patches representing a characterization of a reoccurring movement have been detected, the process of static pattern recognition is continued exclusively with static patches which are located in a restricted area of the image which is centered on said identified movement center.
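Claim 5 adds a movement-center step: once a reoccurring movement is detected, static pattern recognition continues only in a restricted area centered on the movement. The claim does not say how the center is computed; the sketch below assumes a simple centroid of the moving pixel coordinates and Euclidean distance for the predetermined distance d, both hypothetical choices.

```python
import math

def movement_center(active_pixels):
    # Centroid of the pixels participating in a detected reoccurring
    # movement (an assumed definition of the movement "center").
    xs = [p[0] for p in active_pixels]
    ys = [p[1] for p in active_pixels]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def restrict_static_patches(patches, center, d):
    # Keep only static patches located within the predetermined
    # distance d of the movement center, i.e. inside the restricted
    # area of the image.  Each patch is (x, y, descriptor).
    return [p for p in patches
            if math.hypot(p[0] - center[0], p[1] - center[1]) <= d]
```

The surviving patches would then be stored, per the claim, as a set of active static patches Si in the second repertory Rs.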
15. An adaptive artificial vision system comprising:
- a clock for defining successive couples of synchronized timesteps (t−1, t; t, t+1; …) such that the time difference τ between two synchronized timesteps (t−1, t; t, t+1; …) of a couple of synchronized timesteps is equal to a predetermined time delay τ0,
- inputting means for inputting images (It−1, It; It, It+1; …) provided by a camera at said synchronized timesteps (t−1, t; t, t+1; …),
- first comparator means for comparing two successive images (It−1, It; It, It+1; …) inputted at each couple of synchronized timesteps (t−1, t; t, t+1; …) spaced by said predetermined time delay τ0 for obtaining a delta image Δt which is the result of the computation of the distance between each pixel of said two successive images,
- first memory means (Md) for storing dynamic patches Di representing elementary visual parts for describing characterized movements of objects,
- feature extraction means for extracting features from said delta image Δt and producing a potential dynamic patch Pt,
- second comparator means for comparing a potential dynamic patch Pt with dynamic patches previously recorded in said first memory means (Md),
- selection means for selecting the closest dynamic patch Di in the first memory means (Md) or, if no sufficiently close dynamic patch exists, for recording the potential dynamic patch Pt into the first memory means, so that a dynamic patch Di is stored in the first memory means for each comparison of two successive images (It−1, It; It, It+1; …) at each couple of synchronized timesteps (t−1, t; t, t+1; …),
- first temporal integration means comprising computing means for computing, during a time TF1 corresponding to a predetermined number N1 of couples of synchronized timesteps, the frequency of each dynamic patch Di stored in the first memory means, and threshold means for defining a set of active dynamic patches comprising dynamic patches Di whose frequency is higher than a predetermined threshold, and
- second temporal integration means comprising computing means for computing, during a time TF2 corresponding to a predetermined number N2 of couples of synchronized timesteps, the frequency of each set of defined active dynamic patches, and threshold means for defining a stable set of dynamic patches corresponding to a reoccurring movement for each set of active dynamic patches whose frequency is higher than a predetermined threshold.

Dependent claims: 16
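The two temporal integration means of claim 15 can be sketched as frequency counters over windows of N1 and N2 couples of timesteps. The window lengths, the frequency thresholds, and the representation of a set of active dynamic patches as a `frozenset` of patch indices are illustrative assumptions, not specified by the claim.

```python
from collections import Counter

def active_set(patch_ids, n1, freq_threshold):
    # First temporal integration: over the last N1 couples of
    # timesteps, a dynamic patch Di is "active" when its frequency
    # of occurrence exceeds the (hypothetical) threshold.
    counts = Counter(patch_ids[-n1:])
    return frozenset(i for i, c in counts.items() if c / n1 > freq_threshold)

def stable_sets(set_history, n2, freq_threshold):
    # Second temporal integration: over the last N2 windows, a set of
    # active dynamic patches is "stable" (a reoccurring movement)
    # when the same set reappears more often than the threshold.
    counts = Counter(set_history[-n2:])
    return [s for s, c in counts.items() if c / n2 > freq_threshold]
```

Feeding each window's `active_set` result into `stable_sets` yields the stable sets whose reappearance characterizes a reoccurring movement.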
Specification