Multi-camera vehicle identification system
First Claim
1. A method of identifying, in one or more images captured by a second camera, a target object captured by a first camera, the method implemented on a processor and comprising:
storing in a non-transitory computer readable memory operatively coupled to the processor a first set of trained classifiers and a second set of trained classifiers, wherein the first set specifies values corresponding to a first plurality of attributes usable for identifying objects captured by the first camera, and the second set differs from the first set and specifies values corresponding to a second plurality of attributes usable for identifying objects captured by the second camera, wherein the first set of trained classifiers and the second set of trained classifiers are trained independently, and wherein the first plurality of attributes and the second plurality of attributes have at least one attribute in common;
generating, by the processor, using one or more images captured by the first camera, a reference platoon of n objects, the reference platoon comprising the target object and (n−1) other objects;
generating, by the processor, a reference group by running the first set of trained classifiers over the reference platoon, the reference group being indicative of values of attributes specified by the first set of trained classifiers and characterizing the objects in the reference platoon;
generating, by the processor, using one or more images captured by the second camera, a plurality of candidate platoons, each candidate platoon comprising n objects, wherein the one or more images are captured by the second camera in a time window corresponding to the time of capturing by the first camera the one or more images used for generating the reference platoon;
generating, by the processor, a plurality of candidate groups, each candidate group obtained by running the second set of trained classifiers over a respective candidate platoon, each candidate group being indicative of values of attributes specified by the second set of trained classifiers and characterizing the objects in the corresponding candidate platoon;
selecting, by the processor, a candidate platoon corresponding to a candidate group best matching the reference group;
identifying, by the processor, the target object in the selected candidate platoon in accordance with a position of the target object in the reference platoon.
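The claimed steps can be sketched in code. This is only an illustrative reading of the claim, not the patent's implementation: the representation of a classifier set as a dict of attribute-name-to-callable, the mismatch-count matching metric, and all function names are assumptions.

```python
# Hypothetical sketch of the claimed method. Classifier sets are modeled
# as {attribute_name: callable}; the matching metric is an assumption.

def run_classifiers(classifier_set, platoon):
    """Produce a 'group': per-object attribute values for a platoon."""
    return [{attr: clf(obj) for attr, clf in classifier_set.items()}
            for obj in platoon]

def group_distance(reference_group, candidate_group, shared_attrs):
    """Count mismatches on the attributes the two classifier sets share
    (the claims require at least one attribute in common)."""
    return sum(ref[a] != cand[a]
               for ref, cand in zip(reference_group, candidate_group)
               for a in shared_attrs)

def identify_target(target_index, reference_platoon, candidate_platoons,
                    first_classifiers, second_classifiers):
    """Select the candidate platoon whose group best matches the
    reference group, then pick the object at the target's position."""
    shared = set(first_classifiers) & set(second_classifiers)
    reference_group = run_classifiers(first_classifiers, reference_platoon)
    best = min(candidate_platoons,
               key=lambda p: group_distance(
                   reference_group,
                   run_classifiers(second_classifiers, p),
                   shared))
    # Per the claim, the target is identified in accordance with its
    # position in the reference platoon.
    return best[target_index]
```

Note that the two classifier sets are applied independently, one per camera, and only the shared attributes are compared, consistent with the claim's requirement that the sets be trained independently yet overlap in at least one attribute.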
Abstract
A target object captured by a first camera is identified in images captured by a second camera. A reference platoon comprising the target object and other objects is generated using first camera images. A reference group characterizing the objects in the reference platoon is generated by running a first set of trained classifiers over the reference platoon, the first set of trained classifiers trained to characterize objects captured by the first camera. Candidate platoons are generated using second camera images. Candidate groups characterizing objects in the candidate platoons are obtained by running an independently trained second set of classifiers over the candidate platoons, the second set characterizing objects captured by the second camera. Candidate groups are compared to the reference group, and a best matching candidate platoon is selected. The target object is identified in the selected candidate platoon based on the object's position in the reference platoon.
21 Claims
1. A method of identifying, in one or more images captured by a second camera, a target object captured by a first camera, the method implemented on a processor and comprising:
storing in a non-transitory computer readable memory operatively coupled to the processor a first set of trained classifiers and a second set of trained classifiers, wherein the first set specifies values corresponding to a first plurality of attributes usable for identifying objects captured by the first camera, and the second set differs from the first set and specifies values corresponding to a second plurality of attributes usable for identifying objects captured by the second camera, wherein the first set of trained classifiers and the second set of trained classifiers are trained independently, and wherein the first plurality of attributes and the second plurality of attributes have at least one attribute in common;
generating, by the processor, using one or more images captured by the first camera, a reference platoon of n objects, the reference platoon comprising the target object and (n−1) other objects;
generating, by the processor, a reference group by running the first set of trained classifiers over the reference platoon, the reference group being indicative of values of attributes specified by the first set of trained classifiers and characterizing the objects in the reference platoon;
generating, by the processor, using one or more images captured by the second camera, a plurality of candidate platoons, each candidate platoon comprising n objects, wherein the one or more images are captured by the second camera in a time window corresponding to the time of capturing by the first camera the one or more images used for generating the reference platoon;
generating, by the processor, a plurality of candidate groups, each candidate group obtained by running the second set of trained classifiers over a respective candidate platoon, each candidate group being indicative of values of attributes specified by the second set of trained classifiers and characterizing the objects in the corresponding candidate platoon;
selecting, by the processor, a candidate platoon corresponding to a candidate group best matching the reference group;
identifying, by the processor, the target object in the selected candidate platoon in accordance with a position of the target object in the reference platoon.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11)
12. A method of identifying, in one or more images captured by a second camera, a target platoon of n objects corresponding to a reference platoon of n objects generated using images captured by a first camera, the method implemented by a processor and comprising:
storing in a non-transitory computer readable memory operatively coupled to the processor a first set of trained classifiers and a second set of trained classifiers, wherein the first set specifies values corresponding to a first plurality of attributes usable for identifying objects captured by the first camera, and the second set differs from the first set and specifies values corresponding to a second plurality of attributes usable for identifying objects captured by the second camera, wherein the first set of trained classifiers and the second set of trained classifiers are trained independently, and wherein the first plurality of attributes and the second plurality of attributes have at least one attribute in common;
generating, by the processor, a reference group by running the first set of trained classifiers over the reference platoon, the reference group being indicative of values of attributes specified by the first set of trained classifiers and characterizing the objects in the reference platoon;
generating, by the processor, using one or more images captured by the second camera, a plurality of candidate platoons, each candidate platoon comprising n objects, wherein the one or more images are captured by the second camera in a time window corresponding to the time of capturing by the first camera the one or more images used for generating the reference platoon;
generating, by the processor, a plurality of candidate groups, each candidate group obtained by running the second set of trained classifiers over a respective candidate platoon, each candidate group being indicative of values of attributes specified by the second set of trained classifiers and characterizing the objects in the corresponding candidate platoon;
selecting, by the processor, a candidate platoon corresponding to a candidate group best matching the reference group;
identifying, by the processor, the selected candidate platoon as the target platoon.
- View Dependent Claims (13, 14)
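The time-window constraint that recurs in these claims can be made concrete with a short sketch. The `Detection` type, the travel-time bounds, and the sliding-window grouping of n consecutive objects are assumptions for illustration, not details given in the claims.

```python
# Hypothetical sketch of candidate-platoon generation at the second
# camera. Travel-time bounds and the Detection type are assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    timestamp: float  # capture time, seconds
    features: dict    # attribute values observed for this object

def candidate_platoons(detections, reference_time, min_travel, max_travel, n):
    """Keep second-camera detections captured in the time window
    [reference_time + min_travel, reference_time + max_travel],
    then group them into overlapping platoons of n consecutive objects."""
    window = [d for d in sorted(detections, key=lambda d: d.timestamp)
              if reference_time + min_travel
              <= d.timestamp
              <= reference_time + max_travel]
    return [window[i:i + n] for i in range(len(window) - n + 1)]
```

Each resulting platoon preserves the detections' temporal order, so an object's position within a platoon is comparable to its position in the reference platoon, which is what the final identifying step relies on.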
15. A system for identifying an object in a group of objects appearing in a plurality of cameras, comprising:
a first camera; a second camera; a memory; and a processing unit communicatively coupled to the first camera, the second camera, and the memory, the processing unit comprising a processor configured to:
store in the memory a first set of trained classifiers and a second set of trained classifiers, wherein the first set specifies values corresponding to a first plurality of attributes usable for identifying objects captured by the first camera, and the second set differs from the first set and specifies values corresponding to a second plurality of attributes usable for identifying objects captured by the second camera, wherein the first set of trained classifiers and the second set of trained classifiers are trained independently, and wherein the first plurality of attributes and the second plurality of attributes have at least one attribute in common;
generate, using one or more images captured by the first camera, a reference platoon of n objects, the reference platoon comprising a target object and (n−1) other objects;
generate a reference group by running the first set of trained classifiers over the reference platoon, the reference group being indicative of values of attributes classified by the first set of trained classifiers and characterizing the objects in the reference platoon;
generate, using one or more images captured by the second camera, a plurality of candidate platoons, each candidate platoon comprising n objects, wherein the one or more images are captured by the second camera in a time window corresponding to the time of capturing by the first camera the one or more images used for generating the reference platoon;
generate a plurality of candidate groups, each candidate group obtained by running the second set of trained classifiers over a respective candidate platoon, each candidate group being indicative of values of attributes classified by the second set of trained classifiers and characterizing the objects in the corresponding candidate platoon;
select a candidate platoon corresponding to a candidate group best matching the reference group;
identify the target object in the selected candidate platoon in accordance with a position of the target object in the reference platoon.
- View Dependent Claims (16, 17, 18, 19)
20. A non-transitory storage medium comprising instructions that, when executed by a processor, cause the processor to:
store in a memory a first set of trained classifiers and a second set of trained classifiers, wherein the first set specifies values corresponding to a first plurality of attributes usable for identifying objects captured by a first camera, and the second set differs from the first set and specifies values corresponding to a second plurality of attributes usable for identifying objects captured by a second camera, wherein the first set of trained classifiers and the second set of trained classifiers are trained independently, and wherein the first plurality of attributes and the second plurality of attributes have at least one attribute in common;
generate, using one or more images captured by the first camera, a reference platoon of n objects, the reference platoon comprising a target object and (n−1) other objects;
generate a reference group by running the first set of trained classifiers over the reference platoon, the reference group being indicative of values of attributes classified by the first set of trained classifiers and characterizing the objects in the reference platoon;
generate, using one or more images captured by the second camera, a plurality of candidate platoons, each candidate platoon comprising n objects, wherein the one or more images are captured by the second camera in a time window corresponding to the time of capturing by the first camera the one or more images used for generating the reference platoon;
generate a plurality of candidate groups, each candidate group obtained by running the second set of trained classifiers over a respective candidate platoon, each candidate group being indicative of values of attributes classified by the second set of trained classifiers and characterizing the objects in the corresponding candidate platoon;
select a candidate platoon corresponding to a candidate group best matching the reference group;
identify the target object in the selected candidate platoon in accordance with a position of the target object in the reference platoon.
- View Dependent Claims (21)
Specification