FACE LIVENESS DETECTION USING BACKGROUND/FOREGROUND MOTION ANALYSIS
Abstract
Face recognition systems are vulnerable to spoofed faces, which may be presented to a face recognition system, for example, by an unauthorized user seeking to gain access to a protected resource. A face liveness detection method that addresses this vulnerability uses motion analysis to compare the relative movement among three regions of interest in a facial image and, based on that comparison, makes a face-liveness determination.
20 Claims
1. A method of determining face liveness, comprising:
receiving, at a processor of a face recognition system and from an image capture device of the face recognition system, a time-stamped frame sequence;
identifying corresponding pixels for each pair of sequential frames in the time-stamped frame sequence;
segmenting one of each pair of sequential frames in the time-stamped frame sequence into regions of interest;
calculating a motion feature for each region of interest of each pair of sequential frames in the time-stamped frame sequence;
generating a preliminary face-liveness determination for each pair of sequential frames in the time-stamped frame sequence, based on a comparison of the calculated motion features for each region of interest of the pair of sequential frames; and
making a final face-liveness determination based on the generated preliminary face-liveness determinations.
(Dependent claims: 2, 3, 4, 5, 6, 7, 8)
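The claim leaves the particular motion feature, segmentation scheme, comparison, and final decision rule unspecified. The following is a minimal illustrative sketch of such a pipeline, assuming frame differencing over a fixed 2×2 region grid as the motion feature, a max/min activity ratio as the per-pair comparison, and a majority vote as the final determination; all function names, the grid size, and the thresholds are hypothetical, not taken from the patent.

```python
import numpy as np

def motion_features(prev, curr, grid=(2, 2)):
    """Mean absolute per-pixel change in each region of interest.

    The frames are assumed pre-aligned, so pixel (y, x) in `prev`
    corresponds to pixel (y, x) in `curr`; `curr` is segmented into
    a simple grid of regions of interest.
    """
    diff = np.abs(curr.astype(float) - prev.astype(float))
    h, w = diff.shape
    gy, gx = grid
    feats = []
    for i in range(gy):
        for j in range(gx):
            roi = diff[i * h // gy:(i + 1) * h // gy,
                       j * w // gx:(j + 1) * w // gx]
            feats.append(roi.mean())
    return np.array(feats)

def preliminary_liveness(feats, ratio=1.5):
    """A live face is assumed to move relative to its background:
    flag the frame pair as live when the most active region moves
    at least `ratio` times as much as the least active one."""
    lo, hi = feats.min(), feats.max()
    return hi > ratio * (lo + 1e-6)

def final_liveness(frames):
    """Majority vote over the per-pair preliminary determinations."""
    votes = [preliminary_liveness(motion_features(prev, curr))
             for prev, curr in zip(frames, frames[1:])]
    return sum(votes) > len(votes) / 2
```

Intuitively, a live face moves independently of its background, so one region of interest should be markedly more active than the others, whereas a waved photograph or replayed video tends to move as a whole, yielding near-uniform motion across regions.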
9. A method of training a classifier of a face recognition system, comprising:
receiving, at a processor of a face recognition system and from an image capture device associated with the face recognition system, a first plurality of time-stamped frame sequences having a live subject, and a second plurality of time-stamped frame sequences having a spoofed subject;
for each of the time-stamped frame sequences in the first and second pluralities of time-stamped frame sequences:
identifying corresponding pixels for each pair of sequential frames in the time-stamped frame sequence;
segmenting one of each pair of sequential frames in the time-stamped frame sequence into regions of interest; and
calculating a motion feature for each region of interest of each pair of sequential frames in the time-stamped frame sequence;
storing the calculated motion features for each time-stamped frame sequence from the first plurality of time-stamped frame sequences as positive data, and storing the calculated motion features for each time-stamped frame sequence from the second plurality of time-stamped frame sequences as negative data; and
generating training rules based on the positive and negative data.
(Dependent claims: 10, 11, 12, 13, 14, 15)
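The training claim likewise leaves the form of the classifier and its rules unspecified. A minimal sketch, assuming each stored sequence of per-pair region-of-interest motion features is reduced to a single activity-ratio statistic, and the generated "training rule" is a threshold placed between the positive (live) and negative (spoofed) data; the function names and the median-based rule are illustrative assumptions, not details from the patent.

```python
import numpy as np

def sequence_feature(roi_feats_per_pair):
    """Summarize one sequence: median, over frame pairs, of the ratio
    of the most- to least-active region of interest."""
    ratios = [f.max() / (f.min() + 1e-6) for f in roi_feats_per_pair]
    return float(np.median(ratios))

def train_rule(positive, negative):
    """Learn a threshold rule: place the decision boundary midway
    between the typical live (positive) and spoofed (negative)
    sequence statistics, and return it as a callable rule."""
    pos = np.median([sequence_feature(s) for s in positive])
    neg = np.median([sequence_feature(s) for s in negative])
    threshold = (pos + neg) / 2.0
    return lambda seq: sequence_feature(seq) > threshold
```

A sequence whose regions move at very different rates (live) scores above the learned threshold; one with near-uniform motion (spoofed) scores below it.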
16. A face recognition system comprising:
an image capture device;
a processor;
a face detection unit comprising a memory; and
a face matching unit;
wherein the memory stores instructions that, when executed by the processor, cause the processor to:
receive, from the image capture device, a time-stamped frame sequence;
identify corresponding pixels for each pair of sequential frames in the time-stamped frame sequence;
segment one of each pair of sequential frames in the time-stamped frame sequence into regions of interest;
calculate a motion feature for each region of interest of each pair of sequential frames in the time-stamped frame sequence;
generate a preliminary face-liveness determination for each pair of sequential frames in the time-stamped frame sequence, based on a comparison of the calculated motion features for the pair of sequential frames; and
make a final face-liveness determination based on the generated preliminary face-liveness determinations.
(Dependent claims: 17, 18, 19, 20)
Specification