Low threshold face recognition
First Claim
1. A method performed by an image processor, the method comprising:
processing a captured image of a face of a user seeking to access a resource by conforming a subset of the captured face image to a reference model, the reference model corresponding to a high information portion of human faces, and the high information portion including eyes and a mouth of a face depicted in a reference image, where the processing of the captured image comprises:
detecting a face within the captured image by identifying the eyes in an upper one third of the captured image and the mouth in a lower third of the captured image, and
matching the eyes of the detected face with the eyes of the face depicted in the reference image to obtain a normalized image of the detected face;
comparing the processed image to at least one target profile corresponding to a user associated with the resource, wherein the comparing of the processed image comprises:
obtaining a difference image of the detected face by subtracting the normalized image of the detected face from a normalized image of a target face associated with a target profile, and
calculating scores of respective pixels of the difference image based on a weight defined according to proximity of the respective pixels to high information portions of the human faces, wherein the weight decreases from a maximum weight value at a mouth-level to a minimum weight value at an eyes-line; and
selectively recognizing the user seeking access to the resource based on a result of the comparing.
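As a rough illustration of the weighted comparison recited above, the following pure-Python sketch scores a difference image with a weight that decreases linearly from a maximum at the mouth-level row to a minimum at the eyes-line row. The linear ramp shape, the `w_min`/`w_max` endpoints, and the absolute-difference scoring are illustrative assumptions, not values taken from the claims:

```python
def pixel_weight(row, eyes_row, mouth_row, w_min=0.2, w_max=1.0):
    """Linear weight ramp: minimum at the eyes-line, maximum at mouth-level.
    Rows outside the eyes-to-mouth band are clamped to the nearest endpoint."""
    if row <= eyes_row:
        return w_min
    if row >= mouth_row:
        return w_max
    t = (row - eyes_row) / (mouth_row - eyes_row)
    return w_min + t * (w_max - w_min)

def weighted_score(normalized_face, normalized_target, eyes_row, mouth_row):
    """Score the difference image (target minus detected face), weighting each
    pixel by its row's proximity to the mouth-level high-information region.
    Images are lists of rows of grayscale pixel values."""
    score = 0.0
    for r, (row_face, row_target) in enumerate(zip(normalized_face,
                                                   normalized_target)):
        w = pixel_weight(r, eyes_row, mouth_row)
        for a, b in zip(row_face, row_target):
            score += w * abs(b - a)
    return score
```

A lower score indicates a closer match; thresholding this score is one plausible way to realize the "selectively recognizing" step.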
1 Assignment
0 Petitions
Abstract
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, are disclosed for reducing the impact of lighting conditions and biometric distortions, while providing a low-computation solution for reasonably effective (low threshold) face recognition. In one aspect, the methods include processing a captured image of a face of a user seeking to access a resource by conforming a subset of the captured face image to a reference model. The reference model corresponds to a high information portion of human faces. The methods further include comparing the processed captured image to at least one target profile corresponding to a user associated with the resource, and selectively recognizing the user seeking access to the resource based on a result of said comparing.
26 Claims
1. A method performed by an image processor, the method comprising:
processing a captured image of a face of a user seeking to access a resource by conforming a subset of the captured face image to a reference model, the reference model corresponding to a high information portion of human faces, and the high information portion including eyes and a mouth of a face depicted in a reference image, where the processing of the captured image comprises:
detecting a face within the captured image by identifying the eyes in an upper one third of the captured image and the mouth in a lower third of the captured image, and
matching the eyes of the detected face with the eyes of the face depicted in the reference image to obtain a normalized image of the detected face;
comparing the processed image to at least one target profile corresponding to a user associated with the resource, wherein the comparing of the processed image comprises:
obtaining a difference image of the detected face by subtracting the normalized image of the detected face from a normalized image of a target face associated with a target profile, and
calculating scores of respective pixels of the difference image based on a weight defined according to proximity of the respective pixels to high information portions of the human faces, wherein the weight decreases from a maximum weight value at a mouth-level to a minimum weight value at an eyes-line; and
selectively recognizing the user seeking access to the resource based on a result of the comparing.
(Dependent claims: 2-12)
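The step of matching the eyes of the detected face to the eyes of the reference face amounts to a two-point similarity transform (scale, rotation, and translation determined by the two eye centers). A minimal sketch, assuming eye centers are given as (x, y) pixel coordinates; this particular parameterization is an illustrative assumption, not one prescribed by the claims:

```python
import math

def eye_alignment_transform(det_left, det_right, ref_left, ref_right):
    """Return a function mapping points of the detected face onto the
    reference frame, so the detected eye centers land on the reference
    eye centers (a two-point similarity transform)."""
    dx_d, dy_d = det_right[0] - det_left[0], det_right[1] - det_left[1]
    dx_r, dy_r = ref_right[0] - ref_left[0], ref_right[1] - ref_left[1]
    scale = math.hypot(dx_r, dy_r) / math.hypot(dx_d, dy_d)
    angle = math.atan2(dy_r, dx_r) - math.atan2(dy_d, dx_d)
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    # translation chosen so the detected left eye maps onto the reference left eye
    tx = ref_left[0] - scale * (cos_a * det_left[0] - sin_a * det_left[1])
    ty = ref_left[1] - scale * (sin_a * det_left[0] + cos_a * det_left[1])

    def apply(pt):
        x, y = pt
        return (scale * (cos_a * x - sin_a * y) + tx,
                scale * (sin_a * x + cos_a * y) + ty)
    return apply
```

Resampling every pixel of the detected face through this transform yields a normalized image in which eye positions coincide with those of the reference model.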
13. An appliance comprising:
a data storage device configured to store profiles of users associated with the appliance;
an image capture device configured to acquire color frames; and
one or more data processors configured to perform operations including:
applying an orange-distance filter to a frame acquired by the image capture device;
determining respective changes in area and location of a skin-tone orange portion of the acquired frame relative to a previously acquired frame;
inferring, based on the determined changes, a presence of a face substantially at rest when the frame was acquired;
detecting a face corresponding to the skin-tone orange portion of the acquired frame in response to the inference, the detecting including finding eyes and a mouth within the skin-tone orange portion;
normalizing the detected face based on locations of eyes and a mouth of a face in a reference image;
analyzing weighted differences between normalized target faces and the normalized detected face, such that weight values assigned to differences between portions of the normalized target faces and corresponding portions of the normalized detected face decrease from a maximum weight value at a mouth-level to a minimum weight value at an eyes-line, wherein the target faces are associated with respective users of the appliance;
matching the face detected in the acquired frame with one of the target faces based on a result of the analyzing; and
acknowledging the match of the detected face in accordance with a profile stored on the data storage device and associated with a user of the appliance having the matched face.
(Dependent claim: 14)
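One plausible reading of the orange-distance filter is a per-pixel test of how far each pixel's hue lies from a skin-tone orange reference. The sketch below marks pixels near orange in HSV space; the `target_hue` and `hue_tolerance` values and the saturation/value floor are illustrative assumptions, not parameters from the claims:

```python
import colorsys

def orange_distance_filter(frame, target_hue=30 / 360.0, hue_tolerance=20 / 360.0):
    """Return a boolean mask marking pixels whose hue lies near skin-tone
    orange. `frame` is a list of rows of (r, g, b) tuples with channels
    in 0..255; the hue reference and tolerance are hypothetical values."""
    mask = []
    for row in frame:
        out = []
        for r, g, b in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            # circular distance on the hue wheel to the orange reference
            d = min(abs(h - target_hue), 1.0 - abs(h - target_hue))
            # require some saturation and brightness to reject gray/dark pixels
            out.append(d <= hue_tolerance and s > 0.2 and v > 0.2)
        mask.append(out)
    return mask
```

Tracking the area and centroid of this mask across consecutive frames is one way to realize the "changes in area and location" and at-rest inference steps.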
15. A system comprising:
one or more processors;
a storage system storing a reference model corresponding to a high information portion of human faces, where the high information portion includes eyes and a mouth of a face depicted in a reference image, and at least one target profile corresponding to a user associated with the system; and
a non-transitory computer readable medium encoding instructions that when executed by the one or more processors cause the system to execute operations comprising:
processing a captured image of a face of a user seeking to access the system by conforming a subset of the captured face image to the reference model, where the processing of the captured image comprises:
detecting a face within the captured image by identifying the eyes in an upper one third of the captured image and the mouth in a lower third of the captured image, and
matching the eyes of the detected face with the eyes of the face depicted in the reference image to obtain a normalized image of the detected face;
comparing the processed image to the at least one target profile stored on the storage system, wherein the comparing of the processed image comprises:
obtaining a difference image of the detected face by subtracting the normalized image of the detected face from a normalized image of a target face associated with the at least one target profile, and
calculating scores of respective pixels of the difference image based on a weight defined according to proximity of the respective pixels to high information portions of the human faces, wherein the weight decreases from a maximum weight value at a mouth-level to a minimum weight value at an eyes-line; and
selectively recognizing the user seeking access to the system based on a result of the comparing.
(Dependent claims: 16-26)
Specification