User identity authentication techniques for on-line content or access
Abstract
On-line course offerings can be made available to users using computational techniques that reliably authenticate the identity of individual student users during the course of the very submissions and/or participation that will establish student user proficiency with course content. Authentication methods and systems include applications of behavioral biometrics.
25 Claims
1. A multi-modal authentication method, comprising:
initially capturing an enrollment dataset of at least first- and second-type biometrics characteristic of a particular user;
subsequent to the initial capturing, simultaneously capturing, by way of one or more computing device interfaces, (i) a first dataset corresponding to the first-type biometric and (ii) a second dataset corresponding to the second-type biometric, wherein the first-type biometrics include audio features extracted from vocals of the user, wherein the second-type biometrics include visual features extracted from an image or video of the user, and wherein the first and second datasets are coherent with each other, time-aligned, and correspond to a same interactive response by the user to tracking of an on-screen moving target using an on-screen aiming mechanism for detecting and tracking facial landmarks employed to improve quality or uniformity of the image or video of the user;
computationally determining correspondence of the first- and second-type biometrics with the enrollment dataset; and
authenticating an identity of the user based on the determined correspondence (i) between the first- and second-type biometrics with the enrollment dataset and (ii) between the time-aligned audio and visual features.
View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14)
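Claim 1 requires that the simultaneously captured audio and visual datasets be "coherent with each other" and "time-aligned." One way to picture such a check is a normalized cross-correlation between an audio energy envelope and a lip-motion signal: if the two streams peak together within a small lag, they plausibly came from the same interactive response. This is a minimal sketch only; the function names, lag tolerance, and 0.8 threshold are illustrative assumptions, not anything recited in the patent.

```python
# Illustrative coherence check between two per-frame feature streams:
# audio_energy[t] (e.g., frame RMS energy) and mouth_opening[t]
# (e.g., a lip-gap measurement from tracked facial landmarks).

def normalized_xcorr(a, b, max_lag):
    """Return (best_lag, best_corr) over integer lags in [-max_lag, max_lag]."""
    def norm(xs):
        m = sum(xs) / len(xs)
        centered = [x - m for x in xs]
        s = sum(x * x for x in centered) ** 0.5 or 1.0  # avoid div-by-zero
        return [x / s for x in centered]
    a, b = norm(a), norm(b)
    best = (0, -2.0)  # correlation is always > -2, so this always updates
    for lag in range(-max_lag, max_lag + 1):
        corr = sum(a[i] * b[i + lag] for i in range(len(a))
                   if 0 <= i + lag < len(b))
        if corr > best[1]:
            best = (lag, corr)
    return best

def streams_are_coherent(audio_energy, mouth_opening,
                         max_lag=3, lag_tol=1, threshold=0.8):
    """True when the streams correlate strongly at (near-)zero lag."""
    lag, corr = normalized_xcorr(audio_energy, mouth_opening, max_lag)
    return abs(lag) <= lag_tol and corr >= threshold
```

Cross-correlation is a common choice for this kind of audio-visual synchrony test because it is insensitive to the absolute scale of either signal.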
15. An authentication method, comprising:
capturing, by way of a computing device or an audio interface, a sequence of audio features extracted from vocals of a particular user, wherein the captured sequence of audio features corresponds to a training speech sequence, and wherein the captured sequence of audio features includes one or more of a frequency domain spectrum, a pitch, a power, a tone, and a cadence of the vocals of the particular user;
capturing, by way of the computing device or an image or video interface, a sequence of visual features extracted from an image or video of the particular user, wherein the captured sequence of visual features includes one or more facial movements corresponding to the captured sequence of audio features;
from the captured sequence of corresponding audio and visual features, computationally evaluating at least a portion of the training speech sequence, including computational evaluation of the corresponding facial movements and the one or more of the frequency domain spectrum, the pitch, the power, the tone, and the cadence of the vocals of the particular user, thereby determining an audio-visual biometric characteristic of the particular user and suitable to discriminate the training speech sequence spoken by the particular user from the training speech sequence spoken by a statistically significant set of other users;
storing the audio-visual biometric for use in future authentication of the particular user based on an authentication speech sequence spoken by the particular user, wherein the authentication speech sequence does not directly correspond to the training speech sequence, but shares the computationally evaluated portion of the training speech sequence determined to provide the audio-visual biometric; and
based on the stored audio-visual biometric, computationally evaluating portions of speech sequences, in the course of an interactive session, by a user at a computing device or interface and, based on correspondence with the audio-visual biometric, authenticating or confirming authentication of the particular user;
wherein during the interactive session, the user provides an interactive response including tracking of an on-screen moving target using an on-screen aiming mechanism for detecting and tracking facial landmarks employed to improve quality or uniformity of the image or video of the particular user.
View Dependent Claims (16, 17, 18, 19, 20)
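Claim 15's audio-visual biometric can be pictured as a joint feature vector built from the enumerated audio features (here, pitch and power) together with facial-movement statistics, compared against an enrolled vector at authentication time. The sketch below is a deliberately simplified assumption: the autocorrelation pitch estimator, the chosen features, and the cosine-similarity threshold are all illustrative, and far cruder than a production biometric.

```python
import math

def pitch_autocorr(frame, sample_rate, fmin=80.0, fmax=400.0):
    """Crude pitch estimate (Hz) via the autocorrelation peak in [fmin, fmax]."""
    lo = int(sample_rate / fmax)              # shortest candidate period
    hi = int(sample_rate / fmin)              # longest candidate period
    best_lag, best = lo, 0.0
    for lag in range(lo, min(hi, len(frame) - 1)):
        c = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if c > best:
            best_lag, best = lag, c
    return sample_rate / best_lag

def audio_visual_biometric(frame, sample_rate, mouth_heights):
    """Join audio features (pitch, power) with facial-movement statistics."""
    power = sum(s * s for s in frame) / len(frame)
    pitch = pitch_autocorr(frame, sample_rate)
    mouth_mean = sum(mouth_heights) / len(mouth_heights)
    mouth_range = max(mouth_heights) - min(mouth_heights)
    return [pitch, power, mouth_mean, mouth_range]

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def authenticate(candidate, enrolled, threshold=0.98):
    """Accept when the candidate vector closely matches the enrolled one."""
    return cosine_similarity(candidate, enrolled) >= threshold
```

Because the comparison runs on derived features rather than raw speech, the same enrolled vector can, in principle, be checked against an authentication phrase that differs from the training phrase, as the claim requires.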
21. An authentication method, comprising:
generating and supplying, at a computing device or interface, an on-screen moving target in a first position and an on-screen aiming mechanism for detecting and tracking facial landmarks in image sequences captured with an image sensor, wherein the detecting and tracking of facial landmarks is used for aiming at the on-screen moving target in the first position;
capturing, by way of a computing device having, or interface coupled to, an image sensor directed toward a particular user, motion of visual features including gross user motion or fine user motion including motion of one or more facial landmarks;
based on the captured motion of the visual features, moving the on-screen aiming mechanism correspondingly, in magnitude and direction, to the captured motion of the visual features, wherein the motion of the visual features and corresponding movement of the on-screen aiming mechanism brings the on-screen aiming mechanism into at least a partially overlapping alignment with the on-screen moving target in the first position; and
with the on-screen aiming mechanism in the at least partially overlapping alignment with the on-screen moving target in the first position, capturing, by way of the computing device or interface, an image of the particular user in a first position, wherein the image of the particular user in the first position is computationally evaluated and a corresponding first position score is generated, where the generated first position score is used for user recognition and authentication.
View Dependent Claims (22, 23, 24, 25)
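The aiming mechanism of claim 21 can be sketched as a small state machine: captured facial-landmark displacement moves an on-screen cursor with matching magnitude and direction, an image is captured once the cursor at least partially overlaps the target, and a position score is derived from the alignment. The class, its names, and the linear scoring rule are hypothetical illustrations, not the patent's method.

```python
# Hypothetical sketch of an on-screen aiming mechanism driven by
# facial-landmark motion. Coordinates are in screen pixels.

class AimingSession:
    def __init__(self, target, target_radius=20.0, cursor=(0.0, 0.0)):
        self.target = target          # (x, y) of the on-screen moving target
        self.radius = target_radius   # overlap tolerance, in pixels
        self.cursor = cursor          # on-screen aiming mechanism position

    def apply_landmark_motion(self, dx, dy, gain=1.0):
        """Move the cursor by the captured landmark displacement (dx, dy)."""
        x, y = self.cursor
        self.cursor = (x + gain * dx, y + gain * dy)

    def _distance_to_target(self):
        tx, ty = self.target
        cx, cy = self.cursor
        return ((tx - cx) ** 2 + (ty - cy) ** 2) ** 0.5

    def overlaps_target(self):
        """True when cursor and target at least partially overlap."""
        return self._distance_to_target() <= self.radius

    def position_score(self):
        """Score in [0, 1]: 1.0 at perfect alignment, 0.0 at/after radius."""
        return max(0.0, 1.0 - self._distance_to_target() / self.radius)
```

In use, a frame-by-frame landmark tracker would call `apply_landmark_motion` per frame and trigger image capture only while `overlaps_target()` holds, which is what keeps the captured face images consistently framed.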