DATA ACQUISITION FOR MACHINE PERCEPTION SYSTEMS
Abstract
Apparatus, methods, and articles of manufacture for obtaining examples that break a visual expression classifier at user devices such as tablets, smartphones, personal computers, and cameras. The examples are sent from the user devices to a server. The server may use the examples to update the classifier, and then distribute the updated classifier code and/or updated classifier parameters to the user devices. The users of the devices may be incentivized to provide the examples that break the classifier, for example, by monetary reward, access to updated versions of the classifier, public ranking or recognition of the user, or a self-rewarding game. The examples may be evaluated using a pipeline of untrained crowdsourcing providers and trained experts. The examples may contain user images and/or depersonalized information extracted from the user images.
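The collection loop the abstract describes — users flag classifier-breaking examples, the server collects them, retrains, and redistributes a new classifier version — can be sketched as follows. All names here (`Example`, `Server`, `submit`, `update_classifier`) are illustrative assumptions, not terms from the patent:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Example:
    """A user-submitted example: what the classifier said vs. what the user saw."""
    image_id: str
    classifier_output: str   # e.g. "smile"
    user_label: str          # the expression the user says the image shows

    def breaks_classifier(self) -> bool:
        # A discrepancy between classifier output and image appearance.
        return self.classifier_output != self.user_label

@dataclass
class Server:
    pending: List[Example] = field(default_factory=list)
    classifier_version: int = 1

    def submit(self, ex: Example) -> None:
        # Only examples the user flagged as discrepant are collected.
        if ex.breaks_classifier():
            self.pending.append(ex)

    def update_classifier(self) -> int:
        # Stand-in for retraining on the collected breaking examples,
        # then distributing the new version to the user devices.
        if self.pending:
            self.classifier_version += 1
            self.pending.clear()
        return self.classifier_version

server = Server()
server.submit(Example("img1", "smile", "neutral"))  # discrepancy -> collected
server.submit(Example("img2", "smile", "smile"))    # no discrepancy -> ignored
new_version = server.update_classifier()
```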
19 Claims
1. A user device, comprising:
a processor implementing an expression classifier;
a display;
a camera; and
a network interface;
wherein the processor of the user device is configured to execute code to evaluate an image captured by the camera for presence of a predetermined expression using the expression classifier, and to allow a user of the user device to view on the display the image captured by the camera, and to transmit over a network at least one of (1) the image captured by the camera, and (2) depersonalized data derived from the image, to a computer system in response to identification by the user of a discrepancy between output of the expression classifier corresponding to the image captured by the camera and appearance of the image captured by the camera.
(Dependent claims: 2, 3, 4, 5, 6, 7)
8. A method for providing machine learning examples, the method comprising:
capturing an image with a camera of a computerized user device;
evaluating the image captured by the camera for presence of a predetermined expression using an expression classifier of the predetermined expression;
displaying to a user of the computerized user device the image captured by the camera;
receiving from the user an indication of a first discrepancy between output of the expression classifier corresponding to the image captured by the camera and appearance of the image captured by the camera of the user device; and
transmitting at least one of (1) the image captured by the camera, and (2) depersonalized data derived from the image captured by the camera, to a computer system over a network, the step of transmitting being performed in response to the identification by the user of the first discrepancy between the output of the expression classifier corresponding to the image captured by the camera and the appearance of the image captured by the camera;
wherein the steps of evaluating, displaying, receiving, and transmitting are performed by the computerized user device.
(Dependent claims: 9, 10, 11, 12, 13, 18)
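The device-side steps of claim 8 (capture, evaluate, display, receive the discrepancy indication, transmit) can be sketched with the camera, classifier, display, and network represented by plain callables. Every name below is a hypothetical stand-in, not from the patent:

```python
from typing import Callable

def depersonalize(image: bytes) -> dict:
    """Derive non-identifying data from the image (placeholder feature only)."""
    return {"num_bytes": len(image)}

def handle_capture(camera: Callable[[], bytes],
                   classifier: Callable[[bytes], bool],
                   display: Callable[[bytes, bool], None],
                   user_flags_discrepancy: Callable[[bytes, bool], bool],
                   send: Callable[[dict], None]) -> bool:
    image = camera()                              # capturing
    output = classifier(image)                    # evaluating for the expression
    display(image, output)                        # displaying to the user
    if user_flags_discrepancy(image, output):     # receiving the indication
        send(depersonalize(image))                # transmitting (depersonalized)
        return True
    return False

sent = []
result = handle_capture(
    camera=lambda: b"\x00" * 64,
    classifier=lambda img: True,                  # classifier reports the expression
    display=lambda img, out: None,                # user views the image on screen
    user_flags_discrepancy=lambda img, out: out,  # user disagrees with the output
    send=sent.append,
)
```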
14. A computer-implemented method comprising steps of:
receiving at a computer system from a plurality of users of a first plurality of user devices a plurality of images, each user device of the first plurality of user devices comprising a classifier of expression of a predetermined emotion, affective state, or action unit;
checking the images of the plurality of images with a computer system classifier of expression of the predetermined emotion, affective state, or action unit and discarding images that do not meet a predetermined standard applied to output of the computer system classifier, resulting in objectively qualified images that pass the predetermined standard;
sending to a plurality of untrained providers requests to rate the objectively qualified images with respect to appearance of the predetermined emotion, affective state, or action unit in the objectively qualified images;
receiving ratings of the plurality of untrained providers, in response to the requests;
applying a first quality check to the images of the plurality of images rated by the plurality of untrained providers, the first quality check being based on the ratings of the plurality of untrained providers, the step of applying the first quality check resulting in a plurality of images that passed the first quality check;
sending the plurality of images that passed the first quality check to one or more experts, for rating the images of the plurality of images that passed the first quality check by the one or more experts with respect to appearance of the predetermined emotion, affective state, or action unit in the images of the plurality of images that passed the first quality check;
receiving ratings from the one or more experts, in response to the step of sending the plurality of images that passed the first quality check;
applying a second quality check to the images rated by the one or more experts, the second quality check being based on the ratings of the one or more experts, the step of applying the second quality check resulting in one or more images that passed the second quality check; and
at least one of (1) transmitting the one or more images that passed the second quality check to a distribution server, and (2) storing the one or more images that passed the second quality check.
(Dependent claims: 15, 16, 17, 19)
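The staged filtering of claim 14 — machine prefilter, untrained crowd ratings with a first quality check, expert ratings with a second quality check — can be sketched as a pipeline. The scoring functions and numeric cutoffs below are illustrative assumptions, not values from the patent:

```python
from statistics import mean

def qualify_pipeline(images, machine_score, crowd_ratings, expert_ratings,
                     machine_cutoff=0.5, crowd_cutoff=0.6, expert_cutoff=0.7):
    """Filter submitted images through staged checks; all cutoffs are hypothetical."""
    # Stage 1: server-side classifier discards images below a fixed standard,
    # leaving the "objectively qualified" images.
    qualified = [im for im in images if machine_score(im) >= machine_cutoff]
    # Stage 2: untrained providers rate the survivors; first quality check
    # is based on their aggregated ratings.
    passed_crowd = [im for im in qualified
                    if mean(crowd_ratings(im)) >= crowd_cutoff]
    # Stage 3: one or more experts rate what the crowd passed; second
    # quality check is based on the expert ratings.
    passed_expert = [im for im in passed_crowd
                     if mean(expert_ratings(im)) >= expert_cutoff]
    return passed_expert  # ready to transmit to a distribution server or store

scores = {"a": 0.9, "b": 0.2, "c": 0.8}
crowd = {"a": [0.8, 0.7], "b": [0.9], "c": [0.4, 0.5]}
expert = {"a": [0.9], "c": [0.95]}
kept = qualify_pipeline(["a", "b", "c"],
                        machine_score=scores.get,
                        crowd_ratings=crowd.get,
                        expert_ratings=expert.get)
```

In this toy run, "b" fails the machine prefilter, "c" fails the crowd quality check, and only "a" survives the expert check.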
Specification