Managing sensory information of a user device
Abstract
External mobile device sensors may be provided that are configured to manage sensory information associated with motion of objects external to the mobile device. In some examples, the object motion may be detected independent of contact with the device. In some examples, a device may include a screen with a first sensor (e.g., a touch sensor). The device may also include at least a second sensor external to the screen. Instructions may be executed by a processor of the device to at least determine when an object is hovering over a first graphical user interface (GUI) element of the screen. Additionally, in some cases, a second GUI element may be provided on the screen such that the second GUI element is rendered on the screen adjacent to a location under the hovering object.
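The hover behavior the abstract describes, detecting an object over a first GUI element and rendering a second GUI element adjacent to (rather than under) the hovering object, can be sketched as simple geometry. This is a hedged illustration under assumed names: the `GuiElement` type, pixel coordinates, and the fixed offset are inventions of the example, not the patent's implementation.

```python
# Illustrative sketch of the abstract's hover behavior: if an object
# hovers over a GUI element, compute a position for a second element
# next to the hover point so the hovering object does not occlude it.
from dataclasses import dataclass

@dataclass
class GuiElement:
    x: int       # left edge, in pixels
    y: int       # top edge
    width: int
    height: int

    def contains(self, px: int, py: int) -> bool:
        return (self.x <= px < self.x + self.width
                and self.y <= py < self.y + self.height)

def place_adjacent_element(hover_x: int, hover_y: int,
                           target: GuiElement,
                           screen_width: int, offset: int = 20):
    """Return (x, y) for a second element rendered beside the hover
    point, or None when the object is not hovering over the target."""
    if not target.contains(hover_x, hover_y):
        return None
    # Prefer the right side of the hover point; fall back to the left
    # when that would run off the screen.
    x = hover_x + offset
    if x >= screen_width:
        x = hover_x - offset
    return (x, hover_y)
```

Rendering the second element offset from the hover location is what keeps it visible: a position directly under the object would be hidden by the very finger or face part being sensed.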
20 Claims
1. A computer-implemented method, comprising:
monitoring, by a first sensor of a mobile device, for an object external to and independent of contact with a screen of the mobile device;
receiving, from the first sensor, first information that identifies the object external to and independent of contact with the screen;
determining, based at least in part on the first information, that the object comprises at least a portion of a face of a user;
determining a plurality of potential operations of the mobile device based at least in part on the portion of the face of the user being determined from the first information;
receiving, from a second sensor, second information that further identifies the object;
determining, based at least in part on the second information, facial characteristics of the user;
determining an operation of the plurality of potential operations of the mobile device based at least in part on the facial characteristics of the user corresponding to user information; and
performing the determined operation.
Dependent claims: 2, 3, 4, 5, 6.
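The two-stage flow recited in claim 1 can be sketched in code: a first sensor detects that at least part of a face is present, which narrows the device to a set of candidate operations, and a second sensor then supplies facial characteristics that are matched against stored user information to select the operation to perform. This is an illustrative sketch only, not the patented implementation; the function names, the `USER_INFO` profile table, and the eye-distance characteristic are all assumptions made for the example.

```python
# Illustrative sketch (not the patented implementation) of claim 1's flow.
# Hypothetical stored user information: facial characteristic -> operation.
USER_INFO = {
    "user_a": {"eye_distance_mm": 62, "operation": "unlock"},
    "user_b": {"eye_distance_mm": 58, "operation": "show_notifications"},
}

def detect_face(first_sensor_frame: dict) -> bool:
    """Stand-in for coarse first-sensor detection: reports whether at
    least part of a face is in view, without touching the screen."""
    return first_sensor_frame.get("face_like_region", False)

def candidate_operations(face_present: bool) -> list:
    """Face presence narrows the device to face-driven operations."""
    return ["unlock", "show_notifications"] if face_present else []

def select_operation(second_sensor_reading: dict, candidates: list):
    """Match measured facial characteristics against stored user info."""
    for profile in USER_INFO.values():
        close_enough = abs(
            profile["eye_distance_mm"] - second_sensor_reading["eye_distance_mm"]
        ) <= 1
        if close_enough and profile["operation"] in candidates:
            return profile["operation"]
    return None

def run(first_sensor_frame: dict, second_sensor_reading: dict):
    # Stage 1: first sensor establishes face presence and candidates.
    candidates = candidate_operations(detect_face(first_sensor_frame))
    if not candidates:
        return None
    # Stage 2: second sensor's characteristics pick one operation.
    return select_operation(second_sensor_reading, candidates)
```

For example, a frame containing a face-like region plus a second-sensor reading matching a stored profile would yield that profile's operation, while an empty frame short-circuits before the second sensor's data is ever consulted.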
7. A device, comprising:
a screen configured to render a user interface;
a first sensor configured to sense first information about an object independent of contact between the object and the screen;
a second sensor configured to sense second information about characteristics of the object;
a memory that stores computer-executable instructions; and
a processor configured to access the memory and to execute the computer-executable instructions to collectively at least:
receive, from the first sensor, the first information about the object, the first information sensed by the first sensor independent of contact with the screen;
determine that the object comprises at least part of a face based at least in part on the first information;
activate the second sensor based at least in part on the at least part of the face being determined from the first information;
receive, from the second sensor, the second information about the characteristics of the object, the second information indicating a characteristic of the face; and
perform an operation of the device based at least in part on the characteristic of the face corresponding to user information.
Dependent claims: 8, 9, 10, 11, 12, 13, 14, 15, 16.
17. A system, comprising:
a screen;
a first sensor configured to sense an object external to the screen;
a second sensor configured to sense a characteristic of the object;
a memory that stores computer-executable instructions; and
a processor configured to access the memory and execute the computer-executable instructions to collectively at least:
identify, based at least in part on first information obtained by the first sensor, that the object external to the screen comprises at least part of a face;
activate the second sensor based at least in part on the object being identified as comprising the at least part of the face;
determine, based at least in part on second information obtained by the second sensor, the characteristic of the object;
compare the characteristic to data stored in the memory that identifies a user of the system; and
provide access to the system when the characteristic matches the data.
Dependent claims: 18, 19, 20.
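Claim 17 gates both power use and access: the second sensor is activated only after the first sensor identifies a face, and access is granted only when the second sensor's measurement matches stored user data. A minimal sketch of that gating, assuming a hypothetical `SecondSensor` class and a numeric facial-geometry characteristic with an illustrative matching tolerance:

```python
# Hedged sketch of claim 17's access-control flow; the class, the raw
# characteristic value, and the tolerance are illustrative assumptions.
class SecondSensor:
    def __init__(self):
        self.active = False

    def activate(self):
        self.active = True

    def read_characteristic(self, raw: float) -> float:
        if not self.active:
            raise RuntimeError("second sensor not activated")
        return raw  # e.g. a facial-geometry measurement

def grant_access(first_sensor_sees_face: bool,
                 raw_characteristic: float,
                 stored_value: float,
                 tolerance: float = 0.05) -> bool:
    sensor = SecondSensor()
    if not first_sensor_sees_face:
        # Second sensor is never activated, conserving power.
        return False
    sensor.activate()
    measured = sensor.read_characteristic(raw_characteristic)
    # Access is provided only when the characteristic matches stored data.
    return abs(measured - stored_value) <= tolerance
```

One design point this surfaces: keeping the second (presumably more power-hungry or privacy-sensitive) sensor inactive until the first sensor's coarse face detection succeeds means the expensive comparison path only runs when there is plausibly a user in front of the screen.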
Specification