Key input using an active pixel camera
First Claim
1. A method for performing graphical user interface navigating on a user device using an active pixel sensor located on the user device, the method comprising:
capturing a first image using an active pixel sensor, the active pixel sensor comprising a plurality of photosensors, each photosensor corresponding to a different pixel in a two-dimensional array of pixels;
detecting a low luminance area of the first image, the low luminance area caused by an object blocking light projected by a light source outside of the user device from reaching the active pixel sensor;
capturing a second image using the active pixel sensor;
detecting a movement of the low luminance area between the first image and the second image;
determining a direction based at least in part on the movement of the low luminance area; and
translating the direction into an input for an application running on the user device.
Abstract
In an example embodiment, an active pixel sensor on a user device is utilized to capture graphical user interface navigation-related movements by a user. Areas of low luminance can be identified, and movements or alterations in the areas of low luminance can be translated into navigation commands fed to an application running on the user device.
20 Claims
1. A method for performing graphical user interface navigating on a user device using an active pixel sensor located on the user device, the method comprising:
capturing a first image using an active pixel sensor, the active pixel sensor comprising a plurality of photosensors, each photosensor corresponding to a different pixel in a two-dimensional array of pixels;
detecting a low luminance area of the first image, the low luminance area caused by an object blocking light projected by a light source outside of the user device from reaching the active pixel sensor;
capturing a second image using the active pixel sensor;
detecting a movement of the low luminance area between the first image and the second image;
determining a direction based at least in part on the movement of the low luminance area; and
translating the direction into an input for an application running on the user device.
Dependent claims: 2, 3, 4, 5, 6
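The steps of the independent method claim above can be sketched in a few lines: threshold each frame to find the low-luminance (shadowed) region, track its centroid between two captures, and map the displacement to a navigation direction. This is a minimal illustration only; the threshold value, function names, and four-way direction mapping are assumptions, not taken from the specification.

```python
# Assumed 8-bit grayscale cutoff below which a pixel counts as "low luminance".
LOW_LUMINANCE_THRESHOLD = 40

def low_luminance_centroid(frame):
    """Return the (row, col) centroid of low-luminance pixels, or None
    when no object is shadowing the active pixel sensor."""
    points = [(r, c) for r, row in enumerate(frame)
              for c, value in enumerate(row)
              if value < LOW_LUMINANCE_THRESHOLD]
    if not points:
        return None
    n = len(points)
    return (sum(r for r, _ in points) / n, sum(c for _, c in points) / n)

def movement_to_direction(first_frame, second_frame):
    """Translate movement of the low-luminance area between the first
    and second captured images into a navigation direction."""
    a = low_luminance_centroid(first_frame)
    b = low_luminance_centroid(second_frame)
    if a is None or b is None:
        return None  # no shadow in one of the frames; no input generated
    d_row, d_col = b[0] - a[0], b[1] - a[1]
    if abs(d_col) >= abs(d_row):
        return "right" if d_col > 0 else "left"
    return "down" if d_row > 0 else "up"

def bright_frame(w=8, h=8, lum=200):
    """A uniformly lit two-dimensional array of pixels."""
    return [[lum] * w for _ in range(h)]

# A shadow moving left-to-right across the pixel array:
first, second = bright_frame(), bright_frame()
for r in (3, 4):
    first[r][1] = first[r][2] = 10    # dark area near the left edge
    second[r][5] = second[r][6] = 10  # same area shifted rightward
print(movement_to_direction(first, second))  # -> right
```

The direction string would then be handed to the running application as a scroll or cursor command.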
7. A method for performing graphical user interface navigating on a user device using an active pixel sensor located on the user device, comprising:
capturing a first image using the active pixel sensor, the active pixel sensor comprising a plurality of photosensors, each photosensor corresponding to a different pixel in a two-dimensional array of pixels;
detecting an area of the first image having low luminance, the low luminance area caused by an object blocking light projected by a light source outside of the user device from reaching the active pixel sensor;
capturing a second image using the active pixel sensor;
capturing a third image using the active pixel sensor;
detecting an alteration in the detected area among the first image, the second image, and the third image such that the detected area changes from low luminance in the first image to high luminance in the second image and then back to low luminance in the third image and the first, second, and third images are captured within a predetermined time period of each other; and
generating navigation input to an application running on the user device based at least in part on detecting the alteration in the detected area, the navigation input indicating detection of a tapping motion.
Dependent claims: 8, 9
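The tap detection of claim 7 reduces to checking a low-to-high-to-low luminance alternation across three frames captured within a predetermined time window. The sketch below assumes illustrative values for the luminance threshold, minimum shadow size, and window length; none of these specifics come from the claim.

```python
def has_low_luminance_area(frame, threshold=40, min_pixels=4):
    """True when at least min_pixels pixels fall below the luminance
    threshold (both parameter values are assumed for illustration)."""
    return sum(v < threshold for row in frame for v in row) >= min_pixels

def detect_tap(frames, timestamps, max_window=0.5):
    """Detect the claimed low -> high -> low alternation across three
    images captured within max_window seconds of each other."""
    (f1, f2, f3), (t1, _, t3) = frames, timestamps
    if t3 - t1 > max_window:
        return False  # outside the predetermined time period
    return (has_low_luminance_area(f1)        # finger shadows the sensor
            and not has_low_luminance_area(f2)  # finger lifted
            and has_low_luminance_area(f3))     # finger shadows it again

bright = [[200] * 8 for _ in range(8)]
shadowed = [row[:] for row in bright]
for r in (2, 3):
    shadowed[r][2] = shadowed[r][3] = 10  # object blocks the external light
print(detect_tap([shadowed, bright, shadowed], [0.0, 0.1, 0.2]))  # -> True
```

When the alternation is found, the device would emit a navigation input indicating a tapping motion to the foreground application.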
10. A user device comprising:
a processor;
memory;
a touchscreen;
an active pixel sensor located opposite the touchscreen and configured to act as a camera while a camera application is executing by the processor, and configured to act as an input device while a non-camera application is executing by the processor, the active pixel sensor comprising a plurality of photosensors, each photosensor corresponding to a different pixel in a two-dimensional array of pixels and further configured to capture a first image and a second image; and
an active pixel sensor monitor configured to:
detect a low luminance area of the first image, the low luminance area caused by an object blocking light projected by a light source outside of the user device from reaching the active pixel sensor;
detect movement of the low luminance area between the first image and the second image;
determine a direction for the movement of the low luminance area; and
translate the direction into navigation input for an application running on the user device.
Dependent claims: 11, 12, 13, 14, 15
16. A user device having a front and a back, comprising:
a processor;
memory;
a touchscreen located on the front of the user device;
an active pixel sensor configured to act as a camera while a camera application is executing by the processor, and configured to act as an input device while a non-camera application is executing by the processor, the active pixel sensor comprising a plurality of photosensors, each photosensor corresponding to a different pixel in a two-dimensional array of pixels and further configured to capture a first image, a second image, and a third image; and
an active pixel sensor monitor configured to:
detect an area of the first image having low luminance, the low luminance area caused by an object blocking light projected by a light source outside of the user device from reaching the active pixel sensor;
detect an alteration in the detected area among the first image, the second image, and the third image such that the detected area changes from low luminance in the first image to high luminance in the second image and then back to low luminance in the third image, wherein the first image, the second image, and the third image were captured within a predetermined time period of each other; and
generate navigation input to an application running on the user device, the navigation input indicating detection of a tapping motion.
Dependent claims: 17, 18
19. A non-transitory machine-readable storage medium including instructions that, when executed by a machine, cause the machine to perform operations comprising:
selecting a user input mode for data received from an active pixel sensor;
receiving sensor data from the active pixel sensor, the active pixel sensor comprising a plurality of photosensors, each photosensor corresponding to a different pixel in a two-dimensional array of pixels;
parsing, over a period of time, received sensor data into a plurality of input events; and
generating input to an application running on the machine based at least in part on the plurality of input events, wherein the plurality of input events include movement of a low luminance area, the low luminance area caused by an object blocking light projected by a light source outside of the machine from reaching the active pixel sensor.
Dependent claims: 20
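Claim 19's "parsing, over a period of time, received sensor data into a plurality of input events" can be illustrated by buffering frames and emitting one movement event per detected displacement of the low-luminance area. The threshold and the noise-floor parameter below are assumed values, and the event representation is hypothetical.

```python
def low_luminance_centroid(frame, threshold=40):
    """Centroid of pixels below the assumed luminance threshold, or None."""
    points = [(r, c) for r, row in enumerate(frame)
              for c, v in enumerate(row) if v < threshold]
    if not points:
        return None
    n = len(points)
    return (sum(r for r, _ in points) / n, sum(c for _, c in points) / n)

def parse_into_events(frames, min_shift=2.0):
    """Parse a time-ordered stream of frames into movement input events,
    one per displacement of the low-luminance area between consecutive
    frames (min_shift is an assumed noise floor, in pixels)."""
    events, prev = [], None
    for frame in frames:
        cur = low_luminance_centroid(frame)
        if prev is not None and cur is not None:
            d_row, d_col = cur[0] - prev[0], cur[1] - prev[1]
            if (d_row ** 2 + d_col ** 2) ** 0.5 >= min_shift:
                events.append(("move", d_row, d_col))
        prev = cur
    return events

def frame_with_shadow(col):
    """A lit 8x10 pixel array with a small shadow at the given column."""
    f = [[200] * 10 for _ in range(8)]
    for r in (3, 4):
        f[r][col] = f[r][col + 1] = 10
    return f

# A shadow sweeping left to right yields a series of rightward move events.
events = parse_into_events([frame_with_shadow(c) for c in (1, 4, 7)])
print(len(events))  # -> 2
```

Each emitted event would then be translated, per the selected user input mode, into input for the application running on the machine.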
Specification