Position capture input apparatus, system, and method therefor
First Claim
1. A position capture input system comprising:
a headworn device supportable on a head of a human user;
a first camera located in the headworn device pointing forward and configured to capture an image of a field of view from a perspective of the user, wherein the first camera is viewing at least part of what the user is viewing; and
a computing device in communication with the first camera and configured to:
perform a calibration process comprising displaying a calibration screen and receiving, from the first camera, a captured image of the calibration screen in the field of view from the perspective of the user, wherein the calibration process is performed on an ongoing basis to accommodate for the user changing perspective or moving;
display a graphical user interface, wherein the graphical user interface comprises a plurality of user interface elements;
receive, from the first camera, a captured image of the graphical user interface in the field of view from the perspective of the user;
compare the captured image of the graphical user interface to an image of the graphical user interface to detect an obstruction in the captured image of the graphical user interface;
map a position of the detected obstruction to a location in the graphical user interface using results obtained from the calibration process; and
determine whether a user is interacting with one of the plurality of user interface elements of the graphical user interface based on the mapped position of the obstruction to the location in the graphical user interface.
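The compare-map-determine steps recited in claim 1 can be sketched in code. The following is an illustrative assumption, not the patented implementation: `detect_obstruction`, `map_to_gui`, `hit_test`, the difference threshold, and the element layout are all hypothetical names and values chosen for the sketch, and the calibration result is modeled as a 3x3 homography matrix.

```python
import numpy as np

def detect_obstruction(displayed, captured, threshold=40):
    """Compare the displayed GUI image with the camera capture and
    return the centroid (row, col) of the differing region, or None
    if no obstruction is detected."""
    diff = np.abs(displayed.astype(int) - captured.astype(int))
    mask = diff > threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def map_to_gui(point, homography):
    """Map a camera-frame (row, col) point into GUI (x, y) coordinates
    using the 3x3 homography obtained from the calibration process."""
    v = homography @ np.array([point[1], point[0], 1.0])
    return v[0] / v[2], v[1] / v[2]

def hit_test(gui_point, elements):
    """Return the name of the first UI element whose bounding box
    (x0, y0, x1, y1) contains the mapped point, or None."""
    x, y = gui_point
    for name, (x0, y0, x1, y1) in elements.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

# Example: a hand occluding a 10x10 patch of a blank 100x100 display.
displayed = np.zeros((100, 100), dtype=np.uint8)
captured = displayed.copy()
captured[20:30, 40:50] = 200          # obstructed pixels differ from the GUI
centroid = detect_obstruction(displayed, captured)        # (24.5, 44.5)
gui_xy = map_to_gui(centroid, np.eye(3))                  # identity calibration
print(hit_test(gui_xy, {"button": (40, 20, 50, 30)}))     # prints "button"
```

With an identity homography the camera frame and GUI coincide; in practice the ongoing calibration recited in the claim would re-estimate the homography as the user's head moves.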
Abstract
According to various embodiments, a position capture input system uses a camera to capture an image of a displayed graphical user interface that may be partially obstructed by an object, such as a user's hand or other body part. The position capture input system also includes a software component that causes a computing device to compare the captured image with a displayed image to determine which portion, if any, of the graphical user interface is obstructed. The computing device can then identify any user interface elements with which the user is attempting to interact. The position capture input system may also include an accelerometer or accelerometers for detecting gestures performed by the user to, for example, select or otherwise interact with a user interface element. The position capture input system may also include a haptic feedback module to provide confirmation, for example, that a user interface element has been selected.
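The calibration step summarized in the abstract, in which the system displays a known calibration screen and registers the camera's view of it, is commonly realized as a planar homography estimate. The sketch below is an illustrative assumption rather than the patent's disclosed method; `estimate_homography` and the four-corner example are hypothetical.

```python
import numpy as np

def estimate_homography(src, dst):
    """Solve the direct linear transform for the 3x3 homography H
    mapping each src point to its dst point, with H[2,2] fixed to 1.
    Requires four non-degenerate point correspondences."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, point):
    """Map an (x, y) point through H with perspective division."""
    v = H @ np.array([point[0], point[1], 1.0])
    return v[0] / v[2], v[1] / v[2]

# Example: the camera sees the calibration screen's corners at twice
# their GUI coordinates (a pure 2x scale, for illustration).
H = estimate_homography([(0, 0), (2, 0), (2, 2), (0, 2)],   # camera frame
                        [(0, 0), (1, 0), (1, 1), (0, 1)])   # GUI frame
```

Re-running this estimate on each new frame of the calibration screen is one way to realize the "ongoing basis" calibration recited in the claims.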
21 Claims
1. A position capture input system comprising:

a headworn device supportable on a head of a human user;

a first camera located in the headworn device pointing forward and configured to capture an image of a field of view from a perspective of the user, wherein the first camera is viewing at least part of what the user is viewing; and

a computing device in communication with the first camera and configured to:

perform a calibration process comprising displaying a calibration screen and receiving, from the first camera, a captured image of the calibration screen in the field of view from the perspective of the user, wherein the calibration process is performed on an ongoing basis to accommodate for the user changing perspective or moving;

display a graphical user interface, wherein the graphical user interface comprises a plurality of user interface elements;

receive, from the first camera, a captured image of the graphical user interface in the field of view from the perspective of the user;

compare the captured image of the graphical user interface to an image of the graphical user interface to detect an obstruction in the captured image of the graphical user interface;

map a position of the detected obstruction to a location in the graphical user interface using results obtained from the calibration process; and

determine whether a user is interacting with one of the plurality of user interface elements of the graphical user interface based on the mapped position of the obstruction to the location in the graphical user interface.

View Dependent Claims (2, 3, 4, 5, 6, 7)
8. A method of processing user input received in a computing device, the method comprising:
in the computing device:

receiving captured images from a camera located in a headworn device supportable on a head of a human user, wherein the camera points forward and is configured to capture an image of a field of view from a perspective of the user and wherein the camera is viewing at least part of what the user is viewing;

performing a calibration process comprising displaying a calibration screen and receiving, from the camera, a captured image of the calibration screen in the field of view from the perspective of the user, wherein the calibration process is performed on an ongoing basis to accommodate for the user changing perspective or moving;

displaying a graphical user interface, wherein the graphical user interface comprises a plurality of user interface elements;

receiving, from the camera, a captured image of the graphical user interface in the field of view from the perspective of the user;

comparing the captured image of the graphical user interface to an image of the graphical user interface to detect an obstruction in the captured image of the graphical user interface;

mapping a position of the detected obstruction to a location in the graphical user interface using results obtained from the calibration process; and

determining whether the user is interacting with one of the plurality of user interface elements of the graphical user interface based on the mapped position of the obstruction to the location in the graphical user interface.

View Dependent Claims (9, 10, 11, 12, 13, 14, 15, 16)
17. A non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by a computing device, cause the computing device to perform a method comprising:
receiving captured images from a camera located in a headworn device supportable on a head of a human user, wherein the camera points forward and is configured to capture an image of a field of view from a perspective of the user and wherein the camera is viewing at least part of what the user is viewing;

performing a calibration process comprising displaying a calibration screen and receiving, from the camera, a captured image of the calibration screen in the field of view from the perspective of the user, wherein the calibration process is performed on an ongoing basis to accommodate for the user changing perspective or moving;

displaying a graphical user interface, wherein the graphical user interface comprises a plurality of user interface elements;

receiving, from the camera, a captured image of the graphical user interface in the field of view from the perspective of the user;

comparing the captured image of the graphical user interface to an image of the graphical user interface to detect an obstruction in the captured image of the graphical user interface;

mapping a position of the detected obstruction to a location in the graphical user interface using results obtained from the calibration process; and

determining whether the user is interacting with one of the plurality of user interface elements of the graphical user interface based on the mapped position of the obstruction to the location in the graphical user interface.

View Dependent Claims (18, 19, 20, 21)
Specification