Method for interacting with an object displayed on data eyeglasses
Abstract
The invention relates to a method for interacting with an object that is displayed to a user by smart glasses, which include a display. The method includes: displaying the object for the user using the display; detecting, using a first camera, that the user closes a first eye and keeps the first eye closed during a predetermined period of time; recording a hand of the user using a second camera; determining that the user performs an input action during the predetermined period of time, wherein the input action includes the hand assuming an attitude, and a position from a perspective of a second eye of the user with respect to the object, that meet a predetermined condition; and performing an action with respect to the object, wherein the action is associated with the input action in advance.
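The abstract's eye-closure gate (close one eye and keep it closed for a predetermined period before gesture input is accepted) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name, the sample format, and the dwell value `DWELL_SECONDS` are all assumptions.

```python
# Hypothetical sketch of the eye-closure gate described in the abstract.
# The dwell length stands in for the claimed "predetermined period of time".

DWELL_SECONDS = 1.5  # assumed value; the patent does not fix a duration

def eye_closed_long_enough(samples, dwell=DWELL_SECONDS):
    """samples: list of (timestamp, is_closed) pairs derived from the
    first camera, which records the user's first eye.

    Returns True once the first eye has stayed closed continuously for
    `dwell` seconds, i.e. the point at which gesture input would be armed."""
    closed_since = None
    for t, is_closed in samples:
        if is_closed:
            if closed_since is None:
                closed_since = t  # eye just closed; start the timer
            if t - closed_since >= dwell:
                return True
        else:
            closed_since = None  # eye opened; reset the timer
    return False
```

For example, `eye_closed_long_enough([(0.0, True), (1.0, True), (1.6, True)])` reports that the gate has opened, while a blink that interrupts the closure resets the timer.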
15 Claims
1. A method for interacting with a first object being displayed to a user via smart glasses, the smart glasses comprising a display, the method comprising acts of:

displaying the first object and a second object in a first virtual image for the user using the display of the smart glasses;

detecting that the user closes a first eye and keeps the first eye closed during a predetermined period of time using a first camera configured to record the first eye of the user;

recording a first hand of the user using a second camera;

determining a hand-eye-object vector associated with the second object;

determining that at least a portion of the second object is covering at least a portion of the first hand based at least on the determined hand-eye-object vector, and displaying the first and second objects in a second virtual image such that the portion of the second object covering the first hand is not displayed;

determining that the user performs a first input action for the first object in the second virtual image during the predetermined period of time, wherein the first input action includes the first hand of the user assuming an attitude, and a position from a perspective of a second eye of the user with respect to the first object, that meet a predetermined condition; and

carrying out a first action with respect to the first object, wherein the first action is associated with the first input action in advance, and wherein the hand-eye-object vector indicates a positional relationship among the second eye of the user, the first hand, and the second object.

(Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 12, 13)
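The claim defines the hand-eye-object vector only as a positional relationship among the second (open) eye, the first hand, and the second object. One plausible reading is a pair of rays from the eye, whose near-alignment means the virtual object would be drawn over the hand and should be masked in the second image. The sketch below follows that reading with 3-D point coordinates; the function names, the angular tolerance, and the alignment test itself are illustrative assumptions, not the patent's method.

```python
import math

def hand_eye_object_vector(eye, hand, obj):
    """Return the eye-to-hand and eye-to-object rays that fix the claimed
    positional relationship among the second eye, first hand, and object."""
    to_hand = tuple(h - e for e, h in zip(eye, hand))
    to_obj = tuple(o - e for e, o in zip(eye, obj))
    return to_hand, to_obj

def object_covers_hand(eye, hand, obj, angular_tol=0.05):
    """True when the object's virtual position lies along (almost) the same
    line of sight as the hand, so rendering it would cover the hand.
    The covering portion would then be left out of the second virtual image.
    The tolerance is an assumed stand-in for a per-pixel overlap test, and
    depth ordering along the rays is deliberately left out here."""
    to_hand, to_obj = hand_eye_object_vector(eye, hand, obj)
    dot = sum(a * b for a, b in zip(to_hand, to_obj))
    nh = math.sqrt(sum(a * a for a in to_hand))
    no = math.sqrt(sum(a * a for a in to_obj))
    angle = math.acos(max(-1.0, min(1.0, dot / (nh * no))))
    return angle < angular_tol
```

With the eye at the origin, a hand at (0, 0, 2) and an object at (0, 0, 1) share a line of sight, so the object's overlapping portion would be masked; an object off to the side at (1, 0, 1) would be displayed in full.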
10. A device comprising:
a display;

a first camera;

a second camera; and

at least one processor executing stored program instructions to:

display a first object and a second object in a first virtual image for a user using the display;

detect that the user closes a first eye and keeps the first eye closed during a predetermined period of time using the first camera configured to record the first eye of the user;

record a first hand of the user using the second camera;

determine a hand-eye-object vector associated with the second object;

determine that at least a portion of the second object is covering at least a portion of the first hand based at least on the determined hand-eye-object vector, and display the first and second objects in a second virtual image such that the portion of the second object covering the first hand is not displayed;

determine that the user performs an input action for the first object in the second virtual image during the predetermined period of time, wherein the input action includes the first hand of the user assuming an attitude, and a position from a perspective of a second eye of the user with respect to the first object, that meet a predetermined condition; and

carry out an action with respect to the first object, wherein the action is associated with the input action in advance, and wherein the hand-eye-object vector indicates a positional relationship among the second eye of the user, the first hand, and the second object.

(Dependent claim: 14)
11. A non-transitory computer readable medium storing program instructions, the program instructions when executed by at least one processor performing a method comprising acts of:

displaying a first object and a second object in a first virtual image for a user using a display;

detecting that the user closes a first eye and keeps the first eye closed during a predetermined period of time using a first camera configured to record the first eye of the user;

recording a first hand of the user using a second camera;

determining a hand-eye-object vector associated with the second object;

determining that at least a portion of the second object is covering at least a portion of the first hand based at least on the determined hand-eye-object vector, and displaying the first and second objects in a second virtual image such that the portion of the second object covering the first hand is not displayed;

determining that the user performs a first input action for the first object in the second virtual image during the predetermined period of time, wherein the first input action includes the first hand of the user assuming an attitude, and a position from a perspective of a second eye of the user with respect to the first object, that meet a predetermined condition; and

carrying out a first action with respect to the first object, wherein the first action is associated with the first input action in advance, and wherein the hand-eye-object vector indicates a positional relationship among the second eye of the user, the first hand, and the second object.

(Dependent claim: 15)
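The input-action test that recurs across the claims (the hand assuming an attitude, and a position seen from the second eye relative to the object, that together meet a predetermined condition) can be sketched as a lookup keyed on pose and screen position. Everything here, including the pose label "pinch", the rectangle test, and the action mapping, is a hypothetical illustration; the patent specifies neither the attitudes nor the associated actions.

```python
# Assumed mapping set "in advance", as the claims require; the pair
# (attitude, where) plays the role of the "predetermined condition".
ACTIONS = {("pinch", "on_object"): "select"}

def classify_input_action(attitude, hand_screen_pos, object_screen_rect):
    """attitude: pose label produced by hand tracking, e.g. "pinch".
    hand_screen_pos: (x, y) of the first hand projected into the second
    virtual image from the perspective of the second eye.
    object_screen_rect: (x0, y0, x1, y1) of the first object on the display.

    Returns the action associated with the input action in advance,
    or None when the predetermined condition is not met."""
    x, y = hand_screen_pos
    x0, y0, x1, y1 = object_screen_rect
    on_it = x0 <= x <= x1 and y0 <= y <= y1
    where = "on_object" if on_it else "off_object"
    return ACTIONS.get((attitude, where))
```

A pinch at (5, 5) over an object occupying (0, 0, 10, 10) maps to the "select" action; the same pinch outside the object's footprint matches no predetermined condition and yields no action.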
Specification