Method and system enabling natural user interface gestures with user wearable glasses
Abstract
User wearable eye glasses include a pair of two-dimensional cameras that optically acquire information for user gestures made with an unadorned user object in an interaction zone responsive to viewing displayed imagery, with which the user can interact. Glasses systems intelligently signal process and map acquired optical information to rapidly ascertain a sparse (x,y,z) set of locations adequate to identify user gestures. The displayed imagery can be created by glasses systems and presented with a virtual on-glasses display, or can be created and/or viewed off-glasses. In some embodiments the user can see local views directly, but augmented with imagery showing internet provided tags identifying and/or providing information as to viewed objects. On-glasses systems can communicate wirelessly with cloud servers and with off-glasses systems that the user can carry in a pocket or purse.
20 Claims
1. A method to enable an unadorned user-object to communicate using gestures made in (x,y,z) space with an eye glasses wearable electronic device coupleable to a display having a display screen whereon user viewable imagery is displayable, the eye glasses having an optical acquisition system operable to capture image data, the method comprising:
(a) capturing image data of said unadorned user-object within a three-dimensional hover zone;

(b) defining within said three-dimensional hover zone an interaction subzone including at least one z0 plane disposed intermediate said eye glasses wearable electronic device and a plane at a maximum z-distance beyond which unadorned user-object gestures need not be recognized by said electronic device;

(c) processing image data captured at (a) representing an interaction of said unadorned user-object with at least a portion of said interaction subzone, defined in (b), to produce three-dimensional positional information of a detected said interaction, the processing including transforming the image data from coordinates corresponding to the optical acquisition system to world coordinates;

(d) using said three-dimensional positional information produced at (c) to determine at least one of (i) when in time, and (ii) where in (x,y,z) space said unadorned user-object interaction occurred;

(e) following determination at (d), identifying a gesture being made by said unadorned user-object; and

(f) in response to identification of a gesture at (e), generating and coupling at least one command to said display, said command having at least one characteristic selected from a group consisting of (I) said command causes altering at least one aspect of said viewable imagery, and (II) said command causes alteration of a state of said display regardless of whether an altered said state is user viewable.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10
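The camera-to-world transform recited in step (c) can be sketched as a rigid-body mapping of a triangulated point. This is a minimal illustration only, not the patent's implementation: the rotation angle, camera offset, and function names below are assumed calibration values invented for the example.

```python
import math

# Hypothetical extrinsics for one of the glasses' two-dimensional cameras
# (assumed values, not from the patent): the camera is yawed about the
# y-axis and offset from a world origin at the bridge of the glasses.
THETA = math.radians(10.0)             # assumed camera yaw
R = [                                  # rotation matrix, camera -> world
    [math.cos(THETA), 0.0, math.sin(THETA)],
    [0.0,             1.0, 0.0],
    [-math.sin(THETA), 0.0, math.cos(THETA)],
]
T = [0.03, 0.0, 0.0]                   # camera offset in metres (assumed)

def camera_to_world(p_cam):
    """Map an (x, y, z) point from camera coordinates to world
    coordinates via p_world = R @ p_cam + T."""
    return [
        sum(R[i][j] * p_cam[j] for j in range(3)) + T[i]
        for i in range(3)
    ]

# A fingertip triangulated 0.4 m in front of the camera, expressed in
# world coordinates for the downstream when/where determination of (d):
fingertip_world = camera_to_world([0.0, 0.05, 0.4])
```

In a real system R and T would come from stereo calibration of the two on-glasses cameras rather than fixed constants.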
11. An eye glasses wearable system enabling an unadorned user-object to communicate using gestures made in (x,y,z) space with an eye glasses wearable electronic device coupleable to a display having a display screen whereon user viewable imagery is displayable, the system including:
an eye glasses system including an optical acquisition system operable to capture image data of said unadorned user-object within a three-dimensional hover zone, said optical acquisition system including at least two two-dimensional cameras;

a processor; and

memory storing instructions that, when executed by the processor, cause the processor to:

define within said three-dimensional hover zone an interaction subzone including at least one z0 plane disposed intermediate a plane of said display screen and a plane at a maximum z-distance beyond which unadorned user-object gestures need not be recognized by said electronic device;

process captured image data representing an interaction of said unadorned user-object with at least a portion of said interaction subzone to produce three-dimensional positional information of a detected said interaction, the processor further transforming the image data from coordinates corresponding to the optical acquisition system to world coordinates;

determine at least one of (i) when in time, and (ii) where in (x,y,z) space said unadorned user-object first interaction occurred based on the three-dimensional positional information;

identify a gesture being made by said unadorned user-object; and

generate and couple at least one command to said display, said command having at least one characteristic selected from a group consisting of (I) said command causes altering at least one aspect of said viewable imagery, and (II) said command causes alteration of a state of said display regardless of whether an altered said state is user viewable.

Dependent claims: 12, 13, 14, 15, 16, 17, 18, 19, 20
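The interaction subzone bounded by the z0 plane and the maximum-z plane amounts to a depth gate on detected points. The sketch below illustrates that gating under assumed plane distances; the names and values are hypothetical, not taken from the patent.

```python
# Hypothetical subzone bounds in metres (assumed for illustration only):
Z0 = 0.15     # near (z0) plane of the interaction subzone
Z_MAX = 0.60  # beyond this plane, gestures need not be recognized

def in_interaction_subzone(p_world):
    """Return True if an (x, y, z) world-coordinate point lies between
    the z0 plane and the maximum-z plane of the interaction subzone."""
    _, _, z = p_world
    return Z0 <= z <= Z_MAX

# A fingertip 0.3 m out is inside the subzone; one at 0.8 m is ignored.
inside = in_interaction_subzone([0.0, 0.05, 0.3])
ignored = not in_interaction_subzone([0.0, 0.05, 0.8])
```

Only points passing this gate would feed the subsequent gesture identification and command generation steps of the claim.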
Specification