Touch-free gesture recognition system and method
Abstract
The present invention provides a system and method for interacting with a 3D virtual image containing activatable objects. The system of the invention includes a 3D display device that presents a 3D image to a user, an image sensor, and a processor. The processor analyzes images obtained by the image sensor to determine when the user has placed an activating object, such as a hand or a finger, at the location in 3D space where the user perceives an activatable object to be, or has performed a gesture relating to that object. The user thus perceives that he is “touching” the activatable object with the activating object.
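As a rough illustration of the location-correlation step the abstract describes — deciding whether the tracked activating object (e.g. a fingertip) is at the perceived 3D location of an activatable object — here is a minimal sketch. The class, names, and spherical activation region are illustrative assumptions, not drawn from the patent:

```python
from dataclasses import dataclass
import math

@dataclass
class ActivatableObject:
    """An object in the 3D virtual image, positioned where the user perceives it."""
    name: str
    center: tuple   # (x, y, z) in the display's coordinate space
    radius: float   # activation region around the perceived location

def is_touching(fingertip: tuple, obj: ActivatableObject) -> bool:
    """Return True when the tracked activating object falls within
    the activation region of the activatable object."""
    dx, dy, dz = (f - c for f, c in zip(fingertip, obj.center))
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= obj.radius
```

For example, with a virtual button at (0, 0, 50) and an activation radius of 2.0, a fingertip tracked at (0.5, 0, 50) would register as "touching" the button, while one at (10, 0, 50) would not.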
23 Citations
20 Claims
1. A touch-free gesture recognition system, comprising:

at least one processor configured to:

display a virtual image that includes at least one activatable object, the activatable object having at least two activating modes, wherein a first mode is associated with a first mode of activating object, and a second mode is associated with a second mode of activating object;

correlate a location of an activating object with a location of the activatable object; and

implement a first action when the location of the activating object correlates to the location of the activatable object and the activating object performs a first type of movement, and implement a second action when the location of the activating object correlates to the location of the activatable object and the activating object performs a second type of movement.

Dependent claims: 2–12.
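The dispatch that claim 1 recites — one action when the activating object correlates with the activatable object and performs a first type of movement, another for a second type — can be sketched as follows. The movement labels ("tap", "swipe") and the actions are illustrative assumptions, not part of the patent:

```python
def handle_gesture(touching: bool, movement: str,
                   first_action=lambda: "selected",
                   second_action=lambda: "opened"):
    """Implement the first action when the activating object's location
    correlates to the activatable object and it performs the first type of
    movement, and the second action for the second type; otherwise do nothing."""
    if not touching:
        return None           # no location correlation, no action
    if movement == "tap":     # assumed first type of movement
        return first_action()
    if movement == "swipe":   # assumed second type of movement
        return second_action()
    return None
```

Keeping the location test and the movement classification as separate inputs mirrors the claim's structure: both conditions must hold before either action is implemented.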
13. A non-transitory computer-readable medium including instructions that, when executed by at least one processor, cause the processor to perform operations, comprising:

displaying a virtual image that includes at least one activatable object, the activatable object having at least two activating modes simultaneously activatable, wherein a first mode is associated with a first mode of activating object and a second mode is associated with a second mode of activating object;

correlating a location of the activating object with a location of the activatable object; and

implementing a first action when the location of the activating object correlates to the activatable object and the activating object performs a first type of movement, and implementing a second action when the location of the activating object correlates to the activatable object and the activating object performs a second type of movement.

Dependent claims: 14–19.
20. A touch-free gesture recognition method, comprising:

displaying a virtual image that includes at least one activatable object, the activatable object having at least two activating modes simultaneously activatable, wherein a first mode is associated with a first mode of activating object, and a second mode is associated with a second mode of activating object;

correlating a location of the activating object with a location of the activatable object; and

implementing a first action when the location of the activating object correlates to the activatable object and the activating object performs a first type of movement, and implementing a second action when the location of the activating object correlates to the activatable object and the activating object performs a second type of movement.
Specification