Processing of gesture-based user interaction activation levels
Abstract
Systems and methods for processing gesture-based user interactions within an interactive display area are provided. Display of one or more virtual objects, and user interactions with those virtual objects, may further be provided. Multiple interactive areas may be created by partitioning an area proximate a display into multiple volumetric spaces or zones. The zones may be associated with respective user interaction capabilities. A representation of a user on the display may change as the ability of the user to interact with one or more virtual objects changes.
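The zone partitioning described in the abstract can be sketched as a simple lookup from a user's distance to a capability tier. This is a minimal illustration only; the zone boundaries and capability names below are assumptions, not values taken from the specification:

```python
# Illustrative sketch: map a user's distance from the display to a
# volumetric interaction zone with an associated capability.
# Boundary values and capability labels are assumed, not from the patent.

ZONES = [
    (1.0, "touch"),      # within 1 m of the display: full interaction
    (2.5, "gesture"),    # 1-2.5 m: gesture-based interaction
    (4.0, "presence"),   # 2.5-4 m: presence detected, limited interaction
]

def interaction_zone(distance_m):
    """Return the capability of the zone containing the user."""
    for boundary, capability in ZONES:
        if distance_m <= boundary:
            return capability
    return "out_of_range"
```

A user's on-screen representation could then be switched whenever the returned capability changes, matching the abstract's note that the representation changes with the user's ability to interact.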
17 Claims
1. A method comprising:
    receiving information corresponding to a physical space from one or more cameras;
    mapping a position of a user in the physical space to a position in a virtual space based at least in part on the received information;
    determining an activation level of the user based on a distance from a position of a hand of the user in the physical space to a position of another portion of the user in the physical space, wherein the distance associated with the activation level is increased or decreased using:
        a proximity of the position of the hand of the user in the physical space relative to a virtual object during a predetermined time period, and
        a velocity of the hand of the user along one or more axes during the predetermined time period;
    selecting one of a plurality of visually perceptible indicators based on a comparison of the activation level and activation thresholds associated with respective visually perceptible indicators; and
    generating an image comprising:
        the virtual object; and
        the selected one of the plurality of visually perceptible indicators.

(Dependent claims: 2, 3, 4, 5, 6)
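The activation-level logic recited in claim 1 can be sketched as follows. The weighting of proximity and velocity, the threshold values, and the indicator names are all hypothetical assumptions for illustration; the claim does not specify how the distance is adjusted, only that proximity and velocity over a time period increase or decrease it:

```python
# Hedged sketch of the claimed activation-level computation.
# Weights, thresholds, and indicator names are illustrative assumptions.

def activation_level(hand_pos, body_pos, object_pos, prev_hand_pos, dt):
    """Activation level from hand-to-body distance, adjusted using
    proximity to a virtual object and hand velocity over a window dt."""
    # Base distance: hand to another portion of the user (e.g., torso).
    distance = sum((h - b) ** 2 for h, b in zip(hand_pos, body_pos)) ** 0.5

    # Proximity of the hand to the virtual object during the period.
    proximity = sum((h - o) ** 2 for h, o in zip(hand_pos, object_pos)) ** 0.5

    # Largest per-axis hand velocity during the period.
    velocity = max(abs(h - p) / dt for h, p in zip(hand_pos, prev_hand_pos))

    # Increase the effective distance for fast motion, decrease it when
    # the hand is far from the object (weights are assumed).
    return distance + 0.5 * velocity - 0.2 * proximity

def select_indicator(level, thresholds=(0.3, 0.6, 0.9)):
    """Select a visually perceptible indicator by comparing the
    activation level against per-indicator activation thresholds."""
    indicators = ["idle", "hover", "approach", "activated"]  # illustrative
    for i, threshold in enumerate(thresholds):
        if level < threshold:
            return indicators[i]
    return indicators[-1]
```

The selected indicator would then be composited into the generated image together with the virtual object, per the final limitation of the claim.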
7. A non-transitory computer readable storage medium having software instructions stored thereon that, in response to execution by a computing device, cause the computing device to perform operations comprising:
    receiving information corresponding to a three-dimensional space from one or more cameras;
    determining a first position of a hand of a user in the three-dimensional space based at least in part on the received information;
    determining a second position of a portion of the user that is not the hand of the user in the three-dimensional space based at least in part on the received information;
    determining an activation level of the hand of the user based on a distance between the first position and the second position, wherein the distance associated with the activation level is increased or decreased using:
        a proximity of the position of the hand of the user in the three-dimensional space relative to a virtual object during a predetermined time period, and
        a velocity of the hand of the user along one or more axes during the predetermined time period;
    selecting one of a plurality of visually perceptible indicators based on the activation level and activation thresholds associated with respective visually perceptible indicators; and
    generating an image of a virtual space comprising:
        the virtual object; and
        the selected one of the plurality of visually perceptible indicators.

(Dependent claims: 8, 9, 10, 11)
12. A system, comprising:
    one or more cameras; and
    one or more hardware processors configured to execute instructions in order to cause the system to:
        receive information corresponding to a physical space from the one or more cameras;
        determine a first position of a hand of a user in the physical space based at least in part on the received information;
        determine a second position of a second body part of the user different from the hand in the physical space based at least in part on the received information;
        determine an activation level for an interaction based on a distance between the first position and the second position, wherein the distance associated with the activation level is increased or decreased using:
            a proximity of the position of the hand of the user in the physical space relative to a virtual object during a predetermined time period, and
            a velocity of the hand of the user along one or more axes during the predetermined time period;
        select one of a plurality of visual indications based on the activation level and activation thresholds associated with respective visual indications; and
        generate an image of a virtual space comprising:
            the virtual object; and
            the selected one of the plurality of visual indications.

(Dependent claims: 13, 14, 15, 16, 17)