Three-dimensional gesture-controlled avatar configuration interface
First Claim
1. A method for controlling presentation to a user of a primary user experience of a software application, the method comprising:
displaying a third-person avatar in a 3D virtual scene that defines a user interface for controlling presentation of the primary user experience, the user interface being external to, different from, and provided at different times than the primary user experience;
receiving a depth map from a depth camera imaging a physical space in which the user is located, the depth map including a plurality of pixels, each pixel having a depth value that indicates a relative depth of a surface imaged by that pixel;
deriving from the depth map a virtual skeleton that provides a machine-readable representation of the user, the virtual skeleton including a plurality of joints, each joint having a three-dimensional position;
recognizing controlling movements of the user within the physical space via at least the three-dimensional positions of two or more different joints of the virtual skeleton;
causing display of controlled movements of the third-person avatar within the 3D virtual scene so that the controlled movements visually replicate the controlling movements;
detecting that the controlled movements include a predefined interaction of the third-person avatar with a user interface element displayed in the 3D virtual scene, the predefined interaction corresponding to selection of a characteristic to be implemented in connection with delivery of the primary user experience; and
controlling presentation of the primary user experience in response to and based upon detecting the predefined interaction, such that the primary user experience varies, as a result of the implemented characteristic, from that which would occur in the event of detecting a different predefined interaction.
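The interaction-detection step recited above (tracked skeleton joints tested against a UI element displayed in the 3D scene) can be sketched as follows. This is a minimal illustration, not the patented implementation; the joint names, element names, and selection-sphere threshold are all hypothetical, and the claim's "two or more different joints" is represented here by two hand joints.

```python
# Hypothetical sketch: a selection fires when a tracked hand joint enters
# a UI element's bounds. All names and thresholds are illustrative.
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

Joint = Tuple[float, float, float]  # (x, y, z) position in scene units

@dataclass
class UIElement:
    name: str
    center: Joint
    radius: float  # selection sphere around the element

    def is_touched(self, joint: Joint) -> bool:
        # Euclidean distance from the joint to the element's center.
        dx, dy, dz = (j - c for j, c in zip(joint, self.center))
        return (dx * dx + dy * dy + dz * dz) ** 0.5 <= self.radius

def detect_selection(skeleton: Dict[str, Joint],
                     elements: List[UIElement]) -> Optional[str]:
    """Return the name of the first UI element a hand joint touches."""
    for joint_name in ("hand_left", "hand_right"):
        joint = skeleton[joint_name]
        for element in elements:
            if element.is_touched(joint):
                return element.name
    return None

# One frame of joint positions, as would be derived from the depth map.
skeleton = {"hand_left": (-0.4, 1.1, 2.0), "hand_right": (0.55, 1.2, 2.1)}
elements = [UIElement("hat", (0.5, 1.2, 2.0), 0.25),
            UIElement("cape", (-2.0, 1.0, 2.0), 0.25)]
print(detect_selection(skeleton, elements))  # prints "hat"
```

The returned element name would then stand in for the "selected characteristic" that varies the primary user experience.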
Abstract
A method for controlling presentation to a user of a primary user experience of a software application is provided. The method includes displaying a third-person avatar in a 3D virtual scene that defines a user interface for controlling presentation of the primary user experience. The method further includes sensing controlling movements of the user within a physical space in which the user is located and causing display of controlled movements of the third-person avatar within the 3D virtual scene so that the controlled movements visually replicate the controlling movements. The method further includes detecting a predefined interaction of the third-person avatar with a user interface element displayed in the 3D virtual scene, and controlling presentation of the primary user experience in response to detecting the predefined interaction.
13 Claims
1. A method for controlling presentation to a user of a primary user experience of a software application, the method comprising:
displaying a third-person avatar in a 3D virtual scene that defines a user interface for controlling presentation of the primary user experience, the user interface being external to, different from, and provided at different times than the primary user experience;
receiving a depth map from a depth camera imaging a physical space in which the user is located, the depth map including a plurality of pixels, each pixel having a depth value that indicates a relative depth of a surface imaged by that pixel;
deriving from the depth map a virtual skeleton that provides a machine-readable representation of the user, the virtual skeleton including a plurality of joints, each joint having a three-dimensional position;
recognizing controlling movements of the user within the physical space via at least the three-dimensional positions of two or more different joints of the virtual skeleton;
causing display of controlled movements of the third-person avatar within the 3D virtual scene so that the controlled movements visually replicate the controlling movements;
detecting that the controlled movements include a predefined interaction of the third-person avatar with a user interface element displayed in the 3D virtual scene, the predefined interaction corresponding to selection of a characteristic to be implemented in connection with delivery of the primary user experience; and
controlling presentation of the primary user experience in response to and based upon detecting the predefined interaction, such that the primary user experience varies, as a result of the implemented characteristic, from that which would occur in the event of detecting a different predefined interaction.
Dependent claims: 2, 3, 4.
5. A computing system, comprising:
a data-holding subsystem and a logic subsystem that are operatively interconnected, the data-holding subsystem containing instructions that are executable by the logic subsystem to:
cause a display subsystem to display a third-person avatar in a 3D virtual scene that defines a user interface for controlling presentation of a primary user experience of a software application that is executable by the logic subsystem, the user interface being external to, different from, and provided at different times than the primary user experience;
in response to a depth camera imaging a user within a physical space to output a depth map that includes a plurality of pixels, each pixel having a depth value that indicates a relative depth of a surface imaged by that pixel, model the user with a virtual skeleton derived from the depth map, the virtual skeleton including a plurality of joints, each joint having a three-dimensional position;
cause the display subsystem to display controlled movements of the third-person avatar so that the controlled movements visually correspond to virtual movements of the virtual skeleton, the controlled movements being based on at least the three-dimensional positions of two or more different joints of the virtual skeleton; and
control presentation of the primary user experience in response to the virtual movements of the virtual skeleton.
Dependent claims: 6, 7.
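Claim 5 recites modeling the user with a virtual skeleton derived from the depth map, each pixel carrying a relative depth value. A toy version of that derivation is sketched below; real pipelines first classify each pixel into a body part, whereas here the per-pixel labels are simply given, and all names and values are hypothetical.

```python
# Minimal sketch: a joint's three-dimensional position is taken as the
# centroid (x, y, depth) of the depth-map pixels labeled as that body
# part. The labels would normally come from a body-part classifier.

def joint_position(depth_map, labels, part):
    """Average (x, y, depth) over pixels labeled as `part`.

    depth_map: 2D list of relative depth values, one per pixel.
    labels:    2D list of body-part labels, same shape as depth_map.
    """
    xs, ys, zs = [], [], []
    for y, row in enumerate(labels):
        for x, label in enumerate(row):
            if label == part:
                xs.append(x)
                ys.append(y)
                zs.append(depth_map[y][x])
    n = len(xs)
    return (sum(xs) / n, sum(ys) / n, sum(zs) / n)

# A 3x3 depth map and matching body-part labels (purely illustrative).
depth = [[2.0, 2.0, 1.5],
         [2.0, 1.5, 1.5],
         [2.0, 2.0, 2.0]]
labels = [["bg", "bg", "hand"],
          ["bg", "hand", "hand"],
          ["bg", "bg", "bg"]]
print(joint_position(depth, labels, "hand"))  # centroid of the 3 hand pixels
```

Repeating this for each body part yields the plurality of joints, each with a three-dimensional position, that the claim recites.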
8. A method for controlling a software application that provides a user with a primary user experience, the method comprising:
displaying a third-person avatar in a 3D virtual scene that defines a user interface for controlling the primary user experience, the user interface being external to, different from, and provided at different times than the primary user experience;
sensing a controlling movement of the user within a physical space in which the user is located, where such sensing is performed optically and in real-time using a depth camera;
in response to sensing the controlling movement of the user, causing a controlled movement of the third-person avatar within the 3D virtual scene so that the controlled movement visually corresponds to the controlling movement;
determining whether the controlled movement includes a predefined action that selects a virtual object that is displayed within the 3D virtual scene; and
if the controlled movement includes the predefined action, controlling the primary user experience to incorporate use of the virtual object in the primary user experience such that the primary user experience varies from that which would occur in the event of the virtual object not being selected, wherein the displaying, sensing, causing of the controlled movement and determining are all performed outside of the primary user experience.
Dependent claims: 9, 10, 11, 12, 13.
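Claim 8's final branch (the primary user experience varies depending on whether the predefined selecting action occurred) can be sketched as a simple configuration step performed before the experience launches. The "grab" action, the settings keys, and the object names below are hypothetical, not from the patent.

```python
# Hedged sketch: if the avatar's controlled movement includes the
# predefined selecting action, the selected virtual object is carried
# into the primary experience's settings; otherwise the settings are
# passed through unchanged.

def configure_experience(controlled_movement, base_experience):
    """Return experience settings, incorporating a selected object if any."""
    experience = dict(base_experience)  # copy; the base stays untouched
    if controlled_movement.get("action") == "grab":
        experience["equipped"] = controlled_movement["target"]
    return experience

base = {"game": "adventure", "equipped": None}
print(configure_experience({"action": "grab", "target": "sword"}, base))
print(configure_experience({"action": "wave"}, base))  # no selection: unchanged
```

The two calls show the claimed variation: the experience that results from the selecting action differs from the one that results when the virtual object is not selected.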
Specification