Interacting with user interface via avatar
Abstract
Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.
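The abstract's step of mapping a physical space in front of the person to a screen space of a display device amounts to a coordinate transform from an interaction region (tracked by the depth camera) to pixels. The patent does not disclose an implementation; as an illustrative sketch only (the function name, the rectangular-region model, and the y-axis convention are assumptions, not from the patent), the mapping could look like:

```python
def map_to_screen(x, y, region_min, region_max, screen_w, screen_h):
    """Map a point in a physical interaction region (e.g., meters in
    depth-camera coordinates) to screen pixels.

    The region is modeled as an axis-aligned rectangle in front of the
    person, given by its (x, y) minimum and maximum corners. Screen-space
    y-axis direction is ignored here for simplicity.
    """
    # Normalize to [0, 1] within the region.
    rx = (x - region_min[0]) / (region_max[0] - region_min[0])
    ry = (y - region_min[1]) / (region_max[1] - region_min[1])
    # Clamp so a hand outside the region still maps onto the screen edge.
    rx = min(max(rx, 0.0), 1.0)
    ry = min(max(ry, 0.0), 1.0)
    return (rx * screen_w, ry * screen_h)
```

For example, a hand at the center of a 1 m x 1 m region maps to the center of a 1920x1080 display.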
20 Claims
1. In a computing device, a method of presenting a user interface, the method comprising:
receiving depth data from a depth-sensing camera;
locating a plurality of persons in the depth data;
determining a selected person of the plurality of persons to be in control of the user interface relative to one or more other persons of the plurality of persons from one or more of a posture and a gesture of the selected person in the depth data relative to the one or more other persons of the plurality of persons, the one or more of the posture and the gesture comprising one or more characteristics indicative of an intent of the selected person to assume control of the user interface;
mapping a physical space in front of the selected person to a screen space of a display device;
forming an image of an avatar representing the selected person;
outputting to the display device an image of a user interface, the user interface comprising an interactive user interface control;
outputting to the display device the image of the avatar such that the avatar appears to face the user interface control;
detecting a motion of the selected person via the depth data;
forming an animated representation of the avatar interacting with the user interface control based upon the motion of the selected person;
biasing a movement of the animated representation of the avatar in a direction toward the user interface control compared to the motion of the selected person as the avatar becomes closer to the user interface control; and
outputting to the display device the animated representation of the avatar interacting with the user interface control.
(Dependent claims: 2-10)
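Claim 1's biasing limitation pulls the avatar's movement toward the control more strongly as the avatar gets closer to it, relative to the person's raw motion. The claim does not specify a formula; one plausible sketch (function name, linear falloff, and the `radius`/`strength` parameters are all assumptions for illustration) blends the mapped position toward the control's center with a weight that grows as the distance shrinks:

```python
import math

def bias_toward_control(hand, control, radius, strength=0.5):
    """Pull an avatar hand position toward a control's center.

    The pull is zero at or beyond `radius` (screen-space distance) and
    grows linearly to `strength` at the control's center, so motion far
    from the control is passed through unchanged.
    """
    dx, dy = control[0] - hand[0], control[1] - hand[1]
    dist = math.hypot(dx, dy)
    if dist >= radius:
        return hand  # too far away: no bias applied
    w = strength * (1.0 - dist / radius)
    return (hand[0] + w * dx, hand[1] + w * dy)
```

Applied every frame, this makes controls behave like shallow attraction wells, which can compensate for depth-sensor noise near a target.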
11. A computing device, comprising:
a processor; and
memory comprising instructions executable by the processor to:
receive depth data from a depth-sensing camera;
locate a person in the depth data;
map a physical space in front of the person to a screen space of a display device to form a mapping of the physical space in front of the person;
form an image of an avatar representing the person;
output to the display device an image of a user interface, the user interface comprising an interactive user interface control;
output to the display device the image of the avatar such that the avatar appears to face outwardly from the display device toward the person;
detect a motion of a limb of the person via the depth data;
form an animated representation of a limb of the avatar moving based upon the motion of the limb of the person and the mapping of the physical space in front of the person;
form an animated representation of the limb of the avatar interacting with the user interface control;
bias a movement of the animated representation of the limb of the avatar in a direction toward the user interface control and away from a motion of the limb of the person as the limb of the avatar becomes closer to the user interface control; and
output to the display device the animated representation of the avatar interacting with the user interface control, the animated representation comprising a hand of the avatar closing over the user interface control to form a representation of a hand that is closed over the user interface control and grips the user interface control.
(Dependent claims: 12-15)
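Claim 11 adds a feedback element: the avatar's hand closes over and grips the control once it reaches it. As a minimal sketch of that state change only (the function name, rectangle representation, and pose labels are hypothetical, not claim language), the pose could be switched based on whether the hand position lies within the control's screen-space bounds:

```python
def hand_pose(hand, control_rect):
    """Return the avatar hand pose to render: 'open' normally,
    'gripping' when the hand is over the control's bounding
    rectangle, given as (x, y, width, height) in screen space.
    """
    x, y, w, h = control_rect
    over = (x <= hand[0] <= x + w) and (y <= hand[1] <= y + h)
    return "gripping" if over else "open"
```

A renderer would then draw the closed-hand animation whenever this returns "gripping", giving the user visible confirmation that the control has been acquired.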
16. A computer-readable storage device comprising instructions stored thereon that are executable by a computing device to:
receive depth data from a depth-sensing camera;
locate a plurality of persons in the depth data;
determine a selected person of the plurality of persons to be in control of a user interface;
map a physical space in front of the selected person to a space on a screen of a display device to form a mapping of the physical space in front of the selected person;
form an image of a selected avatar representing the selected person and an image of an additional avatar representing a different person of the plurality of persons;
output to the display device an image of a user interface comprising an interactive user interface control;
output to the display device the image of the selected avatar such that the selected avatar appears to face outwardly from the display device toward the selected person, the selected avatar being displayed with an interactive pose indicating an intent of the user to provide control input to the user interface;
output to the display device the image of the additional avatar representing the different person, the additional avatar being displayed with a non-interactive pose;
detect a motion of a limb of the selected person via the depth data;
form an animated representation of an arm of the avatar moving based upon the motion of the limb of the selected person and the mapping of the physical space in front of the selected person;
as the limb of the avatar moves closer toward the user interface control, bias a movement of the limb of the avatar in a direction toward the user interface control compared to the motion of the limb of the selected person; and
output to the display device an animated representation of a hand of the avatar interacting with the user interface control, the animated representation comprising the hand of the avatar closing over the user interface control to form a representation of a hand that is closed over the user interface control and grips the user interface control.
(Dependent claims: 17-20)
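Claims 1 and 16 both require determining which of several tracked persons controls the interface from a posture or gesture indicating intent. The claims name the signal but not a method; one illustrative heuristic (the dictionary layout, joint names, and raised-hand scoring are assumptions, not from the patent) scores each person by how far a tracked hand is raised above the shoulder and selects the highest scorer:

```python
def select_controller(persons):
    """Pick who controls the UI from skeleton data.

    `persons` maps a person id to joint heights (here just 'hand_y'
    and 'shoulder_y' in depth-camera coordinates, with y increasing
    upward). A hand raised above the shoulder signals intent; the
    person with the most-raised hand wins. Returns None if nobody
    signals intent.
    """
    best, best_score = None, 0.0
    for pid, joints in persons.items():
        score = joints["hand_y"] - joints["shoulder_y"]
        if score > best_score:
            best, best_score = pid, score
    return best
```

The non-selected persons would then be rendered as the claim's additional avatars in a non-interactive pose.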
Specification