Interacting with user interface via avatar
First Claim
1. In a computing device, a method of presenting a user interface, the method comprising:
receiving depth data from a depth-sensing camera;
locating a plurality of persons in the depth data;
determining a selected person of the plurality of persons to be in control of the user interface relative to one or more other persons of the plurality of persons from one or more of a posture and a gesture of the selected person in the depth data relative to the one or more other persons of the plurality of persons, the one or more of the posture and the gesture comprising one or more characteristics indicative of an intent of the selected person to assume control of the user interface;
forming an image of an avatar representing the selected person;
outputting to the display device an image of a user interface, the user interface comprising an interactive user interface control;
outputting to the display device the image of the avatar such that the avatar appears to face the user interface control;
detecting a motion of the selected person via the depth data; and
outputting to the display device an animated representation of the avatar interacting with the user interface control based upon the motion of the selected person.
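The claim's control-arbitration step — selecting which of several detected persons controls the interface from posture and gesture cues — can be illustrated with a minimal sketch. The cue names, weights, and the `Person` fields below are hypothetical; the claim does not prescribe any particular scoring algorithm.

```python
# Hypothetical sketch of the claimed control-arbitration step: among the
# persons located in the depth data, select the one whose posture/gesture
# cues best indicate an intent to assume control of the user interface.
from dataclasses import dataclass

@dataclass
class Person:
    id: int
    hand_raised: bool        # gesture cue derived from the depth data
    facing_display: bool     # posture cue derived from the depth data
    distance_m: float        # distance from the depth-sensing camera

def select_controlling_person(persons: list[Person]) -> Person:
    """Return the person whose cues most strongly indicate intent to control."""
    def intent_score(p: Person) -> float:
        score = 0.0
        if p.hand_raised:
            score += 2.0             # an explicit gesture outweighs posture alone
        if p.facing_display:
            score += 1.0
        score -= 0.1 * p.distance_m  # nearer persons rank slightly higher
        return score
    return max(persons, key=intent_score)

people = [
    Person(id=0, hand_raised=False, facing_display=True, distance_m=2.0),
    Person(id=1, hand_raised=True, facing_display=True, distance_m=3.0),
]
print(select_controlling_person(people).id)  # → 1 (raised hand wins)
```

Person 1 scores 2.7 (gesture + posture − distance penalty) against person 0's 0.8, so the gesture cue dominates — consistent with the claim's emphasis on characteristics "indicative of an intent ... to assume control."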
Abstract
Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.
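The abstract's step of "mapping a physical space in front of the person to a screen space of a display device" can be sketched as a linear mapping from a rectangular camera-space region onto display pixels. The region bounds and screen resolution below are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of mapping a physical region in front of the person
# (camera-space coordinates, in metres) onto the display's pixel space.
def map_to_screen(x_m: float, y_m: float,
                  region=(-0.5, 0.5, 0.9, 1.9),  # x_min, x_max, y_min, y_max (m)
                  screen=(1920, 1080)) -> tuple[int, int]:
    x_min, x_max, y_min, y_max = region
    w, h = screen
    # Normalise into [0, 1], then scale to pixels; y is flipped because
    # screen coordinates grow downward while physical height grows upward.
    u = (x_m - x_min) / (x_max - x_min)
    v = 1.0 - (y_m - y_min) / (y_max - y_min)
    px = min(max(int(u * (w - 1)), 0), w - 1)   # clamp to the screen bounds
    py = min(max(int(v * (h - 1)), 0), h - 1)
    return px, py

print(map_to_screen(0.0, 1.4))  # centre of the region → (959, 539)
```

A hand position tracked in the depth data can then drive the avatar toward the user interface control whose screen coordinates the mapping yields.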
20 Claims
1. In a computing device, a method of presenting a user interface, the method comprising:
receiving depth data from a depth-sensing camera;
locating a plurality of persons in the depth data;
determining a selected person of the plurality of persons to be in control of the user interface relative to one or more other persons of the plurality of persons from one or more of a posture and a gesture of the selected person in the depth data relative to the one or more other persons of the plurality of persons, the one or more of the posture and the gesture comprising one or more characteristics indicative of an intent of the selected person to assume control of the user interface;
forming an image of an avatar representing the selected person;
outputting to the display device an image of a user interface, the user interface comprising an interactive user interface control;
outputting to the display device the image of the avatar such that the avatar appears to face the user interface control;
detecting a motion of the selected person via the depth data; and
outputting to the display device an animated representation of the avatar interacting with the user interface control based upon the motion of the selected person.
- View Dependent Claims (2, 3, 4, 5, 6, 7)
8. A computing device, comprising:
a processor; and
memory comprising instructions executable by the processor to:
receive depth data from a depth-sensing camera;
locate a plurality of persons in the depth data;
determine a selected person of the plurality of persons to be in control of the user interface relative to one or more other persons of the plurality of persons from one or more of a posture and a gesture of the selected person in the depth data relative to the one or more other persons of the plurality of persons, the one or more of the posture and the gesture comprising one or more characteristics indicative of an intent of the selected person to assume control of the user interface;
form an image of an avatar representing the selected person;
output to the display device an image of a user interface, the user interface comprising an interactive user interface control;
output to the display device the image of the avatar such that the avatar appears to face the user interface control;
detect a motion of the selected person via the depth data; and
output to the display device an animated representation of the avatar interacting with the user interface control based upon the motion of the selected person.
- View Dependent Claims (9, 10, 11, 12, 13, 14)
15. A storage device, comprising instructions executable by a computing device to:
receive depth data from a depth-sensing camera;
locate a plurality of persons in the depth data;
determine a selected person of the plurality of persons to be in control of the user interface relative to one or more other persons of the plurality of persons from one or more of a posture and a gesture of the selected person in the depth data relative to the one or more other persons of the plurality of persons, the one or more of the posture and the gesture comprising one or more characteristics indicative of an intent of the selected person to assume control of the user interface;
form an image of an avatar representing the selected person;
output to the display device an image of a user interface, the user interface comprising an interactive user interface control;
output to the display device the image of the avatar such that the avatar appears to face the user interface control;
detect a motion of the selected person via the depth data; and
output to the display device an animated representation of the avatar interacting with the user interface control based upon the motion of the selected person.
- View Dependent Claims (16, 17, 18, 19, 20)
Specification