Recognizing user intent in motion capture system
First Claim
1. A motion capture system, comprising:
a depth camera system having a field of view;
a display; and
at least one processor in communication with the depth camera system and the display, the at least one processor executes instructions to implement an application in the motion capture system, and provide a signal to the display to display images;
wherein:
the depth camera system and at least one processor, to track a first person and a second person in the field of view, distinguish the first person's body and the second person's body in the field of view, the second person engages with the application by controlling the second person's body to control an avatar in a virtual space on the display, the second person is bound to the avatar, while the first person is not recognized as having an intent to engage with the application; and
the at least one processor, based on the tracking, allows the first person to engage with the application when the at least one processor determines that the first person has an intent to engage with the application, based on a location of the first person's body relative to a location of the second person's body in the field of view.
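The relative-location test in claim 1 could be sketched as a simple proximity check between the newcomer and the already-engaged player. This is a purely illustrative sketch, not the patent's implementation; the class, the Euclidean distance metric, and the 1.5 m threshold are all assumptions.

```python
from dataclasses import dataclass
import math

@dataclass
class TrackedPerson:
    x: float  # lateral position in the field of view (meters)
    z: float  # depth from the camera (meters)

def has_engagement_intent(newcomer: TrackedPerson,
                          engaged_player: TrackedPerson,
                          max_distance: float = 1.5) -> bool:
    """Treat the newcomer as intending to engage when the newcomer
    stands within max_distance of the already-engaged player."""
    dist = math.hypot(newcomer.x - engaged_player.x,
                      newcomer.z - engaged_player.z)
    return dist <= max_distance
```

A real system would track positions per frame from the depth camera's skeletal data; here the positions are supplied directly.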
Abstract
Techniques for facilitating interaction with an application in a motion capture system allow a person to easily begin interacting without manual setup. A depth camera system tracks a person in physical space and evaluates the person's intent to engage with the application. Factors such as location, stance, movement and voice data can be evaluated. Absolute location in a field of view of the depth camera, and location relative to another person, can be evaluated. Stance can include facing a depth camera, indicating a willingness to interact. Movements can include moving toward or away from a central area in the physical space, walking through the field of view, and movements which occur while standing generally in one location, such as moving one's arms around, gesturing, or shifting weight from one foot to another. Voice data can include volume as well as words which are detected by speech recognition.
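One way to combine the factors the abstract lists (location, stance, movement, voice) is a weighted score with a decision threshold. This is a hypothetical sketch, not drawn from the patent; the factor names, weights, and 0.5 threshold are illustrative assumptions.

```python
def intent_score(in_central_area: bool,
                 facing_camera: bool,
                 moving_toward_center: bool,
                 voice_volume: float,          # normalized to [0.0, 1.0]
                 spoke_engagement_word: bool) -> float:
    """Combine location, stance, movement, and voice cues into a
    single engagement-intent score in [0, 1]."""
    score = 0.0
    score += 0.25 if in_central_area else 0.0
    score += 0.25 if facing_camera else 0.0
    score += 0.20 if moving_toward_center else 0.0
    score += 0.15 * min(max(voice_volume, 0.0), 1.0)
    score += 0.15 if spoke_engagement_word else 0.0
    return score

def wants_to_engage(score: float, threshold: float = 0.5) -> bool:
    """Declare intent to engage once the combined score meets the threshold."""
    return score >= threshold
```

In practice each cue would be derived from the depth camera's tracking data and the speech recognizer; here they are passed in as precomputed inputs.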
12 Claims
1. A motion capture system (recited in full above under First Claim; claims 2-6 depend from claim 1).
7. Tangible computer readable storage having computer readable software embodied thereon for programming at least one processor to perform a method in a motion capture system, the method comprising:
receiving images associated with a scene that includes a first person's body in a field of view of the motion capture system; based on the images:
distinguishing the first person's body in the field of view, the first person interacts with an application by movement of the first person's body to control an avatar in a virtual space on a display, identifying the movement of the first person's body in the scene and distinguishing at least one additional person in the field of view, the at least one additional person does not control an avatar in the virtual space on the display; and
when a predefined criterion regarding a behavior of the at least one additional person is met, modifying at least one of a visual and audible output of the application. (Claims 8 and 9 depend from claim 7.)
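The final step of claim 7, modifying the application's output when a non-controlling bystander's behavior meets a predefined criterion, might be sketched as below. The criterion (bystander movement speed), the threshold, and the returned output adjustments are illustrative assumptions, not the patent's.

```python
def modify_output(bystander_speed: float,
                  speed_threshold: float = 2.0) -> dict:
    """Return adjusted visual/audible output when a non-controlling
    person's behavior meets a predefined criterion; here the criterion
    is moving through the field of view faster than speed_threshold
    (meters per second)."""
    if bystander_speed >= speed_threshold:
        # Criterion met: modify both the visual and the audible output.
        return {"visual": "dim_avatar", "audio": "lower_volume"}
    # Criterion not met: leave the application's output unchanged.
    return {"visual": "normal", "audio": "normal"}
```

The claim requires modifying at least one of the visual and audible outputs; this sketch modifies both for simplicity.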
10. A processor-implemented method in a motion capture system, comprising the processor-implemented steps of:
receiving images associated with a scene that includes a first person's body in a field of view of the motion capture system; based on the images:
distinguishing the first person's body in the field of view, the first person interacts with an application by movement of the first person's body to control an avatar in a virtual space on a display, identifying the movement of the first person's body in the scene and distinguishing at least one additional person in the field of view, the at least one additional person does not control an avatar in the virtual space on the display; and
when a predefined criterion regarding a behavior of the at least one additional person is met, modifying at least one of a visual and audible output of the application. (Claims 11 and 12 depend from claim 10.)
Specification