AUTOMATICALLY TRACKING USER MOVEMENT IN A VIDEO CHAT APPLICATION
Abstract
A system for automatically tracking movement of a user participating in a video chat application executing in a computing device is disclosed. A capture device connected to the computing device captures a user in a field of view of the capture device and identifies a sub-frame of pixels identifying a position of the head, neck and shoulders of the user in a capture frame of a capture area. The sub-frame of pixels is displayed to a remote user at a remote computing system who is participating in the video chat application with the user. The capture device automatically tracks the position of the head, neck and shoulders of the user as the user moves to a next location within the capture area. A next sub-frame of pixels identifying a position of the head, neck and shoulders of the user in the next location is identified and displayed to the remote user at the remote computing device.
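The core cropping operation the abstract describes, centering a sub-frame on the user's head, neck and shoulders within the larger capture frame, amounts to a clamped window. A minimal sketch in Python; the function name, window size and frame geometry are illustrative assumptions, not taken from the patent:

```python
def clamp_subframe(cx, cy, box_w, box_h, frame_w, frame_h):
    """Center a box_w x box_h sub-frame on the detected head position
    (cx, cy), shifting it as needed so it stays inside the capture frame.
    Frame and box sizes here are arbitrary illustrative values."""
    x = min(max(cx - box_w // 2, 0), frame_w - box_w)
    y = min(max(cy - box_h // 2, 0), frame_h - box_h)
    return x, y, box_w, box_h

# A head detected near the top-left corner yields a sub-frame pinned
# to that corner rather than one extending outside the capture frame.
print(clamp_subframe(10, 10, 40, 30, 320, 240))  # -> (0, 0, 40, 30)
```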
Claims (20 claims, 126 citations)
1. A method for automatically tracking movement of a user participating in a video chat application executing in a computing device, the method comprising:
receiving a capture frame comprising one or more depth images of a capture area from a depth camera connected to a computing device;
determining if the capture frame includes a user in a first location in the capture area;
identifying a sub-frame of pixels in the capture frame, the sub-frame of pixels identifying a position of the head, neck and shoulders of the user in the capture frame;
displaying the sub-frame of pixels to a remote user at a remote computing system;
automatically tracking the position of the head, neck and shoulders of the user to a next location within the capture area;
identifying a next sub-frame of pixels, the next sub-frame of pixels identifying a position of the head, neck and shoulders of the user in the next location, wherein the next sub-frame of pixels is included in a next capture frame of the capture area; and
displaying the next sub-frame of pixels to the remote user in the remote computing system.
(Dependent claims 2-10 and 12-14 not reproduced here.)
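The tracking loop of claim 1, re-identifying the sub-frame in each successive capture frame as the user moves, can be sketched as follows. This assumes per-frame head detection is already available; all names and dimensions are hypothetical:

```python
def track_subframes(head_positions, frame_size=(320, 240), box=(80, 60)):
    """For each capture frame's detected head-and-shoulders center,
    return the clamped sub-frame rectangle (x, y, w, h) that would be
    displayed to the remote user."""
    fw, fh = frame_size
    bw, bh = box
    rects = []
    for cx, cy in head_positions:  # one detected center per capture frame
        x = min(max(cx - bw // 2, 0), fw - bw)
        y = min(max(cy - bh // 2, 0), fh - bh)
        rects.append((x, y, bw, bh))
    return rects

# As the user moves right, the sub-frame follows without resizing.
print(track_subframes([(100, 100), (150, 100)]))
```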
11. The method of claim 10, further comprising:
displaying the individual sub-frames of pixels to the remote user.
15. One or more processor readable storage devices having processor readable code embodied on said one or more processor readable storage devices, the processor readable code for programming one or more processors to perform a method comprising:
receiving a capture frame comprising one or more depth images of a capture area from a depth camera connected to a computing device;
determining if the capture frame includes a user in a first location in the capture area;
identifying a sub-frame of pixels in the capture frame, the sub-frame of pixels identifying a position of the head, neck and shoulders of the user in the capture frame;
displaying the sub-frame of pixels to a remote user at a remote computing system;
receiving a next capture frame comprising one or more depth images of the capture area from the depth camera;
automatically tracking movement of one or more users within the capture area;
determining if the next capture frame includes the one or more users in a next location in the capture area based on the tracking;
identifying a next sub-frame of pixels containing the head, neck and shoulders of the one or more users in the next capture frame; and
displaying the next sub-frame of pixels to the remote user at the remote computing system.
(Dependent claims 16-18 not reproduced here.)
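Claim 15 extends the sub-frame to cover one or more users. One natural reading is a padded union bounding box over each tracked user's head-and-shoulders region; the sketch below assumes that interpretation, and every name and constant is illustrative:

```python
def union_subframe(user_boxes, frame_w, frame_h, pad=10):
    """Smallest rectangle (x, y, w, h) containing every user's
    head-and-shoulders box, padded and clamped to the capture frame.
    Each input box is an (x, y, w, h) tuple."""
    x0 = max(min(b[0] for b in user_boxes) - pad, 0)
    y0 = max(min(b[1] for b in user_boxes) - pad, 0)
    x1 = min(max(b[0] + b[2] for b in user_boxes) + pad, frame_w)
    y1 = min(max(b[1] + b[3] for b in user_boxes) + pad, frame_h)
    return x0, y0, x1 - x0, y1 - y0

# Two users side by side produce one sub-frame spanning both.
print(union_subframe([(20, 20, 40, 40), (100, 30, 40, 40)], 320, 240))
```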
19. An apparatus for automatically tracking movement of one or more users participating in a video chat application, comprising:
a depth camera; and
a computing device connected to the depth camera to receive a capture frame comprising one or more depth images of a capture area from the depth camera;
determine if the capture frame includes more than one user in a first location in the capture area;
identify a sub-frame of pixels in the capture frame, the sub-frame of pixels identifying a position of the head, neck and shoulders of each of the users in the capture frame;
display the sub-frame of pixels to a remote user at a remote computing system;
determine if a voice input is received from at least one user in the first sub-frame of pixels and, if so, identify one or more of the users providing the voice input;
automatically adjust the sub-frame of pixels to include the one or more users providing the voice input and display the sub-frame of pixels containing the one or more users providing the voice input to a remote user at a remote computing device.
(Dependent claim 20 not reproduced here.)
-
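The voice-driven reframing of claim 19, narrowing the sub-frame to whichever users are currently speaking, can be sketched as a filter over the tracked user boxes followed by the same padded union crop. All names are hypothetical, and the fallback behavior when no one speaks is an assumption rather than something the claim specifies:

```python
def reframe_for_speakers(user_boxes, speaking, frame_w, frame_h, pad=10):
    """Crop to the users currently providing voice input; if none are
    speaking, fall back to a sub-frame containing every tracked user.
    user_boxes are (x, y, w, h) tuples; speaking is a parallel bool list."""
    boxes = [b for b, s in zip(user_boxes, speaking) if s] or list(user_boxes)
    x0 = max(min(b[0] for b in boxes) - pad, 0)
    y0 = max(min(b[1] for b in boxes) - pad, 0)
    x1 = min(max(b[0] + b[2] for b in boxes) + pad, frame_w)
    y1 = min(max(b[1] + b[3] for b in boxes) + pad, frame_h)
    return x0, y0, x1 - x0, y1 - y0

# Only the left-hand user is speaking, so the sub-frame tightens on them.
print(reframe_for_speakers([(20, 20, 40, 40), (200, 20, 40, 40)],
                           [True, False], 320, 240))
```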