Telepresence of multiple users in interactive virtual space
2 Assignments
0 Petitions
Abstract
A telepresence communication uses information captured by a first capture device about a first user and information captured by a second capture device about a second user to generate a first avatar corresponding to the first user and a second avatar corresponding to the second user. A scene can be rendered locally or by a remote server in which the first avatar and the second avatar are both rendered in a virtual space. The first avatar is rendered to move based on movements made by the first user as captured by the first capture device, and the second avatar is rendered to move based on movements made by the second user as captured by the second capture device. The avatars may be realistic, based on avatar templates, or some combination thereof. The rendered scene may include virtual interactive objects that the avatars can interact with.
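The flow the abstract describes (capture key points with depth, build a textured avatar per user, render both avatars into one virtual scene) can be sketched minimally. This is a hypothetical illustration, not the patented implementation: the `KeyPoint`, `Avatar`, and `Scene` types and the triangle-strip surface rule are all assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical key point: an (x, y) image position plus a captured depth.
@dataclass
class KeyPoint:
    x: float
    y: float
    depth: float

# Hypothetical avatar: key points, planar surfaces over them, and a texture id.
@dataclass
class Avatar:
    keypoints: list
    surfaces: list          # each surface is a tuple of key-point indices
    texture: str

@dataclass
class Scene:
    avatars: list = field(default_factory=list)

    def render(self):
        # Stand-in for real rendering: report what would be drawn.
        return [f"avatar with texture {a.texture} ({len(a.surfaces)} surfaces)"
                for a in self.avatars]

def build_avatar(keypoints, texture):
    # Connect consecutive triples of key points into planar (triangular)
    # surfaces -- one possible reading of "connecting one or more sets of
    # the key points", chosen only for illustration.
    surfaces = [(i, i + 1, i + 2) for i in range(len(keypoints) - 2)]
    return Avatar(keypoints, surfaces, texture)
```

A scene would then hold one avatar per participant, e.g. `Scene([build_avatar(kps1, "tex1"), build_avatar(kps2, "tex2")])`, with the second avatar typically received over the network rather than built locally.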
9 Citations
20 Claims
1. A method for telepresence communication, the method comprising:
receiving a first visual dataset corresponding to a three-dimensional shape of a first user and captured by a first capture device, wherein the first visual dataset specifies a plurality of key points each having a respective depth on the three-dimensional shape of the first user;
generating a skeleton based on the first visual dataset captured by the first capture device, wherein generating the skeleton includes extracting a set of the key points from the first visual dataset, the generated skeleton representing the extracted set of key points;
generating a first three-dimensional wireframe model that recreates the three-dimensional shape of the first user as a plurality of planar surfaces around the generated skeleton by connecting one or more sets of the key points each having the respective depth specified by the first visual dataset, wherein the first three-dimensional wireframe model includes points that match the extracted set of key points represented by the skeleton;
generating a first three-dimensional avatar by applying a first surface texture to at least one of the planar surfaces of the first three-dimensional wireframe model;
receiving a second three-dimensional avatar representative of a second user, wherein the second three-dimensional avatar comprises a second surface texture applied to at least one planar surface of a second three-dimensional wireframe model;
rendering a three-dimensional virtual scene that includes the first three-dimensional avatar and the second three-dimensional avatar;
identifying a first movement made by the first user based on the first visual dataset captured by the first capture device including movement data, wherein the movement data describes the first movement as performed by the skeleton;
generating a first three-dimensional movement representation to be performed by the first three-dimensional avatar in accordance with the movement data describing the first movement by the skeleton;
generating a second three-dimensional movement representation of a second movement indicated by a change to a depth of at least one key point of a second visual dataset corresponding to the second three-dimensional avatar; and
rendering the first three-dimensional avatar performing the first three-dimensional movement representation and the second three-dimensional avatar performing the second three-dimensional movement representation within the three-dimensional virtual scene.
View Dependent Claims (2, 3, 4, 5, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18)
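The skeleton and wireframe steps of claim 1 can be illustrated with a small sketch. The `(x, y, depth)` tuple layout, the `joint_indices` selection rule, and the pairwise edge connection are assumptions made for illustration; the claim itself does not fix any of these choices.

```python
# Extract a skeleton (a subset of the captured key points) and build a
# wireframe whose vertices match the skeleton's key points, as in claim 1.
def extract_skeleton(keypoints, joint_indices):
    # The skeleton is the subset of key points at assumed joint positions.
    return [keypoints[i] for i in joint_indices]

def build_wireframe(skeleton):
    # Connect each adjacent pair of skeleton points into an edge; in a full
    # system these edges would bound the planar surfaces generated around
    # the skeleton.
    edges = [(skeleton[i], skeleton[i + 1]) for i in range(len(skeleton) - 1)]
    return {"vertices": skeleton, "edges": edges}
```

Because the wireframe's vertices are taken directly from the skeleton, the model "includes points that match the extracted set of key points", which is the property the claim relies on when skeleton movement later drives avatar movement.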
6. The method of claim 5, further comprising continuing to adjust the point of view of the display of the rendered three-dimensional virtual scene as the position of the first user moves within the space.
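Claim 6's continuous point-of-view adjustment might look like the following sketch, where `camera_for` is a hypothetical helper that re-derives the virtual camera from the user's tracked position; a real system would call it once per captured frame.

```python
# Re-derive the scene camera from the tracked user position (sketch).
def camera_for(user_pos, look_at=(0.0, 0.0, 0.0)):
    # Point the virtual camera from the user's tracked position toward a
    # fixed scene point (the origin here, an assumption for illustration).
    direction = tuple(t - p for p, t in zip(user_pos, look_at))
    return {"position": user_pos, "direction": direction}
```

As the tracked position changes from frame to frame, recomputing the camera this way keeps the displayed point of view following the user, which is the behavior the claim recites.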
19. A system for telepresence communication, the system comprising:
a first capture device that captures a first visual dataset corresponding to a three-dimensional shape of a first user, wherein the first visual dataset specifies a plurality of key points each having a respective depth on the three-dimensional shape of the first user;
a communication transceiver that receives a second three-dimensional avatar representative of a second user, wherein the second three-dimensional avatar comprises a second surface texture applied to at least one planar surface of a second three-dimensional wireframe model;
a memory that stores instructions; and
a processor coupled to the memory, wherein execution of the instructions by the processor causes the processor to:
generate a skeleton based on the first visual dataset captured by the first capture device, wherein generating the skeleton includes extracting a set of the key points from the first visual dataset, the generated skeleton representing the extracted set of key points,
generate a first three-dimensional wireframe model that recreates the three-dimensional shape of the first user as a plurality of planar surfaces around the generated skeleton by connecting one or more sets of the key points each having the respective depth specified by the first visual dataset, wherein the first three-dimensional wireframe model includes points that match the extracted set of key points represented by the skeleton,
generate a first three-dimensional avatar by applying a first surface texture to at least one of the planar surfaces of the first three-dimensional wireframe model,
render a three-dimensional virtual scene that includes the first three-dimensional avatar and the second three-dimensional avatar,
identify a first movement made by the first user based on the first visual dataset captured by the first capture device including movement data, wherein the movement data describes the first movement as performed by the skeleton,
generate a first three-dimensional movement representation to be performed by the first three-dimensional avatar in accordance with the movement data describing the first movement by the skeleton,
generate a second three-dimensional movement representation of a second movement indicated by a change to a depth of at least one key point of a second visual dataset corresponding to the second three-dimensional avatar, and
render the first three-dimensional avatar performing the first three-dimensional movement representation and the second three-dimensional avatar performing the second three-dimensional movement representation within the three-dimensional virtual scene.
View Dependent Claims (20)
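The depth-change trigger for the second movement representation (recited in both claim 1 and claim 19) could be sketched as a simple per-key-point comparison between two captured datasets. The threshold value and the `(x, y, depth)` tuple layout are assumptions for illustration only.

```python
# Detect a movement from a change in a key point's depth between two
# captured visual datasets (sketch of the "second movement" limitation).
DEPTH_THRESHOLD = 0.05  # metres; an assumed sensor noise floor

def detect_depth_movement(prev_keypoints, curr_keypoints, threshold=DEPTH_THRESHOLD):
    # Return the indices of key points whose depth changed by more than the
    # threshold, i.e. key points that moved toward or away from the camera.
    moved = []
    for i, (p, c) in enumerate(zip(prev_keypoints, curr_keypoints)):
        if abs(c[2] - p[2]) > threshold:  # index 2 holds the depth value
            moved.append(i)
    return moved
```

The indices returned here would identify which parts of the second avatar's wireframe to animate when generating its movement representation.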
Specification