System and method for generating a virtual reality scene based on individual asynchronous motion capture recordings
First Claim
1. A system configured to generate and/or modify three-dimensional scenes comprising animated characters based on individual asynchronous motion capture recordings, the system comprising:
one or more sensors configured to generate output signals conveying information related to motion and/or sound made by one or more users in physical space, the sensors being configured to capture the motion and/or the sound made by the one or more users;
one or more displays that present virtual reality content to one or more users, wherein presentation of the virtual reality content via a display simulates presence of a user within a virtual space that is fixed relative to physical space, wherein the one or more displays are configured to present options for recording the motion and/or the sound for one or more of the characters within the virtual space;
one or more processors configured by machine-readable instructions to:
receive selection of a first character to virtually embody within the virtual space, wherein virtually embodying the first character enables a first user to record the motion and/or the sound to be made by the first character within the compiled virtual reality scene;
receive a first request to capture the motion and/or the sound for the first character;
record first motion capture information characterizing the motion and/or the sound made by the first user as the first user virtually embodies the first character, wherein the first motion capture information is captured in a manner such that actions of the first user are manifested by the first character within the compiled virtual reality scene;
receive selection of a second character to virtually embody, wherein the second character is separate and distinct from the first character, and wherein virtually embodying the second character enables the first user or another user to record one or more of the motion and/or the sound to be made by the second character within the compiled virtual reality scene;
receive a second request to capture the motion and/or the sound for the second character;
record second motion capture information that characterizes the motion and/or the sound made by the first user or other user as the first user or the other user virtually embodies the second character, wherein the second motion capture information is captured in a manner such that actions of the first user or the other user are manifested by the second character contemporaneously with the actions of the first user manifested by the first character within the compiled virtual reality scene; and
generate the compiled virtual reality scene including animation of the first character and the second character such that the first character and the second character appear animated within the compiled virtual reality scene contemporaneously.
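The compiling step above — recording each character's performance as a separate, asynchronous take and then playing the takes back on one shared timeline — can be illustrated with a minimal data-model sketch. All names and structures here (`Frame`, `Take`, `compile_scene`) are hypothetical illustrations, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    t: float                  # seconds from the start of this take
    pose: tuple               # stand-in for captured joint transforms
    audio: bytes = b""        # optional sound captured with the frame

@dataclass
class Take:
    character_id: str         # the character the performer embodied
    frames: list[Frame] = field(default_factory=list)

def compile_scene(takes: list[Take]) -> dict[float, dict[str, Frame]]:
    """Merge takes recorded at different times onto one shared scene
    timeline, so every character appears animated contemporaneously
    when the compiled scene is played back."""
    timeline: dict[float, dict[str, Frame]] = {}
    for take in takes:
        for frame in take.frames:
            timeline.setdefault(frame.t, {})[take.character_id] = frame
    return timeline

# Two takes recorded asynchronously (e.g. by the same performer):
first = Take("first_character", [Frame(0.0, (0, 0, 0)), Frame(0.5, (1, 0, 0))])
second = Take("second_character", [Frame(0.0, (5, 0, 0)), Frame(0.5, (5, 1, 0))])
scene = compile_scene([first, second])
```

At each timestamp the compiled timeline holds one frame per character, which is what lets performances captured at different times appear contemporaneous on playback.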
Abstract
A system configured to generate and/or modify three-dimensional scenes comprising animated character(s) based on individual asynchronous motion capture recordings. The system may comprise sensor(s), display(s), and/or processor(s). The system may receive selection of a first character to virtually embody within the virtual space, receive a first request to capture the motion and/or the sound for the first character, and/or record first motion capture information characterizing the motion and/or the sound made by the first user as the first user virtually embodies the first character. The system may receive selection of a second character to virtually embody, receive a second request to capture the motion and/or the sound for the second character, and/or record second motion capture information. The system may generate a compiled virtual reality scene wherein the first character and the second character appear animated within the compiled virtual reality scene contemporaneously.
20 Claims
1. A system configured to generate and/or modify three-dimensional scenes comprising animated characters based on individual asynchronous motion capture recordings (independent claim; reproduced in full above). - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10)
11. A method for generating and/or modifying three-dimensional scenes comprising animated characters based on individual asynchronous motion capture recordings, the method being implemented by one or more sensors, displays, and/or processors configured to perform the method, the method comprising:
generating, by the one or more sensors, output signals conveying information related to motion and/or sound made by one or more users in physical space, the sensors being configured to capture the motion and/or the sound made by the one or more users;
presenting, via the one or more displays, virtual reality content to one or more users, wherein presentation of the virtual reality content via a display simulates presence of a user within a virtual space that is fixed relative to physical space, wherein the one or more displays are configured to present options for recording the motion and/or the sound for one or more of the characters within the virtual space;
receiving, at the one or more processors, selection of a first character to virtually embody within the virtual space, wherein virtually embodying the first character enables a first user to record the motion and/or the sound to be made by the first character within the compiled virtual reality scene;
receiving, at the one or more processors, a first request to capture the motion and/or the sound for the first character;
recording, by the one or more processors, first motion capture information characterizing the motion and/or the sound made by the first user as the first user virtually embodies the first character, wherein the first motion capture information is captured in a manner such that actions of the first user are manifested by the first character within the compiled virtual reality scene;
receiving, at the one or more processors, selection of a second character to virtually embody, wherein the second character is separate and distinct from the first character, and wherein virtually embodying the second character enables the first user or another user to record one or more of the motion and/or the sound to be made by the second character within the compiled virtual reality scene;
receiving, at the one or more processors, a second request to capture the motion and/or the sound for the second character;
recording, by the one or more processors, second motion capture information that characterizes the motion and/or the sound made by the first user or other user as the first user or the other user virtually embodies the second character, wherein the second motion capture information is captured in a manner such that actions of the first user or the other user are manifested by the second character contemporaneously with the actions of the first user manifested by the first character within the compiled virtual reality scene; and
generating, by the one or more processors, the compiled virtual reality scene including animation of the first character and the second character such that the first character and the second character appear animated within the compiled virtual reality scene contemporaneously. - View Dependent Claims (12, 13, 14, 15, 16, 17, 18, 19, 20)
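The method steps above — select a character, request capture, record a take, repeat for a second character, then compile — can be sketched as a simple state flow. This is a hypothetical illustration; the class and method names are not from the claim:

```python
class SceneRecorder:
    """Minimal sketch of the claimed method flow: a performer selects a
    character to embody, requests capture, and records a take; further
    characters are recorded asynchronously and compiled together."""

    def __init__(self):
        self.takes = {}       # character_id -> list of recorded samples
        self.active = None    # character currently being embodied

    def select_character(self, character_id):
        self.active = character_id

    def request_capture(self):
        if self.active is None:
            raise RuntimeError("select a character before capturing")
        self.takes.setdefault(self.active, [])

    def record(self, sample):
        self.takes[self.active].append(sample)

    def compile_scene(self):
        # Playing every take back on a shared timeline makes performers
        # recorded at different times appear animated contemporaneously.
        return dict(self.takes)

recorder = SceneRecorder()
recorder.select_character("first_character")
recorder.request_capture()
recorder.record("wave")
recorder.select_character("second_character")   # same performer, later take
recorder.request_capture()
recorder.record("bow")
scene = recorder.compile_scene()
```

The key design point mirrored from the claim is that the two takes never have to be captured at the same time; only the compiled output presents them together.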
Specification