Generating content for a virtual reality system
First Claim
1. A method, comprising:
calibrating camera modules of a camera array by:
estimating errors in a predicted roll, pitch, and yaw of two or more lenses corresponding to two or more camera modules;
determining a position and a rotational offset for each of the two or more lenses; and
determining a relative position of each of two or more lenses;
receiving raw video data from the calibrated camera modules of the camera array;
identifying location and timing associated with each of the calibrated camera modules;
constructing a left camera map that identifies matching camera modules for pixels in a left panoramic image and a right camera map that identifies matching camera modules for pixels in a right panoramic image;
generating, based on the left camera map, a stream of left panoramic images;
generating, based on the right camera map, a stream of right panoramic images;
generating three-dimensional content from the stream of left panoramic images, the stream of right panoramic images, and a stream of three-dimensional audio data;
providing the three-dimensional content to a user through a three-dimensional display;
receiving head tracking information from one or more accelerometers or gyroscopes of a viewing system, where the head tracking information describes a head orientation of the user and a gaze of the user while the user is viewing the three-dimensional content;
detecting the location of the gaze of the user at the three-dimensional content based on the head tracking information;
determining a location of a stitching aberration in the three-dimensional content; and
generating a first advertisement that is stitched into the location of the stitching aberration in the three-dimensional content provided through the three-dimensional display.
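The calibration steps recited above — estimating errors in each lens's predicted roll, pitch, and yaw and determining a rotational offset per lens — could be realized with rotation matrices. The sketch below is hypothetical (the patent does not specify an angle convention or these function names); it assumes a yaw-pitch-roll (Z·Y·X) composition and measures the offset as the angle of the relative rotation between the predicted and measured orientations:

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Build a rotation matrix from roll (x), pitch (y), yaw (z), in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def rotational_offset(predicted, measured):
    """Angle (radians) between a lens's predicted and measured orientations.

    `predicted` and `measured` are (roll, pitch, yaw) tuples.
    """
    R_pred = rotation_matrix(*predicted)
    R_meas = rotation_matrix(*measured)
    R_err = R_meas @ R_pred.T  # relative rotation from predicted to measured
    # The trace of a rotation by angle t is 1 + 2*cos(t).
    cos_angle = (np.trace(R_err) - 1.0) / 2.0
    return float(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
```

The resulting per-lens angle could then drive the position and rotational-offset correction applied before the images are stitched.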
Abstract
The disclosure includes a system and method for generating virtual reality content. For example, the disclosure includes a method for generating virtual reality content that includes a stream of three-dimensional video data and a stream of three-dimensional audio data with a processor-based computing device programmed to perform the generating, providing the virtual reality content to a user, detecting a location of the user's gaze at the virtual reality content, and suggesting an advertisement based on the location of the user's gaze. Another example includes providing virtual reality content that includes a stream of three-dimensional video data and a stream of three-dimensional audio data to a first user with a processor-based computing device programmed to perform the providing, generating a social network for the first user, and generating a social graph that includes user interactions with the virtual reality content.
18 Claims
1. A method, comprising:
calibrating camera modules of a camera array by:
estimating errors in a predicted roll, pitch, and yaw of two or more lenses corresponding to two or more camera modules;
determining a position and a rotational offset for each of the two or more lenses; and
determining a relative position of each of two or more lenses;
receiving raw video data from the calibrated camera modules of the camera array;
identifying location and timing associated with each of the calibrated camera modules;
constructing a left camera map that identifies matching camera modules for pixels in a left panoramic image and a right camera map that identifies matching camera modules for pixels in a right panoramic image;
generating, based on the left camera map, a stream of left panoramic images;
generating, based on the right camera map, a stream of right panoramic images;
generating three-dimensional content from the stream of left panoramic images, the stream of right panoramic images, and a stream of three-dimensional audio data;
providing the three-dimensional content to a user through a three-dimensional display;
receiving head tracking information from one or more accelerometers or gyroscopes of a viewing system, where the head tracking information describes a head orientation of the user and a gaze of the user while the user is viewing the three-dimensional content;
detecting the location of the gaze of the user at the three-dimensional content based on the head tracking information;
determining a location of a stitching aberration in the three-dimensional content; and
generating a first advertisement that is stitched into the location of the stitching aberration in the three-dimensional content provided through the three-dimensional display.
Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13
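One plausible reading of the camera-map construction — identifying, for each pixel in a panoramic image, the matching camera module — is a nearest-optical-axis lookup. The module layout, the yaw-only matching, and all names below are illustrative assumptions; a real left/right map would also account for pitch and for the left-eye/right-eye viewpoint offset:

```python
# Hypothetical four-module ring: module id -> optical-axis yaw in degrees.
CAMERA_YAWS = {0: 0.0, 1: 90.0, 2: 180.0, 3: 270.0}

def build_camera_map(width):
    """For each panorama column, pick the module with the nearest yaw.

    Returns a list of length `width`, where entry x is the id of the
    camera module whose optical axis is angularly closest to that column.
    """
    camera_map = []
    for x in range(width):
        pixel_yaw = 360.0 * x / width
        best = min(
            CAMERA_YAWS,
            key=lambda cid: min(
                abs(pixel_yaw - CAMERA_YAWS[cid]),
                360.0 - abs(pixel_yaw - CAMERA_YAWS[cid]),  # wrap-around
            ),
        )
        camera_map.append(best)
    return camera_map
```

With the map in hand, generating a panoramic image reduces to copying (or blending) each column from the module the map names.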
14. A method, comprising:
calibrating camera modules of a camera array by:
estimating errors in a predicted roll, pitch, and yaw of two or more lenses corresponding to two or more camera modules;
determining a position and a rotational offset for each of the two or more lenses; and
determining a relative position of each of two or more lenses;
receiving raw video data from the calibrated camera modules of the camera array;
identifying location and timing associated with each of the calibrated camera modules;
constructing a left camera map that identifies matching camera modules for pixels in a left panoramic image and a right camera map that identifies matching camera modules for pixels in a right panoramic image;
generating, based on the left camera map, a stream of left panoramic images;
generating, based on the right camera map, a stream of right panoramic images;
generating three-dimensional content from the stream of left panoramic images, the stream of right panoramic images, and a stream of three-dimensional audio data;
providing the three-dimensional content to a user through a three-dimensional display;
receiving head tracking information from one or more accelerometers or gyroscopes of a viewing system, where the head tracking information describes a head orientation of the user and a gaze of the user while the user is viewing the three-dimensional content;
detecting the location of the gaze of the user at the three-dimensional content based on the head tracking information;
determining a location of a stitching aberration in the three-dimensional content;
generating a first advertisement that is stitched into the location of the stitching aberration in the three-dimensional content provided through the three-dimensional display;
generating an advertisement log, wherein the advertisement log includes various advertisements that the user interacts with within the three-dimensional content;
determining a user profile based on the advertisement log; and
generating a second advertisement based on the user profile.
Dependent claims: 15
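The advertisement log, user profile, and second-advertisement steps that claim 14 adds could be modeled as a simple interaction log aggregated by category. The class name, category labels, and catalog shape below are illustrative assumptions, not the patent's data model:

```python
from collections import Counter

class AdvertisementLog:
    """Records which advertisements a viewer interacts with (hypothetical)."""

    def __init__(self):
        self.interactions = []  # list of (ad_id, category) pairs

    def record(self, ad_id, category):
        """Log one interaction with an advertisement."""
        self.interactions.append((ad_id, category))

    def user_profile(self):
        """Derive a profile as a category -> interaction-count mapping."""
        return Counter(category for _, category in self.interactions)

    def suggest_second_ad(self, catalog):
        """Pick a second advertisement from the most-interacted category.

        `catalog` maps category -> ad id; returns None with no history.
        """
        profile = self.user_profile()
        if not profile:
            return None
        top_category, _ = profile.most_common(1)[0]
        return catalog.get(top_category)
```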
16. One or more non-transitory computer-readable media that include instructions that, when executed by one or more processors, are configured to cause the one or more processors to perform operations, the operations comprising:
calibrating camera modules of a camera array by:
estimating errors in a predicted roll, pitch, and yaw of two or more lenses corresponding to two or more camera modules;
determining a position and a rotational offset for each of the two or more lenses; and
determining a relative position of each of two or more lenses;
receiving raw video data from the calibrated camera modules of the camera array;
identifying location and timing associated with each of the calibrated camera modules;
constructing a left camera map that identifies matching camera modules for pixels in a left panoramic image and a right camera map that identifies matching camera modules for pixels in a right panoramic image;
generating, based on the left camera map, a stream of left panoramic images;
generating, based on the right camera map, a stream of right panoramic images;
generating three-dimensional content from the stream of left panoramic images, the stream of right panoramic images, and a stream of three-dimensional audio data;
providing the three-dimensional content to a user through a three-dimensional display;
receiving head tracking information from one or more accelerometers or gyroscopes of a viewing system, where the head tracking information describes a head orientation of the user and a gaze of the user while the user is viewing the three-dimensional content;
detecting the location of the gaze of the user at the three-dimensional content based on the head tracking information;
determining a location of a stitching aberration in the three-dimensional content; and
generating a first advertisement that is stitched into the location of the stitching aberration in the three-dimensional content provided through the three-dimensional display.
Dependent claims: 17, 18
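Detecting the gaze location from head-tracking yaw and pitch, and testing it against a known stitching-aberration location, might look like the following sketch. The equirectangular mapping conventions (yaw 0 at the panorama centre, yaw spanning [-pi, pi] across the width, pitch spanning [-pi/2, pi/2] down the height) and the pixel-radius test are assumptions for illustration:

```python
import math

def gaze_pixel(yaw, pitch, width, height):
    """Map a head orientation (radians) to an equirectangular pixel (x, y)."""
    x = int((yaw + math.pi) / (2 * math.pi) * (width - 1))
    y = int((math.pi / 2 - pitch) / math.pi * (height - 1))
    return x, y

def near_stitching_aberration(gaze, seam, radius):
    """True when the gaze pixel lies within `radius` pixels of a seam pixel."""
    gx, gy = gaze
    sx, sy = seam
    return math.hypot(gx - sx, gy - sy) <= radius
```

When the test returns True, the system could place the first advertisement over the aberration, since the viewer is looking at the flawed region anyway.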
Specification