Techniques for capturing and displaying partial motion in virtual or augmented reality scenes
Abstract
The present disclosure relates to techniques for capturing and displaying partial motion in virtual or augmented reality (VAR) scenes. A VAR scene can include a plurality of images combined and oriented over any suitable geometry. Although such a scene may provide an immersive view, that view is static: current systems do not generally support VAR scenes that include dynamic content (e.g., content that varies over time). Embodiments of the present invention can capture, generate, and/or share VAR scenes and can efficiently add dynamic content to a VAR scene, allowing VAR scenes that include dynamic content to be uploaded, shared, or otherwise transmitted without prohibitive resource requirements. Dynamic content can be captured by a device and combined with a preexisting or simultaneously captured VAR scene, and the dynamic content may be played back upon selection.
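As a rough illustration of the compositing the abstract describes (not the patented implementation), the following sketch overlays a sequence of dynamic images onto one static scene image and yields a composited frame per dynamic image. All names are hypothetical, and images are modeled as plain 2D lists of pixel values; a real system would use GPU textures or an image library.

```python
# Hypothetical sketch: composite dynamic content over a static VAR scene frame.
# Images are 2D lists of pixel values; (row, col) anchors the dynamic portion.

def overlay_dynamic_content(base_frame, dynamic_frame, row, col):
    """Return a copy of base_frame with dynamic_frame overlaid at (row, col)."""
    composited = [r[:] for r in base_frame]          # copy the static frame
    for dr, dyn_row in enumerate(dynamic_frame):
        for dc, pixel in enumerate(dyn_row):
            composited[row + dr][col + dc] = pixel   # replace static pixels
    return composited

def play_dynamic_content(base_frame, dynamic_frames, row, col):
    """Yield one composited frame per image in the dynamic sequence,
    i.e. vary the overlaid image over time while the base stays static."""
    for dyn in dynamic_frames:
        yield overlay_dynamic_content(base_frame, dyn, row, col)
```

The static frame is copied rather than mutated, so the underlying VAR scene image is reusable across playback passes.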
20 Claims
1. A method comprising:

generating a virtual or augmented reality (VAR) scene based on a first plurality of images captured by a camera of a device;
generating dynamic content for a portion of the VAR scene, the dynamic content comprising a second plurality of images captured by the camera of the device;
detecting a first orientation of the device;
based on detecting the first orientation of the device, presenting a first view of the VAR scene on the device;
detecting a second orientation of the device; and
based on detecting the second orientation of the device, presenting a second view of the VAR scene comprising the dynamic content overlaid over the portion of the VAR scene by overlaying at least one image from the second plurality of images on at least one image from the first plurality of images and varying the at least one image from the second plurality of images.

Dependent claims: 2, 3, 4, 5, 6, 7.
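Claim 1 conditions the presented view on the detected device orientation. A minimal sketch of that logic, under the assumption (not stated in the claim) that orientation reduces to a yaw angle and the scene to a 360° panorama with a fixed field of view, might look like:

```python
# Hypothetical sketch: map a device orientation (yaw, in degrees) to a viewport
# into a 360-degree VAR scene, and test whether the dynamic portion is visible.

def viewport_for_orientation(yaw_deg, fov_deg=90.0):
    """Return the (start, end) yaw range visible at this orientation."""
    half = fov_deg / 2.0
    return ((yaw_deg - half) % 360.0, (yaw_deg + half) % 360.0)

def portion_in_view(portion_yaw, viewport):
    """True if the dynamic portion's anchor yaw lies inside the viewport."""
    start, end = viewport
    if start <= end:
        return start <= portion_yaw <= end
    return portion_yaw >= start or portion_yaw <= end  # viewport wraps past 360
```

A first orientation whose viewport excludes the dynamic portion yields the plain first view; a second orientation whose viewport includes it triggers the overlay of the second-plurality images.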
8. A device comprising:

at least one processor; and
at least one non-transitory computer readable storage medium storing instructions that, when executed by the at least one processor, cause the device to:
generate a virtual or augmented reality (VAR) scene based on a first plurality of images captured by a camera of the device;
generate dynamic content that varies over a spatial domain of the VAR scene, the dynamic content comprising a second plurality of images captured by the camera of the device;
detect a first location of the device;
based on detecting the first location of the device, present a first view of the VAR scene comprising a first portion of the dynamic content overlaid over the VAR scene by overlaying at least one image from the second plurality of images on at least one image from the first plurality of images;
detect a second location of the device; and
based on detecting the second location of the device, present a second view of the VAR scene comprising a second portion of the dynamic content overlaid over the VAR scene by overlaying at least one additional image from the second plurality of images on at least one image from the first plurality of images.

Dependent claims: 9, 10, 11, 12, 13, 14.
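Claim 8 keys the displayed portion of the dynamic content to the device's location rather than its orientation. One way to sketch that selection, assuming (hypothetically; the claim does not specify this) that each dynamic portion has a 2D anchor location and the nearest anchor wins:

```python
# Hypothetical sketch: pick which portion of the dynamic content to composite
# based on the detected device location (nearest anchor in 2D).

def nearest_portion(device_loc, portions):
    """portions maps portion_id -> (x, y) anchor location.
    Returns the id whose anchor is closest to device_loc
    (squared Euclidean distance; tie-breaking not sketched)."""
    def sq_dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(portions, key=lambda pid: sq_dist(device_loc, portions[pid]))
```

Detecting a first location would select the first portion's images for the overlay; detecting a second location would select the additional images of the second portion.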
15. A non-transitory computer readable storage medium storing instructions thereon that, when executed by at least one processor, cause a device to:

generate a virtual or augmented reality (VAR) scene based on a first plurality of images captured by a camera of the device;
generate dynamic content for a portion of the VAR scene, the dynamic content comprising a second plurality of images captured by the camera of the device;
detect a first orientation of the device;
based on detecting the first orientation of the device, present a first view of the VAR scene on the device;
detect a second orientation of the device; and
based on detecting the second orientation of the device, present a second view of the VAR scene comprising the dynamic content overlaid over the VAR scene by overlaying at least one image from the second plurality of images on at least one image from the first plurality of images and varying over time the at least one image from the second plurality of images.

Dependent claims: 16, 17, 18, 19, 20.
Specification