Virtual reality and cross-device experiences
Abstract
The present disclosure provides approaches to facilitating virtual reality and cross-device experiences. In some implementations, an environmental snapshot is captured which includes an image of a virtual reality (VR) environment presented on a VR device and corresponding depth information of the VR environment. The image of the environmental snapshot is presented on a different device than the VR device. A user modification to content associated with the presented image is translated into the environmental snapshot based on the depth information. The environmental snapshot comprising the user modification is translated into the VR environment. The VR environment comprising the translated user modification is presented. The environmental snapshot may correspond to a personal space of a user and may be accessed by another user through a social networking interface or other user networking interface to cause the presentation of the image.
20 Claims
1. A computer-implemented method comprising:

capturing, using one or more depth sensing cameras of an augmented reality (AR) device and in response to a user selection of a corresponding option using the AR device while an AR environment is presented on the AR device, an environmental snapshot comprising a two-and-a-half-dimensional (2.5D) image corresponding to the AR environment presented on the AR device, the 2.5D image comprising a bitmap depicting a view of three-dimensional (3D) space of the AR environment and corresponding depth information;

presenting the 2.5D image of the environmental snapshot on a different device than the AR device;

translating a user modification to content associated with the presented 2.5D image into the environmental snapshot based on the depth information;

translating the environmental snapshot comprising the user modification into the AR environment; and

presenting the AR environment comprising the translated user modification.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9
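The translation step in claim 1 hinges on the snapshot's per-pixel depth: a 2D edit on the bitmap can be anchored at a 3D location by back-projecting the edited pixel through its depth value. A minimal sketch of that idea, assuming a pinhole camera model; the intrinsics (`fx`, `fy`, `cx`, `cy`) and the toy buffers are illustrative assumptions, not anything the claim specifies:

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Map a pixel (u, v) with depth (distance along the camera Z axis)
    to a 3D point in camera coordinates via the pinhole model."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# A toy 2.5D environmental snapshot: colour bitmap plus per-pixel depth.
H, W = 4, 4
bitmap = np.zeros((H, W, 3), dtype=np.uint8)
depth_map = np.full((H, W), 2.0)   # every pixel 2 m from the camera

# Hypothetical user edit dropped on pixel (u=2, v=1) of the presented image.
fx = fy = 2.0
cx, cy = W / 2, H / 2
point_3d = backproject(u=2, v=1, depth=depth_map[1, 2],
                       fx=fx, fy=fy, cx=cx, cy=cy)
# point_3d is where the 2D edit lands in the snapshot's 3D space.
```

With depth available for every pixel, the same bitmap edit that a flat 2D photo could only record in screen space acquires a definite position in the captured 3D scene.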
10. A computer-implemented system comprising:
one or more processors; and

one or more computer storage media storing computer-useable instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:

presenting a virtual reality (VR) environment in a first graphical user interface;

in response to a user selection of a corresponding option associated with the first graphical user interface, saving a two-and-a-half-dimensional (2.5D) image corresponding to the presented VR environment, the 2.5D image comprising a view of three-dimensional (3D) space of the VR environment and corresponding depth information;

presenting the saved 2.5D image in a second graphical user interface;

translating a user modification to content associated with the presented 2.5D image into the VR environment based on the depth information; and

presenting the VR environment comprising the translated user modification.

Dependent claims: 11, 12, 13, 14, 15
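The saving step in claim 10 amounts to capturing the renderer's colour and depth buffers together at the moment the user selects the option. A rough sketch of that pairing, where the callback name, buffer arguments, and `Snapshot` type are all hypothetical stand-ins for whatever the actual implementation uses:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Snapshot:
    """A 2.5D environmental snapshot: a colour bitmap of the rendered
    VR view plus a per-pixel depth map captured at the same instant."""
    bitmap: np.ndarray   # (H, W, 3) uint8 colour image
    depth: np.ndarray    # (H, W) float32, distance along the camera Z axis

saved = []   # snapshots saved during this session

def on_snapshot_option_selected(color_buffer, depth_buffer):
    """UI callback: when the user picks the save-snapshot option,
    copy the current colour and depth buffers into a Snapshot."""
    snap = Snapshot(bitmap=color_buffer.copy(), depth=depth_buffer.copy())
    saved.append(snap)
    return snap

# Usage with dummy buffers standing in for the renderer's output:
color = np.zeros((480, 640, 3), dtype=np.uint8)
depth = np.ones((480, 640), dtype=np.float32)
snap = on_snapshot_option_selected(color, depth)
```

Copying both buffers atomically matters: the depth map is only meaningful for the exact frame the bitmap depicts, which is why the claim treats the image and its depth information as one saved artifact.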
16. One or more non-transitory computer-readable media having executable instructions embodied thereon, which, when executed by one or more processors, cause the one or more processors to perform a method, the method comprising:
presenting, in a first graphical user interface, an environmental snapshot comprising a two-and-a-half-dimensional (2.5D) image corresponding to a virtual reality (VR) environment, the 2.5D image depicting a view of three-dimensional (3D) space of the VR environment and corresponding depth information;

translating a user modification to content associated with the 2.5D image of the presented environmental snapshot into the environmental snapshot based on the depth information;

translating the environmental snapshot comprising the user modification into the VR environment; and

presenting the VR environment comprising the translated user modification in a second graphical user interface.

Dependent claims: 17, 18, 19, 20
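Claim 16's final translation, from the modified snapshot back into the VR environment, needs the camera pose recorded at capture time: a point expressed in the snapshot camera's frame must be transformed into the environment's world frame before the edited content can be re-inserted into the scene. A minimal sketch under that assumption, with the pose values chosen purely for illustration:

```python
import numpy as np

def camera_to_world(p_cam, R, t):
    """Transform a point from snapshot-camera coordinates into the VR
    environment's world frame, given the camera pose (rotation R,
    translation t) recorded when the snapshot was captured."""
    return R @ p_cam + t

# Assumed pose: camera at head height (0, 1.6, 0) with identity rotation
# for simplicity; p_cam is a point already back-projected from the 2.5D
# image using its depth information.
R = np.eye(3)
t = np.array([0.0, 1.6, 0.0])
p_cam = np.array([0.0, -1.0, 2.0])
p_world = camera_to_world(p_cam, R, t)
# p_world is where the user's modification is placed in the VR scene.
```

Back-projection gives the edit a position relative to the snapshot's viewpoint; this pose transform is what lets the modification appear at the same spot in the environment regardless of where the VR user is standing when it is presented.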
Specification