Presenting a view within a three dimensional scene
First Claim
1. A method for presenting a view based on a virtual viewpoint in a three dimensional (3D) scene, comprising:
tracking a position of a head of a user using a position input device, wherein said tracking comprises assessing X, Y, and Z coordinates, an angle, and an orientation;
assessing a first eyepoint of a user using a first head position;
presenting the 3D scene by at least one display according to a first viewpoint with respect to the at least one display, wherein said presenting the 3D scene comprises displaying at least one stereoscopic image of the 3D scene by the at least one display, wherein the first viewpoint corresponds to a first eyepoint of a user viewing the 3D scene;
assessing a second eyepoint of a user using a second head position;
presenting the 3D scene by at least one display according to a second viewpoint with respect to the at least one display, wherein the second viewpoint corresponds to a second eyepoint of a user;
determining a first virtual viewpoint within the 3D scene, wherein the first virtual viewpoint is different than the first viewpoint, wherein the first virtual viewpoint corresponds to a first X, Y, and Z location and a first angle and a first orientation in physical space and maps to a first coordinate in the 3D scene, wherein the first X, Y, and Z location and the first angle and the first orientation are assessed using the position input device, wherein the first coordinate comprises a second X, Y, and Z location in the 3D scene, and wherein the first coordinate comprises a second angle and a second orientation in the 3D scene;
presenting the view of the 3D scene by the at least one display according to the first virtual viewpoint, wherein said presenting the view of the 3D scene according to the first virtual viewpoint is performed concurrently with said presenting the 3D scene according to the first viewpoint;
determining a second virtual viewpoint within the 3D scene, wherein the second virtual viewpoint is different than the first virtual viewpoint, wherein the second virtual viewpoint corresponds to a third X, Y, and Z location and a third angle and a third orientation in physical space and maps to a second coordinate in the 3D scene, wherein the third X, Y, and Z location and the third angle and the third orientation are assessed using the position input device, wherein the second coordinate comprises a fourth X, Y, and Z location in the 3D scene, and wherein the second coordinate comprises a fourth angle and a fourth orientation in the 3D scene; and
presenting the view of the 3D scene by the at least one display according to the second virtual viewpoint;
wherein said presenting the view of the 3D scene according to the second virtual viewpoint is performed concurrently with said presenting the 3D scene according to the first or second viewpoint.
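The claim's mapping of a tracked physical pose (an X, Y, and Z location plus an angle and an orientation) to a coordinate in the 3D scene can be sketched as follows. All names (`Pose`, `map_to_scene`) and the uniform scale factor are illustrative assumptions, not taken from the patent; a real system could use any calibrated transform between physical and scene space.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """A hypothetical stand-in for the claim's tracked quantities."""
    x: float
    y: float
    z: float
    angle: float        # e.g. rotation about the display normal, in degrees
    orientation: tuple  # e.g. a unit direction vector

def map_to_scene(pose: Pose, scale: float = 4.0) -> Pose:
    """Map a pose in physical space to a coordinate in the 3D scene.

    Assumes a simple uniform scaling of the location; the angle and
    orientation carry over unchanged in this sketch.
    """
    return Pose(pose.x * scale, pose.y * scale, pose.z * scale,
                pose.angle, pose.orientation)

# A pose assessed by the position input device in physical space...
physical = Pose(0.25, 0.5, -0.75, angle=45.0, orientation=(0.0, 0.0, -1.0))
# ...maps to a coordinate (location, angle, orientation) in the 3D scene,
# which then serves as the virtual viewpoint.
virtual_viewpoint = map_to_scene(physical)
```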
7 Assignments
0 Petitions
Abstract
Presenting a view based on a virtual viewpoint in a three dimensional (3D) scene. The 3D scene may be presented by at least one display, which includes displaying at least one stereoscopic image of the 3D scene by the display(s). The 3D scene may be presented according to a first viewpoint. A virtual viewpoint may be determined within the 3D scene that is different than the first viewpoint. The view of the 3D scene may be presented on the display(s) according to the virtual viewpoint and/or the first viewpoint. The presentation of the view of the 3D scene is performed concurrently with presenting the 3D scene.
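The stereoscopic presentation the abstract refers to can be sketched as deriving two per-eye eyepoints from a single tracked head position. The 32 mm half interpupillary distance and all names here are illustrative assumptions, not values from the patent.

```python
def eye_points(head, right_dir, half_ipd=0.032):
    """Offset a tracked head position along the head's right vector
    to obtain the left- and right-eye eyepoints for a stereo pair."""
    hx, hy, hz = head
    rx, ry, rz = right_dir
    left  = (hx - rx * half_ipd, hy - ry * half_ipd, hz - rz * half_ipd)
    right = (hx + rx * half_ipd, hy + ry * half_ipd, hz + rz * half_ipd)
    return left, right

# Head tracked 0.5 m in front of the display, looking straight at it.
left_eye, right_eye = eye_points((0.0, 0.3, 0.5), (1.0, 0.0, 0.0))
```

Each eyepoint would then define its own view frustum with respect to the display, so the rendered pair updates as the tracked head position changes.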
189 Citations
34 Claims
1. A method for presenting a view based on a virtual viewpoint in a three dimensional (3D) scene, comprising the steps set out in full under "First Claim" above.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20)
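The claim's "performed concurrently" limitation means each output frame carries both the scene presented from the user's viewpoint and the view rendered from the virtual viewpoint. A minimal sketch, in which `render_view` and the frame structure are stand-ins assumed for illustration:

```python
def render_view(viewpoint, label):
    # Stand-in for a real renderer: records which viewpoint produced the view.
    return {"from": viewpoint, "label": label}

def compose_frame(user_viewpoint, virtual_viewpoint):
    """Build one output frame presenting both views concurrently,
    e.g. the virtual-viewpoint view shown in an inset region."""
    return {
        "scene": render_view(user_viewpoint, "stereoscopic scene"),
        "inset": render_view(virtual_viewpoint, "virtual-viewpoint view"),
    }

frame = compose_frame((0.0, 0.3, 0.5), (1.0, 2.0, -3.0))
```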
21. A non-transitory computer accessible memory medium storing program instructions for presenting a view based on a virtual viewpoint in a three dimensional (3D) scene, wherein the program instructions are executable by a processor to:
track a position of a head of a user using a position input device, wherein said tracking comprises assessing X, Y, and Z coordinates, an angle, and an orientation;
assess a first eyepoint of a user using a first head position;
present the 3D scene via at least one display according to a first viewpoint with respect to the at least one display, wherein said presenting the 3D scene comprises the at least one display displaying at least one stereoscopic image of the 3D scene, and wherein the first viewpoint corresponds to a first eyepoint of a user viewing the 3D scene;
assess a second eyepoint of a user using a second head position;
present the 3D scene by at least one display according to a second viewpoint with respect to the at least one display, wherein the second viewpoint corresponds to a second eyepoint of a user;
determine a first virtual viewpoint within the 3D scene, wherein the first virtual viewpoint is different than the first viewpoint, wherein the first virtual viewpoint corresponds to a first X, Y, and Z location and a first angle and a first orientation in physical space and maps to a first coordinate in the 3D scene, wherein the first X, Y, and Z location and the first angle and the first orientation are assessed using the position input device, wherein the first coordinate comprises a second X, Y, and Z location in the 3D scene, and wherein the first coordinate comprises a second angle and a second orientation in the 3D scene;
present the view of the 3D scene via the at least one display according to the first virtual viewpoint, wherein said presenting the view of the 3D scene according to the first virtual viewpoint is performed concurrently with said presenting the 3D scene according to the first viewpoint;
determine a second virtual viewpoint within the 3D scene, wherein the second virtual viewpoint is different than the first virtual viewpoint, wherein the second virtual viewpoint corresponds to a third X, Y, and Z location and a third angle and a third orientation in physical space and maps to a second coordinate in the 3D scene, wherein the third X, Y, and Z location and the third angle and the third orientation are assessed using the position input device, wherein the second coordinate comprises a fourth X, Y, and Z location in the 3D scene, and wherein the second coordinate comprises a fourth angle and a fourth orientation in the 3D scene; and
present the view of the 3D scene by the at least one display according to the second virtual viewpoint;
wherein said presenting the view of the 3D scene according to the second virtual viewpoint is performed concurrently with said presenting the 3D scene according to the first or second viewpoint.
- View Dependent Claims (22, 23, 24, 25, 26, 27, 28, 29)
30. A system for presenting a view based on a virtual viewpoint in a three dimensional (3D) scene, comprising:
a processor;
an input device configured to provide information to the processor indicating a current viewpoint of the user;
at least one display coupled to the processor;
a memory medium coupled to the processor which stores program instructions executable to:
track a position of a head of a user using a position input device, wherein said tracking comprises assessing X, Y, and Z coordinates, an angle, and an orientation;
assess a first eyepoint of a user using a first head position;
present the 3D scene via the at least one display according to a first viewpoint with respect to the at least one display, wherein said presenting the 3D scene comprises the at least one display displaying at least one stereoscopic image of the 3D scene, and wherein the first viewpoint corresponds to a first eyepoint of a user viewing the 3D scene;
assess a second eyepoint of a user using a second head position;
present the 3D scene by at least one display according to a second viewpoint with respect to the at least one display, wherein the second viewpoint corresponds to a second eyepoint of a user;
determine a first virtual viewpoint within the 3D scene, wherein the first virtual viewpoint is different than the current viewpoint of the user, wherein the first virtual viewpoint corresponds to a first X, Y, and Z location and a first angle and a first orientation in physical space and maps to a first coordinate in the 3D scene, wherein the first X, Y, and Z location and the first angle and the first orientation are assessed using the position input device, wherein the first coordinate comprises a second X, Y, and Z location in the 3D scene, and wherein the first coordinate comprises a second angle and a second orientation in the 3D scene;
present the view of the 3D scene via the at least one display according to the first virtual viewpoint, wherein said presenting the view of the 3D scene according to the first virtual viewpoint is performed concurrently with said presenting the 3D scene according to the first viewpoint;
determine a second virtual viewpoint within the 3D scene, wherein the second virtual viewpoint is different than the first virtual viewpoint, wherein the second virtual viewpoint corresponds to a third X, Y, and Z location and a third angle and a third orientation in physical space and maps to a second coordinate in the 3D scene, wherein the third X, Y, and Z location and the third angle and the third orientation are assessed using the position input device, wherein the second coordinate comprises a fourth X, Y, and Z location in the 3D scene, and wherein the second coordinate comprises a fourth angle and a fourth orientation in the 3D scene; and
present the view of the 3D scene by the at least one display according to the second virtual viewpoint;
wherein said presenting the view of the 3D scene according to the second virtual viewpoint is performed concurrently with said presenting the 3D scene according to the first or second viewpoint.
- View Dependent Claims (31, 32, 33, 34)
Specification