Indirect 3D scene positioning control
First Claim
1. A method, comprising:
rendering a first image of a virtual scene projected to a virtual render plane from a first perspective to be displayed on a screen;
displaying the first image;
viewing the image of the projection from a first eyepoint that is a normal perspective to the screen such that the projected image corresponds to the first perspective;
positioning and orienting a freehand user interface device that contains a physical endpoint, wherein the physical endpoint defines a direction in a physical space relative to the screen and a second perspective that is other than normal to the screen, such that the position and orientation of the freehand user interface device define a second perspective that is distinct from the orientation of the first perspective;
identifying a virtual position/orientation mark in the virtual scene corresponding to the physical endpoint in the physical space;
identifying a segment in the virtual scene along a path from the mark as suggested by the second perspective direction in the physical space, where the segment suggests a path that is not normal to the render plane; and
interacting with a virtual object from within the virtual scene that correlates to an intersection between the virtual object and the segment, including selecting the virtual object via a selection shape, comprising:
selecting a first point in the virtual scene from the second perspective in accordance with a first intersection;
selecting a second point in the virtual scene from a third perspective in accordance with a second intersection; and
creating a rectangular volume between the first point and the second point, wherein the planes of the rectangular volume are oblique in relation to the virtual render plane, wherein the rectangular volume forms the selection shape.
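The selection steps recited above turn on testing a segment, cast from the stylus mark along the device's non-normal direction, against scene geometry. A minimal sketch of such a test, assuming axis-aligned bounding boxes for the virtual objects and NumPy for the math (the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def segment_hits_aabb(p0, p1, box_min, box_max):
    """Slab test: does the segment from p0 to p1 intersect the box?"""
    d = p1 - p0
    t_min, t_max = 0.0, 1.0          # clamp to the segment, not the full ray
    for axis in range(3):
        if abs(d[axis]) < 1e-9:      # segment parallel to this pair of slabs
            if p0[axis] < box_min[axis] or p0[axis] > box_max[axis]:
                return False
        else:
            t0 = (box_min[axis] - p0[axis]) / d[axis]
            t1 = (box_max[axis] - p0[axis]) / d[axis]
            if t0 > t1:
                t0, t1 = t1, t0
            t_min, t_max = max(t_min, t0), min(t_max, t1)
            if t_min > t_max:
                return False
    return True
```

The first object whose bounding box passes this test along the segment would be the candidate for the intersection-based selection described in the claim.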
6 Assignments
0 Petitions
Abstract
Embodiments of the present invention generally relate to interacting with a virtual scene at a perspective which is independent of the perspective of the user. Methods and systems can include tracking and defining a perspective of the user based on the position and orientation of the user in the physical space, projecting a virtual scene for the user perspective to a virtual plane, tracking and defining a perspective of a freehand user input device based on the position and orientation of that device, identifying a mark in the virtual scene which corresponds to the position and orientation of the device in the physical space, creating a virtual segment from the mark, and interacting with virtual objects in the virtual scene at the endpoint of the virtual segment, as controlled using the device.
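Projecting the virtual scene to a render plane for a tracked user perspective is commonly done with an asymmetric (off-axis) frustum whose extents follow the eye's position relative to the screen. A minimal sketch, assuming the screen centre is the origin, the screen lies in the z = 0 plane, and the tracked eye sits at z > 0 (names and conventions are illustrative, not taken from the patent):

```python
import numpy as np

def off_axis_frustum(eye, screen_w, screen_h, near):
    """Asymmetric frustum extents at the near plane for an eye at `eye`
    (screen centre at the origin, screen in the z=0 plane, eye at z > 0)."""
    d = eye[2]                       # eye-to-screen distance
    scale = near / d                 # project screen edges onto the near plane
    left   = (-screen_w / 2 - eye[0]) * scale
    right  = ( screen_w / 2 - eye[0]) * scale
    bottom = (-screen_h / 2 - eye[1]) * scale
    top    = ( screen_h / 2 - eye[1]) * scale
    return left, right, bottom, top
```

When the eye is centred the frustum is symmetric; as head tracking moves the eye off-axis, the extents skew so the rendered image stays registered with the physical screen.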
16 Citations
20 Claims
1. A method, comprising the steps recited in the First Claim above. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
9. A method, comprising:
tracking a position and orientation of a first eye-point in a physical space as referenced from a display screen;
determining a tracked perspective of the first eye-point from the position and orientation of the first eye-point as referenced from the position and orientation of the display screen;
rendering a first image to be displayed on a screen of a virtual scene projected to a virtual render plane from the tracked perspective;
viewing the image of the projection from the first eye-point in a physical space, such that the virtual scene corresponds to the tracked perspective;
positioning and orienting a freehand user interface device comprising a physical endpoint in a physical space, such that the physical endpoint suggests a direction in the physical space relative to the screen;
defining a second perspective based on the position and orientation of the physical endpoint that is other than normal to the first eye-point, such that the freehand user interface device has a second perspective that is distinct from the first perspective;
identifying a virtual position and orientation mark in the virtual scene corresponding to the position and orientation of the physical endpoint in the physical space;
identifying a segment along a path from the virtual position and orientation mark in the virtual space in the suggested direction; and
interacting with a virtual object from within the virtual scene that correlates to an intersection between the object and the segment, including selecting the virtual object via a selection shape, comprising:
selecting a first point in the virtual scene from the second perspective in accordance with a first intersection;
selecting a second point in the virtual scene from a third perspective in accordance with a second intersection; and
creating a rectangular volume between the first point and the second point, wherein the planes of the rectangular volume are oblique in relation to the virtual render plane, wherein the rectangular volume forms the selection shape. - View Dependent Claims (10, 11, 12, 13, 14, 15)
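Identifying the virtual position and orientation mark that corresponds to the tracked physical endpoint amounts to carrying the stylus pose through a tracker-to-scene transform. A sketch with NumPy, assuming a 4x4 homogeneous calibration transform is available (all names are illustrative, not from the patent):

```python
import numpy as np

def physical_to_virtual(tip_pos, tip_dir, scene_from_tracker):
    """Map a tracked stylus tip (position + unit direction) from tracker
    space into the virtual scene via a 4x4 homogeneous transform."""
    p = scene_from_tracker @ np.append(tip_pos, 1.0)   # point: full transform
    d = scene_from_tracker[:3, :3] @ tip_dir           # direction: rotation only
    return p[:3], d / np.linalg.norm(d)
```

The returned position is the mark, and the returned direction seeds the segment that is cast into the scene in the suggested direction.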
16. A system, comprising:
a display device configured to:
render a first image to be displayed on a screen of a virtual scene projected to a virtual render plane from a first perspective; and
display an image of the projection from a first eye-point in a physical space that is a normal perspective to the screen, such that the image corresponds to the first perspective;
a freehand user interface device comprising a physical endpoint in a physical space and configured to select a virtual object from within the virtual scene; and
a tracking device configured to:
track the position and orientation of the freehand user interface device to determine a suggested direction of the physical endpoint in the physical space relative to the screen;
define a second perspective based on the position and orientation of the physical endpoint that is other than normal to the screen or the first eye-point, such that the freehand user interface device has a second perspective that is distinct from the first perspective;
identify a virtual position and orientation mark in the virtual space corresponding to the position and orientation of the physical endpoint in the physical space;
identify a segment along a path from the virtual position and orientation mark in the suggested direction of the physical endpoint; and
correlate an intersection between the virtual object and the segment for selection of the virtual object via a selection shape, comprising:
selecting a first point in the virtual scene from the second perspective in accordance with a first intersection;
selecting a second point in the virtual scene from a third perspective in accordance with a second intersection; and
creating a rectangular volume between the first point and the second point, wherein the planes of the rectangular volume are oblique in relation to the virtual render plane, wherein the rectangular volume forms the selection shape. - View Dependent Claims (17, 18, 19, 20)
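The rectangular selection volume recited in the independent claims is oblique to the render plane, so containment tests are most naturally done in a frame aligned with the second perspective's direction rather than with the screen axes. An illustrative sketch (not the patented implementation) that builds such a frame and tests a point against the box spanned by the two picked corners:

```python
import numpy as np

def oblique_frame(view_dir):
    """Orthonormal frame whose third axis follows the (non-normal) view direction."""
    w = view_dir / np.linalg.norm(view_dir)
    up = np.array([0.0, 1.0, 0.0])
    if abs(w @ up) > 0.99:           # degenerate: pick another helper axis
        up = np.array([1.0, 0.0, 0.0])
    u = np.cross(up, w)
    u /= np.linalg.norm(u)
    v = np.cross(w, u)
    return np.stack([u, v, w])       # rows are the box axes

def point_in_box(p, corner_a, corner_b, frame):
    """Is p inside the rectangular volume spanned by the two picked corners,
    measured in the oblique frame?"""
    a, b, q = frame @ corner_a, frame @ corner_b, frame @ p
    lo, hi = np.minimum(a, b), np.maximum(a, b)
    return bool(np.all(q >= lo) and np.all(q <= hi))
```

Because the frame follows the device's direction instead of the screen normal, the faces of the resulting box are oblique to the render plane, matching the selection shape the claims describe.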
Specification