Three dimensional rendering of display information using viewer eye coordinates
First Claim
1. A processor-implemented method for rendering information in three dimensions, the method comprising:
receiving, in the processor, default data indicative of an image from a default virtual camera;
mapping coordinates of each of a first eye and a second eye of a viewer to a world space coordinate system;
refining the mapped coordinates of the first eye and the second eye in the world space coordinate system to account for a distance of the viewer from a display device;
using the processor to generate first data indicative of a first image from a first virtual camera located at the refined mapped coordinates of the first eye;
using the processor to generate second data indicative of a second image from a second virtual camera located at the refined mapped coordinates of the second eye, wherein:
a viewing angle associated with the first virtual camera is offset, in accordance with a first offset, from a viewing angle associated with the default virtual camera;
a viewing angle associated with the second virtual camera is offset, in accordance with a second offset, from a viewing angle associated with the default virtual camera;
a focal distance of the first virtual camera is infinity; and
a focal distance of the second virtual camera is infinity;
using the processor to generate a composite image comprising the first image and the second image; and
using the processor to provide the composite image for rendering using the display device, wherein the composite image is perceivable in three dimensions.
Abstract
Game data is rendered in three dimensions in the GPU of a game console. A left camera view and a right camera view are generated from a single camera view. The left and right camera positions are derived as an offset from a default camera. The focal distance of the left and right cameras is infinity. A game developer does not have to encode dual images into a specific hardware format. When a viewer sees the two slightly offset images, the viewer's brain combines them into a single 3D image, giving the illusion that objects either pop out from or recede into the display screen. In another embodiment, individual, private video is rendered, on a single display screen, for different viewers. Rather than rendering two similar offset images, two completely different images are rendered, allowing each player to view only one of the images.
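The left/right camera derivation described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the `Camera` type, helper names, and the interocular separation value (an assumed average of 64 mm) are all assumptions. With the focal distance at infinity, both cameras keep the default view direction, so only the position is offset along the camera's right vector.

```python
# Hypothetical sketch: derive left/right virtual cameras from a single
# default camera by offsetting the position along the camera's right
# vector. Focal distance at infinity means parallel view directions,
# so only the position changes.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Camera:
    position: tuple  # (x, y, z) in world space
    right: tuple     # unit vector pointing to the camera's right
    # ... view direction, up vector, projection parameters, etc.

def offset_cameras(default: Camera, eye_separation: float = 0.064):
    """Return (left, right) cameras offset from the default camera.

    eye_separation is an assumed average interpupillary distance in
    metres; half the separation is applied to each side.
    """
    half = eye_separation / 2.0
    dx, dy, dz = default.right
    px, py, pz = default.position
    left = replace(default, position=(px - half * dx, py - half * dy, pz - half * dz))
    right = replace(default, position=(px + half * dx, py + half * dy, pz + half * dz))
    return left, right
```

Because each offset image is rendered with an ordinary perspective projection, the developer never encodes a dual-image hardware format by hand; the two renders are simply composited downstream.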
20 Claims
1. A processor-implemented method for rendering information in three dimensions, the method comprising:
receiving, in the processor, default data indicative of an image from a default virtual camera;
mapping coordinates of each of a first eye and a second eye of a viewer to a world space coordinate system;
refining the mapped coordinates of the first eye and the second eye in the world space coordinate system to account for a distance of the viewer from a display device;
using the processor to generate first data indicative of a first image from a first virtual camera located at the refined mapped coordinates of the first eye;
using the processor to generate second data indicative of a second image from a second virtual camera located at the refined mapped coordinates of the second eye, wherein:
a viewing angle associated with the first virtual camera is offset, in accordance with a first offset, from a viewing angle associated with the default virtual camera;
a viewing angle associated with the second virtual camera is offset, in accordance with a second offset, from a viewing angle associated with the default virtual camera;
a focal distance of the first virtual camera is infinity; and
a focal distance of the second virtual camera is infinity;
using the processor to generate a composite image comprising the first image and the second image; and
using the processor to provide the composite image for rendering using the display device, wherein the composite image is perceivable in three dimensions.
View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9)
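The mapping and refining steps of claim 1 can be sketched as below. The patent claim does not specify how eye coordinates are obtained or what refinement entails; this sketch assumes the eyes are tracked in normalized [-1, 1] screen coordinates and that refinement places each mapped point at the viewer's measured distance from the display plane. All helper names are illustrative, not from the patent.

```python
# Hypothetical sketch of the claimed mapping and refining steps.
# Assumptions (not from the patent): eyes are tracked in normalized
# device coordinates, the display plane sits at z = 0 in world space,
# and refinement pushes the mapped point out to the measured viewer
# distance, where a virtual camera is then placed.

def map_eye_to_world(eye_ndc, screen_width_m, screen_height_m):
    """Map a tracked eye position (normalized [-1, 1] coordinates) onto
    the display plane in world-space metres, display at z = 0."""
    x, y = eye_ndc
    return (x * screen_width_m / 2.0, y * screen_height_m / 2.0, 0.0)

def refine_for_distance(mapped, viewer_distance_m):
    """Refine the mapped coordinates to account for the viewer's
    distance from the display: move the point off the display plane to
    where the eye actually sits."""
    x, y, _ = mapped
    return (x, y, viewer_distance_m)
```

Each refined point then serves as the location of one of the two virtual cameras from which the first and second images are generated.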
10. An apparatus comprising:
a memory configured to store processor-executable instructions; and
a processor configured to receive the processor-executable instructions from the memory and to execute the processor-executable instructions to:
map coordinates of each of a first eye and a second eye of a viewer to a world space coordinate system;
refine the mapped coordinates to account for a distance of the viewer from a display device;
generate, from default data indicative of a default image from a default virtual camera:
first data indicative of a first image from a first virtual camera having a location determined as a function of the mapped coordinates of the first eye; and
second data indicative of a second image from a second virtual camera having a location determined as a function of the mapped coordinates of the second eye, wherein:
a viewing angle associated with the first virtual camera is offset, in accordance with a first offset, from a viewing angle associated with the default virtual camera;
a viewing angle associated with the second virtual camera is offset, in accordance with a second offset, from a viewing angle associated with the default virtual camera;
a focal distance of the first virtual camera is infinity; and
a focal distance of the second virtual camera is infinity;
generate a composite image comprising the first image and the second image; and
render the composite image, wherein the composite image is perceivable in three dimensions.
View Dependent Claims (11, 12, 13, 14, 15, 16)
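The "composite image comprising the first image and the second image" can take several forms, and the claims do not mandate any particular one. A minimal sketch, assuming a row-interleaved format of the kind used by passive-3D displays (images represented as simple lists of rows):

```python
# Hypothetical sketch: compose the first (left-eye) and second
# (right-eye) images into one composite by row interleaving, one
# common stereo display format. The patent does not mandate this
# format; anaglyph, side-by-side, or frame-sequential composites are
# equally valid realizations of the claim.

def interleave_rows(left_rows, right_rows):
    """Return a composite image taking even rows from the left image
    and odd rows from the right image."""
    assert len(left_rows) == len(right_rows)
    return [left_rows[i] if i % 2 == 0 else right_rows[i]
            for i in range(len(left_rows))]
```

When the display separates the interleaved rows back out to the matching eye (e.g. via polarized glasses), the viewer perceives the composite in three dimensions.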
17. A processor-readable storage medium, wherein the storage medium is not a signal, the storage medium storing processor-executable instructions that, when executed by a processor, cause the processor to perform the steps of:
mapping coordinates of each of a first eye and a second eye of a viewer to a world space coordinate system;
refining the mapped coordinates to account for a distance of the viewer from a display device;
generating, from default data indicative of a default image from a default virtual camera:
first data indicative of a first image from a first virtual camera having a location determined as a function of the mapped coordinates of the first eye; and
second data indicative of a second image from a second virtual camera having a location determined as a function of the mapped coordinates of the second eye, wherein:
a viewing angle associated with the first virtual camera is offset, in accordance with a first offset, from a viewing angle associated with the default virtual camera;
a viewing angle associated with the second virtual camera is offset, in accordance with a second offset, from a viewing angle associated with the default virtual camera;
a focal distance of the first virtual camera is infinity; and
a focal distance of the second virtual camera is infinity;
generating a composite image comprising the first image and the second image; and
rendering the composite image, wherein the composite image is perceivable in three dimensions.
View Dependent Claims (18, 19, 20)
Specification