CALIBRATION FOR IMMERSIVE CONTENT SYSTEMS
First Claim
1. A head mountable display system configured to render and display immersive content, comprising:
a support adapted to be worn by a user;
one or more electronic displays carried by the support; and
one or more processors programmed to:
access first data including first video image data captured using at least a first camera;
identify a first calibration profile to apply to the first video image data, where the first calibration profile is usable to map pixel groups within the first video image data to respective portions of a two-dimensional (2D) texture map, the 2D texture map corresponding to a three-dimensional (3D) surface associated with a physical calibration object;
use at least a 3D digital model of the physical calibration object to graphically render a first representation of the 3D surface;
use at least the first calibration profile to graphically render the first video image data or a portion thereof onto the first representation of the 3D surface, resulting in first rendered content; and
display the first rendered content on the one or more electronic displays.
Abstract
Camera and/or lens calibration information is generated as part of a calibration process in video systems including 3-dimensional (3D) immersive content systems. The calibration information can be used to correct for distortions associated with the source camera and/or lens. A calibration profile can include information sufficient to allow the system to correct for camera and/or lens distortion/variation. This can be accomplished by capturing a calibration image of a physical 3D object corresponding to the simulated 3D environment, and creating the calibration profile by processing the calibration image. The calibration profile can then be used to project the source content directly into the 3D viewing space while also accounting for distortion/variation, and without first translating into an intermediate space (e.g., a rectilinear space) to account for lens distortion.
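The calibration profile described in the abstract can be pictured as a lookup from pixel groups in the captured calibration image to (u, v) locations on the 2D texture map of the 3D surface. The following is a minimal illustrative sketch of that idea only; the function names and the dict-based profile format are assumptions, not the patent's implementation.

```python
# Illustrative sketch: a calibration profile pairing detected marker pixel
# groups (by their image coordinates) with known (u, v) positions on the
# 2D texture map of the physical calibration object's 3D surface.
# All names and data layouts here are hypothetical.

def make_calibration_profile(marker_pixels, marker_uvs):
    """Pair detected marker pixel positions with their known UV coordinates."""
    if len(marker_pixels) != len(marker_uvs):
        raise ValueError("each marker needs one pixel group and one UV point")
    return dict(zip(marker_pixels, marker_uvs))

def apply_profile(profile, pixel):
    """Look up the texture-map location for a source pixel group."""
    return profile[pixel]

profile = make_calibration_profile(
    marker_pixels=[(10, 20), (100, 22), (55, 180)],    # found in calibration image
    marker_uvs=[(0.0, 0.0), (0.5, 0.0), (0.25, 0.9)],  # known spots on 2D texture map
)
```

Because the profile maps image pixels straight to texture coordinates, source content can be projected into the 3D viewing space without an intermediate rectilinear correction step, which is the point the abstract emphasizes.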
1. A head mountable display system configured to render and display immersive content, comprising:
a support adapted to be worn by a user;
one or more electronic displays carried by the support; and
one or more processors programmed to:
access first data including first video image data captured using at least a first camera;
identify a first calibration profile to apply to the first video image data, where the first calibration profile is usable to map pixel groups within the first video image data to respective portions of a two-dimensional (2D) texture map, the 2D texture map corresponding to a three-dimensional (3D) surface associated with a physical calibration object;
use at least a 3D digital model of the physical calibration object to graphically render a first representation of the 3D surface;
use at least the first calibration profile to graphically render the first video image data or a portion thereof onto the first representation of the 3D surface, resulting in first rendered content; and
display the first rendered content on the one or more electronic displays.
View Dependent Claims (2, 3, 4, 5)
6. A method of generating camera calibration information for use in displaying immersive content, comprising:
receiving first digital image data corresponding to a first image captured using a digital video camera and a wide angle lens, the first image taken of a three-dimensional (3D) object having a plurality of markers distributed thereon;
accessing a 3D digital model corresponding to the 3D object, the 3D digital model having a two-dimensional (2D) texture map associated therewith that corresponds to a 3D surface of the 3D digital model;
identifying a plurality of pixel groups in the first image data using a processor, where each pixel group of the plurality of pixel groups includes one or more pixels of the first image data that correspond to a different one of the plurality of markers; and
generating, using a processor, a mapping between the identified pixel groups and the 2D texture map corresponding to the 3D surface, the mapping usable for calibrating second digital image data for display in a simulated 3D viewing environment.
View Dependent Claims (7, 8, 9, 10, 11, 12)
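The "identifying a plurality of pixel groups" step of claim 6 amounts to finding connected clusters of marker pixels in the calibration image. A minimal sketch of that step, assuming a grayscale image with bright markers and a simple intensity threshold (both assumptions, not the claimed method), is:

```python
# Illustrative sketch of pixel-group identification: flood-fill connected
# bright regions (markers) in a grayscale image and report each region's
# centroid. The threshold value and test image are hypothetical.

def find_pixel_groups(image, threshold=128):
    """Return one (row, col) centroid per connected bright region (4-connectivity)."""
    rows, cols = len(image), len(image[0])
    seen = set()
    groups = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and (r, c) not in seen:
                # Flood-fill this marker's pixels.
                stack, pixels = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                groups.append((cy, cx))
    return groups

# Tiny synthetic calibration image with two bright markers.
img = [
    [0,   0,   0, 0,   0],
    [0, 255, 255, 0,   0],
    [0,   0,   0, 0, 200],
]
find_pixel_groups(img)  # → [(1.0, 1.5), (2.0, 4.0)]
```

Each centroid can then be matched to a marker at a known position on the 3D object, which is what makes the mapping to the 2D texture map in the final step of the claim possible.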
13. A method of processing video image data for display, comprising:
receiving first data including first video image data captured using at least a first camera;
using a processor to identify a first calibration profile to apply to the first video image data, where the first calibration profile is usable to map pixel groups within the first video image data to respective portions of a two-dimensional (2D) texture map, the 2D texture map corresponding to a three-dimensional (3D) surface associated with a physical calibration object; and
with a 3D renderer executing on one or more processors:
graphically rendering a first representation of the 3D surface using at least a 3D digital model of the physical calibration object; and
graphically rendering the first video image data or a portion thereof onto the first representation of the 3D surface using at least the first calibration profile.
View Dependent Claims (14, 15, 16, 17, 18)
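The rendering steps of claim 13 can be pictured as using the calibration profile to write source-video pixels directly into the 2D texture map that a 3D renderer would then wrap onto the calibration object's surface. The sketch below illustrates only that texture-filling idea; the profile format, texture size, and function name are hypothetical assumptions, not the claimed renderer.

```python
# Illustrative sketch: copy each mapped source-video pixel into its (u, v)
# location on the 2D texture map. A 3D renderer would then apply this
# texture to the model of the calibration object's surface.
# All names and formats here are hypothetical.

def render_to_texture(frame, profile, tex_w, tex_h, fill=0):
    """Write each mapped source pixel into its UV location on the texture."""
    texture = [[fill] * tex_w for _ in range(tex_h)]
    for (px, py), (u, v) in profile.items():
        tx = min(int(u * tex_w), tex_w - 1)  # UV in [0, 1] -> texel index
        ty = min(int(v * tex_h), tex_h - 1)
        texture[ty][tx] = frame[py][px]
    return texture

frame = [[i * 10 + j for j in range(4)] for i in range(4)]  # 4x4 source frame
profile = {(0, 0): (0.0, 0.0), (3, 3): (1.0, 1.0)}          # pixel -> (u, v)
tex = render_to_texture(frame, profile, tex_w=2, tex_h=2)   # [[0, 0], [0, 33]]
```

Because the profile already encodes the camera/lens distortion observed during calibration, the source pixels land in corrected positions on the texture, matching the abstract's point that no intermediate rectilinear space is needed.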
19. A system, comprising:
one or more computing devices comprising one or more hardware processors, the one or more computing devices programmed to:
receive first digital image data corresponding to a first image captured using a digital video camera and a wide angle lens, the first image taken of a three-dimensional (3D) object having a plurality of markers distributed thereon;
access a 3D digital model corresponding to the 3D object, the 3D digital model having a two-dimensional (2D) texture map associated therewith that corresponds to a 3D surface of the 3D digital model;
process the first image data to identify a plurality of pixel groups in the first image data, where each pixel group of the plurality of pixel groups includes one or more pixels of the first image data that correspond to a different one of the plurality of markers; and
generate a mapping between the identified pixel groups and the 2D texture map corresponding to the 3D surface, the mapping usable for calibrating second digital image data for display in a simulated 3D viewing environment.
View Dependent Claims (20, 21, 22, 23)
Specification