Calibration for immersive content systems
First Claim
1. A method of processing video image data for display, comprising:
- receiving, at a content playback device, first data including first video image data captured using at least a first camera, said first video image data representing a first video image;
- using a processor to identify a first calibration profile corresponding to the first camera, the first calibration profile including a set of UV pairs corresponding to vertices in a 2D texture map, said 2D texture map being a 2D texture map corresponding to a three-dimensional (3D) surface of a 3D model;
- with a 3D renderer executing on one or more processors, graphically rendering the first video image data or a portion thereof onto the 3D surface of the 3D model using at least the first calibration profile to map at least a portion of the first video image onto the surface of the 3D model; and
- displaying the rendered first video image data or portion thereof on a display.
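As an illustration of the rendering step recited above, the following is a minimal sketch of sampling a captured frame through a per-camera calibration profile of UV pairs. All names (CalibrationProfile, sample_frame) are hypothetical, and this is not the patented implementation; it only shows how per-vertex UV pairs can map frame pixels onto model vertices.

```python
# Illustrative sketch (hypothetical names; not the patented implementation):
# a per-camera calibration profile holds one UV pair per vertex of the 3D
# model's 2D texture map, and each vertex samples the source frame at its
# calibrated UV coordinate, so lens distortion is absorbed by the profile.

import numpy as np

class CalibrationProfile:
    """Per-camera set of UV pairs, one per vertex of the model's 2D
    texture map. Each UV pair, in [0, 1] x [0, 1], selects where in the
    source frame that vertex samples."""
    def __init__(self, camera_id, uv_pairs):
        self.camera_id = camera_id
        self.uv_pairs = np.asarray(uv_pairs, dtype=float)  # shape (V, 2)

def sample_frame(frame, profile):
    """Return one color sample per model vertex by reading the frame at
    each calibrated UV coordinate (nearest-neighbour for brevity)."""
    h, w = frame.shape[:2]
    us = np.clip((profile.uv_pairs[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    vs = np.clip((profile.uv_pairs[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    return frame[vs, us]

# Toy usage: a 4x4 RGB frame and a three-vertex profile.
frame = np.arange(48).reshape(4, 4, 3)
profile = CalibrationProfile("cam0", [[0.0, 0.0], [1.0, 1.0], [0.5, 0.5]])
colors = sample_frame(frame, profile)  # shape (3, 3): one color per vertex
```

A real renderer would interpolate UVs across triangles on the GPU; the per-vertex lookup above only demonstrates the role the UV pairs play.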
Abstract
Camera and/or lens calibration information is generated as part of a calibration process in video systems including 3-dimensional (3D) immersive content systems. The calibration information can be used to correct for distortions associated with the source camera and/or lens. A calibration profile can include information sufficient to allow the system to correct for camera and/or lens distortion/variation. This can be accomplished by capturing a calibration image of a physical 3D object corresponding to the simulated 3D environment, and creating the calibration profile by processing the calibration image. The calibration profile can then be used to project the source content directly into the 3D viewing space while also accounting for distortion/variation, and without first translating into an intermediate space (e.g., a rectilinear space) to account for lens distortion.
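The abstract's profile-creation idea can be sketched as follows, under stated assumptions: known markers on the physical 3D calibration object have already been located in the captured calibration image, one per model vertex. The function name and marker layout are hypothetical.

```python
# Hedged sketch of creating a calibration profile from a calibration
# image: each detected marker location is normalised into a UV pair for
# its vertex, so camera/lens distortion is baked into the profile rather
# than corrected in an intermediate rectilinear space.

def build_calibration_profile(detected_pixels, frame_width, frame_height):
    """detected_pixels: one (x, y) pixel location per model vertex,
    found by locating the calibration object's markers in the captured
    calibration image. Returns UV pairs in [0, 1]."""
    return [(x / (frame_width - 1), y / (frame_height - 1))
            for x, y in detected_pixels]

# Toy usage with three detected markers in a 640x480 calibration image.
uv_pairs = build_calibration_profile([(0, 0), (639, 479), (320, 240)], 640, 480)
```

Marker detection itself (e.g. corner or blob finding) is outside this sketch; the point is that the detected positions become the UV pairs directly.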
21 Claims
1. A method of processing video image data for display, comprising the steps recited in full under First Claim above.
Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
12. A content playback device, comprising:
- an interface configured to receive first data including first video image data captured using at least a first camera, said first video image data representing a first video image;
- a processor configured to:
  - identify a first calibration profile corresponding to the first camera, the first calibration profile including a set of UV pairs corresponding to vertices in a 2D texture map, said 2D texture map being a 2D texture map corresponding to a three-dimensional (3D) surface of a 3D model; and
  - graphically render the first video image data or a portion thereof onto the 3D surface of the 3D model using at least the first calibration profile to map at least a portion of the first video image onto the surface of the 3D model; and
- a display configured to display the rendered first video image data or portion thereof on the display.
Dependent claims: 13, 14, 15, 16, 17, 18, 19, 20, 21
Specification