Apparatus and Method for Producing Multi-View Contents
Abstract
Provided are a contents generating apparatus, and a contents generating method thereof, that can support the functions of moving-object substitution, depth-based object insertion, background image substitution, and view offering upon a user request, and that can provide a realistic image by applying the lighting information of a real image to a computer graphics object when the real image is composited with the computer graphics object. The apparatus includes: a preprocessing block, a camera calibration block, a scene model generating block, an object extracting/tracing block, a real image/computer graphics object compositing block, an image generating block, and a user interface block. From the perspective of a contents producer, the present invention can provide diverse production methods, such as testing the optimal camera viewpoint and scene structure before contents are actually authored, and compositing two scenes taken in different places into one scene based on the concept of a three-dimensional virtual studio.
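The lighting-information extraction and compositing summarized in the abstract can be illustrated with a deliberately simplified sketch. Everything below is hypothetical: the patent's compositing block would estimate real lighting information (e.g. light direction, color, and environment maps) from the background image, whereas this Python toy reduces it to a single mean brightness applied to the computer graphics pixels.

```python
def composite_with_lighting(background, cg_object, mask):
    """Composite CG pixels into a real background after tinting them by
    the background's mean brightness -- a toy stand-in for the patent's
    lighting-information extraction (hypothetical helper)."""
    # "Lighting information": here, just the mean intensity of the
    # real background image (one channel, flattened to a list).
    mean_light = sum(background) / len(background)
    # Re-light the pre-produced CG object with that intensity.
    lit_cg = [min(255, int(c * mean_light / 255)) for c in cg_object]
    # Insert the lit CG pixels wherever the binary mask is set.
    return [lit if m else bg
            for bg, lit, m in zip(background, lit_cg, mask)]

# One 4-pixel "scanline": CG pixels land at positions 1 and 2.
print(composite_with_lighting([200, 100, 200, 100],
                              [255, 255, 255, 255],
                              [0, 1, 1, 0]))   # → [200, 150, 150, 100]
```

Note how the fully bright CG pixels (255) are pulled down to the background's mean level (150), so the inserted object matches the scene's overall illumination rather than appearing pasted in.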
242 Citations
14 Claims
1. An apparatus for generating multi-view contents, comprising:
a preprocessing block for performing correction on and removing noise from depth/disparity map data and multi-view images which are inputted from outside to thereby produce corrected multi-view images;
a camera calibration block for calculating camera parameters based on basic camera information and the corrected multi-view images outputted from the preprocessing block, and performing epipolar rectification to thereby produce rectified multi-view images;
a scene model generating block for generating a scene model by using the camera parameters and the epipolar-rectified multi-view images, which are outputted from the camera calibration block, and a depth/disparity map which is outputted from the preprocessing block;
an object extracting/tracing block for extracting an object binary mask, an object motion vector, and a position of an object central point by using the corrected multi-view images outputted from the preprocessing block, the camera parameters outputted from the camera calibration block, and target object setting information outputted from the user interface block;
a real image/computer graphics object compositing block for extracting lighting information of a background image, which is a real image, applying the extracted lighting information when a pre-produced computer graphic is inserted into the real image, and compositing the pre-produced computer graphics model and the real image;
an image generating block for generating stereoscopic images, virtual multi-view images, and intermediate-view images by using the camera parameters outputted from the camera calibration block, the user selected viewpoint information outputted from the user interface block, and the multi-view images corresponding to the user selected viewpoint information; and
the user interface block for converting requirements from a user into internal data and transmitting the internal data to the preprocessing block, the camera calibration block, the scene model generating block, the object extracting/tracing block, the real image/computer graphics object compositing block, and the image generating block. - View Dependent Claims (2, 3, 4, 5, 6, 7)
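Purely as an illustration of the data flow claim 1 describes, the first two blocks can be sketched as a chain of processing stages. All class names, fields, and placeholder values below are hypothetical and stand in for the image buffers and calibration data the actual apparatus would carry; only the wiring order follows the claim.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MultiViewFrame:
    views: List[str]    # placeholders for per-camera image buffers
    depth_map: str      # placeholder for the depth/disparity map

class PreprocessingBlock:
    """Corrects and denoises the externally input multi-view data."""
    def run(self, frame: MultiViewFrame) -> MultiViewFrame:
        # A real block would apply color correction and noise filtering;
        # this sketch only tags the data to show the flow.
        return MultiViewFrame([v + ":corrected" for v in frame.views],
                              frame.depth_map + ":denoised")

class CameraCalibrationBlock:
    """Estimates camera parameters and epipolar-rectifies the views."""
    def run(self, frame: MultiViewFrame):
        params = {"focal": 1.0, "baseline": 0.1}  # assumed dummy values
        rectified = MultiViewFrame([v + ":rectified" for v in frame.views],
                                   frame.depth_map)
        return params, rectified

# Wire the blocks in the order the claim describes.
frame = MultiViewFrame(["cam0", "cam1", "cam2"], "disparity")
corrected = PreprocessingBlock().run(frame)
params, rectified = CameraCalibrationBlock().run(corrected)
print(rectified.views[0])   # → cam0:corrected:rectified
```

The remaining blocks (scene model generation, object extraction/tracing, real image/CG compositing, image generation) would consume `params` and `rectified` in the same fashion, with the user interface block feeding user settings into every stage.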
8. A method for generating multi-view contents, comprising the steps of:
a) performing correction on and removing noise from depth/disparity map data and multi-view images which are inputted from outside to thereby produce corrected multi-view images;
b) calculating camera parameters based on basic camera information and the corrected multi-view images and performing epipolar rectification to thereby produce epipolar-rectified multi-view images;
c) generating a scene model by using the camera parameters and the epipolar-rectified multi-view images, which are outputted from the step b), and the preprocessed depth/disparity maps which are outputted from the step a);
d) extracting an object binary mask, an object motion vector, and a position of an object central point by using target object setting information, the corrected multi-view images, and the camera parameters;
e) extracting lighting information of a background image, which is a real image, applying the lighting information extracted when a pre-produced computer graphic is inserted into the real image, and compositing the pre-produced computer graphic and the real image; and
f) generating stereoscopic images, virtual multi-view images, and intermediate-view images by using user selected viewpoint information, the multi-view images corresponding to the user selected viewpoint information, and the camera parameters. - View Dependent Claims (9, 10, 11, 12, 13, 14)
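The intermediate-view generation of step f) is, in general practice, often realized by disparity-compensated warping. The one-scanline sketch below is a hypothetical illustration, not the patented method: it shifts pixels of one view toward a virtual viewpoint located at fraction `alpha` of the camera baseline and fills holes naively.

```python
def synthesize_intermediate_view(left, disparity, alpha):
    """Warp one scanline of the left view toward a virtual viewpoint at
    fraction `alpha` (0..1) of the baseline using per-pixel disparity.
    Hypothetical sketch; real view synthesis works on full images with
    occlusion handling and proper hole filling."""
    width = len(left)
    out = [None] * width
    for x in range(width):
        tx = x - round(alpha * disparity[x])  # disparity-scaled shift
        if 0 <= tx < width:
            out[tx] = left[x]
    # Naive hole filling: propagate the nearest filled pixel from the left.
    for x in range(width):
        if out[x] is None:
            out[x] = out[x - 1] if x > 0 and out[x - 1] is not None else left[x]
    return out

# Halfway viewpoint (alpha=0.5) with uniform disparity 2 -> 1-pixel shift.
print(synthesize_intermediate_view([10, 20, 30, 40], [2, 2, 2, 2], 0.5))
# → [20, 30, 40, 40]
```

At `alpha = 0` the function returns the input view unchanged, and at `alpha = 1` it approximates the neighboring camera's view; intermediate values yield the virtual viewpoints step f) refers to.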
Specification