Method for generating spatial-temporally consistent depth map sequences based on convolution neural networks
Abstract
A method for generating spatial-temporally consistent depth map sequences based on convolutional neural networks, for 2D-to-3D conversion of television works, includes steps of: 1) collecting a training set, wherein each training sample thereof includes a sequence of continuous RGB images and a corresponding depth map sequence; 2) processing each image sequence in the training set with spatial-temporal consistency superpixel segmentation, and establishing a spatial similarity matrix and a temporal similarity matrix; 3) establishing the convolutional neural network, including a single-superpixel depth regression network and a spatial-temporal consistency conditional random field loss layer; 4) training the convolutional neural network; and 5) recovering the depth maps of an RGB image sequence of unknown depth through forward propagation with the trained convolutional neural network. The method avoids both the heavy dependence of cue-based depth recovery methods on scene assumptions and the inter-frame discontinuity between depth maps generated by conventional neural networks.
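The abstract combines a per-superpixel depth regression with a spatial-temporal consistency conditional random field loss. The sketch below illustrates one common formulation of such a continuous CRF energy, in which a unary term fits the network's predicted depths and pairwise terms penalize depth differences between similar superpixels; the exact loss, weights `alpha_s`/`alpha_t`, and matrix definitions here are illustrative assumptions, not the patent's formulas.

```python
import numpy as np

def crf_energy(d, z, S_s, S_t, alpha_s, alpha_t):
    """Energy of a continuous spatial-temporal CRF over superpixel depths.

    d   : (n,) candidate depth, one value per superpixel
    z   : (n,) unary depth predicted by the regression network
    S_s : (n, n) spatial similarity matrix (same-frame neighbours)
    S_t : (n, n) temporal similarity matrix (cross-frame correspondences)
    alpha_s, alpha_t : pairwise weights (illustrative stand-ins for the
                       patent's parameter alpha)
    """
    unary = np.sum((d - z) ** 2)                 # fit the network prediction
    diff2 = (d[:, None] - d[None, :]) ** 2       # all pairwise depth gaps
    spatial = 0.5 * alpha_s * np.sum(S_s * diff2)   # smooth within a frame
    temporal = 0.5 * alpha_t * np.sum(S_t * diff2)  # smooth across frames
    return unary + spatial + temporal
```

Because this energy is quadratic in `d`, its minimizer has a closed form, which is what makes a CRF loss layer of this kind practical to train end-to-end with backpropagation.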
5 Claims
1. (canceled)
2. A method for generating spatial-temporally consistent depth map sequences based on convolution neural networks, comprising steps of:
1) collecting a training set, wherein each training sample of the training set comprises a continuous RGB (red, green, blue) image sequence of m frames and a corresponding depth map sequence;
2) processing each image sequence in the training set with spatial-temporal consistency superpixel segmentation, and establishing a spatial similarity matrix S^(s) and a temporal similarity matrix S^(t);
3) building a convolution neural network structure, wherein the convolution neural network comprises a single-superpixel depth regression network with a parameter W, and a spatial-temporal consistency conditional random field loss layer with a parameter α;
4) training the convolution neural network established in the step 3) with the continuous RGB image sequences and the corresponding depth map sequences in the training set, so as to obtain the parameter W and the parameter α; and
5) recovering the depth map sequence of a depth-unknown RGB image sequence through forward propagation with the trained convolution neural network;
wherein the step 2) specifically comprises steps of:
(2.1) processing the continuous RGB image sequence in the training set with the spatial-temporal consistency superpixel segmentation, wherein the input sequence is marked as I = [I_1, . . . , I_m], where I_t is the t-th of the m frames in total; the m frames are respectively divided into n_1, . . . , n_m superpixels by the spatial-temporal consistency superpixel segmentation, while a correspondence is generated between every superpixel in a later frame and the superpixel of the same object in the former frame; the whole image sequence comprises n = Σ_{t=1}^{m} n_t superpixels; marking the real depth at the gravity center of each superpixel p as d_p, and defining the ground-truth depth vector of the n superpixels as d = [d_1; . . . ; d_n];
(2.2) establishing the spatial similarity matrix S^(s) of the n superpixels, wherein S^(s) is an n×n matrix, and S_pq^(s) represents the similarity between a superpixel p and a superpixel q within one frame, where:
View Dependent Claims (3, 4, 5)
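The claim's formula for the entries S_pq^(s) is truncated in this excerpt. The sketch below therefore uses a common stand-in: a Gaussian kernel on the mean colours of superpixels that are adjacent within the same frame. The kernel choice, the bandwidth `gamma`, and the helper name are illustrative assumptions, not the patent's definition.

```python
import numpy as np

def spatial_similarity(mean_colors, same_frame_adjacency, gamma=0.1):
    """Build an n x n spatial similarity matrix S^(s) over superpixels.

    mean_colors          : (n, 3) mean RGB colour of each superpixel
    same_frame_adjacency : (n, n) boolean, True where superpixels p and q
                           are adjacent within the same frame
    gamma                : kernel bandwidth (illustrative value)
    """
    diff = mean_colors[:, None, :] - mean_colors[None, :, :]
    dist2 = np.sum(diff ** 2, axis=2)        # squared colour distance
    S = np.exp(-dist2 / gamma) * same_frame_adjacency
    np.fill_diagonal(S, 0.0)                 # no self-similarity
    return S
```

With a symmetric adjacency mask this yields a symmetric, zero-diagonal matrix, the shape a pairwise CRF term expects; the temporal matrix S^(t) would be built the same way over the cross-frame superpixel correspondences.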
Specification