3D imaging device and 3D imaging method
Abstract
A 3D imaging device determines, during imaging, whether the captured images will be perceived three-dimensionally without causing fatigue while simulating actual human perception. In a 3D imaging device, a display information setting unit obtains a display parameter associated with an environment in which a 3D video is viewed, and a control unit determines during 3D imaging whether a scene to be imaged three-dimensionally will be perceived three-dimensionally based on the viewing environment.
19 Claims
1. A 3D imaging device for three-dimensionally imaging a subject, and capturing a 3D image formed by a first viewing point image and a second viewing point image, the 3D imaging device comprising:
an optical system configured to form an optical image of the subject;
an imaging unit configured to generate the 3D image based on the formed optical image;
an imaging parameter obtaining unit configured to obtain an imaging parameter associated with the optical system used when the optical image has been formed;
a display parameter obtaining unit configured to obtain a display parameter associated with an environment in which the 3D image is viewed;
a disparity detection unit configured to detect a disparity for the 3D image;
a recording image generation unit configured to generate a 3D image for recording based on the generated 3D image; and
a 3D perception determination unit configured to determine, before the 3D image for recording is generated, viewability of three-dimensional viewing of the generated 3D image based on the imaging parameter, the display parameter, and the detected disparity using a predetermined determination condition,
wherein the 3D perception determination unit determines whether a scene to be imaged three-dimensionally will be perceived three-dimensionally by determining whether a state of a predetermined subject included in the scene to be imaged satisfies the formula:

|z * (W2/W1) * (L1/L2) * (α1 − β1) − Δα| < δ,

where L1, W1, α1, β1, and z are values during 3D imaging: L1 is a distance from a line segment SB1 connecting the first viewing point and the second viewing point to a virtual screen, W1 is a width of the virtual screen, α1 is a disparity angle formed when a point of convergence is formed on a point of intersection between a normal to the virtual screen extending through a midpoint of the line segment SB1 and the virtual screen, β1 is a disparity angle formed when the point of convergence is on the predetermined subject included in the scene to be imaged, and z is a zoom ratio;

where L2, W2, and Δα are values during 3D displaying: L2 is a distance from a line segment SB2 connecting the first viewing point and the second viewing point to a display screen, W2 is a width of the display screen, and Δα is a disparity adjustment angle corresponding to an amount of disparity adjustment performed during 3D displaying when the disparity adjustment is enabled by image shifting of the first viewing point image and/or the second viewing point image; and

δ is a relative disparity angle defining a 3D viewing enabling range. - View Dependent Claims (2, 3, 4, 5, 6, 7)
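Stripped of claim language, the determination in claim 1 is a single inequality test. The sketch below is only an illustration: the function name, argument order, and the choice of radians are assumptions, not taken from the specification.

```python
def viewable_3d(z, W1, W2, L1, L2, alpha1, beta1, delta_alpha, delta):
    """Claim-1 viewability test:
    |z * (W2/W1) * (L1/L2) * (alpha1 - beta1) - delta_alpha| < delta

    z: zoom ratio; W1, W2: virtual-/display-screen widths;
    L1, L2: viewpoint-to-screen distances at imaging/display time;
    alpha1, beta1: disparity angles during imaging;
    delta_alpha: disparity adjustment angle during display;
    delta: relative disparity angle bounding the 3D viewing range.
    Any consistent angle unit works: the formula only scales and
    compares angles, so no unit conversion appears.
    """
    relative_disparity = z * (W2 / W1) * (L1 / L2) * (alpha1 - beta1) - delta_alpha
    return abs(relative_disparity) < delta
```

When the inequality holds for the predetermined subject, the scene is judged viewable; otherwise the device can warn or adjust before the recording image is generated.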
8. A 3D imaging device for three-dimensionally imaging a subject and capturing a 3D image formed by a first viewing point image and a second viewing point image, the 3D imaging device comprising:
an imaging unit configured to image the subject viewed from a first viewing point as the first viewing point image and generate a first image signal forming the first viewing point image, and image the subject viewed from a second viewing point as the second viewing point image and generate a second image signal forming the second viewing point image;
a disparity detection unit configured to detect a disparity from the first viewing point image and the second viewing point image for each pixel block consisting of one or more pixels;
a display parameter obtaining unit configured to obtain a display parameter associated with an environment in which the 3D image is viewed;
a disparity map image generation unit configured to generate a two-dimensional disparity map image by mapping a disparity of each pixel block detected by the disparity detection unit;
a disparity angle obtaining unit configured to use disparities for two blocks on the two-dimensional disparity map image to obtain a disparity angle α1 and a disparity angle α2 corresponding to the disparities for the two blocks, and to obtain a distance h between the two blocks on the two-dimensional disparity map image;
a correction disparity angle calculation unit configured to calculate a correction disparity angle f(α1, α2, h) based on the disparity angles α1 and α2 of the two blocks and the distance h between the two blocks on the two-dimensional disparity map image;
a correction disparity angle maximum value obtaining unit configured to obtain a maximum value fmax of the correction disparity angle f(α1, α2, h) calculated for the two blocks included in the two-dimensional disparity map image; and
a 3D perception determination unit configured to compare the maximum value fmax with a disparity angle δ indicating a 3D viewing enabling range, and, during imaging, determine that the scene to be imaged will be perceived three-dimensionally when determining that the maximum value fmax is less than the disparity angle δ indicating the 3D viewing enabling range. - View Dependent Claims (9, 10)
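Claim 8 leaves the correction disparity angle f(α1, α2, h) abstract, constraining only its inputs. The sketch below is a hypothetical reading: the concrete form of f (relative disparity angle discounted by block distance) is invented for illustration, while the surrounding logic, taking fmax over all block pairs and comparing it with δ, follows the claim.

```python
import itertools
import math

def correction_disparity_angle(alpha1, alpha2, h):
    # HYPOTHETICAL form of f(alpha1, alpha2, h): the claim does not
    # specify it. This choice weights the relative disparity angle
    # |alpha1 - alpha2| more heavily when the two blocks are close
    # together on the disparity map (small h).
    return abs(alpha1 - alpha2) / (1.0 + h)

def scene_perceived_3d(blocks, delta):
    """blocks: list of ((x, y), alpha) pairs, one per pixel block of
    the two-dimensional disparity map image.
    Returns True when fmax < delta (the claim-8 determination)."""
    fmax = 0.0
    for ((x1, y1), a1), ((x2, y2), a2) in itertools.combinations(blocks, 2):
        h = math.hypot(x2 - x1, y2 - y1)  # distance between the two blocks
        fmax = max(fmax, correction_disparity_angle(a1, a2, h))
    return fmax < delta
```

The exhaustive pairwise loop is quadratic in the number of blocks; a real device would likely restrict it to coarse blocks or sampled pairs, but the claim itself only requires the fmax-versus-δ comparison.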
11. A 3D imaging device for three-dimensionally imaging a subject and capturing a 3D image formed by a first viewing point image and a second viewing point image, the 3D imaging device comprising:
an imaging unit configured to image the subject viewed from a first viewing point as the first viewing point image and generate a first image signal forming the first viewing point image, and image the subject viewed from a second viewing point different from the first viewing point as the second viewing point image and generate a second image signal forming the second viewing point image;
a disparity detection unit configured to detect a disparity from the first viewing point image and the second viewing point image for each pixel block consisting of one or more pixels;
a display parameter obtaining unit configured to obtain a display parameter associated with an environment in which a 3D image is viewed;
a disparity map image generation unit configured to generate a two-dimensional disparity map image by mapping a disparity of each pixel block detected by the disparity detection unit;
a disparity histogram generation unit configured to generate a disparity histogram that shows a frequency distribution for the disparity of each pixel block based on the two-dimensional disparity map image; and
a 3D perception determination unit configured to determine, during imaging, whether a scene to be imaged will be perceived three-dimensionally based on the disparity histogram,
wherein the disparity histogram generation unit performs clustering of the two-dimensional disparity map image, and generates a weighted disparity histogram in which each cluster obtained through the clustering is weighted using the function Weight:

Weight(x, y, z) = Cent(x) * Size(y) * Blur(z),

where Cent(x) is a function that yields a larger value as a position of a cluster is nearer a central position of the two-dimensional disparity map image, and x is a two-dimensional vector indicating a position of the cluster on the two-dimensional disparity map image; Size(y) is a function that yields a larger value as an area occupied by the cluster formed by blocks of the two-dimensional disparity map image is greater, and y indicates the area occupied by the cluster; and Blur(z) is a function that yields a smaller value as a degree of blurring of the cluster is greater, and z indicates the degree of blurring of the cluster, and

the 3D perception determination unit determines, during imaging, whether the scene to be imaged will be perceived three-dimensionally by comparing a target area AR1, which is an area having a disparity range of B4 to C4 to be subjected to the 3D perception determination process, with a 3D viewing enabling area AR0, where C4 is a disparity that initially exceeds a long-range view threshold Th3 when the weighted disparity histogram is traced from a long-range view end toward a short-range view end, and B4 is a disparity that initially exceeds a short-range view threshold Th4 when the weighted disparity histogram is traced from a short-range view end toward a long-range view end. - View Dependent Claims (12, 13, 14, 15, 16)
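Claim 11 characterizes Cent, Size, and Blur only by their monotonic behavior (larger near the center, larger for big clusters, smaller for blurry clusters). In the sketch below the concrete forms of all three weights are invented for illustration; the B4/C4 extraction follows the claim's description: C4 is the first disparity whose weighted frequency exceeds Th3 tracing from the long-range (far) end, and B4 the first exceeding Th4 tracing from the short-range (near) end.

```python
import math

def weight(pos, area, blur, center=(0.5, 0.5)):
    # HYPOTHETICAL realizations of Cent, Size, Blur; the claim only
    # fixes their monotonicity, not their formulas.
    cent = 1.0 / (1.0 + math.hypot(pos[0] - center[0], pos[1] - center[1]))
    size = area                    # larger clusters weigh more
    blur_w = 1.0 / (1.0 + blur)    # blurrier clusters weigh less
    return cent * size * blur_w

def target_disparity_range(hist, th3, th4):
    """hist: list of (disparity, weighted_frequency) pairs ordered
    from the short-range (near) end to the long-range (far) end.
    Returns (B4, C4): B4 is the first disparity over Th4 tracing
    near -> far; C4 the first over Th3 tracing far -> near."""
    c4 = next(d for d, w in reversed(hist) if w > th3)
    b4 = next(d for d, w in hist if w > th4)
    return b4, c4
```

The resulting range [B4, C4] is the target area AR1; the determination then reduces to checking whether AR1 fits inside the 3D viewing enabling area AR0.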
17. A 3D imaging method used in a 3D imaging device for three-dimensionally imaging a subject and capturing a 3D image formed by a first viewing point image and a second viewing point image, the 3D imaging device including an optical system configured to form an optical image of the subject, the method comprising:
generating the 3D image based on the formed optical image;
obtaining an imaging parameter associated with the optical system used when the optical image has been formed;
obtaining a display parameter associated with an environment in which the 3D image is viewed;
detecting a disparity for the 3D image;
generating a 3D image for recording based on the generated 3D image;
determining, before the 3D image for recording is generated, viewability of three-dimensional viewing of the generated 3D image based on the imaging parameter, the display parameter, and the detected disparity using a predetermined determination condition; and
determining whether a scene to be imaged three-dimensionally will be perceived three-dimensionally by determining whether a state of a predetermined subject included in the scene to be imaged satisfies the formula:

|z * (W2/W1) * (L1/L2) * (α1 − β1) − Δα| < δ,

where L1, W1, α1, β1, and z are values during 3D imaging: L1 is a distance from a line segment SB1 connecting the first viewing point and the second viewing point to a virtual screen, W1 is a width of the virtual screen, α1 is a disparity angle formed when a point of convergence is formed on a point of intersection between a normal to the virtual screen extending through a midpoint of the line segment SB1 and the virtual screen, β1 is a disparity angle formed when the point of convergence is on the predetermined subject included in the scene to be imaged, and z is a zoom ratio;

where L2, W2, and Δα are values during 3D displaying: L2 is a distance from a line segment SB2 connecting the first viewing point and the second viewing point to a display screen, W2 is a width of the display screen, and Δα is a disparity adjustment angle corresponding to an amount of disparity adjustment performed during 3D displaying when the disparity adjustment is enabled by image shifting of the first viewing point image and/or the second viewing point image; and

δ is a relative disparity angle defining a 3D viewing enabling range.
18. A 3D imaging method for three-dimensionally imaging a subject and capturing a 3D image formed by a first viewing point image and a second viewing point image, the method comprising:
imaging the subject viewed from a first viewing point as the first viewing point image and generating a first image signal forming the first viewing point image, and imaging the subject viewed from a second viewing point as the second viewing point image and generating a second image signal forming the second viewing point image;
detecting a disparity from the first viewing point image and the second viewing point image for each pixel block consisting of one or more pixels;
obtaining a display parameter associated with an environment in which the 3D image is viewed;
generating a two-dimensional disparity map image by mapping a disparity of each pixel block detected in the detecting of the disparity;
using disparities for two blocks on the two-dimensional disparity map image to obtain a disparity angle α1 and a disparity angle α2 corresponding to the disparities for the two blocks, and obtaining a distance h between the two blocks on the two-dimensional disparity map image;
calculating a correction disparity angle f(α1, α2, h) based on the disparity angles α1 and α2 of the two blocks and the distance h between the two blocks on the two-dimensional disparity map image;
obtaining a maximum value fmax of the correction disparity angle f(α1, α2, h) calculated for the two blocks included in the two-dimensional disparity map image; and
comparing the maximum value fmax with a disparity angle δ indicating a 3D viewing enabling range, and, during imaging, determining that the scene to be imaged will be perceived three-dimensionally when determining that the maximum value fmax is less than the disparity angle δ indicating the 3D viewing enabling range.
19. A 3D imaging method for three-dimensionally imaging a subject and capturing a 3D image formed by a first viewing point image and a second viewing point image, the method comprising:
imaging a subject viewed from a first viewing point as the first viewing point image and generating a first image signal forming the first viewing point image, and imaging a subject viewed from a second viewing point different from the first viewing point as the second viewing point image and generating a second image signal forming the second viewing point image;
detecting a disparity from the first viewing point image and the second viewing point image for each pixel block consisting of one or more pixels;
obtaining a display parameter associated with an environment in which a 3D image is viewed;
generating a two-dimensional disparity map image by mapping a disparity of each pixel block detected in the detecting of the disparity;
generating a disparity histogram that shows a frequency distribution for the disparity of each pixel block based on the two-dimensional disparity map image;
determining, during imaging, whether a scene to be imaged will be perceived three-dimensionally based on the disparity histogram;
clustering the two-dimensional disparity map image and generating a weighted disparity histogram in which each cluster obtained through the clustering is weighted using the function Weight:

Weight(x, y, z) = Cent(x) * Size(y) * Blur(z),

where Cent(x) is a function that yields a larger value as a position of a cluster is nearer a central position of the two-dimensional disparity map image, and x is a two-dimensional vector indicating a position of the cluster on the two-dimensional disparity map image; Size(y) is a function that yields a larger value as an area occupied by the cluster formed by blocks of the two-dimensional disparity map image is greater, and y indicates the area occupied by the cluster; and Blur(z) is a function that yields a smaller value as a degree of blurring of the cluster is greater, and z indicates the degree of blurring of the cluster; and

determining, during imaging, whether the scene to be imaged will be perceived three-dimensionally by comparing a target area AR1, which is an area having a disparity range of B4 to C4 to be subjected to the 3D perception determination process, with a 3D viewing enabling area AR0, where C4 is a disparity that initially exceeds a long-range view threshold Th3 when the weighted disparity histogram is traced from a long-range view end toward a short-range view end, and B4 is a disparity that initially exceeds a short-range view threshold Th4 when the weighted disparity histogram is traced from a short-range view end toward a long-range view end.
Specification