Structural Light Parameter Calibration Device and Method Based on Front-Coating Plane Mirror

Abstract
The present invention discloses a structural light parameter calibration device and method based on a front-coating plane mirror. The calibration device includes a camera, a laser, a front-coating plane mirror, a flat glass target, and white printing paper. The white printing paper receives a laser beam and presents a real light stripe image, and the camera captures the real light stripe image and a mirror light stripe image on the front-coating plane mirror. Feature points are used to determine a rotation matrix, a translation vector, and a vanishing point for the image. The present invention achieves better quality of the light stripe and better extraction accuracy, provides feature points with micron-level positional accuracy and more calibration points, and features higher calibration accuracy and more stable calibration results.
11 Claims
 1. A structural light parameter calibration device, comprising:
 a camera, a laser, a front-coating plane mirror, a flat glass target, and white printing paper, wherein:
the white printing paper is used to receive a beam projected by the laser and present a real light stripe image; the laser is a line laser which is used to project the beam onto the white printing paper to form the real light stripe image; the camera is an area array camera which is used to simultaneously capture the real light stripe image projected by the laser on the white printing paper and a mirror light stripe image in the front-coating plane mirror, and calculate a rotation matrix and a translation vector between a coordinate system of the front-coating plane mirror and a coordinate system of the camera; feature points around a front surface of the front-coating plane mirror are used to calculate the rotation matrix and the translation vector between the coordinate system of the front-coating plane mirror and the coordinate system of the camera;
coated film in a central area of the front surface of the front-coating plane mirror is used to mirror the real light stripe image projected on the white printing paper and feature points on the flat glass target; the flat glass target is used to provide constraints for optimizing the rotation matrix and the translation vector between the coordinate system of the front-coating plane mirror and the coordinate system of the camera, to calculate a vanishing point of an image, and to match the real light stripe image and the mirror light stripe image by using the vanishing point.  View Dependent Claims (2, 3, 4, 5, 11)
 6. A structural light parameter calibration method based on a front-coating plane mirror, comprising steps of:
a. calibrating internal parameters of a camera in a line structured light vision sensor;
placing a flat glass target and the front-coating plane mirror in a clear imaging area in front of the camera, adjusting a brightness of a light source, and capturing a real feature point image and a mirror feature image on the flat glass target; and
correcting the real feature point image and the mirror feature image;
b. establishing a coordinate system of a real camera, a coordinate system of a mirror camera, a coordinate system of the front-coating plane mirror, and a coordinate system of an inverse plane mirror;
solving a rotation matrix and a translation vector between the coordinate system of the front-coating plane mirror and the coordinate system of the real camera;
solving a rotation matrix and a translation vector between the coordinate system of the mirror camera and the coordinate system of the real camera, solving a transformation relationship between a left-handed image coordinate system and a right-handed image coordinate system, and establishing a virtual binocular measurement model;
obtaining an optimal solution of the rotation matrix and the translation vector between the coordinate system of the front-coating plane mirror and the coordinate system of the real camera by using a nonlinear optimization method;
c. calculating a distance between adjacent feature points in a horizontal direction and a vertical direction of a flat glass target;
determining and selecting candidate feature points based on a threshold, and matching the candidate feature points;
obtaining an image vanishing point by using a least square method; and
d. placing white printing paper in a clear imaging area in front of the camera multiple times, extracting a center of a real light stripe image and a center of a mirror light stripe image respectively, matching subpixels of the center of the real light stripe image and the center of the mirror light stripe image according to the vanishing point, calculating three-dimensional coordinates of a center point of the real light stripe image using the virtual binocular measurement model, fitting a light plane by using a least square method, and solving light plane parameters.  View Dependent Claims (7, 8, 9, 10)
Specification
This application is a national phase entry of, and claims priority to, International Application No. PCT/CN2018/079922, filed Mar. 22, 2018, which claims priority to Chinese patent application number 201710436967.8, filed on Jun. 12, 2017, each of which is incorporated herein by reference in its entirety.
The invention relates to the field of visual measurement, and more specifically, to a structural light parameter calibration device and method.
Structural light three-dimensional vision measurement is widely used in industrial measurement and other fields because of its advantages of non-contact operation, fast speed, and moderate accuracy. The calibration accuracy of a structured light vision sensor determines the final detection accuracy of structural light three-dimensional vision measurement. The line structured light vision sensor calibration process includes two aspects: camera internal parameter calibration and light plane parameter calibration. The calibration process mainly uses the internal parameters of the camera and other auxiliary tools to determine the plane equation of the light plane in the camera coordinate system.
Commonly used calibration methods for structured light sensors mainly include the free moving target method and the mechanical motion adjustment method. The free moving target method usually uses a one-dimensional target, a two-dimensional planar target, or a three-dimensional target to complete calibration of the light plane parameters. This method features easy target processing and high calibration accuracy and efficiency, and is thus widely used. The mechanical motion adjustment method usually uses a mechanical motion platform with an encoder, mechanical arms, and other devices to calibrate structured light sensors. This method involves many processes requiring manual adjustment, and its accuracy mainly depends on the accuracy of the mechanical motion platform.
Chinese Patent Application No. CN200810239083.4 discloses a method for calibrating structured light parameters based on a one-dimensional target. The method uses at least three feature points of a one-dimensional target with known spatial constraints, combined with a perspective projection equation, to calculate the spatial three-dimensional coordinates of the feature points in the camera coordinate system according to the length and direction constraints of the feature points, and fits the spatial three-dimensional coordinates to obtain the light plane equation. This method requires high processing accuracy for one-dimensional targets and is therefore sensitive to image noise.
Chinese Patent Application No. CN200710121397.X discloses a calibration method for structural parameters of a structured light vision sensor. The method uses a planar target with multiple non-collinear feature points, obtains the center of a light stripe and the coordinates of the non-collinear feature points on a target image by moving the planar target multiple times, calculates the three-dimensional coordinates of the center point of the light stripe in the camera coordinate system by a homography matrix, and fits the light plane equation according to the three-dimensional coordinates. This method is widely used because of its high calibration efficiency and high precision. However, it cannot extract high-precision feature points while also capturing high-quality light stripe images.
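The homography-based approach described above ultimately reduces to intersecting the camera ray through a stripe-center pixel with a known plane in the camera frame. The sketch below illustrates only that geometric step; the function name, intrinsic matrix, and plane values are illustrative assumptions, not taken from the cited patent.

```python
import numpy as np

def backproject_to_plane(K, pixel, plane):
    """Intersect the viewing ray through `pixel` with the plane
    aX + bY + cZ + d = 0 expressed in the camera coordinate system.
    Returns the 3-D point on the plane, in the camera frame."""
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction through the pixel
    a, b, c, d = plane
    # scale the ray so that the resulting point satisfies the plane equation
    t = -d / (a * ray[0] + b * ray[1] + c * ray[2])
    return t * ray
```

For instance, with an identity intrinsic matrix and the plane Z = 2, the pixel (0.5, 0.25) back-projects to the point (1.0, 0.5, 2.0).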
Chinese Patent Application No. CN201510307016.1 discloses a calibration method for a line structured light vision sensor based on a parallel double cylindrical target. The method uses a freely moving parallel double cylindrical target, places the target at least once in a suitable position, extracts the center of the light stripe image, and fits the elliptical image of the light stripe in the image. It establishes equations between the two spatial ellipses and their corresponding images based on perspective projection transformation, and solves the light plane equation under the constraint that the minor axis of each ellipse equals the diameter of the cylinder. This method requires a high-precision three-dimensional calibration target with high processing cost, and it is difficult to obtain a high-quality calibration image due to factors such as reflection and occlusion.
From the above analysis, it can be seen that existing structural light parameter calibration methods require high-precision targets with feature points or high-precision geometrically constrained targets. However, given the current level of material processing technology, it is difficult to achieve micron-level positional accuracy or geometric constraint accuracy of the feature points while ensuring the image quality of the light stripe, and calculating the transformation matrix by an image homography method may introduce deviation. When a laser stripe is projected onto a metal or ceramic target, the extraction accuracy of the center of the stripe may deteriorate due to strong reflection or diffuse reflection.
It would be desirable to improve the devices and methods for structural light parameter calibration to overcome these and other deficiencies of conventional designs.
An object of the present invention is to provide a structural light parameter calibration device based on a front-coating plane mirror, which can realize fast and highly precise calibration of the light plane parameters of a structured light sensor, whose target is simple to manufacture and maintain, and which is easy to operate on site.
In order to achieve the object, the present invention according to one embodiment includes a structural light parameter calibration device and method based on a front-coating plane mirror, where the method includes: placing the front-coating plane mirror and a flat glass target in front of a camera; capturing, by the camera, a feature point image and a mirror image on the flat glass target simultaneously; establishing a virtual binocular measurement model; using a spatial distance of adjacent feature points of the flat glass target as a constraint, solving an optimal solution of a rotation matrix and a translation vector between a coordinate system of a plane mirror and a coordinate system of a real camera by using a nonlinear optimization method; obtaining an image vanishing point of candidate feature points by using a least square method; placing white printing paper in front of the front-coating plane mirror; projecting, by a laser, a beam onto the white printing paper; matching subpixels by using the image vanishing point and a linear interpolation method to obtain matching points; capturing, by the camera, a real light stripe image; calculating three-dimensional coordinates of the matching points; and fitting to solve a light plane equation by using a least square method.
In another embodiment, the present invention provides a structural light parameter calibration device, where the device includes: an area array camera, a line laser, a front-coating plane mirror, a flat glass target, and white printing paper, where: the area array camera is used to simultaneously capture a light stripe image projected by the laser on the white printing paper and a mirror image of the light stripe in the front-coating plane mirror, and to calculate a rotation matrix and a translation vector between a coordinate system of the plane mirror and a coordinate system of the camera; the line laser is used to project a laser beam onto the white printing paper to form the light stripe; feature points around the front surface of the front-coating plane mirror are used to calculate the rotation matrix and the translation vector between the coordinate system of the plane mirror and the coordinate system of the camera; coated film in the central area of the front surface of the front-coating plane mirror is used to mirror the laser light stripe projected on the white printing paper and the feature points on the flat glass target; the flat glass target is used to provide constraints for optimizing the rotation matrix and the translation vector between the coordinate system of the plane mirror and the coordinate system of the camera, to calculate a vanishing point of an image, and to match the two light stripe images by using the vanishing point; and the white printing paper is used to receive the beam projected by the laser and present the light stripe image. The white printing paper has a flat surface without visible creases and is non-reflective and lightproof.
In one aspect, the front-coating plane mirror, the flat glass target, and the white printing paper together constitute a target device for calibrating the structured light parameters of a light plane. The central portion of the front surface of the plane mirror is coated with an aluminum film or a silver film. The peripheral area of the front surface of the plane mirror has photolithographed feature points. Compared with a plane mirror whose back surface is coated with film, the front-coating plane mirror is not affected by the refraction caused by the thickness of the glass, which improves the calibration accuracy. The feature points around the front-coating plane mirror are in the form of a checkerboard, a dot matrix, or a grid.
In another aspect, a structural light parameter calibration method includes steps of: a. calibrating internal parameters of a camera in a line structured light vision sensor; placing a flat glass target and the front-coating plane mirror in a clear imaging area in front of the camera, adjusting a brightness of a light source, and capturing a real feature point image and a mirror feature image on the flat glass target; and correcting the real feature point image and the mirror feature image; b. establishing a coordinate system of a real camera, a coordinate system of a mirror camera, a coordinate system of the front-coating plane mirror, and a coordinate system of an inverse plane mirror; solving a rotation matrix and a translation vector between the coordinate system of the front-coating plane mirror and the coordinate system of the real camera; solving a rotation matrix and a translation vector between the coordinate system of the mirror camera and the coordinate system of the real camera, solving a transformation relationship between a left-handed image coordinate system and a right-handed image coordinate system, and establishing a virtual binocular measurement model; obtaining an optimal solution of the rotation matrix and the translation vector between the coordinate system of the front-coating plane mirror and the coordinate system of the real camera by using a nonlinear optimization method; c. calculating a distance between adjacent feature points in the horizontal direction and the vertical direction of a flat glass target; determining and selecting candidate feature points based on a threshold, and matching the candidate feature points; obtaining an image vanishing point by using a least square method; and d. placing white printing paper in a clear imaging area in front of the camera multiple times, extracting the center of a real light stripe image and the center of a mirror light stripe image respectively, matching subpixels of the center of the real light stripe and the center of the mirror light stripe according to the vanishing point, calculating three-dimensional coordinates of the center point of the light stripe using the virtual binocular measurement model, fitting a light plane by using a least square method, and solving light plane parameters.
In a further aspect, the step a. further includes steps of: placing a flat glass target capable of freely moving in the clear imaging area in front of the camera; forming an angle between the front-coating plane mirror and the flat glass target; ensuring that the feature points on the flat glass target and their mirrored feature points, as well as most of the feature points of the front-coating plane mirror, are located in the clear imaging area; and adjusting the brightness of the light sources of the front-coating plane mirror and the flat glass target separately, such that the feature points thereon are clearly imaged and the edge of each feature point occupies 1 to 3 pixels, and obtaining undistorted images by using the camera internal calibration parameters and the image correction method.
In another aspect, the step b. further includes steps of: (1) establishing a right-handed coordinate system for each of the coordinate system of the real camera, the coordinate system of the mirror camera, the coordinate system of the front-coating plane mirror, and the coordinate system of the inverse plane mirror, where the origin of the coordinate system of the real camera image is in the upper left corner of the image, and the origin of the coordinate system of the mirror camera image is in the upper right corner of the image; (2) after the feature points on the front-coating plane mirror are extracted, calculating a homography matrix by using a camera imaging model, and then solving the rotation matrix and the translation vector between the coordinate system of the front-coating plane mirror and the coordinate system of the real camera; (3) respectively solving the rotation matrix and the translation vector between the coordinate system of the front-coating plane mirror and the coordinate system of the inverse plane mirror, and the rotation matrix and the translation vector between the coordinate system of the inverse plane mirror and the coordinate system of the mirror camera, by using a mirror principle, and solving the rotation matrix and the translation vector between the coordinate system of the mirror camera and the coordinate system of the real camera by using the above-obtained rotation matrices and translation vectors; (4) converting the left-handed image coordinate system to the right-handed image coordinate system: because of mirroring, the image coordinates of the mirrored light stripe captured by the real camera lie in a left-handed coordinate system; while keeping the ordinate of the principal point of the image unchanged, establishing the image coordinates of the mirror stripe in the right-handed image coordinate system, and establishing the virtual binocular measurement model based on the rotation matrix and the translation vector between the coordinate system of the mirror camera and the coordinate system of the real camera solved in step (3), as well as the internal parameter matrix obtained by camera calibration; and (5) using the minimum distance between a measured value and a true value of adjacent feature points in the horizontal direction and the vertical direction on the flat glass target as a spatial distance constraint, obtaining the optimal solution of the rotation matrix and translation vector between the coordinate system of the front-coating plane mirror and the coordinate system of the real camera, and the optimal solution of the rotation matrix and translation vector between the coordinate system of the real camera and the coordinate system of the mirror camera, by using the Levenberg-Marquardt nonlinear optimization method.
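Once the virtual binocular model is in place, a matched pair of image points defines two rays, one from the real camera and one from the mirror camera, whose near-intersection gives the 3-D point. The midpoint triangulation below is a minimal sketch of that step under the assumption that the camera centers and ray directions have already been recovered from the projection models; it is an illustration, not the patent's exact formulation.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Triangulate a point from two rays c1 + s*d1 and c2 + t*d2
    (camera centers c1, c2 and ray directions d1, d2) by finding the
    closest points on each ray and returning their midpoint."""
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    # least-squares solution of c1 + s*d1 = c2 + t*d2 for the ray parameters
    A = np.stack([d1, -d2], axis=1)
    b = np.asarray(c2, float) - np.asarray(c1, float)
    (s, t), *_ = np.linalg.lstsq(A, b, rcond=None)
    p1 = np.asarray(c1, float) + s * d1
    p2 = np.asarray(c2, float) + t * d2
    return (p1 + p2) / 2.0
```

When the two rays intersect exactly, the midpoint coincides with the intersection; with noisy rays it returns the point halfway between the two closest ray points.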
In yet another aspect, the step c. further includes steps of: (1) extracting and matching the real feature points on the flat glass target and the mirror feature points; according to the optimal solution of the rotation matrix and the translation vector between the coordinate system of the real camera and the coordinate system of the mirror camera, substituting the image coordinates of the real feature points on the flat glass target and the image coordinates of the corresponding mirror feature points into the virtual binocular measurement model established in step b, to calculate the three-dimensional coordinates of the real feature points on the flat glass target in the coordinate system of the real camera; calculating the spacing of two adjacent feature points in the horizontal direction and in the vertical direction on the flat glass target respectively; and selecting feature points whose spacing is smaller than a set spacing threshold as real candidate feature points; and (2) connecting each real candidate feature point on the flat glass target and the mirror candidate feature point corresponding to it with a line, using this line as a matching polar line, taking the distance from the image vanishing point to the polar line as an objective function, and solving the intersection of all such lines by using a linear least square method, where the intersection is the image vanishing point.
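The vanishing-point computation in step (2) can be sketched as a small linear least-squares problem: every real/mirror pair defines a line, each line contributes one equation, and the stacked system yields the common intersection. The code below is an illustrative sketch under the assumption that matched candidate points are already available; the function name is not from the patent.

```python
import numpy as np

def vanishing_point(real_pts, mirror_pts):
    """Estimate the image vanishing point as the least-squares intersection
    of the lines joining each real candidate feature point to its mirror
    match.  A line with direction d contributes the equation n . p = n . p0,
    where n is the unit normal of the line and p0 a point on it."""
    A, b = [], []
    for p0, p1 in zip(real_pts, mirror_pts):
        d = np.asarray(p1, float) - np.asarray(p0, float)   # line direction
        n = np.array([-d[1], d[0]])                         # line normal
        n /= np.linalg.norm(n)
        A.append(n)
        b.append(n @ np.asarray(p0, float))
    vp, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return vp
```

With noise-free points the lines meet exactly and the solver returns that point; with noisy points it minimizes the summed squared distances to all lines.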
In a further aspect, the step d. also includes steps of: (1) securing the front-coating plane mirror in the clear imaging area in front of the camera; adjusting the brightness of the light source to make the feature points on the front-coating plane mirror clear, where each edge occupies 1 to 3 pixels; placing the white printing paper in the clear imaging area in front of the camera multiple times; projecting the laser beam onto the white printing paper and forming a clear and complete mirror image in front of the front-coating plane mirror; when the white printing paper is placed at each position, the camera simultaneously capturing a real light stripe image projected on the white printing paper and a mirror light stripe image mirrored by the front-coating plane mirror, and using the same as a calibration image; (2) extracting the center of the real light stripe and the center of the mirrored light stripe on the calibration image respectively by using a Steger method; connecting the center of the real light stripe and the image vanishing point obtained in step c with a line, and using this line as a polar line of the center of the current light stripe; locating the two points on the center of the mirrored light stripe closest to the polar line as two candidate points and connecting the two candidate points; and using the intersection of the polar line and the line connecting the two candidate points as a subpixel matching point of the center of the real light stripe; and (3) substituting the image coordinates of the center of the real light stripe and the image coordinates of the subpixel matching point into the virtual binocular measurement model established in step b, to calculate three-dimensional coordinates of the center point of the light stripe, and fitting a light plane equation aX+bY+cZ+d=0 by using the least square method, where a, b, c, d in the equation are the solved light plane parameters.
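Step (3) ends with fitting aX + bY + cZ + d = 0 to the triangulated stripe-center points. A standard way to sketch this least-squares fit is orthogonal regression via the singular value decomposition, shown below with an illustrative function name:

```python
import numpy as np

def fit_light_plane(points):
    """Fit the plane aX + bY + cZ + d = 0 to 3-D stripe-center points by
    orthogonal least squares.  The singular vector with the smallest singular
    value of the centered point cloud is the unit plane normal (a, b, c)."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                      # unit normal of the best-fit plane
    d = -np.dot(normal, centroid)        # offset so the centroid lies on the plane
    return normal[0], normal[1], normal[2], d
```

This total-least-squares formulation minimizes the perpendicular distances of the points to the plane, which is usually preferable to regressing Z on X and Y when the light plane may be steep.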
It will be understood that the present invention achieves the following technical objectives and advantages. The present invention provides a device based on a front-coating plane mirror for calibrating line structured light sensor parameters. The device uses an LED light source target for structured light calibration and provides feature points with micron-level positional accuracy, whereas it is difficult to achieve micron-level positional accuracy for metal or ceramic targets while ensuring that the stripes are extractable. Compared with a method in which a flat target obtains the positional relationship between the target coordinate system and the camera coordinate system through a homography relationship, the system and method of the present invention obtain a more precise positional relationship by using a stereo optimization method. The present invention uses image information instead of a camera polar constraint to achieve subpixel matching of the center of the stripe, and therefore improves the accuracy of subpixel matching. Compared with a metal target, white printing paper provides better image quality of the light stripe and better extraction accuracy of the center of the light stripe; in addition, more calibration points can be provided to ensure the stability of the calibration results, and the front-coating plane mirror can effectively eliminate the effects of glass refraction. By the above hardware improvement and algorithm innovation, the present invention realizes high-precision and high-stability line structured light sensor calibration, and the front-coating plane mirror is easy to process and operate. The device and the method provided by the present invention can achieve fast, real-time calibration of structural light parameters.
Various additional features and advantages of the invention will become more apparent to those of ordinary skill in the art upon review of the following detailed description of one or more illustrative embodiments taken in conjunction with the accompanying drawings. The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate one or more embodiments of the invention and, together with the general description given above and the detailed description given below, explain the one or more embodiments of the invention.
Embodiments of the invention are illustrated below with reference to the accompanying drawings. The preferred embodiments described here are used only to describe and explain the present disclosure, not to limit it. Embodiments of the invention are configured to: separate a feature point image from a light stripe image based on a front-coating plane mirror and white printing paper; simultaneously capture the feature point image and the light stripe image to obtain high-quality images, and to provide feature point coordinates with high positional accuracy as well as light stripe center coordinates with high extraction accuracy; use a symmetric virtual binocular measurement system consisting of a single camera and a mirror to calibrate the light plane parameters of the structural light; match the center of the light stripe based on an image vanishing point; and use the front-coating plane mirror to eliminate the effects of glass refraction. The present invention improves the calibration accuracy of the structural light parameters in a plurality of ways.
Based on the structured light parameter calibration device, in combination with a specific implementation manner, the present invention will be further described in detail by taking a video camera and a line laser as an example in one embodiment.
As shown in the accompanying drawings, the method includes the following steps.
Step 11: Calibrate internal parameters of a camera in a line structured light vision sensor.
Calibrating the camera in the line structured light vision sensor is equivalent to solving the internal parameters of the camera, which is:

[u, v, 1]^{T} = A[x, y, 1]^{T}, A = [α_{x}, γ, u_{0}; 0, α_{y}, v_{0}; 0, 0, 1], û = u + (u − u_{0})(k_{1}r^{2} + k_{2}r^{4}), v̂ = v + (v − v_{0})(k_{1}r^{2} + k_{2}r^{4})  (1)

In the formula (1), α_{x}=f/d_{x}, α_{y}=f/d_{y}; α_{x} is the scale factor of the u axis, α_{y} is the scale factor of the v axis, and the scale factor is also called the effective focal length. d_{x} is the pixel spacing in the horizontal direction, and d_{y} is the pixel spacing in the vertical direction; u_{0} and v_{0} represent the optical center, also called the principal point coordinates; γ is a non-perpendicularity factor for the u axis and the v axis; (x, y) is the ideal normalized image coordinate and r^{2} = x^{2} + y^{2}; (û, v̂) is the real pixel coordinate, (u, v) is the ideal pixel coordinate; and k_{1}, k_{2} are radial distortion parameters. The specific solution method of each parameter is described in detail in Zhang Zhengyou's article "A flexible new technique for camera calibration [R]. Microsoft Corporation, MSR-TR-98-71, 1998."
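As an illustrative numerical check of this camera model, the sketch below projects a 3-D point in the camera frame to real (distorted) pixel coordinates. The intrinsic values and distortion coefficients are arbitrary example numbers, not calibrated parameters:

```python
import numpy as np

def project_point(K, point_cam, k1, k2):
    """Project a 3-D point (camera frame) to real pixel coordinates using the
    intrinsic matrix K and two-term radial distortion (k1, k2)."""
    x = point_cam[0] / point_cam[2]      # ideal normalized image coordinates
    y = point_cam[1] / point_cam[2]
    r2 = x * x + y * y
    u, v, _ = K @ np.array([x, y, 1.0])  # ideal pixel coordinates
    u0, v0 = K[0, 2], K[1, 2]
    radial = k1 * r2 + k2 * r2 * r2      # radial distortion term
    return u + (u - u0) * radial, v + (v - v0) * radial

# example intrinsic matrix: alpha_x = alpha_y = 1200, gamma = 0, principal point (640, 512)
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 512.0],
              [0.0, 0.0, 1.0]])
```

With zero distortion, the point (0.1, 0.05, 1.0) projects to the ideal pixel (760, 572); nonzero k1, k2 shift it radially relative to the principal point.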
Step 12: Place in front of the camera a flat glass target which can move freely in the clear imaging area, with a preset angle between the front-coating plane mirror and the flat glass target. The flat glass target is mirrored, to make sure that the feature points on the flat glass target and the feature points on the front-coating plane mirror are simultaneously imaged in the camera. The brightness of the light sources of the front-coating plane mirror and the flat glass target is adjusted separately. The camera captures images, and the camera calibration parameters obtained in step 11 are used to correct the images to obtain undistorted images; the process is illustrated in the accompanying drawings.
Step 13: Establish a coordinate system of a real camera, a coordinate system of a mirror camera, a coordinate system of a plane mirror, and a coordinate system of an inverse plane mirror.
Establish the coordinate system of the real camera O_{C}X_{C}Y_{C}Z_{C}, where O_{C }is the origin of the coordinate system of the real camera. X_{C}Y_{C}Z_{C }represents the coordinate axes in three directions of the coordinate system of the real camera. Establish the coordinate system of the mirror camera O_{V}X_{V}Y_{V}Z_{V}. In order to solve the conversion relationship between coordinate systems easily, the coordinate system of the mirror camera O_{V}X_{V}Y_{V}Z_{V }is established as a righthand coordinate system, where O_{V }is the origin of the coordinate system of the mirror camera. X_{V}Y_{V}Z_{V }represents the coordinate axes in three directions of the coordinate system of the mirror camera. Establish the coordinate system of the plane mirror O_{m}X_{m}Y_{m}Z_{m}, where O_{m }is the origin of the coordinate system of the plane mirror, X_{m}Y_{m}Z_{m }represents the coordinate axes in three directions of the coordinate system of the plane mirror. Establish the coordinate system of the inverse plane mirror O_{mm}X_{mm}Y_{mm}Z_{mm}, where O_{mm }is the origin of the coordinate system of the inverse plane mirror. X_{mm}Y_{mm}Z_{mm }represents the coordinate axes in three directions of the coordinate system of the inverse plane mirror.
Step 14: Establish the rotation matrix and translation vector describing the positional relation between the coordinate system of the real camera and the coordinate system of the mirror camera, based on the mirror principle and perspective projection transformation.
Step 141: Solve an initial value of the rotation matrix and an initial value of the translation vector between the coordinate system of the plane mirror O_{m}X_{m}Y_{m}Z_{m} and the coordinate system of the real camera O_{C}X_{C}Y_{C}Z_{C}.
The specific method is as follows: R_{C}^{m} is the rotation matrix that transforms from the coordinate system of the plane mirror O_{m}X_{m}Y_{m}Z_{m} to the coordinate system of the real camera O_{C}X_{C}Y_{C}Z_{C}, and t_{C}^{m} is the corresponding translation vector. As in monocular camera calibration, the method for solving the rotation matrix and translation vector is introduced in Zhang Zhengyou's article "A flexible new technique for camera calibration, IEEE Trans. Pattern Anal. Mach. Intell., 22(11), pp. 1330–1334 (2000)." The method for extracting the checkerboard corner points is described in the article "A New Sub-Pixel Detector for X-Corners in Camera Calibration Targets [C], WSCG '2005 Short Papers Proceedings, 13th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, 2005, Plzen, Czech Republic."
Step 142: Solve the positional relationship between the coordinate system of the real camera and the coordinate system of the mirror camera.
The specific method is as follows: The method of establishing the positional relationship between the coordinate system of the real camera and the coordinate system of the mirror camera is described in the article "Zhenying Xu, Yun Wang, Chuan Yang, Multi-camera global calibration for large-scale measurement based on plane mirror, Optik, 126(2015), 4149–4154," and the article "Guangjun Zhang, Xiuzhi Li, A calibration method for foot and eye of mobile robot, Robot, 2007, 29(3)." The following formulas can be obtained from the articles:
R_{V}^{C}=R_{V}^{m}R_{m}^{C}=R_{V}^{m}(R_{C}^{m})^{−1}, t_{V}^{C}=t_{V}^{m}−R_{V}^{m}(R_{C}^{m})^{−1}t_{C}^{m} (2)
R_{C}^{V}=(R_{V}^{C})^{−1}, t_{C}^{V}=−(R_{V}^{C})^{−1}·t_{V}^{C} (3)
Where, R_{V}^{C} is the rotation matrix which transforms from the coordinate system of the real camera O_{C}X_{C}Y_{C}Z_{C} to the coordinate system of the mirror camera O_{V}X_{V}Y_{V}Z_{V}, and t_{V}^{C} is the corresponding translation vector. R_{C}^{V} is the rotation matrix which transforms from the coordinate system of the mirror camera O_{V}X_{V}Y_{V}Z_{V} to the coordinate system of the real camera O_{C}X_{C}Y_{C}Z_{C}, and t_{C}^{V} is the corresponding translation vector.
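Equations (2) and (3) are a direct composition of rigid transformations and can be sketched in Python with NumPy (the function name and variable names are illustrative, not part of the patent):

```python
import numpy as np

def mirror_camera_pose(R_Vm, t_Vm, R_Cm, t_Cm):
    """Compose the real-to-mirror camera pose following equations (2)-(3).

    R_Vm, t_Vm : pose of the plane mirror in the mirror-camera frame
    R_Cm, t_Cm : pose of the plane mirror in the real-camera frame
    """
    R_mC = np.linalg.inv(R_Cm)            # R_m^C = (R_C^m)^-1
    R_VC = R_Vm @ R_mC                    # eq. (2): R_V^C = R_V^m (R_C^m)^-1
    t_VC = t_Vm - R_Vm @ R_mC @ t_Cm      # eq. (2): t_V^C = t_V^m - R_V^m (R_C^m)^-1 t_C^m
    R_CV = np.linalg.inv(R_VC)            # eq. (3): R_C^V = (R_V^C)^-1
    t_CV = -R_CV @ t_VC                   # eq. (3): t_C^V = -(R_V^C)^-1 t_V^C
    return R_VC, t_VC, R_CV, t_CV
```

Since equation (3) is the inverse pose of equation (2), transforming a point into the mirror-camera frame and back recovers the original point.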
Step 15: Convert between the left-hand and right-hand image coordinate systems, and establish a virtual binocular measurement model.
Step 151: Convert the left-hand image coordinate system to the right-hand image coordinate system.
The specific conversion method is as follows: (u, v) is the imaging point of the center point of the real light stripe in the image coordinate system of the real camera, and (u′, v′) is the imaging point of the center point of the mirror light stripe in the image coordinate system of the real camera; in principle, the mirror stripe image is equivalent to the real stripe image shot by a mirror camera. Due to the principle of mirror symmetry, the image coordinates at this time are the same as the coordinates imaged by the mirror camera in the left-hand coordinate system. To facilitate the measurement of the binocular system, the left-hand coordinate system of the mirror camera is converted into a right-hand coordinate system. The transformation formula of the image coordinate system is:
Then we have:
Herein, K_{C} is the internal parameter matrix in the right-hand coordinate system established by the real camera, and K_{V} is the internal parameter matrix in the left-hand coordinate system established by the mirror camera. (u_{c},v_{c}) is the image coordinates obtained by the real camera, and (u_{v},v_{v}) is the image coordinates in the right-hand coordinate system captured by the mirror camera. s is the distortion factor, d_{x} is the pixel spacing in the horizontal direction, d_{y} is the pixel spacing in the vertical direction, and u_{0} and v_{0} represent the optical center, also called the principal point coordinates.
Step 152: Establish a virtual binocular measurement model.
The specific method is as follows: The coordinate system of the real camera is taken as the world coordinate system. According to the binocular measurement principle, the following formula can be obtained:
Here (u,v,1)^{T} is the homogeneous coordinate of the center of the real light stripe, (u′,v′,1)^{T} is the homogeneous coordinate of the center of the mirror light stripe, and z_{1}, z_{2} are scale factors.
H is the projection matrix of the real camera, M is the projection matrix of the mirror camera, and [X,Y,Z,1]^{T} is the homogeneous coordinate of the three-dimensional coordinates of the center of the real light stripe.
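As an illustration of the virtual binocular model, the linear (DLT) triangulation below recovers [X,Y,Z]^{T} from the two stripe-center image points and the 3×4 projection matrices H and M; this is a standard reconstruction technique assumed for the sketch, not code quoted from the patent:

```python
import numpy as np

def triangulate(H, M, uv_real, uv_mirror):
    """Linear triangulation for the virtual binocular model:
    z1*(u,v,1)^T = H*(X,Y,Z,1)^T and z2*(u',v',1)^T = M*(X,Y,Z,1)^T.
    H, M : 3x4 projection matrices of the real and mirror cameras."""
    u, v = uv_real
    up, vp = uv_mirror
    A = np.vstack([
        u * H[2] - H[0],    # eliminate the scale factor z1
        v * H[2] - H[1],
        up * M[2] - M[0],   # eliminate the scale factor z2
        vp * M[2] - M[1],
    ])
    # homogeneous least squares: X is the right singular vector
    # associated with the smallest singular value of A
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```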
Step 16: After nonlinear optimization with the Levenberg–Marquardt method, obtain the optimal solution of the rotation matrix and the optimal solution of the translation vector between the coordinate system of the plane mirror O_{m}X_{m}Y_{m}Z_{m} and the coordinate system of the real camera O_{C}X_{C}Y_{C}Z_{C}. The optimization objective function is as follows:
Here ε=(R̃_{C}^{m},t̃_{C}^{m}); (R_{C}^{m},t_{C}^{m}) is the transformation from the plane mirror coordinate system to the camera coordinate system, and (R̃_{C}^{m},t̃_{C}^{m}) is the optimal solution of (R_{C}^{m},t_{C}^{m}). D̂_{C}(R̃_{C}^{m},t̃_{C}^{m}) is the spatial distance between adjacent feature points measured on the flat glass target, D_{C}, D_{V} are the true distances between adjacent feature points, and D̃_{V}(R_{C}^{m},t_{C}^{m}) is the spatial distance between adjacent mirrored feature points measured on the flat glass target. i (i=1, 2, . . . , m−1) is the number of horizontal intervals of the feature points, and j (j=1, 2, . . . , n−1) is the number of vertical intervals of the feature points.
Step 17: Find the image vanishing point based on the candidate feature points by using a least squares method.
The specific method is as follows: m̃_{C}^{i}=(u_{C}^{i},v_{C}^{i},1)^{T} is the homogeneous coordinate of a feature point on the flat glass target, and m̃_{V}^{i}=(u_{V}^{i},v_{V}^{i},1)^{T} is the homogeneous coordinate of the corresponding mirrored feature point. Calculate the spacing between two adjacent feature points in the horizontal direction and the vertical direction separately by using the optimized parameters (R_{V}^{C},t_{V}^{C}). If the spacing between the two feature points is less than the set threshold (here 0.008 mm), take the two feature points as candidate points. Determine the connection line between the real image point and the mirror image point by the following formula:
l_{i}=m̃_{C}^{i}×m̃_{V}^{i} (9)
The intersection of two connection lines is l_{i}×l_{j} (i≠j), where i and j represent different lines. Due to the error in feature point extraction, the connection lines do not all pass through exactly one point. Solve the intersection of all connection lines as the image vanishing point by linear least squares. The objective function is:
Herein, v is the homogeneous coordinate of the image vanishing point.
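The vanishing-point estimate of Step 17 can be sketched as a homogeneous least-squares problem: stack the connection lines l_{i} of equation (9) and take the singular vector of the smallest singular value (an assumed implementation; the function name is illustrative):

```python
import numpy as np

def vanishing_point(pts_real, pts_mirror):
    """Least-squares intersection of the lines joining each real feature
    point to its mirrored feature point.
    pts_real, pts_mirror : (N, 2) arrays of matched pixel coordinates."""
    m_c = np.hstack([pts_real, np.ones((len(pts_real), 1))])    # homogeneous
    m_v = np.hstack([pts_mirror, np.ones((len(pts_mirror), 1))])
    lines = np.cross(m_c, m_v)          # eq. (9): l_i = m_C^i x m_V^i
    # every line should pass through v: minimize sum (l_i^T v)^2, ||v|| = 1,
    # solved by the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(lines)
    v = Vt[-1]
    return v[:2] / v[2]                 # inhomogeneous pixel coordinates
```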
The line structured light vision sensor consists of a camera and a laser.
Step 18: Extract the center point of the light stripe image and match the sub-pixels of the center point of the light stripe.
Step 181: The method for obtaining the center point of the light stripe image is as follows:
Extract the image coordinates of the centers of all the light stripes in the captured light stripe image, then obtain their undistorted image coordinates by the image distortion correction method. The center of the light stripe can be extracted by the method of "An unbiased detector of curvilinear structures" described by Steger. The correction method is described in "Machine Vision, Guangjun Zhang, Science Press." The image coordinates of the center of the light stripe mentioned later are the distortion-free image coordinates after distortion correction.
Step 182: The sub-pixel matching method for the center point of the light stripe is as follows:
Pick a point (u_{C}^{i},v_{C}^{i}) on the real light stripe image on the white printing paper. Connect the point (u_{C}^{i},v_{C}^{i}) and the image vanishing point to form the polar line l_{ei} of the center of the real light stripe. Calculate the distance from each point on the mirror light stripe to the polar line, and connect the nearest two points (u_{V}^{i},v_{V}^{i}) and (u_{V}^{i+1},v_{V}^{i+1}) to get the line l_{i}=(u_{V}^{i},v_{V}^{i},1)^{T}×(u_{V}^{i+1},v_{V}^{i+1},1)^{T}.
The coordinates of the corresponding subpixel matching points are:
P_{i}=[l_{ei}]_{×}l_{i} (12)
Herein, P_{i} is the matching point at the center of the mirror light stripe corresponding to the center of the real light stripe. l_{ei}, l_{i} are the vector representations of the lines, [l_{ei}]_{×} is the antisymmetric matrix of l_{ei}, and (u_{V}^{i},v_{V}^{i}) and (u_{V}^{i+1},v_{V}^{i+1}) are the image coordinates of any two adjacent points on the center of the mirror light stripe image.
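The matching of equation (12) reduces to intersecting two image lines by cross products, as in this sketch (function and variable names are illustrative):

```python
import numpy as np

def match_point(p_real, vp, q1, q2):
    """Sub-pixel match of a real stripe-center point onto the mirror stripe.
    p_real : stripe-center point (u_C, v_C) on the real stripe
    vp     : image vanishing point (u, v)
    q1, q2 : the two nearest adjacent mirror stripe-center points
    """
    hom = lambda p: np.array([p[0], p[1], 1.0])
    l_ei = np.cross(hom(p_real), hom(vp))  # polar line through p_real and the vanishing point
    l_i = np.cross(hom(q1), hom(q2))       # line through the two adjacent mirror points
    P = np.cross(l_ei, l_i)                # eq. (12): [l_ei]_x l_i, intersection of the lines
    return P[:2] / P[2]
```

The cross product of two homogeneous lines yields their intersection point, so `P` is the matching point P_{i} up to the homogeneous scale removed in the last line.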
Step 19: Rebuild the center points of the real light stripe and the center points of the mirror light stripe in three dimensions by the virtual binocular measurement model, and fit the light plane by the least squares method to obtain the light plane parameters.
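The light-plane fit of Step 19 can be sketched as a total-least-squares plane fit via SVD, normalized so that a=1 as in the plane equation reported later in the text (an assumed implementation, not the patent's code):

```python
import numpy as np

def fit_light_plane(points):
    """Least-squares fit of the light plane aX + bY + cZ + d = 0 to the
    reconstructed stripe-center points; `points` is an (N, 3) array.
    Assumes the plane is not parallel to the X axis (a != 0)."""
    centroid = points.mean(axis=0)
    # the plane normal is the right singular vector of the smallest
    # singular value of the centered point cloud
    _, _, Vt = np.linalg.svd(points - centroid)
    n = Vt[-1]
    d = -n @ centroid
    a, b, c = n
    # normalize so that the X coefficient is 1
    return np.array([1.0, b / a, c / a, d / a])
```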
Step 20: Evaluate the calibration accuracy of the structural light parameter calibration device and method provided by the present invention.
The evaluation method for the calibration accuracy of the light plane parameters of structured light is as follows:
Step 201: First, randomly select two points that are not adjacent on the center of the real light stripe image. Calculate the corresponding matching points on the mirror light stripe image according to the methods mentioned in Step 18 and Step 19. As shown in
Step 202: Secondly, connect point p_{C}^{i}(u_{C}^{i},v_{C}^{i}) and point p_{V}^{j}(u_{V}^{j},v_{V}^{j}) to obtain a straight line l_{A}; connect point p_{C}^{j}(u_{C}^{j},v_{C}^{j}) and point p_{V}^{i}(u_{V}^{i},v_{V}^{i}) to obtain a straight line l′_{A}. Line l_{A} and line l′_{A} intersect at point p_{t}(u_{t}^{i},v_{t}^{i}), named the test point, which must be located on the front-coating plane mirror under ideal conditions. Find the three-dimensional coordinates of the space point corresponding to the test point p_{t}(u_{t}^{i},v_{t}^{i}) as the true value. The image coordinates of the test point p_{t}(u_{t}^{i},v_{t}^{i}) are:
The three-dimensional coordinates of the test point are:
(X_{t}^{i},Y_{t}^{i},Z_{t}^{i})^{T}=R_{C}^{m}H_{horm}^{−1}[u_{t}^{i},v_{t}^{i},1]^{T}+t_{C}^{m} (14)
Herein, H_{horm}^{−1} is the inverse of the homography matrix corresponding to the front-coating plane mirror.
Step 203: Solve the three-dimensional coordinates P_{C}^{i} of the center point of the real light stripe in the camera coordinate system, according to the internal parameters of the camera and the light plane equation obtained by calibration. The three-dimensional coordinates of the matching point on the center of the mirror light stripe in the camera coordinate system are:
P_{V}^{i}=R_{V}^{C}P_{C}^{i}+t_{V}^{C} (15)
The measured value of the test point is:
(X_{m}^{i},Y_{m}^{i},Z_{m}^{i})^{T}=(P_{C}^{i}+sP_{V}^{j})/(1+s) (16)
Here, s=∥P_{C}^{i}−P_{V}^{i}∥/∥P_{C}^{j}−P_{V}^{j}∥, and P_{C}^{i}=(X_{C}^{i},Y_{C}^{i},Z_{C}^{i})^{T} and P_{V}^{i}=(X_{V}^{i},Y_{V}^{i},Z_{V}^{i})^{T} are the three-dimensional coordinates of matched points on the real light stripe and the mirror light stripe in the camera coordinate system.
Step 204: The spatial distance between the ideal value and the measured value is:
Δd_{i}=√((X_{m}^{i}−X_{t}^{i})^{2}+(Y_{m}^{i}−Y_{t}^{i})^{2}+(Z_{m}^{i}−Z_{t}^{i})^{2}) (17)
Calculate the RMS error of the three-dimensional coordinates of the test points from pairs of non-adjacent points on the centers of multiple real light stripe images. Evaluate the calibration accuracy of the light plane of structured light by the RMS error.
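The accuracy metric of equation (17) and Step 204 amounts to the RMS of point-to-point spatial distances, for example:

```python
import numpy as np

def rms_error(measured, ideal):
    """RMS of the spatial distances (eq. 17) between measured and ideal
    test points; `measured` and `ideal` are (N, 3) arrays."""
    d = np.linalg.norm(measured - ideal, axis=1)  # per-point delta-d_i
    return np.sqrt(np.mean(d ** 2))
```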
The camera is from Allied Vision Technologies, with a 23 mm focal length Schneider optical lens; the image resolution is 1600×1200 pixels. The field of view of the line structured light vision sensor is approximately 200 mm×160 mm, and the measurement distance is 650 mm. The laser is a single-line red laser with a wavelength of 635 nm and a power of 50 mW. The positional accuracy of the feature points in the front-coating plane mirror is 2 μm, and the positional accuracy of the feature points in the flat glass target is 2 μm, as shown in
First, according to the method described in step 11, the internal parameter calibration result of the camera is: α_{x}=5174.34; α_{y}=5175.03; γ=0; u_{0}=815.19; v_{0}=597.72; k_{1}=−0.19; k_{2}=0.58
Where, α_{x} is the scale factor of the u-axis and α_{y} is the scale factor of the v-axis; the scale factor is also called the effective focal length. γ is the non-perpendicularity factor of the u-axis and the v-axis, u_{0} and v_{0} represent the optical center, also called the principal point coordinates, and k_{1} and k_{2} are the radial distortion parameters of the lens.
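For illustration, the intrinsic parameters above assemble into the usual 3×3 camera matrix, which maps a camera-frame point to pixel coordinates (the sample point below is hypothetical, chosen near the 650 mm working distance):

```python
import numpy as np

# intrinsic matrix assembled from the calibration result in the text
# (gamma is the non-perpendicularity factor, zero here)
alpha_x, alpha_y, gamma = 5174.34, 5175.03, 0.0
u0, v0 = 815.19, 597.72
K = np.array([[alpha_x, gamma,   u0],
              [0.0,     alpha_y, v0],
              [0.0,     0.0,     1.0]])

# project a camera-frame point (X, Y, Z) to pixel coordinates
P = np.array([0.05, -0.02, 0.65])   # metres; hypothetical sample point
uvw = K @ P
u, v = uvw[:2] / uvw[2]
```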
According to the method described in step 12, the camera takes images of the frontcoating plane mirror and images of the flat glass target, and the images are shown in
According to the method described in step 13 and step 14, the obtained rotation matrix and translation vector between the coordinate system of the plane mirror and the camera coordinate system are respectively
The rotation matrix and translation vector between the coordinate system of the real camera and the coordinate system of the mirror camera are
According to the method described in step 15 and step 16, the obtained optimal solution of the rotation matrix and the optimal solution of the translation vector between the coordinate system of the plane mirror and the camera coordinate system is
The optimal solution of the rotation matrix and the optimal solution of the translation vector between the coordinate system of the real camera and the coordinate system of the mirror camera are
According to the method described in step 17, obtain the image vanishing point, as shown in
According to the method described in step 18, extract the center point of the light stripe and match the subpixels of the center point of the light stripe, as shown in
According to the method described in step 19, the equation of the light plane is X−0.1063Y+0.3541Z−238.7075=0
Coefficients of the equation of the light plane are a=1.0000, b=−0.1063, c=0.3541, d=−238.7075.
According to the method described in step 20, the calibration accuracy of the light plane of structured light is evaluated to be 0.027 mm.
In summary, the present invention provides feature points of micron-level positional accuracy and a larger number of calibration points while ensuring the quality of the light stripe. Compared with conventional structured light parameter calibration methods, the device and method provided by the present invention have higher calibration accuracy and more stable calibration results.
The embodiments described above are only descriptions of preferred embodiments of the present invention, and are not intended to limit the scope of the present invention. Various variations and modifications can be made to the technical solution of the present invention by those of ordinary skill in the art, without departing from the design and spirit of the present invention. These variations and modifications should all fall within the claimed scope defined by the claims of the present invention.