
Method and apparatus for dynamic measuring three-dimensional parameters of tire with laser vision

  • US 7,177,740 B1
  • Filed: 11/18/2005
  • Issued: 02/13/2007
  • Est. Priority Date: 11/10/2005
  • Status: Active Grant
First Claim

1. A method for dynamic measuring 3D parameters of a tire, the method comprising the steps of:

  • calibrating the model parameters of the measurement system, and performing practical measurement of a tire;

    wherein calibrating the model parameters of the measurement system comprises the steps of:

    choosing a tangent plane on a transmitting roller as a datum plane and placing a measured tire on the datum plane;

    setting up two laser vision sensors, wherein respective projective light planes of the two sensors are each perpendicular to the datum plane and are perpendicular to each other;

    an optical axis of the laser projector of the first sensor being near a geometric center of the measured tire, the light plane hitting the surface of the measured tire to form two feature-contours on the measured tire surface, an angle between an optical axis of the laser projector of the second sensor and the datum plane being from 30° to 60°, a first feature-contour being formed on the measured tire surface;

    removing the measured tire from the datum plane;

    providing a planar target with pre-established feature points for camera calibration, establishing the 3D local world coordinate frame on the target plane as follows:

    defining an upper-left corner as the origin;

    defining an x-axis as rightward;

    defining a y-axis as downward, and defining the z-axis as perpendicular to the target plane;

    determining local world coordinates of the calibration feature points and saving them in a computer;

    moving the planar target freely to at least three mutually non-parallel positions in a field of view of the first sensor, the sensor taking one image each time and saving each image to the computer;

    extracting the image coordinates of the feature points and saving the detected image coordinates and the corresponding local world coordinates of the feature points to the computer;

    calibrating intrinsic parameters of the camera of the first sensor with the image coordinates and the corresponding local world coordinates of the feature points;

    calibrating the intrinsic parameters of the camera of the second sensor with the same procedures as the first sensor;

    providing a square planar target, and choosing four vertices and the center of the square as the principal feature points;

    placing the square planar target on the datum plane so that two sensors can simultaneously observe the square on the target;

    establishing a global world coordinate frame on the target plane as follows:

    defining the center of the square as the origin;

    defining the x-axis and y-axis as parallel to two sides of the square respectively, and defining the z-axis as upward and perpendicular to the target plane;

    keeping the target unmoved, taking a first image with the first sensor and saving it to the computer;

    according to the distortion model of the camera, correcting the distortion of the taken image to obtain a distortion-free image;

    extracting the image coordinates of five principal feature points of the square planar target in the distortion-free taken image;

    obtaining more secondary feature points with known image coordinates and the corresponding global world coordinates on the target by the “invariance of cross-ratio” principle;

    calculating the transformation from the first camera coordinate frame related to the first sensor to the global world coordinate frame with those known feature points on the square planar target;

    keeping the target unmoved, calculating the transformation from the second camera coordinate frame related to the second sensor to the global world coordinate frame with the same procedures as the first sensor;

    establishing multiple local world coordinate frames on the target plane respectively with the same method as the global world coordinate frame when the square planar target is moved to a different position;

    freely moving the square planar target to at least two positions in the field of view of the first sensor, the first sensor taking one image each time and saving it to the computer, the square on the target plane and a feature light stripe formed by the intersection line between the projective light plane and the target plane being completely contained in the taken images;

    according to the distortion model of the first sensor, correcting the distortion of the taken images yielding distortion-free images;

    after extracting the image coordinates of five principal feature points of the square planar target in the distortion-free taken images, obtaining more secondary feature points on the target with known image coordinates and the corresponding local world coordinates by the “invariance of cross-ratio” principle;

    calculating the transformation from the local world coordinate frame to the first camera coordinate frame related to the first sensor with those known feature points on the square planar target;

    defining as control points the intersection points between the feature light stripe and the diagonals of the square;

    after extracting the image coordinates of the control points lying on the light plane in the distortion-free taken images, calculating the local world coordinates of the control points by the “invariance of cross-ratio” principle;

    according to the transformation from the local world coordinate frame to the first camera coordinate frame and the transformation from the camera coordinate frame to the global world coordinate frame, obtaining the camera coordinates and global coordinates of the control points from the image coordinates;

    obtaining by a nonlinear least-squares method the light plane equation of the first sensor in the global world coordinate frame by fitting the known control points lying on the light plane;

    obtaining the equation of the second light plane of the second sensor in the global world coordinate frame with the same procedures as the first light plane with respect to the first sensor;

    obtaining the equation of the datum line of each sensor in the global world coordinate frame by calculating the intersection line between the light plane and datum plane respectively; and

    saving the model parameters of the measurement system including the intrinsic parameters of the camera, the equation of the light plane, the equation of the datum line and the transformation from the camera coordinate frame to the global world coordinate frame to the computer.
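
The calibration recited above is built from standard machine-vision operations; the sketches that follow illustrate the main ones under stated assumptions. First, the intrinsic-parameter calibration: the planar target with pre-established feature points is imaged at three or more non-parallel positions, and the camera intrinsics are solved from the image/local-world correspondences. A minimal sketch, assuming a chessboard-style target and OpenCV; the target layout, the pattern_size and square_mm parameters, and the library are illustrative choices, not taken from the patent.

```python
# Sketch of the intrinsic-calibration step: image a planar target with known
# feature points at several non-parallel poses and recover the camera
# intrinsics and distortion coefficients from the correspondences.
# Assumes a chessboard-style target and OpenCV (illustrative choices).
import numpy as np
import cv2

def calibrate_intrinsics(images, pattern_size=(9, 6), square_mm=20.0):
    # Local world coordinates of the feature points: z = 0 on the target
    # plane, x rightward, y downward, as in the claim.
    grid = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    grid[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
    grid *= square_mm

    object_pts, image_pts, image_size = [], [], None
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        image_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if not found:
            continue
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        object_pts.append(grid)
        image_pts.append(corners)

    # The claim requires at least three non-parallel target positions.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_pts, image_pts, image_size, None, None)
    return K, dist, rms
```

With the recovered K and dist, the distortion correction the claim applies before feature extraction corresponds to a call such as cv2.undistort(image, K, dist).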
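
The claim repeatedly invokes the “invariance of cross-ratio” principle to obtain the world position of a point on a known line (a secondary feature point, or a control point on a diagonal of the square) from its image position and three collinear reference points. The following is a generic sketch of that identity; the 1-D parameterisation along the line is an assumption, not the patent's notation.

```python
import numpy as np

def _signed_params(pts, a, b):
    """Signed 1-D parameters of image points along the line through a and b."""
    d = (b - a) / np.linalg.norm(b - a)
    return (pts - a) @ d

def _cross_ratio(ta, tb, tc, td):
    """Cross-ratio (A, B; C, D) of four collinear points given as parameters."""
    return ((tc - ta) * (td - tb)) / ((td - ta) * (tc - tb))

def world_position_by_cross_ratio(img_a, img_b, img_c, img_d, wa, wb, wc):
    """Recover the 1-D world position wd of point D along a line, given the
    image positions of A, B, C, D and the known world positions wa, wb, wc
    of A, B, C, using invariance of the cross-ratio under projection."""
    pts = np.array([img_a, img_b, img_c, img_d], float)
    ta, tb, tc, td = _signed_params(pts, pts[0], pts[1])
    k = _cross_ratio(ta, tb, tc, td)          # cross-ratio measured in the image
    # Solve (wc - wa)(wd - wb) / ((wd - wa)(wc - wb)) = k for wd.
    p, q = (wc - wa), (wc - wb)
    return (p * wb - k * q * wa) / (p - k * q)
```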
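
The transformations between each camera coordinate frame and the global (or a local) world coordinate frame are then computed from feature points on the square target whose image and world coordinates are both known; this is plane-based pose estimation. A sketch using OpenCV's solvePnP as an assumed stand-in for whatever solver the patent actually employs.

```python
import numpy as np
import cv2

def camera_to_world_transform(world_pts, image_pts, K, dist):
    """Estimate the rigid transformation between the camera frame and the
    world frame defined on the square target, from known feature points.
    Returns (R_cw, t_cw) mapping camera coordinates into world coordinates."""
    world_pts = np.asarray(world_pts, np.float64)   # (N, 3), z = 0 on the target
    image_pts = np.asarray(image_pts, np.float64)   # (N, 2) pixel coordinates
                                                    # (pass dist=None if already undistorted)
    ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, dist)
    R_wc, _ = cv2.Rodrigues(rvec)    # world -> camera rotation
    R_cw = R_wc.T                    # camera -> world rotation
    t_cw = -R_cw @ tvec              # camera -> world translation
    return R_cw, t_cw
```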
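
Finally, each light-plane equation is fitted to the control points by nonlinear least squares, and the datum line is obtained as the intersection of the light plane with the datum plane. The sketch below assumes the datum plane is z = 0 in the global world frame (consistent with the claim's z-axis being perpendicular to the target plane lying on the datum plane) and uses SciPy's least_squares as an illustrative solver.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_light_plane(control_pts):
    """Fit a plane n.x + d = 0 (unit normal) to 3-D control points on the
    structured-light plane: SVD gives a linear initial estimate, which is
    refined by nonlinear least squares on the orthogonal distances."""
    pts = np.asarray(control_pts, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n0 = vt[-1]                                   # smallest singular vector
    x0 = np.append(n0, -n0 @ centroid)

    def residuals(p):
        n, d = p[:3], p[3]
        return (pts @ n + d) / np.linalg.norm(n)  # signed point-plane distances

    sol = least_squares(residuals, x0)
    n, d = sol.x[:3], sol.x[3]
    s = np.linalg.norm(n)
    return n / s, d / s                           # unit normal, offset

def datum_line(plane_n, plane_d,
               datum_n=np.array([0.0, 0.0, 1.0]), datum_d=0.0):
    """Intersection line of the light plane with the datum plane (assumed
    z = 0 here), returned as (point_on_line, unit_direction)."""
    direction = np.cross(plane_n, datum_n)
    direction /= np.linalg.norm(direction)
    # One point satisfying both plane equations (third row pins the solution).
    A = np.vstack([plane_n, datum_n, direction])
    b = np.array([-plane_d, -datum_d, 0.0])
    return np.linalg.solve(A, b), direction
```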
