
One method of binocular depth perception based on active structured light

  • US 10,142,612 B2
  • Filed: 01/07/2015
  • Issued: 11/27/2018
  • Est. Priority Date: 02/13/2014
  • Status: Active Grant
First Claim

1. A method of binocular depth perception based on active structured light, comprising the following steps of:

  • Step 1:

    projecting coherent laser beams, by a coded pattern projector, with a coded pattern to carry out structured light coding for a target object with an unknown depth;

    Step 2:

    arranging a first camera and a second camera symmetrically at the same distances on the left side and right side of the coded pattern projector to acquire and fix their respective reference coded pattern Rl and reference coded pattern Rr, the first camera and the second camera being two separate and distinct components and each having the same or substantially the same optical lens and image sensor, and sharing the same baseline with the coded pattern projector and receiving the coded pattern within the range of a wavelength;

    Step 3:

    acquiring input image Il, by the first camera, and acquiring input image Ir, by the second camera, each of the input image Il and the input image Ir containing the coded pattern and the target object, and preprocessing the input images Il and Ir, wherein the preprocessing includes video format conversion, color space conversion, and grey image adaptive denoising and enhancement;

    Step 4:

    using the input image Il and the input image Ir after being preprocessed to detect projection shadow areas Al and Ar of the target object respectively, wherein projection shadow area Ar located behind the left side of the target object is detected in the input image Il and projection shadow area Al located behind the right side of the target object is detected in the input image Ir;

    Step 5:

    performing two block matching motion estimations;

    a first block matching motion estimation based on the symmetric arrangements and equal distances of the first camera and the second camera from the coded pattern projector and a second block matching motion estimation to generate the offsets respectively, wherein the first block matching motion estimation is to perform a binocular block matching calculation between a first input image block of the input image Il and a corresponding matching image block of the input image Ir based on the symmetric arrangements and equal distances of the first camera and the second camera from the projector and get an X-axis offset Δxl,r or a Y-axis offset Δyl,r; and

    the second block matching motion estimation is to perform (1) a first block matching calculation between the first input image block of the input image Il and a corresponding matching image block with the reference coded pattern Rl to get an X-axis offset Δxl and a Y-axis offset Δyl and (2) a second block matching calculation between a second input image block of the input image Ir and a corresponding matching image block with the reference coded pattern Rr to get an X-axis offset Δxr or a Y-axis offset Δyr, wherein the block matching motion estimation is based on similarity values between input images and corresponding matching images;

    Step 6:

    carrying out depth calculation, including:

    (6a) selecting the X-axis offset Δxl,r or the Y-axis offset Δyl,r and combining the focal length f of the image sensor, the baseline distance S between the first camera and the second camera and a dot pitch parameter μ of the image sensor to obtain depth information dl,r for a central point 0 of an image block m×n;

    (6b) selecting the X-axis offsets Δxl and Δxr or the Y-axis offsets Δyl and Δyr and combining a given distance parameter d of the reference coded pattern Rl and the reference coded pattern Rr, the focal length f of the image sensor, the baseline distance s between the first camera and the coded pattern projector, as well as the dot pitch parameter μ of the image sensor to obtain depth information dl and dr respectively for the central point 0 of the image block m×n corresponding to the same position in each of the input image Il and the input image Ir;

    Step 7:

    performing depth compensation, including using the depth information dl and dr, combining the projection shadow areas Al and Ar detected in Step 4 to compensate and correct the depth information dl,r, and outputting a final depth value dout of the central point 0 on the image block m×n;

    Step 8:

    moving the central point 0 of the image block m×n to a next pixel in the same line, repeating the steps 5-7 to calculate a depth value corresponding to the next pixel and following such calculation sequence from left to right and from top to bottom line by line to obtain the depth information of the input image Il and the input image Ir, each comprising the target object, based on point-by-point calculation.
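
The sketches below illustrate Steps 3 through 8 of the claim. Step 3 names the preprocessing stages (video format conversion, color space conversion, grey image adaptive denoising and enhancement) without fixing the algorithms; a minimal sketch, assuming OpenCV primitives as one plausible realization:

```python
import cv2

def preprocess(frame_bgr):
    # Color space conversion to grey (the claim's "grey image") ...
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # ... adaptive denoising; non-local means is an assumed choice
    denoised = cv2.fastNlMeansDenoising(gray, None, 10)
    # ... and enhancement; histogram equalization is an assumed choice
    return cv2.equalizeHist(denoised)
```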
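Step 4 does not state how the shadow areas Al and Ar are detected. One simple assumption is that shadow pixels lack the projected speckle pattern and stay near the sensor's noise floor:

```python
import numpy as np

def detect_shadow(gray, intensity_thresh=12):
    # Assumed rule: where the projected pattern is absent, intensity
    # stays near the noise floor; the patent does not specify a method,
    # and intensity_thresh is an illustrative value.
    return gray < intensity_thresh   # boolean shadow mask (Al or Ar)
```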
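Step 5 rests on block matching driven by similarity values between image blocks. A self-contained sketch using the sum of absolute differences (SAD is an assumed similarity measure; the claim does not name one):

```python
import numpy as np

def match_block(block, search_img, x0, y0, rx, ry):
    """Return the (dx, dy) offset of the best SAD match for `block`,
    whose nominal top-left corner is (x0, y0), within +/-rx and +/-ry
    pixels of that position in `search_img`."""
    m, n = block.shape
    best_dx, best_dy, best_sad = 0, 0, np.inf
    for dy in range(-ry, ry + 1):
        for dx in range(-rx, rx + 1):
            y, x = y0 + dy, x0 + dx
            # Skip candidate positions that fall outside the image.
            if y < 0 or x < 0 or y + m > search_img.shape[0] or x + n > search_img.shape[1]:
                continue
            cand = search_img[y:y + m, x:x + n].astype(np.int32)
            sad = np.abs(block.astype(np.int32) - cand).sum()
            if sad < best_sad:
                best_dx, best_dy, best_sad = dx, dy, sad
    return best_dx, best_dy
```

Per the claim, this search runs three times per block: Il against Ir (giving Δxl,r or Δyl,r), Il against Rl (giving Δxl, Δyl) and Ir against Rr (giving Δxr, Δyr).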
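Step 6 names the parameters (f, S, s, d, μ) but not the formulas. The standard triangulation relations consistent with those parameters are shown below as an assumption, not as the patent's stated math:

```python
def depth_binocular(dx_lr, f, S, mu):
    # Step 6a, classic two-camera triangulation (assumed form):
    #   d_lr = f * S / (mu * dx_lr)
    # f and S in metres; dx_lr in pixels; mu = dot pitch (metres/pixel).
    return f * S / (mu * dx_lr)

def depth_vs_reference(dx, f, s, mu, d_ref):
    # Step 6b, reference-plane model for a camera/projector pair
    # (assumed form): a point at depth Z shifts the pattern by
    #   mu * dx = f * s * (1/Z - 1/d_ref),
    # where d_ref is the given distance parameter d of Rl/Rr.
    # Rearranging for Z gives:
    return f * s * d_ref / (f * s + d_ref * mu * dx)
```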
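Step 7 says that dl, dr and the shadow areas compensate and correct dl,r, but not the exact rule. A hedged sketch of one plausible rule: fall back to the reference-pattern depth from whichever camera still sees the pattern when the point lies in a shadow area:

```python
def fuse_depth(d_lr, d_l, d_r, in_Ar, in_Al):
    # in_Ar: point falls in shadow area Ar (detected in Il);
    # in_Al: point falls in shadow area Al (detected in Ir).
    # This fallback policy is an assumption, not the claim's text.
    if in_Ar and in_Al:
        return 0.0        # assumed sentinel: no reliable measurement
    if in_Ar:
        return d_r        # left view lacks the pattern; trust Ir vs Rr
    if in_Al:
        return d_l        # right view lacks the pattern; trust Il vs Rl
    return d_lr           # binocular estimate is trusted elsewhere
```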
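Step 8 is a raster scan of the m×n block over the image. In the sketch below, estimate_point_depth is a hypothetical helper standing in for Steps 5-7 at a single pixel; it is not named in the patent:

```python
import numpy as np

def depth_map(Il, Ir, Rl, Rr, m, n, params):
    H, W = Il.shape
    out = np.zeros((H, W), dtype=np.float32)
    # The centre of the m-by-n block visits each pixel left-to-right,
    # top-to-bottom, matching the claim's point-by-point sequence.
    for y in range(m // 2, H - m // 2):
        for x in range(n // 2, W - n // 2):
            # Hypothetical helper: would run the Step 5 matches,
            # Step 6 depth formulas and Step 7 fusion at (x, y).
            out[y, x] = estimate_point_depth(Il, Ir, Rl, Rr, x, y, m, n, params)
    return out
```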
