Rice aphid detection method based on antagonistic characteristic learning

  • CN 107194418 B
  • Filed: 05/10/2017
  • Issued: 09/28/2021
  • Est. Priority Date: 05/10/2017
  • Status: Active Grant
First Claim

1. A rice aphid detection method based on antagonistic characteristic learning is characterized by comprising the following steps:

  • 11) collecting and preprocessing rice aphid images: collecting a plurality of rice aphid images as training images, focusing the collected images on the aphid body part, and normalizing the sizes of all training images to 16 × 16 pixels to obtain a plurality of aphid image training samples (a preprocessing sketch follows this step);
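    A minimal preprocessing sketch for step 11), assuming the aphid-body crops are already saved as image files (the directory name and helper function below are hypothetical):

```python
import os
import numpy as np
from PIL import Image

def load_training_samples(image_dir, size=(16, 16)):
    """Load aphid-body crops and normalize them to 16 x 16 pixels."""
    samples = []
    for name in sorted(os.listdir(image_dir)):
        if not name.lower().endswith((".png", ".jpg", ".jpeg")):
            continue
        img = Image.open(os.path.join(image_dir, name)).convert("RGB")
        img = img.resize(size, Image.BILINEAR)   # normalize the image size
        samples.append(np.asarray(img, dtype=np.float32) / 255.0)
    return np.stack(samples)                     # shape (N, 16, 16, 3)

# Hypothetical usage with a directory of aphid-body crops:
# X_pos = load_training_samples("data/aphid_crops")
```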

    12) acquiring a rice aphid image detection model: constructing and adversarially training an image discrimination network and an image generation network under conditional constraint, extracting adversarial features with the trained image discriminator network, and training the aphid detection model on the adversarial feature vectors of the aphid images;

    the method for obtaining the rice aphid image detection model comprises the following steps:

    121) constructing an image discrimination network model D(x, l) with conditional constraint, wherein l ∼ p_l(l) represents the conditional constraint distribution;

    the image discrimination network model is based on a deep convolutional neural network model with the number of network layers set to 5: the first 3 layers are convolutional layers, the 4th layer is a fully connected layer, and the last layer is the output layer, whose number of nodes is 1;

    the input of the method is an image with the size of 16 multiplied by 16 pixels, and the class probability of the image is output through a softmax classifier;

    122) constructing an image generation network model G(z, l) with conditional constraint, wherein z ∼ p_z(z) represents a Gaussian noise distribution and l ∼ p_l(l) represents the conditional constraint distribution, set as an illumination distribution or an aphid posture distribution;

    the image generation network model is based on a deep convolutional neural network model with the number of network layers set to 4: the first 3 layers are deconvolution layers and the last layer is the output layer, whose number of nodes is 16 × 16; its input is a multidimensional random vector conforming to the conditional constraint distribution;
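    A matching PyTorch sketch of the generator of step 122); the input projection and channel widths are assumptions, with three ConvTranspose2d layers standing in for the claim's deconvolution layers and the final 16 × 16 map serving as the output layer:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """G(z, l): 3 deconvolution layers -> 16 x 16 output image."""
    def __init__(self, z_dim=64, cond_dim=4):    # dimensions are assumptions
        super().__init__()
        # Input projection from the concatenated (noise, condition) vector.
        self.fc = nn.Linear(z_dim + cond_dim, 128 * 2 * 2)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 2 -> 4
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 4 -> 8
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 8 -> 16
            nn.Tanh(),                           # output layer: 16 x 16 image
        )

    def forward(self, z, l):
        h = self.fc(torch.cat([z, l], dim=1)).view(-1, 128, 2, 2)
        return self.deconv(h)
```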

    123) carrying out the adversarial training, which comprises the following specific steps:

    1231) carrying out adversarial training on the image discrimination network model D(x, l) and the image generation network model G(z, l), wherein the training model is as follows:

    min_G max_D V(D, G) = E_{(x,l)∼p_data(x,l)}[log D(x, l)] + E_{z∼p_z(z), l∼p_l(l)}[log(1 − D(G(z, l), l))]
    wherein:

    log(·) is a logarithmic function, and (x, l) ∼ p_data(x, l) are a plurality of aphid image training samples and aphid training samples with illumination or aphid posture transformations, respectively;

    x ∈ R^{d_x} and l ∈ R^{d_l}, wherein d_x and d_l are the dimensions of the training samples;

    p_z(z) represents a Gaussian noise distribution N(μ, σ²), wherein μ and σ² are parameters of the distribution, namely the expectation and variance of the Gaussian distribution;

    p_l(l) represents a conditional constraint distribution N(α, δ²), wherein α and δ² are the distribution parameters, set according to the illumination distribution or the aphid postures;

    D(x, l) is the image discrimination network model;

    G(z, l) is the image generation network model;

    1232) adjusting the parameters of D(x, l):

    randomly drawing m aphid image samples and m noise samples, wherein x_i is the i-th aphid image sample and l_i is the i-th condition sample corresponding to the i-th aphid image sample;

    during training, D(x_i, l_i) should discriminate the image as a real rice aphid image, and its parameters are adjusted so that its output error is lower;

    the parameters are adjusted by calculating the output error of the discrimination network, so that the error reaches a threshold value ε_D;

    1233) adjusting the parameters of G(z, l):

    D is shown aphid images generated from G(z_i, l_i), and the parameters are adjusted so that the output D(G(z_i, l_i), l_i) becomes larger;

    the parameters of G(z, l) are adjusted by calculating the output error of the generation network, with the formula as follows, so that the error reaches a threshold value ε_G:

    (1/m) Σ_{i=1}^{m} log(1 − D(G(z_i, l_i), l_i))
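    A condensed PyTorch training step for 1231)–1233), assuming the Discriminator and Generator sketches above. Here z is drawn from a standard normal (μ = 0, σ² = 1 is an assumption), and the thresholds ε_D and ε_G are left to the caller, which can compare the returned losses against them:

```python
import torch
import torch.nn.functional as F

def train_step(D, G, opt_D, opt_G, x_real, l_real, z_dim=64):
    """One adversarial update of D (step 1232) and G (step 1233)."""
    m = x_real.size(0)

    # 1232) update D: push D(x_i, l_i) toward "real" and D(G(z_i, l_i), l_i) toward "fake".
    z = torch.randn(m, z_dim)                    # z ~ N(0, 1); mu, sigma^2 assumed
    x_fake = G(z, l_real).detach()
    d_real, _ = D(x_real, l_real)
    d_fake, _ = D(x_fake, l_real)
    loss_D = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # 1233) update G: make D(G(z_i, l_i), l_i) larger.
    z = torch.randn(m, z_dim)
    d_fake, _ = D(G(z, l_real), l_real)
    loss_G = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

    # Caller compares these against the claim's thresholds eps_D and eps_G.
    return loss_D.item(), loss_G.item()
```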

    124) collecting and preprocessing negative samples of the rice aphid images: collecting a plurality of non-rice-aphid images as training images, focusing the collected images on image regions outside the aphid bodies, and normalizing the sizes of all negative training sample images to 16 × 16 pixels to obtain a plurality of negative samples;

    125) extracting the adversarial features of the positive and negative samples of the rice aphid images: inputting the aphid image training samples and their negative samples into the learned image discrimination network model D(x, l) with conditional constraint, and taking the output of the 4th layer of the deep convolutional neural network of D(x, l) as the adversarial features of the positive and negative rice aphid training samples (see the sketch after this step);
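    A sketch of the layer-4 feature extraction of step 125), reusing the Discriminator above, whose forward pass already returns the fully connected layer-4 activations:

```python
import torch

@torch.no_grad()
def extract_features(D, images, conds):
    """Return the layer-4 (fully connected) activations as adversarial features.

    images: (N, 3, 16, 16) tensor; conds: (N, cond_dim) tensor.
    """
    D.eval()
    _, feats = D(images, conds)
    return feats.cpu().numpy()                   # one feature vector per sample

# Hypothetical usage for positive (aphid) and negative (background) crops:
# F_pos = extract_features(D, X_pos, L_pos)
# F_neg = extract_features(D, X_neg, L_neg)
```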

    126) collecting the adversarial features of the positive and negative sample images of the aphid images to form adversarial feature vectors;

    127) training an SVM classifier on the adversarial feature vectors to obtain the rice aphid image detection model;
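    A minimal scikit-learn sketch of step 127), assuming the feature arrays F_pos and F_neg from the previous sketch (both names hypothetical):

```python
import numpy as np
from sklearn.svm import SVC

# Labels: 1 for aphid (positive) features, 0 for background (negative) features.
X = np.vstack([F_pos, F_neg])
y = np.concatenate([np.ones(len(F_pos)), np.zeros(len(F_neg))])

svm = SVC(kernel="rbf")                          # kernel choice is an assumption
svm.fit(X, y)
```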

    13) collecting and preprocessing the rice image to be detected: acquiring the image and normalizing its size to 256 × 256 pixels to obtain the image to be detected;

    14) marking the specific positions of the aphids in the image: inputting the image to be detected into the trained rice aphid image detection model, detecting the rice aphids, and locating and marking the specific positions of the aphids in the image.
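    The claim does not spell out how 16 × 16 detection windows are obtained from the 256 × 256 image; a sliding-window scan, as sketched below, is one plausible reading (the stride and the condition vector are assumptions):

```python
import torch

def detect_aphids(D, svm, image, stride=8, cond=None):
    """Scan 16 x 16 windows over a 256 x 256 image tensor (1, 3, 256, 256)
    and return (x, y, w, h) boxes for windows the SVM classifies as aphid."""
    if cond is None:
        cond = torch.zeros(1, 4)                 # placeholder condition vector
    boxes = []
    for top in range(0, 256 - 16 + 1, stride):
        for left in range(0, 256 - 16 + 1, stride):
            patch = image[:, :, top:top + 16, left:left + 16]
            feat = extract_features(D, patch, cond)
            if svm.predict(feat)[0] == 1:        # window marked as containing an aphid
                boxes.append((left, top, 16, 16))
    return boxes
```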
