Method for recognizing object images and learning method for neural networks
First Claim
1. A method for discriminating an image, wherein a judgment is made as to whether a given image is or is not a predetermined image, the method comprising the steps of:
- i) extracting a reference point, which is unaffected by a change in the angle of the given image and/or by rotation of the given image, from the given image, ii) detecting an axis of symmetry and/or feature parts of the given image in accordance with the reference point, and iii) making a judgment as to whether the given image is or is not a predetermined image, the judgment being made in accordance with the axis of symmetry and/or the feature parts of the given image.
1 Assignment
0 Petitions
Abstract
A method for recognizing an object image comprises the steps of extracting a candidate for a predetermined object image from an overall image, and making a judgment as to whether the extracted candidate for the predetermined object image is or is not the predetermined object image. The candidate for the predetermined object image is extracted by causing the center point of a view window, which has a predetermined size, to travel to the position of the candidate for the predetermined object image, and determining an extraction area in accordance with the size and/or the shape of the candidate for the predetermined object image, the center point of the view window being taken as a reference during the determination of the extraction area. A learning method for a neural network comprises the steps of extracting a target object image, for which learning operations are to be carried out, from an image, feeding a signal, which represents the extracted target object image, into a neural network, and carrying out the learning operations of the neural network in accordance with the input target object image.
99 Citations
95 Claims
-
1. A method for discriminating an image, wherein a judgment is made as to whether a given image is or is not a predetermined image, the method comprising the steps of:
-
i) extracting a reference point, which is unaffected by a change in the angle of the given image and/or by rotation of the given image, from the given image, ii) detecting an axis of symmetry and/or feature parts of the given image in accordance with the reference point, and iii) making a judgment as to whether the given image is or is not a predetermined image, the judgment being made in accordance with the axis of symmetry and/or the feature parts of the given image. - View Dependent Claims (2–95)
1) extracting a center point between candidates for eye patterns as the reference point, which is unaffected by a change in the angle of the given image and/or by rotation of the given image, from the given image, 2) detecting an axis of symmetry, which passes through the center point between the candidates for eye patterns, in accordance with the extracted center point between the candidates for eye patterns, 3) detecting the feature parts of the given image in accordance with the axis of symmetry, and 4) making a judgment as to whether the given image is or is not a face image, the judgment being made in accordance with information about the center point between the candidates for eye patterns, the axis of symmetry, and/or the feature parts of the given image.
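The axis-of-symmetry detection in step 2) can be illustrated by mirror correlation: reflect the image about a candidate vertical axis through the eye-pattern midpoint and correlate the two halves. This is a minimal numpy sketch of that idea; the function name, window handling, and scoring by normalized correlation are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def symmetry_score(img, axis_x):
    """Normalized correlation between the image and its mirror
    about the vertical line x = axis_x (higher = more symmetric)."""
    h, w = img.shape
    half = min(axis_x, w - 1 - axis_x)   # widest window that fits both sides
    if half == 0:
        return 0.0
    left = img[:, axis_x - half:axis_x]
    right = img[:, axis_x + 1:axis_x + 1 + half][:, ::-1]  # mirrored right half
    a = left.ravel() - left.mean()
    b = right.ravel() - right.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```

An axis passing through the center point between eye-pattern candidates would be scored this way at several nearby x positions, keeping the best-scoring one.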
-
-
6. A method for discriminating an image as defined in claim 5 wherein the extraction of the center point between candidates for eye patterns is carried out by:
-
a) detecting components, which easily match with shapes of eye patterns, from the given image, b) emphasizing the components, which are among the components easily matching with the shapes of eye patterns and which are located at positions in the vicinity of the center point of the given image, c) detecting straight line components of a contour, which are tilted in a plurality of directions, from the given image, d) combining the detected straight line components, contour components of the given image being thereby detected, e) removing the detected contour components from the components, which have been obtained by emphasizing the components located at positions in the vicinity of the center point of the given image, and f) extracting a center point between two components, which stand in a line along a predetermined direction, from the components, which have been obtained by removing the contour components.
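Step f) above — extracting a center point between two components that stand in a line along a predetermined direction — can be sketched as a pairing test over candidate component centers. The function name, coordinate convention, and distance thresholds below are illustrative assumptions for a horizontal (eye-to-eye) direction, not values from the patent.

```python
def eye_pair_midpoints(points, max_dy=3, min_dx=10, max_dx=80):
    """points: (y, x) centers of candidate components (after contour removal).
    Returns midpoints of pairs that are roughly horizontally aligned."""
    mids = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (y1, x1), (y2, x2) = points[i], points[j]
            aligned = abs(y1 - y2) <= max_dy          # same row, approximately
            plausible = min_dx <= abs(x1 - x2) <= max_dx  # eye-like separation
            if aligned and plausible:
                mids.append(((y1 + y2) / 2, (x1 + x2) / 2))
    return mids
```

Each returned midpoint is a candidate for the reference point of claim 1, to be confirmed by the symmetry and feature-part checks that follow.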
-
-
7. A method for discriminating an image as defined in claim 6 wherein the detection of the components, which easily match with the shapes of eye patterns, is carried out by transmitting the given image as signals weighted with synaptic weights patterns for detecting eye patterns, which synaptic weights patterns have been calculated in accordance with a DOG function, and
the detection of the straight line components of the contour, which are tilted in a plurality of directions, is carried out by transmitting the given image as signals weighted with synaptic weights patterns for detecting straight lines, which synaptic weights patterns have been calculated in accordance with a Gabor function. -
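The two synaptic-weights patterns recited above are standard image-processing kernels: a difference-of-Gaussians (DOG) responds to blob-like features such as eyes, and a Gabor function responds to oriented straight-line contour segments. A minimal sketch of both kernels (sizes and sigmas are illustrative, not from the patent):

```python
import numpy as np

def dog_kernel(size, sigma_c, sigma_s):
    """Difference of Gaussians: narrow centre minus broad surround."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    def gauss(s):
        return np.exp(-r2 / (2 * s ** 2)) / (2 * np.pi * s ** 2)
    return gauss(sigma_c) - gauss(sigma_s)

def gabor_kernel(size, sigma, theta, wavelength):
    """Gaussian envelope times a cosine grating oriented at angle theta;
    one kernel per direction gives the 'tilted in a plurality of
    directions' bank of claim 6 step c)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    envelope = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)
```

Convolving ("transmitting the image as signals weighted with") these kernels yields the eye-pattern and straight-line response maps; claim 8's receptive-field condition corresponds to choosing the Gabor envelope large relative to the DOG centre.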
8. A method for discriminating an image as defined in claim 7 wherein the sizes of receptive fields of the synaptic weights patterns for detecting straight lines are set such that the synaptic weights patterns may easily make a response to the straight line components of the contour and may make little response to the components, which easily match with the shapes of eye patterns.
-
9. A method for discriminating an image as defined in claim 8 wherein the feature parts of the given image include a candidate for a contour of a face pattern and/or a candidate for a mouth pattern region.
-
10. A method for discriminating an image as defined in claim 9 wherein the detection of the candidate for the contour of a face pattern is carried out by:
-
detecting the contour components, which are contained in the given image, from the given image by taking the axis of symmetry as reference, comparing the detected contour components with contours of a plurality of face patterns directed to different directions, the contours having been learned as templates in advance, and making a judgment as to whether components corresponding to the detected contour components are or are not included in the contours of face patterns, which have been learned as templates.
-
-
11. A method for discriminating an image as defined in claim 10 wherein the learning of the contours of face patterns is carried out by:
-
feeding the information about contours of upward-, downward-, leftward-, rightward-, and front-directed face patterns into a large number of cells of a neural network, causing a cell, which best matches with the contour information having been fed into the neural network, to learn said contour information, for neighboring cells that fall within a neighboring region having a predetermined range and neighboring with the cell, which best matches with the contour information having been fed into the neural network, carrying out spatial interpolating operations from the contour information, which has been fed into the neural network, and contour information, which is other than the contour information having been fed into the neural network and which has been learned by a cell that is among the large number of the cells of the neural network and that is other than the cell best matching with the contour information having been fed into the neural network, and thereby carrying out the self-organizing learning operations on information about contours of a large number of face patterns directed to different directions.
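The self-organizing learning recited above — a best-matching cell learns the input, and neighboring cells within a predetermined range learn interpolated versions — is the scheme of a Kohonen-style self-organizing map. This is a minimal 1-D sketch under that assumption; the training schedule, cell count, and function name are illustrative, not from the patent.

```python
import numpy as np

def som_train(data, n_cells, epochs=40, lr0=0.5, seed=0):
    """1-D self-organizing map: for each input, the best-matching cell
    and its neighbours (Gaussian falloff) move toward the input."""
    rng = np.random.default_rng(seed)
    w = rng.random((n_cells, data.shape[1]))   # one weight vector per cell
    radius0 = n_cells / 2.0
    for t in range(epochs):
        frac = 1.0 - t / epochs                # shrink rate and radius over time
        lr, radius = lr0 * frac, radius0 * frac + 0.5
        for x in data:
            bmu = int(np.argmin(np.linalg.norm(w - x, axis=1)))  # best match
            d = np.abs(np.arange(n_cells) - bmu)                 # cell distance
            h = np.exp(-(d ** 2) / (2 * radius ** 2))            # neighbourhood
            w += lr * h[:, None] * (x - w)
    return w
```

Fed with contour vectors of upward-, downward-, leftward-, rightward-, and front-directed faces, the intermediate cells end up holding interpolated contours for intermediate directions, which is what lets the templates cover "a large number of face patterns directed to different directions".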
-
-
12. A method for discriminating an image as defined in claim 11 wherein the information about contours of upward-, downward-, leftward-, rightward-, and front-directed face patterns is obtained by averaging the information about contours of a plurality of face patterns.
-
13. A method for discriminating an image as defined in claim 12 wherein the self-organizing learning operations are carried out by feeding the information about the contour of the front-directed face pattern, which has been created by carrying out transformation with polar coordinates transformation, the center point between eye patterns in the information about the contour of the face pattern being taken as a pole, and the information about the contours of the upward-, downward-, leftward-, and rightward-directed face patterns, which has been created by carrying out the transformation with the polar coordinates transformation, the pole being moved upwardly, downwardly, leftwardly, and rightwardly, into the neural network, and
the judgment as to whether components corresponding to the detected contour components of the given image are or are not included in the contours of face patterns, which have been learned as templates, is made by transforming the contour components of the given image with the polar coordinates transformation, in which the axis of symmetry is taken as reference and the center point between candidates for eye patterns is taken as the pole, and thereafter making a judgment as to whether the contour components of the given image transformed with the polar coordinates transformation are or are not contained in the results of the self-organizing learning operations. -
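The polar coordinates transformation used above — with the center point between eye patterns as the pole — can be sketched as resampling the image on an (angle, radius) grid. Nearest-neighbor sampling and the grid sizes below are illustrative simplifications.

```python
import numpy as np

def polar_transform(img, pole, n_r, n_theta):
    """Resample img on a (theta, r) grid centred at the pole
    (nearest-neighbour sampling). Rows are angles, columns radii."""
    h, w = img.shape
    cy, cx = pole
    r_max = min(cy, cx, h - 1 - cy, w - 1 - cx)   # stay inside the image
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rs = np.linspace(0, r_max, n_r)
    out = np.zeros((n_theta, n_r), dtype=img.dtype)
    for i, th in enumerate(thetas):
        ys = np.clip(np.round(cy + rs * np.sin(th)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + rs * np.cos(th)).astype(int), 0, w - 1)
        out[i] = img[ys, xs]
    return out
```

Under this transform, moving the pole upward, downward, leftward, or rightward (as the claim recites for the differently directed face patterns) shifts the contour pattern in the output grid, so contours of differently directed faces become comparable templates.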
14. A method for discriminating an image as defined in claim 13 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula

u = Σ (i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
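The weighted-sum threshold judgment recited in the claim is a single-unit linear classifier. A minimal Python illustration (the function name and the sample values in the test are hypothetical, not from the patent):

```python
def face_decision(y, w, th):
    """u = sum_i w_i * y_i - th.
    u > 0 -> judged a face image; u <= 0 -> judged not a face image."""
    u = sum(wi * yi for wi, yi in zip(w, y)) - th
    return u > 0
```

Here y collects the eye-midpoint response, the symmetry correlation, and the feature-part values, and w encodes the degree of importance of each piece of information.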
-
-
15. A method for discriminating an image as defined in claim 12 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula

u = Σ (i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
16. A method for discriminating an image as defined in claim 11 wherein the self-organizing learning operations are carried out by feeding the information about the contour of the front-directed face pattern, which has been created by carrying out transformation with polar coordinates transformation, the center point between eye patterns in the information about the contour of the face pattern being taken as a pole, and the information about the contours of the upward-, downward-, leftward-, and rightward-directed face patterns, which has been created by carrying out the transformation with the polar coordinates transformation, the pole being moved upwardly, downwardly, leftwardly, and rightwardly, into the neural network, and
the judgment as to whether components corresponding to the detected contour components of the given image are or are not included in the contours of face patterns, which have been learned as templates, is made by transforming the contour components of the given image with the polar coordinates transformation, in which the axis of symmetry is taken as reference and the center point between candidates for eye patterns is taken as the pole, and thereafter making a judgment as to whether the contour components of the given image transformed with the polar coordinates transformation are or are not contained in the results of the self-organizing learning operations. -
17. A method for discriminating an image as defined in claim 16 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula

u = Σ (i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
18. A method for discriminating an image as defined in claim 11 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula

u = Σ (i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
19. A method for discriminating an image as defined in claim 10 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula

u = Σ (i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
20. A method for discriminating an image as defined in claim 9 wherein the detection of the candidate for the mouth pattern region is carried out by:
-
transforming the given image to a YIQ base, and detecting the components, which match with the shape of the mouth pattern most easily in a Q component image that is among the image having been transformed to the YIQ base, said components being detected within a predetermined range with reference to the axis of symmetry and/or the contour components of the given image.
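The transformation to a YIQ base recited above is the standard NTSC linear transform; the Q (purple–green) component responds strongly to lip color, which is why the mouth-pattern candidate is sought in the Q component image. A minimal sketch using the standard matrix:

```python
import numpy as np

def rgb_to_yiq(rgb):
    """rgb: (..., 3) array of NTSC RGB values.
    Returns (..., 3) YIQ; index 2 is the Q component used for the mouth."""
    m = np.array([[0.299,  0.587,  0.114],    # Y: luminance
                  [0.596, -0.274, -0.322],    # I: orange-cyan axis
                  [0.211, -0.523,  0.312]])   # Q: purple-green axis
    return rgb @ m.T
```

The candidate mouth region is then detected within a predetermined range of the symmetry axis in the Q plane, `rgb_to_yiq(img)[..., 2]`.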
-
-
21. A method for discriminating an image as defined in claim 20 wherein the detection of the candidate for the mouth pattern region is carried out by transmitting the Q component image, which has been transformed with the polar coordinates transformation by taking the center point between eye patterns as the pole, as a signal weighted with a synaptic weights pattern for detecting the mouth pattern region, which synaptic weights pattern has been calculated in accordance with a DOG function.
-
22. A method for discriminating an image as defined in claim 21 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula

u = Σ (i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
23. A method for discriminating an image as defined in claim 20 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula

u = Σ (i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
24. A method for discriminating an image as defined in claim 9 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula

u = Σ (i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
25. A method for discriminating an image as defined in claim 8 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula

u = Σ (i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
26. A method for discriminating an image as defined in claim 7 wherein the feature parts of the given image include a candidate for a contour of a face pattern and/or a candidate for a mouth pattern region.
-
27. A method for discriminating an image as defined in claim 26 wherein the detection of the candidate for the contour of a face pattern is carried out by:
-
detecting the contour components, which are contained in the given image, from the given image by taking the axis of symmetry as reference, comparing the detected contour components with contours of a plurality of face patterns directed to different directions, the contours having been learned as templates in advance, and making a judgment as to whether components corresponding to the detected contour components are or are not included in the contours of face patterns, which have been learned as templates.
-
-
28. A method for discriminating an image as defined in claim 27 wherein the learning of the contours of face patterns is carried out by:
-
feeding the information about contours of upward-, downward-, leftward-, rightward-, and front-directed face patterns into a large number of cells of a neural network, causing a cell, which best matches with the contour information having been fed into the neural network, to learn said contour information, for neighboring cells that fall within a neighboring region having a predetermined range and neighboring with the cell, which best matches with the contour information having been fed into the neural network, carrying out spatial interpolating operations from the contour information, which has been fed into the neural network, and contour information, which is other than the contour information having been fed into the neural network and which has been learned by a cell that is among the large number of the cells of the neural network and that is other than the cell best matching with the contour information having been fed into the neural network, and thereby carrying out the self-organizing learning operations on information about contours of a large number of face patterns directed to different directions.
-
-
29. A method for discriminating an image as defined in claim 28 wherein the information about contours of upward-, downward-, leftward-, rightward-, and front-directed face patterns is obtained by averaging the information about contours of a plurality of face patterns.
-
30. A method for discriminating an image as defined in claim 29 wherein the self-organizing learning operations are carried out by feeding the information about the contour of the front-directed face pattern, which has been created by carrying out transformation with polar coordinates transformation, the center point between eye patterns in the information about the contour of the face pattern being taken as a pole, and the information about the contours of the upward-, downward-, leftward-, and rightward-directed face patterns, which has been created by carrying out the transformation with the polar coordinates transformation, the pole being moved upwardly, downwardly, leftwardly, and rightwardly, into the neural network, and
the judgment as to whether components corresponding to the detected contour components of the given image are or are not included in the contours of face patterns, which have been learned as templates, is made by transforming the contour components of the given image with the polar coordinates transformation, in which the axis of symmetry is taken as reference and the center point between candidates for eye patterns is taken as the pole, and thereafter making a judgment as to whether the contour components of the given image transformed with the polar coordinates transformation are or are not contained in the results of the self-organizing learning operations. -
31. A method for discriminating an image as defined in claim 30 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula

u = Σ (i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
32. A method for discriminating an image as defined in claim 29 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula

u = Σ (i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
33. A method for discriminating an image as defined in claim 28 wherein the self-organizing learning operations are carried out by feeding the information about the contour of the front-directed face pattern, which has been created by carrying out transformation with polar coordinates transformation, the center point between eye patterns in the information about the contour of the face pattern being taken as a pole, and the information about the contours of the upward-, downward-, leftward-, and rightward-directed face patterns, which has been created by carrying out the transformation with the polar coordinates transformation, the pole being moved upwardly, downwardly, leftwardly, and rightwardly, into the neural network, and
the judgment as to whether components corresponding to the detected contour components of the given image are or are not included in the contours of face patterns, which have been learned as templates, is made by transforming the contour components of the given image with the polar coordinates transformation, in which the axis of symmetry is taken as reference and the center point between candidates for eye patterns is taken as the pole, and thereafter making a judgment as to whether the contour components of the given image transformed with the polar coordinates transformation are or are not contained in the results of the self-organizing learning operations. -
34. A method for discriminating an image as defined in claim 33 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula

u = Σ (i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
35. A method for discriminating an image as defined in claim 28 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula

u = Σ (i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
36. A method for discriminating an image as defined in claim 27 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula

u = Σ (i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
37. A method for discriminating an image as defined in claim 26 wherein the detection of the candidate for the mouth pattern region is carried out by:
-
transforming the given image to a YIQ base, and detecting the components, which match with the shape of the mouth pattern most easily in a Q component image that is among the image having been transformed to the YIQ base, said components being detected within a predetermined range with reference to the axis of symmetry and/or the contour components of the given image.
-
-
38. A method for discriminating an image as defined in claim 37 wherein the detection of the candidate for the mouth pattern region is carried out by transmitting the Q component image, which has been transformed with the polar coordinates transformation by taking the center point between eye patterns as the pole, as a signal weighted with a synaptic weights pattern for detecting the mouth pattern region, which synaptic weights pattern has been calculated in accordance with a DOG function.
-
39. A method for discriminating an image as defined in claim 38 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula

u = Σ (i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
40. A method for discriminating an image as defined in claim 37 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula

u = Σ (i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
41. A method for discriminating an image as defined in claim 26 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula
u = Σ(i=1 to n) wi·yi − th
where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
42. A method for discriminating an image as defined in claim 7 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula
u = Σ(i=1 to n) wi·yi − th
where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
43. A method for discriminating an image as defined in claim 6 wherein the feature parts of the given image include a candidate for a contour of a face pattern and/or a candidate for a mouth pattern region.
-
44. A method for discriminating an image as defined in claim 43 wherein the detection of the candidate for the contour of a face pattern is carried out by:
-
detecting the contour components, which are contained in the given image, from the given image by taking the axis of symmetry as reference, comparing the detected contour components with contours of a plurality of face patterns directed to different directions, the contours having been learned as templates in advance, and making a judgment as to whether components corresponding to the detected contour components are or are not included in the contours of face patterns, which have been learned as templates.
-
-
45. A method for discriminating an image as defined in claim 44 wherein the learning of the contours of face patterns is carried out by:
-
feeding the information about contours of upward-, downward-, leftward-, rightward-, and front-directed face patterns into a large number of cells of a neural network, causing a cell, which best matches with the contour information having been fed into the neural network, to learn said contour information, for neighboring cells that fall within a neighboring region having a predetermined range and neighboring with the cell, which best matches with the contour information having been fed into the neural network, carrying out spatial interpolating operations from the contour information, which has been fed into the neural network, and contour information, which is other than the contour information having been fed into the neural network and which has been learned by a cell that is among the large number of the cells of the neural network and that is other than the cell best matching with the contour information having been fed into the neural network, and thereby carrying out the self-organizing learning operations on information about contours of a large number of face patterns directed to different directions.
-
-
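The self-organizing learning operations in claim 45 read as a Kohonen-style update: the best-matching cell learns the input directly, and cells within a neighboring region are spatially interpolated toward it. A minimal sketch under that reading (`som_step`, `cells`, `lr`, and `radius` are hypothetical names, and the 1-D neighborhood is a simplification):

```python
import numpy as np

def som_step(cells, x, lr=0.5, radius=1):
    """One self-organizing step: the cell best matching input x, and
    its neighbors within `radius`, are interpolated toward x."""
    # best-matching cell = smallest Euclidean distance to the input
    best = int(np.argmin(np.linalg.norm(cells - x, axis=1)))
    for i in range(max(0, best - radius), min(len(cells), best + radius + 1)):
        cells[i] += lr * (x - cells[i])  # spatial interpolation toward x
    return best

# Each cell would hold contour information for a differently
# directed face pattern (upward, downward, leftward, rightward, front).
cells = np.zeros((5, 4))
som_step(cells, np.ones(4))
```

Repeating this step over the averaged contour inputs yields templates for intermediate directions as well, since neighboring cells interpolate between the fed-in contours.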
46. A method for discriminating an image as defined in claim 45 wherein the information about contours of upward-, downward-, leftward-, rightward-, and front-directed face patterns is obtained by averaging the information about contours of a plurality of face patterns.
-
47. A method for discriminating an image as defined in claim 46 wherein the self-organizing learning operations are carried out by feeding the information about the contour of the front-directed face pattern, which has been created by carrying out transformation with polar coordinates transformation, the center point between eye patterns in the information about the contour of the face pattern being taken as a pole, and the information about the contours of the upward-, downward-, leftward-, and rightward-directed face patterns, which has been created by carrying out the transformation with the polar coordinates transformation, the pole being moved upwardly, downwardly, leftwardly, and rightwardly, into the neural network, and
the judgment as to whether components corresponding to the detected contour components of the given image are or are not included in the contours of face patterns, which have been learned as templates, is made by transforming the contour components of the given image with the polar coordinates transformation, in which the axis of symmetry is taken as reference and the center point between candidates for eye patterns is taken as the pole, and thereafter making a judgment as to whether the contour components of the given image transformed with the polar coordinates transformation are or are not contained in the results of the self-organizing learning operations. -
48. A method for discriminating an image as defined in claim 47 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula
u = Σ(i=1 to n) wi·yi − th
where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
49. A method for discriminating an image as defined in claim 46 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula
u = Σ(i=1 to n) wi·yi − th
where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
50. A method for discriminating an image as defined in claim 45 wherein the self-organizing learning operations are carried out by feeding the information about the contour of the front-directed face pattern, which has been created by carrying out transformation with polar coordinates transformation, the center point between eye patterns in the information about the contour of the face pattern being taken as a pole, and the information about the contours of the upward-, downward-, leftward-, and rightward-directed face patterns, which has been created by carrying out the transformation with the polar coordinates transformation, the pole being moved upwardly, downwardly, leftwardly, and rightwardly, into the neural network, and
the judgment as to whether components corresponding to the detected contour components of the given image are or are not included in the contours of face patterns, which have been learned as templates, is made by transforming the contour components of the given image with the polar coordinates transformation, in which the axis of symmetry is taken as reference and the center point between candidates for eye patterns is taken as the pole, and thereafter making a judgment as to whether the contour components of the given image transformed with the polar coordinates transformation are or are not contained in the results of the self-organizing learning operations. -
51. A method for discriminating an image as defined in claim 50 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula
u = Σ(i=1 to n) wi·yi − th
where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
52. A method for discriminating an image as defined in claim 45 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula
u = Σ(i=1 to n) wi·yi − th
where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
53. A method for discriminating an image as defined in claim 44 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula
u = Σ(i=1 to n) wi·yi − th
where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
54. A method for discriminating an image as defined in claim 43 wherein the detection of the candidate for the mouth pattern region is carried out by:
-
transforming the given image to a YIQ base, and detecting the components, which match with the shape of the mouth pattern most easily in a Q component image that is among the image having been transformed to the YIQ base, said components being detected within a predetermined range with reference to the axis of symmetry and/or the contour components of the given image.
-
-
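The YIQ base referred to in claim 54 is the standard NTSC color transformation; the Q component, in which the mouth pattern tends to stand out, can be computed per pixel as below (a sketch; the matrix is the usual NTSC definition, not taken from the patent text, and `q_component` is a hypothetical name):

```python
import numpy as np

# Standard NTSC RGB -> YIQ matrix (rows: Y, I, Q)
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],
    [0.596, -0.274, -0.322],
    [0.211, -0.523,  0.312],
])

def q_component(rgb_image):
    """Return the Q component of an RGB image (H x W x 3, float values)."""
    return rgb_image @ RGB_TO_YIQ[2]  # dot each pixel with the Q row

# The claim then searches this Q image, within a predetermined range
# around the axis of symmetry, for mouth-shaped components.
```

Because Q is a chrominance axis roughly along the magenta-green direction, reddish lip regions produce a stronger response than surrounding skin, which is what makes the Q image useful for mouth detection.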
55. A method for discriminating an image as defined in claim 54 wherein the detection of the candidate for the mouth pattern region is carried out by transmitting the Q component image, which has been transformed with the polar coordinates transformation by taking the center point between eye patterns as the pole, as a signal weighted with a synaptic weights pattern for detecting the mouth pattern region, which synaptic weights pattern has been calculated in accordance with a DOG function.
-
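The DOG function named in claim 55 is a difference of Gaussians; a synaptic-weights pattern calculated from it has an excitatory center and inhibitory surround. A sketch of such a pattern (the sigma values and the name `dog_weights` are hypothetical):

```python
import numpy as np

def dog_weights(size, sigma1=1.0, sigma2=2.0):
    """Difference-of-Gaussians (DOG) synaptic-weights pattern:
    a center-on / surround-off profile over a size x size grid."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    g1 = np.exp(-r2 / (2 * sigma1 ** 2)) / (2 * np.pi * sigma1 ** 2)
    g2 = np.exp(-r2 / (2 * sigma2 ** 2)) / (2 * np.pi * sigma2 ** 2)
    return g1 - g2  # positive at the center, negative in the surround

w = dog_weights(7)
# Weighting the polar-transformed Q component image with this pattern
# emphasizes blob-like regions of roughly the mouth's size.
```

The two sigmas set the size of the blob the pattern responds to, so they would be chosen to match the expected mouth-region scale after the polar coordinates transformation.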
56. A method for discriminating an image as defined in claim 55 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula
u = Σ(i=1 to n) wi·yi − th
where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
57. A method for discriminating an image as defined in claim 54 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula
u = Σ(i=1 to n) wi·yi − th
where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
58. A method for discriminating an image as defined in claim 43 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula
u = Σ(i=1 to n) wi·yi − th
where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
59. A method for discriminating an image as defined in claim 6 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula
u = Σ(i=1 to n) wi·yi − th
where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
60. A method for discriminating an image as defined in claim 5 wherein the detection of the axis of symmetry is carried out by:
-
a) transforming the given image with the polar coordinates transformation by taking the center point between candidates for eye patterns as a pole, b) setting a temporary axis of symmetry in the given image, which has been transformed with the polar coordinates transformation, c) moving the temporary axis of symmetry by an angle within a predetermined range in the given image, which has been transformed with the polar coordinates transformation, the degree of correlation between two regions in the given image, which are divided by the moved temporary axis of symmetry, being thereby calculated, and d) taking the temporary axis of symmetry, which is associated with the highest degree of correlation, as the axis of symmetry.
-
-
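Steps (b) through (d) of claim 60 amount to sweeping a temporary axis and keeping the one whose two sides correlate best. A heavily simplified sketch, using column reflection in a plain image grid instead of the polar-angle reflection of the claim (all names hypothetical):

```python
import numpy as np

def find_symmetry_axis(img, candidates):
    """Sweep temporary axes (column indices); correlate the mirrored
    left strip against the right strip; return the axis with the
    highest degree of correlation (claim 60 steps b-d, simplified)."""
    best_axis, best_corr = None, -np.inf
    for c in candidates:
        w = min(c, img.shape[1] - c - 1)  # widest strip on both sides
        if w < 2:
            continue  # too narrow for a meaningful correlation
        left = img[:, c - w:c][:, ::-1]   # mirror of the left strip
        right = img[:, c + 1:c + 1 + w]   # axis column itself excluded
        corr = float(np.corrcoef(left.ravel(), right.ravel())[0, 1])
        if corr > best_corr:
            best_axis, best_corr = c, corr
    return best_axis

# A pattern mirror-symmetric about column 4:
img = np.array([[1.0, 3, 2, 7, 9, 7, 2, 3, 1]])
```

In the claim itself the sweep is over an angle within a predetermined range in the polar-transformed image, but the structure is the same: evaluate a correlation per candidate axis and take the maximizer.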
61. A method for discriminating an image as defined in claim 60 wherein the feature parts of the given image include a candidate for a contour of a face pattern and/or a candidate for a mouth pattern region.
-
62. A method for discriminating an image as defined in claim 61 wherein the detection of the candidate for the contour of a face pattern is carried out by:
-
detecting the contour components, which are contained in the given image, from the given image by taking the axis of symmetry as reference, comparing the detected contour components with contours of a plurality of face patterns directed to different directions, the contours having been learned as templates in advance, and making a judgment as to whether components corresponding to the detected contour components are or are not included in the contours of face patterns, which have been learned as templates.
-
-
63. A method for discriminating an image as defined in claim 62 wherein the learning of the contours of face patterns is carried out by:
-
feeding the information about contours of upward-, downward-, leftward-, rightward-, and front-directed face patterns into a large number of cells of a neural network, causing a cell, which best matches with the contour information having been fed into the neural network, to learn said contour information, for neighboring cells that fall within a neighboring region having a predetermined range and neighboring with the cell, which best matches with the contour information having been fed into the neural network, carrying out spatial interpolating operations from the contour information, which has been fed into the neural network, and contour information, which is other than the contour information having been fed into the neural network and which has been learned by a cell that is among the large number of the cells of the neural network and that is other than the cell best matching with the contour information having been fed into the neural network, and thereby carrying out the self-organizing learning operations on information about contours of a large number of face patterns directed to different directions.
-
-
64. A method for discriminating an image as defined in claim 63 wherein the information about contours of upward-, downward-, leftward-, rightward-, and front-directed face patterns is obtained by averaging the information about contours of a plurality of face patterns.
-
65. A method for discriminating an image as defined in claim 64 wherein the self-organizing learning operations are carried out by feeding the information about the contour of the front-directed face pattern, which has been created by carrying out transformation with polar coordinates transformation, the center point between eye patterns in the information about the contour of the face pattern being taken as a pole, and the information about the contours of the upward-, downward-, leftward-, and rightward-directed face patterns, which has been created by carrying out the transformation with the polar coordinates transformation, the pole being moved upwardly, downwardly, leftwardly, and rightwardly, into the neural network, and
the judgment as to whether components corresponding to the detected contour components of the given image are or are not included in the contours of face patterns, which have been learned as templates, is made by transforming the contour components of the given image with the polar coordinates transformation, in which the axis of symmetry is taken as reference and the center point between candidates for eye patterns is taken as the pole, and thereafter making a judgment as to whether the contour components of the given image transformed with the polar coordinates transformation are or are not contained in the results of the self-organizing learning operations. -
66. A method for discriminating an image as defined in claim 65 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula
u = Σ(i=1 to n) wi·yi − th
where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
67. A method for discriminating an image as defined in claim 64 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula
u = Σ(i=1 to n) wi·yi − th
where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
68. A method for discriminating an image as defined in claim 63 wherein the self-organizing learning operations are carried out by feeding the information about the contour of the front-directed face pattern, which has been created by carrying out transformation with polar coordinates transformation, the center point between eye patterns in the information about the contour of the face pattern being taken as a pole, and the information about the contours of the upward-, downward-, leftward-, and rightward-directed face patterns, which has been created by carrying out the transformation with the polar coordinates transformation, the pole being moved upwardly, downwardly, leftwardly, and rightwardly, into the neural network, and
the judgment as to whether components corresponding to the detected contour components of the given image are or are not included in the contours of face patterns, which have been learned as templates, is made by transforming the contour components of the given image with the polar coordinates transformation, in which the axis of symmetry is taken as reference and the center point between candidates for eye patterns is taken as the pole, and thereafter making a judgment as to whether the contour components of the given image transformed with the polar coordinates transformation are or are not contained in the results of the self-organizing learning operations. -
69. A method for discriminating an image as defined in claim 68 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula
u = Σ(i=1 to n) wi·yi − th
where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
70. A method for discriminating an image as defined in claim 63 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula
u = Σ(i=1 to n) wi·yi − th
where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
71. A method for discriminating an image as defined in claim 62 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula
u = Σ(i=1 to n) wi·yi − th
where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
72. A method for discriminating an image as defined in claim 61 wherein the detection of the candidate for the mouth pattern region is carried out by:
-
transforming the given image to a YIQ base, and detecting the components, which match with the shape of the mouth pattern most easily in a Q component image that is among the image having been transformed to the YIQ base, said components being detected within a predetermined range with reference to the axis of symmetry and/or the contour components of the given image.
-
-
73. A method for discriminating an image as defined in claim 72 wherein the detection of the candidate for the mouth pattern region is carried out by transmitting the Q component image, which has been transformed with the polar coordinates transformation by taking the center point between eye patterns as the pole, as a signal weighted with a synaptic weights pattern for detecting the mouth pattern region, which synaptic weights pattern has been calculated in accordance with a DOG function.
-
74. A method for discriminating an image as defined in claim 73 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula
u = Σ(i=1 to n) wi·yi − th
where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
75. A method for discriminating an image as defined in claim 72 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula
u = Σ(i=1 to n) wi·yi − th
where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
76. A method for discriminating an image as defined in claim 61 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula
u = Σ(i=1 to n) wi·yi − th
where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
77. A method for discriminating an image as defined in claim 60 wherein the judgment as to whether the given image is or is not a face image is made by:
-
carrying out a calculation represented by the formula
u = Σ(i=1 to n) wi·yi − th
where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,
when u > 0, judging that the given image is a face image, and
when u ≦ 0, judging that the given image is not a face image.
-
-
78. A method for discriminating an image as defined in claim 5 wherein the feature parts of the given image include a candidate for a contour of a face pattern and/or a candidate for a mouth pattern region.
-
79. A method for discriminating an image as defined in claim 78 wherein the detection of the candidate for the contour of a face pattern is carried out by:
-
detecting the contour components, which are contained in the given image, from the given image by taking the axis of symmetry as reference, comparing the detected contour components with contours of a plurality of face patterns directed to different directions, the contours having been learned as templates in advance, and making a judgment as to whether components corresponding to the detected contour components are or are not included in the contours of face patterns, which have been learned as templates.
-
-
80. A method for discriminating an image as defined in claim 79 wherein the learning of the contours of face patterns is carried out by:
-
feeding the information about contours of upward-, downward-, leftward-, rightward-, and front-directed face patterns into a large number of cells of a neural network, causing a cell, which best matches with the contour information having been fed into the neural network, to learn said contour information, for neighboring cells that fall within a neighboring region having a predetermined range and neighboring with the cell, which best matches with the contour information having been fed into the neural network, carrying out spatial interpolating operations from the contour information, which has been fed into the neural network, and contour information, which is other than the contour information having been fed into the neural network and which has been learned by a cell that is among the large number of the cells of the neural network and that is other than the cell best matching with the contour information having been fed into the neural network, and thereby carrying out the self-organizing learning operations on information about contours of a large number of face patterns directed to different directions.
81. A method for discriminating an image as defined in claim 80 wherein the information about contours of upward-, downward-, leftward-, rightward-, and front-directed face patterns is obtained by averaging the information about contours of a plurality of face patterns.
82. A method for discriminating an image as defined in claim 81 wherein the self-organizing learning operations are carried out by feeding the information about the contour of the front-directed face pattern, which has been created by carrying out transformation with polar coordinates transformation, the center point between eye patterns in the information about the contour of the face pattern being taken as a pole, and the information about the contours of the upward-, downward-, leftward-, and rightward-directed face patterns, which has been created by carrying out the transformation with the polar coordinates transformation, the pole being moved upwardly, downwardly, leftwardly, and rightwardly, into the neural network, and
the judgment as to whether components corresponding to the detected contour components of the given image are or are not included in the contours of face patterns, which have been learned as templates, is made by transforming the contour components of the given image with the polar coordinates transformation, in which the axis of symmetry is taken as reference and the center point between candidates for eye patterns is taken as the pole, and thereafter making a judgment as to whether the contour components of the given image transformed with the polar coordinates transformation are or are not contained in the results of the self-organizing learning operations.
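The polar coordinates transformation used in claims 82 and 85 can be sketched as resampling the image on an (r, θ) grid about a pole such as the center point between eye patterns. The grid sizes and the nearest-neighbour sampling are illustrative assumptions.

```python
import numpy as np

def polar_transform(img, pole, n_r=32, n_theta=64):
    """Map a grayscale image onto an (r, theta) grid about `pole`
    (row, col), e.g. the center point between eye patterns."""
    h, w = img.shape
    cy, cx = pole
    r_max = np.hypot(max(cy, h - cy), max(cx, w - cx))
    out = np.zeros((n_r, n_theta), dtype=img.dtype)
    for ri, r in enumerate(np.linspace(0.0, r_max, n_r)):
        for ti, t in enumerate(np.linspace(0.0, 2 * np.pi, n_theta,
                                           endpoint=False)):
            y = int(round(cy + r * np.sin(t)))   # nearest-neighbour sample
            x = int(round(cx + r * np.cos(t)))
            if 0 <= y < h and 0 <= x < w:
                out[ri, ti] = img[y, x]
    return out
```

Moving the pole upwardly, downwardly, leftwardly, or rightwardly before transforming, as claim 82 recites, yields the transformed contours of the correspondingly directed face patterns.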
83. A method for discriminating an image as defined in claim 82 wherein the judgment as to whether the given image is or is not a face image is made by:

carrying out a calculation represented by the formula

u = Σ(i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,

when u > 0, judging that the given image is a face image, and

when u ≦ 0, judging that the given image is not a face image.
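The threshold judgment recited in claims 83 through 95 reduces to a single weighted sum over the recited values. A minimal sketch, assuming the calculation is u = Σ wi·yi − th as implied by the variable definitions:

```python
def judge_face(y, w, th):
    """Compute u = sum_i(w_i * y_i) - th over the information values y
    and connection weights w; face image if u > 0, not a face if u <= 0."""
    u = sum(wi * yi for wi, yi in zip(w, y)) - th
    return u > 0
```

The weights wi encode the degree of importance of each piece of information (eye-center response, symmetry-axis correlation, feature-part values), so a strongly weighted feature can carry the judgment on its own.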
84. A method for discriminating an image as defined in claim 81 wherein the judgment as to whether the given image is or is not a face image is made by:

carrying out a calculation represented by the formula

u = Σ(i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,

when u > 0, judging that the given image is a face image, and

when u ≦ 0, judging that the given image is not a face image.
85. A method for discriminating an image as defined in claim 80 wherein the self-organizing learning operations are carried out by feeding the information about the contour of the front-directed face pattern, which has been created by carrying out transformation with polar coordinates transformation, the center point between eye patterns in the information about the contour of the face pattern being taken as a pole, and the information about the contours of the upward-, downward-, leftward-, and rightward-directed face patterns, which has been created by carrying out the transformation with the polar coordinates transformation, the pole being moved upwardly, downwardly, leftwardly, and rightwardly, into the neural network, and
the judgment as to whether components corresponding to the detected contour components of the given image are or are not included in the contours of face patterns, which have been learned as templates, is made by transforming the contour components of the given image with the polar coordinates transformation, in which the axis of symmetry is taken as reference and the center point between candidates for eye patterns is taken as the pole, and thereafter making a judgment as to whether the contour components of the given image transformed with the polar coordinates transformation are or are not contained in the results of the self-organizing learning operations.
86. A method for discriminating an image as defined in claim 85 wherein the judgment as to whether the given image is or is not a face image is made by:

carrying out a calculation represented by the formula

u = Σ(i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,

when u > 0, judging that the given image is a face image, and

when u ≦ 0, judging that the given image is not a face image.
87. A method for discriminating an image as defined in claim 80 wherein the judgment as to whether the given image is or is not a face image is made by:

carrying out a calculation represented by the formula

u = Σ(i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,

when u > 0, judging that the given image is a face image, and

when u ≦ 0, judging that the given image is not a face image.
88. A method for discriminating an image as defined in claim 79 wherein the judgment as to whether the given image is or is not a face image is made by:

carrying out a calculation represented by the formula

u = Σ(i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,

when u > 0, judging that the given image is a face image, and

when u ≦ 0, judging that the given image is not a face image.
89. A method for discriminating an image as defined in claim 78 wherein the detection of the candidate for the mouth pattern region is carried out by:
transforming the given image to a YIQ base, and detecting the components, which match with the shape of the mouth pattern most easily in a Q component image that is among the image having been transformed to the YIQ base, said components being detected within a predetermined range with reference to the axis of symmetry and/or the contour components of the given image.
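The YIQ transformation of claim 89 can be sketched with the standard (rounded) NTSC RGB-to-YIQ coefficients; the Q plane responds strongly to the reddish hue of a mouth region, which is why the claim searches it for mouth-shaped components. The helper name is an assumption.

```python
import numpy as np

# Rounded NTSC RGB -> YIQ conversion matrix (rows: Y, I, Q)
RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])

def q_component(rgb):
    """Return the Q plane of an H x W x 3 RGB image (values in [0, 1]);
    reddish regions such as lips come out positive, greens negative."""
    return rgb @ RGB_TO_YIQ[2]
```

Restricting the search for mouth-shaped components to a predetermined range below the eye candidates, with reference to the axis of symmetry, then narrows this Q-plane response to the mouth pattern region.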
90. A method for discriminating an image as defined in claim 89 wherein the detection of the candidate for the mouth pattern region is carried out by transmitting the Q component image, which has been transformed with the polar coordinates transformation by taking the center point between eye patterns as the pole, as a signal weighted with a synaptic weights pattern for detecting the mouth pattern region, which synaptic weights pattern has been calculated in accordance with a DOG function.
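The synaptic weights pattern calculated from a DOG (difference-of-Gaussians) function in claim 90 can be sketched as a centre-surround kernel: an excitatory centre Gaussian minus a broader inhibitory surround Gaussian. The kernel size and sigma values below are illustrative assumptions.

```python
import numpy as np

def dog_weights(size=9, sigma1=1.0, sigma2=2.0):
    """Synaptic weights pattern from a DOG function: a narrow excitatory
    Gaussian (sigma1) minus a broad inhibitory Gaussian (sigma2)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    gauss = lambda s: np.exp(-r2 / (2 * s ** 2)) / (2 * np.pi * s ** 2)
    return gauss(sigma1) - gauss(sigma2)
```

Weighting the polar-transformed Q component image with such a pattern emphasizes a compact blob of mouth-like Q response against its surround, which is the detection the claim describes.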
91. A method for discriminating an image as defined in claim 90 wherein the judgment as to whether the given image is or is not a face image is made by:

carrying out a calculation represented by the formula

u = Σ(i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,

when u > 0, judging that the given image is a face image, and

when u ≦ 0, judging that the given image is not a face image.
92. A method for discriminating an image as defined in claim 89 wherein the judgment as to whether the given image is or is not a face image is made by:

carrying out a calculation represented by the formula

u = Σ(i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,

when u > 0, judging that the given image is a face image, and

when u ≦ 0, judging that the given image is not a face image.
93. A method for discriminating an image as defined in claim 78 wherein the judgment as to whether the given image is or is not a face image is made by:

carrying out a calculation represented by the formula

u = Σ(i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,

when u > 0, judging that the given image is a face image, and

when u ≦ 0, judging that the given image is not a face image.
94. A method for discriminating an image as defined in claim 5 wherein the judgment as to whether the given image is or is not a face image is made by:

carrying out a calculation represented by the formula

u = Σ(i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,

when u > 0, judging that the given image is a face image, and

when u ≦ 0, judging that the given image is not a face image.
95. A method for discriminating an image as defined in claim 4 wherein the judgment as to whether the given image is or is not a face image is made by:

carrying out a calculation represented by the formula

u = Σ(i=1 to n) wi·yi − th

where yi (i=1 to n, wherein n represents the number of pieces of information) represents the response value of the center point between candidates for eye patterns, the correlation value of the axis of symmetry, and the value of information concerning the feature parts, wi (i=1 to n, wherein n represents the number of pieces of information) represents the weight of connection determined in accordance with the degree of importance of each of said values of the information, and th represents the threshold value,

when u > 0, judging that the given image is a face image, and

when u ≦ 0, judging that the given image is not a face image.
Specification