Method for learning by a neural network including extracting a target object image for which learning operations are to be carried out
Abstract
A method for recognizing an object image comprises the steps of extracting a candidate for a predetermined object image from an image, and making a judgment as to whether the extracted candidate for the predetermined object image is or is not the predetermined object image. The candidate for the predetermined object image is extracted by causing the center point of a view window, which has a predetermined size, to travel to the position of the candidate for the predetermined object image, and determining an extraction area in accordance with the size and/or the shape of the candidate for the predetermined object image, the center point of the view window being taken as a reference during the determination of the extraction area. A learning method for a neural network comprises the steps of extracting a target object image, for which learning operations are to be carried out, from an image, feeding a signal, which represents the extracted target object image, into a neural network, and carrying out the learning operations of the neural network in accordance with the input target object image.
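The two learning-method steps summarized in the abstract, extracting the target object image and feeding a signal representing it into the network, can be sketched in Python. This is a minimal illustration, not the patented implementation: the view-window center is assumed to have already traveled to the target, and `network.train` is a hypothetical interface standing in for whatever learning operations the network carries out.

```python
import numpy as np

def extract_target(image, window_center, window_size):
    """Cut out the extraction area around the view-window center.

    The window center is assumed to have already traveled to the
    target object image; the area is clipped to the image bounds.
    """
    half = window_size // 2
    cy, cx = window_center
    top, left = max(cy - half, 0), max(cx - half, 0)
    bottom = min(cy + half, image.shape[0])
    right = min(cx + half, image.shape[1])
    return image[top:bottom, left:right]

def learn_from_image(image, window_center, window_size, network):
    """Feed a signal representing the extracted target image into the network."""
    target = extract_target(image, window_center, window_size)
    signal = target.astype(np.float32).ravel() / 255.0  # flatten to an input signal
    network.train(signal)  # hypothetical learning interface
    return signal
```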
29 Claims
1. A learning method for a neural network, comprising the steps of:
i) automatically determining the location, within an image, of a target object image for which learning operations are to be carried out;
ii) extracting said target object image from said image, iii) feeding a signal, which represents the extracted target object image, into a neural network, and iv) carrying out the learning operations of said neural network in accordance with said input target object image. - View Dependent Claims (2)
a) cutting out a first image, which falls in a region inside of a view window having a predetermined size, from said image, b) detecting a contour line of an object, which is embedded in said cut-out first image, c) after a predetermined time has elapsed, cutting out a second image, which falls in the region inside of said view window, from said image, d) detecting a contour line of an object, which is embedded in said cut-out second image, e) calculating the difference between said contour line, which has been detected from said first image, and said contour line, which has been detected from said second image, f) detecting a movement of a background from said calculated difference, g) subtracting said detected movement of said background from said image, an object, which shows a movement different from the movement of said background, being thereby detected, h) recognizing said object, which shows a movement different from the movement of said background, as said target object image, i) detecting a vector directed towards said target object image as a first travel vector, j) detecting a contour line of said target object image, which line extends in a predetermined direction, from said cut-out first image, k) extracting all of components of said detected contour line, which are tilted at a predetermined angle with respect to circumferential directions of concentric circles surrounding the center point of said view window, from said detected contour line of said target object image, l) detecting azimuths and intensities of said extracted components with respect to the center point of said view window, the azimuths and the intensities being detected as second azimuth vectors, m) composing a vector from said second azimuth vectors, a second travel vector being thereby determined, n) extracting a region, which approximately coincides in color with said target object image, from said cut-out first image, o) detecting an azimuth and a distance of said extracted region 
with respect to the center point of said view window, p) detecting said azimuth and said distance as a third travel vector, q) composing a vector from said first, second, and third travel vectors, the composed vector being taken as a gradient vector of a potential field in a Cartesian plane having its origin at the center point of said view window, r) scanning the whole area of said image with said view window, thereby calculating the gradient vectors of the potential field with respect to the whole area of said image, s) creating a map of the potential field of the whole area of said image from the gradient vectors of the potential field, which have been calculated with respect to the whole area of said image, and t) determining an extraction area in accordance with the size and/or the shape of said target object image, a minimum point of the potential in said map being taken as a reference during the determination of said extraction area.
3. A learning method for a neural network, comprising the steps of:
i) automatically extracting a target object image, for which learning operations are to be carried out, from an image, ii) feeding a signal, which represents the extracted target object image, into a neural network, and iii) carrying out the learning operations of said neural network in accordance with said input target object image, wherein the extraction of said target object image, for which learning operations are to be carried out, is carried out by causing the center point of a view window, which has a predetermined size, to travel to the position of said target object image, and determining the dimensions of an extraction area in accordance with the size and/or the shape of said target object image, the center point of said view window being taken as a reference during the determination of said extraction area.
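The extraction-area determination recited in claim 3, sizing the area from the target's size and/or shape with the view-window center as reference, can be sketched as a bounding box expressed relative to that center. All names here are illustrative; the claim does not prescribe a bounding box specifically.

```python
import numpy as np

def extraction_area(target_mask, window_center, margin=1):
    """Determine an extraction area from the target's size and shape.

    target_mask marks the target's pixels; the returned box is the
    target's bounding box plus a margin, expressed as offsets from the
    view-window center (the reference during the determination).
    """
    ys, xs = np.nonzero(target_mask)
    cy, cx = window_center
    top, bottom = ys.min() - margin, ys.max() + 1 + margin
    left, right = xs.min() - margin, xs.max() + 1 + margin
    return top - cy, bottom - cy, left - cx, right - cx
```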
4. A learning method for a neural network, comprising the steps of:
i) extracting a target object image, for which learning operations are to be carried out, from an image, ii) feeding a signal, which represents the extracted target object image, into a neural network, and iii) carrying out the learning operations of said neural network in accordance with said input target object image wherein the extraction of said target object image is carried out by cutting out an image, which falls in a region inside of a view window having a predetermined size, from said image, detecting a contour line of said target object image, which line extends in a predetermined direction, from said cut-out image, extracting all of components of said detected contour line, which are tilted at a predetermined angle with respect to circumferential directions of concentric circles surrounding the center point of said view window, from said detected contour line of said target object image, detecting azimuths and intensities of said extracted components with respect to the center point of said view window, the azimuths and the intensities being detected as azimuth vectors, composing a vector from said azimuth vectors, a vector for the travel of said view window being thereby determined, causing the center point of said view window to travel in accordance with said vector for the travel of said view window, and determining an extraction area in accordance with the size and/or the shape of said target object image, the center point of said view window, which has thus been caused to travel, being taken as a reference during the determination of said extraction area. - View Dependent Claims (5)
the extraction of said components of said detected contour line is carried out by extracting all of contour line components, which are tilted at a predetermined angle with respect to an annular direction in the complex-log mapped plane, from the contour line, which has been detected in said complex-log mapped image, and said azimuth vectors are detected by detecting azimuths and intensities of the extracted contour line components in said complex-log mapped plane.
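The complex-log mapped plane referred to in this dependent claim is the log-polar transform: a point taken as a complex number z relative to the view-window center maps to log z, so concentric circles around the center become lines of constant imaginary coordinate (the "annular direction"). A minimal sketch, with illustrative names:

```python
import numpy as np

def complex_log_map(points, center):
    """Map image points into the complex-log (log-polar) plane.

    Each point, taken as a complex number z relative to the view-window
    center, maps to log(z) = log|z| + i*arg(z): the first output column
    is the log radial distance, the second the azimuth. Concentric
    circles around the center thus become lines in the mapped plane.
    """
    cy, cx = center
    z = (points[:, 1] - cx) + 1j * (points[:, 0] - cy)  # x + iy, relative to center
    mapped = np.log(z)  # z = 0 (the center point itself) must be excluded
    return np.column_stack([mapped.real, mapped.imag])
```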
6. A learning method for a neural network, comprising the steps of:
i) extracting a target object image, for which learning operations are to be carried out, from an image, ii) feeding a signal, which represents the extracted target object image, into a neural network, and iii) carrying out the learning operations of said neural network in accordance with said input target object image, wherein the extraction of said target object image is carried out by:
cutting out an image, which falls in a region inside of a view window having a predetermined size, from said image, extracting a region, which approximately coincides in color with said target object image, from said cut-out image, detecting an azimuth and a distance of said extracted region with respect to the center point of said view window, detecting from said azimuth and said distance a vector for a travel of said view window, causing the center point of said view window to travel in accordance with said vector for the travel of said view window, and determining an extraction area in accordance with the size and/or the shape of said target object image, the center point of said view window, which has thus been caused to travel, being taken as a reference during the determination of said extraction area. - View Dependent Claims (7, 8)
a region, which exhibits a high degree of coincidence in color with said target object image, and a region, which exhibits a low degree of coincidence in color with said target object image and is located at a position spaced apart from said region exhibiting a high degree of coincidence in color with said target object image, are caused to compete with each other, said region, which exhibits a low degree of coincidence in color with said target object image, being thereby erased, regions, which exhibit a high degree of coincidence in color with said target object image and are located at positions spaced apart from each other, are caused to compete with each other, a region exhibiting a high degree of coincidence in color with said target object image, which region has a size and a shape appropriate for the region to be selected, is kept unerased, whereas a region exhibiting a high degree of coincidence in color with said target object image, which region has a size and a shape inappropriate for the region to be selected, is erased, whereby a region, which is most appropriate in the region inside of said view window, is selected as a target object image region, and an azimuth and a distance of said selected object image region are detected with respect to the center point of said view window.
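The competition among color-coincident regions recited in claims 7 and 8 can be sketched, in heavily simplified form, as erasing candidate regions of inappropriate size and keeping the highest-coincidence survivor, then reporting its azimuth and distance from the window center. The connected-component labels and per-region coincidence scores are assumed to have been computed beforehand; all names are illustrative.

```python
import numpy as np

def select_color_region(labels, coincidence, min_size, max_size, center):
    """Competition among color-coincident regions (simplified sketch).

    labels assigns a region id to each pixel (0 = background);
    coincidence maps each region id to its degree of color coincidence
    with the target. Regions of inappropriate size are erased; among
    the survivors, the highest-coincidence region is selected and its
    azimuth and distance from the view-window center are detected.
    """
    best, best_score = None, -np.inf
    for region in np.unique(labels):
        if region == 0:
            continue
        size = np.count_nonzero(labels == region)
        if not (min_size <= size <= max_size):
            continue  # erased: size inappropriate for the region to be selected
        if coincidence[region] > best_score:
            best, best_score = int(region), coincidence[region]
    if best is None:
        return None
    ys, xs = np.nonzero(labels == best)
    dy, dx = ys.mean() - center[0], xs.mean() - center[1]
    return best, np.arctan2(dy, dx), np.hypot(dy, dx)  # region, azimuth, distance
```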
9. A learning method for a neural network, comprising the steps of:
i) extracting a target object image, for which learning operations are to be carried out, from an image, ii) feeding a signal, which represents the extracted target object image, into a neural network, and iii) carrying out the learning operations of said neural network in accordance with said input target object image, wherein the extraction of said target object image is carried out by:
a) setting a view window, which has a predetermined size, on said image, cutting out a plurality of images, which fall in a region inside of said view window, at a plurality of times having a predetermined time difference therebetween, detecting contour lines of object images, which are embedded in the plurality of said cut-out images, calculating the difference between images, which represent said detected contour lines, and detecting a movement of said image in an in-plane parallel direction in the region inside of said view window, the movement being detected from said calculated difference, b) detecting contour lines of object images, which are embedded in the plurality of said cut-out images, said contour lines extending in a radial direction with respect to the center point of said view window, calculating the difference between images, which represent said detected contour lines extending in the radial direction, and detecting a movement of said image in an in-plane rotating direction in the region inside of said view window, the movement being detected from said calculated difference, c) detecting contour lines of said object images, which are embedded in the plurality of said cut-out images, said contour lines extending in an annular direction, calculating the difference between images, which represent said detected contour lines extending in the annular direction, and detecting a movement of said image in the radial direction in the region inside of said view window, the movement being detected from said calculated difference, d) compensating for components of a movement of a background in said cut-out images, which fall in the region inside of said view window, in accordance with said detected movement of said image in the in-plane parallel direction, in the in-plane rotating direction, and/or in the radial direction, a plurality of images, in which the components of the movement of the background have been compensated for, being thereby obtained, e) 
calculating the difference between the plurality of said images, in which the components of the movement of the background have been compensated for, a contour line of an object, which shows a movement different from the movement of the background, being thereby detected, f) extracting all of components of said detected contour line of said object showing a movement different from the movement of the background, which are tilted at a predetermined angle with respect to circumferential directions of concentric circles surrounding the center point of said view window, from said detected contour line of said object showing a movement different from the movement of the background, g) detecting azimuths and intensities of said extracted components of said detected contour line of said object, which shows the movement different from the movement of the background, with respect to the center point of said view window, the azimuths and the intensities being detected as azimuth vectors, h) composing a vector from said azimuth vectors, a vector for a travel of said view window being thereby determined, i) causing the center point of said view window to travel in a direction heading towards said object in accordance with said vector for the travel of said view window, and j) determining an extraction area, from which the target object image showing a movement with respect to the background is to be extracted, in accordance with the size and/or the shape of said object, the center point of said view window, which has thus been caused to travel, being taken as a reference during the determination of said extraction area. - View Dependent Claims (10, 11)
the extraction of said components of said detected contour line of said object is carried out by extracting all of contour line components, which are tilted at a predetermined angle with respect to an annular direction in the complex-log mapped plane, from the contour line, which has been detected in said complex-log mapped image, and said azimuth vectors are detected by detecting azimuths and intensities of the extracted contour line components in said complex-log mapped plane.
12. A learning method for a neural network, comprising the steps of:
i) extracting a target object image, for which learning operations are to be carried out, from an image, ii) feeding a signal, which represents the extracted target object image, into a neural network, and iii) carrying out the learning operations of said neural network in accordance with said input target object image, wherein the extraction of said target object image is carried out by:
a) cutting out a first image, which falls in a region inside of a view window having a predetermined size, from said image, b) detecting a contour line of an object, which is embedded in said cut-out first image, c) after a predetermined time has elapsed, cutting out a second image, which falls in the region inside of said view window, from said image, d) detecting a contour line of an object, which is embedded in said cut-out second image, e) calculating the difference between said contour line, which has been detected from said first image, and said contour line, which has been detected from said second image, f) detecting a movement of a background from said calculated difference, g) subtracting said detected movement of said background from said image, an object, which shows a movement different from the movement of said background, being thereby detected, h) recognizing said object, which shows a movement different from the movement of said background, as said target object image, i) detecting a vector directed towards said target object image as a vector for a travel of the view window, causing the center point of said view window to travel in accordance with said vector for the travel of said view window, and determining an extraction area in accordance with the size and/or the shape of said target object image, the center point of said view window, which has thus been caused to travel, being taken as a reference during the determination of said extraction area.
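Steps a) through i) of claim 12 can be sketched as follows, assuming for illustration a uniform integer background shift (the claim itself does not restrict the background motion model): the shift is estimated from the two contour images, compensated for, and the residual difference locates the object moving differently from the background.

```python
import numpy as np

def estimate_shift(a, b, max_shift=3):
    """Brute-force the integer shift of contour image b relative to a."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            undone = np.roll(b, (-dy, -dx), axis=(0, 1))  # undo the candidate shift
            err = np.abs(undone.astype(int) - a.astype(int)).sum()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def moving_object_position(contour_t0, contour_t1):
    """Locate an object whose movement differs from the background's.

    The background movement is estimated as the dominant shift between
    the two contour images; after subtracting it, the residual contour
    difference is attributed to the target object, and the centroid the
    view window should travel toward is returned.
    """
    dy, dx = estimate_shift(contour_t0, contour_t1)
    compensated = np.roll(contour_t1, (-dy, -dx), axis=(0, 1))
    residual = np.abs(compensated.astype(int) - contour_t0.astype(int))
    ys, xs = np.nonzero(residual)
    if len(ys) == 0:
        return None  # nothing moves relative to the background
    return ys.mean(), xs.mean()
```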
13. A learning method for a neural network, comprising the steps of:
i) extracting a target object image, for which learning operations are to be carried out, from an image, ii) feeding a signal, which represents the extracted target object image, into a neural network, and iii) carrying out the learning operations of said neural network in accordance with said input target object image, wherein the extraction of said target object image is carried out by:
a) cutting out an image, which falls in a region inside of a view window having a predetermined size, from said image, b) detecting a contour line of said target object image, which line extends in a predetermined direction, from said cut-out image, c) extracting all of components of said detected contour line, which are tilted at a predetermined angle with respect to circumferential directions of concentric circles surrounding the center point of said view window, from said detected contour line of said target object image, d) detecting azimuths and intensities of said extracted components with respect to the center point of said view window, the azimuths and the intensities being detected as first azimuth vectors, e) composing a vector from said first azimuth vectors, a first travel vector being thereby determined, f) extracting a region, which approximately coincides in color with said target object image, from said cut-out image, g) detecting an azimuth and a distance of said extracted region with respect to the center point of said view window, h) detecting said azimuth and said distance as a second travel vector, i) composing a vector from said first and second travel vectors, a vector for a travel of said view window being thereby determined, j) causing the center point of said view window to travel in accordance with said vector for the travel of said view window, and determining an extraction area in accordance with the size and/or the shape of said target object image, the center point of said view window, which has thus been caused to travel, being taken as a reference during the determination of said extraction area.
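Steps d), e), and i) of claim 13 compose a single travel vector from azimuth vectors. Interpreting each azimuth vector as a polar vector (azimuth angle, intensity magnitude), the composition is a vector sum; a sketch with illustrative names:

```python
import numpy as np

def compose_travel_vector(azimuths, intensities):
    """Compose a view-window travel vector from azimuth vectors.

    Each detected component contributes a vector with the given azimuth
    (angle about the window center) and intensity (magnitude); the
    composed travel vector is their sum, in (dy, dx) form.
    """
    dy = float(np.sum(intensities * np.sin(azimuths)))
    dx = float(np.sum(intensities * np.cos(azimuths)))
    return dy, dx
```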
14. A learning method for a neural network, comprising the steps of:
i) extracting a target object image, for which learning operations are to be carried out, from an image, ii) feeding a signal, which represents the extracted target object image, into a neural network, and iii) carrying out the learning operations of said neural network in accordance with said input target object image, wherein the extraction of said target object image is carried out by:
a) setting a view window, which has a predetermined size, on said image, said image being an image including a movement, cutting out a plurality of images, which fall in a region inside of said view window, at a plurality of times having a predetermined time difference therebetween, and detecting a contour line of said target object image, which line extends in a predetermined direction, from one of the plurality of said cut-out images, b) extracting all of components of said detected contour line, which are tilted at a predetermined angle with respect to circumferential directions of concentric circles surrounding the center point of said view window, from said detected contour line of said target object image, c) detecting azimuths and intensities of said extracted components with respect to the center point of said view window, the azimuths and the intensities being detected as first azimuth vectors, d) composing a vector from said first azimuth vectors, a first travel vector being thereby determined, e) detecting contour lines of object images, which are embedded in the plurality of said cut-out images, calculating the difference between images, which represent said detected contour lines, and detecting a movement of said image in an in-plane parallel direction in the region inside of said view window, the movement being detected from said calculated difference, f) detecting contour lines of object images, which are embedded in the plurality of said cut-out images, said contour lines extending in a radial direction with respect to the center point of said view window, calculating the difference between images, which represent said detected contour lines extending in the radial direction, and detecting a movement of said image in an in-plane rotating direction in the region inside of said view window, the movement being detected from said calculated difference, g) detecting contour lines of said object images, which are embedded in the plurality of said cut-out 
images, said contour lines extending in an annular direction, calculating the difference between images, which represent said detected contour lines extending in the annular direction, and detecting a movement of said image in the radial direction in the region inside of said view window, the movement being detected from said calculated difference, h) compensating for components of a movement of a background in said cut-out images, which fall in the region inside of said view window, in accordance with said detected movement of said image in the in-plane parallel direction, in the in-plane rotating direction, and/or in the radial direction, a plurality of images, in which the components of the movement of the background have been compensated for, being thereby obtained, i) calculating the difference between the plurality of said images, in which the components of the movement of the background have been compensated for, a contour line of an object, which shows a movement different from the movement of the background, being thereby detected, j) extracting all of components of said detected contour line of said object showing a movement different from the movement of the background, which are tilted at a predetermined angle with respect to circumferential directions of concentric circles surrounding the center point of said view window, from said detected contour line of said object showing a movement different from the movement of the background, k) detecting azimuths and intensities of said extracted components of said detected contour line of said object, which shows the movement different from the movement of the background, with respect to the center point of said view window, the azimuths and the intensities being detected as second azimuth vectors, l) composing a vector from said second azimuth vectors, a second travel vector being thereby determined, m) composing a vector from said first and second travel vectors, a vector for a travel of said view window 
being thereby determined, n) causing the center point of said view window to travel in accordance with said vector for the travel of said view window, and o) determining an extraction area in accordance with the size and/or the shape of said target object image, the center point of said view window, which has thus been caused to travel, being taken as a reference during the determination of said extraction area.
15. A learning method for a neural network, comprising the steps of:
i) extracting a target object image, for which learning operations are to be carried out, from an image, ii) feeding a signal, which represents the extracted target object image, into a neural network, and iii) carrying out the learning operations of said neural network in accordance with said input target object image, wherein the extraction of said target object image is carried out by:
a) setting a view window, which has a predetermined size, on said image, said image being an image including a movement, cutting out a plurality of images, which fall in a region inside of said view window, at a plurality of times having a predetermined time difference therebetween, and detecting a contour line of said target object image, which line extends in a predetermined direction, from one of the plurality of said cut-out images, b) extracting all of components of said detected contour line, which are tilted at a predetermined angle with respect to circumferential directions of concentric circles surrounding the center point of said view window, from said detected contour line of said target object image, c) detecting azimuths and intensities of said extracted components with respect to the center point of said view window, the azimuths and the intensities being detected as first azimuth vectors, d) composing a vector from said first azimuth vectors, a first travel vector being thereby determined, e) detecting contour lines of object images, which are embedded in the plurality of said cut-out images, calculating the difference between images, which represent said detected contour lines, and detecting a movement of said image in an in-plane parallel direction in the region inside of said view window, the movement being detected from said calculated difference, f) detecting contour lines of object images, which are embedded in the plurality of said cut-out images, said contour lines extending in a radial direction with respect to the center point of said view window, calculating the difference between images, which represent said detected contour lines extending in the radial direction, and detecting a movement of said image in an in-plane rotating direction in the region inside of said view window, the movement being detected from said calculated difference, g) detecting contour lines of said object images, which are embedded in the plurality of said cut-out 
images, said contour lines extending in an annular direction, calculating the difference between images, which represent said detected contour lines extending in the annular direction, and detecting a movement of said image in the radial direction in the region inside of said view window, the movement being detected from said calculated difference, h) compensating for components of a movement of a background in said cut-out images, which fall in the region inside of said view window, in accordance with said detected movement of said image in the in-plane parallel direction, in the in-plane rotating direction, and/or in the radial direction, a plurality of images, in which the components of the movement of the background have been compensated for, being thereby obtained, i) calculating the difference between the plurality of said images, in which the components of the movement of the background have been compensated for, a contour line of an object, which shows a movement different from the movement of the background, being thereby detected, j) extracting all of components of said detected contour line of said object showing a movement different from the movement of the background, which are tilted at a predetermined angle with respect to circumferential directions of concentric circles surrounding the center point of said view window, from said detected contour line of said object showing a movement different from the movement of the background, k) detecting azimuths and intensities of said extracted components of said detected contour line of said object, which shows the movement different from the movement of the background, with respect to the center point of said view window, the azimuths and the intensities being detected as second azimuth vectors, l) composing a vector from said second azimuth vectors, a second travel vector being thereby determined, m) extracting a region, which approximately coincides in color with said target object image, from one of the 
plurality of said cut-out images, n) detecting an azimuth and a distance of said extracted region with respect to the center point of said view window, o) detecting said azimuth and said distance as a third travel vector, p) composing a vector from said first, second, and third travel vectors, a vector for a travel of said view window being thereby determined, q) causing the center point of said view window to travel in accordance with said vector for the travel of said view window, and r) determining an extraction area in accordance with the size and/or the shape of said target object image, the center point of said view window, which has thus been caused to travel, being taken as a reference during the determination of said extraction area.
16. A learning method for a neural network, comprising the steps of:
i) extracting a target object image, for which learning operations are to be carried out, from an image, ii) feeding a signal, which represents the extracted target object image, into a neural network, and iii) carrying out the learning operations of said neural network in accordance with said input target object image, wherein the extraction of said target object image is carried out by:
a) cutting out a first image, which falls in a region inside of a view window having a predetermined size, from said image, b) detecting a contour line of an object, which is embedded in said cut-out first image, c) after a predetermined time has elapsed, cutting out a second image, which falls in the region inside of said view window, from said image, d) detecting a contour line of an object, which is embedded in said cut-out second image, e) calculating the difference between said contour line, which has been detected from said first image, and said contour line, which has been detected from said second image, f) detecting a movement of a background from said calculated difference, g) subtracting said detected movement of said background from said image, an object, which shows a movement different from the movement of said background, being thereby detected, h) recognizing said object, which shows a movement different from the movement of said background, as said target object image, i) detecting a vector directed towards said target object image as a first travel vector, j) detecting a contour line of said target object image, which line extends in a predetermined direction, from said cut-out first image, k) extracting all of components of said detected contour line, which are tilted at a predetermined angle with respect to circumferential directions of concentric circles surrounding the center point of said view window, from said detected contour line of said target object image, l) detecting azimuths and intensities of said extracted components with respect to the center point of said view window, the azimuths and the intensities being detected as second azimuth vectors, m) composing a vector from said second azimuth vectors, a second travel vector being thereby determined, n) extracting a region, which approximately coincides in color with said target object image, from said cut-out first image, o) detecting an azimuth and a distance of said extracted region 
with respect to the center point of said view window, p) detecting said azimuth and said distance as a third travel vector, q) composing a vector from said first, second, and third travel vectors, a vector for a travel of said view window being thereby determined, r) causing the center point of said view window to travel in accordance with said vector for the travel of said view window, and s) determining an extraction area in accordance with the size and/or the shape of said target object image, the center point of said view window, which has thus been caused to travel, being taken as a reference during the determination of said extraction area.
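The core of steps i) through r) above is composing the first, second, and third travel vectors into a single vector and displacing the view-window center along it. A minimal Python sketch under stated assumptions (all function names, the `gain` parameter, and the example numbers are illustrative, not from the patent):

```python
def compose_travel_vector(*vectors):
    """Compose one travel vector from several component vectors
    (e.g. the first, second, and third travel vectors of the claim)
    by ordinary vector addition."""
    return (sum(v[0] for v in vectors), sum(v[1] for v in vectors))

def move_view_window(center, travel_vector, gain=1.0):
    """Displace the view-window center point along the composed vector.
    The `gain` step-size factor is an assumption, not part of the claim."""
    return (center[0] + gain * travel_vector[0],
            center[1] + gain * travel_vector[1])

# Example: three travel vectors pointing roughly toward a target object
v1, v2, v3 = (3.0, 1.0), (1.0, 2.0), (-1.0, 0.5)
travel = compose_travel_vector(v1, v2, v3)   # (3.0, 3.5)
center = move_view_window((64.0, 64.0), travel)
```

The composed vector draws the window toward the target object image; the extraction area is then determined with the traveled center point as the reference.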
17. A learning method for a neural network, comprising the steps of:
i) extracting a target object image, for which learning operations are to be carried out, from an image, ii) feeding a signal, which represents the extracted target object image, into a neural network, and iii) carrying out the learning operations of said neural network in accordance with said input target object image, wherein the extraction of said target object image is carried out by:
a) creating a map of a potential field of the whole area of said image, and b) determining an extraction area in accordance with the size and/or the shape of said target object image, a minimum point of the potential in said map being taken as a reference during the determination of said extraction area.
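The two steps above reduce to locating the minimum point of the potential-field map and cutting a region around it. A sketch, assuming the map is a 2-D array of scalar potentials and the extraction area is a simple rectangle (the half-size parameters stand in for "the size and/or the shape of said target object image"; how they are estimated is outside this sketch):

```python
def potential_minimum(potential_map):
    """Locate the minimum point of a potential field given as a 2-D list
    of values; returns the (row, col) of the smallest entry."""
    best = None
    for r, row in enumerate(potential_map):
        for c, val in enumerate(row):
            if best is None or val < potential_map[best[0]][best[1]]:
                best = (r, c)
    return best

def extraction_area(center, half_height, half_width):
    """Determine a rectangular extraction area with the potential-minimum
    point taken as the reference, per step b) of the claim."""
    r, c = center
    return (r - half_height, c - half_width, r + half_height, c + half_width)

field = [[5, 4, 5],
         [4, 1, 3],
         [5, 3, 4]]
ref = potential_minimum(field)       # (1, 1)
area = extraction_area(ref, 1, 1)    # (0, 0, 2, 2)
```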
18. A learning method for a neural network, comprising the steps of:
i) extracting a target object image, for which learning operations are to be carried out, from an image, ii) feeding a signal, which represents the extracted target object image, into a neural network, and iii) carrying out the learning operations of said neural network in accordance with said input target object image, wherein the extraction of said target object image is carried out by:
a) cutting out an image, which falls in a region inside of a view window having a predetermined size, from said image, b) detecting a contour line of said target object image, which line extends in a predetermined direction, from said cut-out image, c) extracting all of components of said detected contour line, which are tilted at a predetermined angle with respect to circumferential directions of concentric circles surrounding the center point of said view window, from said detected contour line of said target object image, d) detecting azimuths and intensities of said extracted components with respect to the center point of said view window, the azimuths and the intensities being detected as azimuth vectors, e) composing a vector from said azimuth vectors, the composed vector being taken as a gradient vector of a potential field in a Cartesian plane having its origin at the center point of said view window, f) scanning the whole area of said image with said view window, thereby calculating the gradient vectors of the potential field with respect to the whole area of said image, g) creating a map of the potential field of the whole area of said image from the gradient vectors of the potential field, which have been calculated with respect to the whole area of said image, and h) determining an extraction area in accordance with the size and/or the shape of said target object image, a minimum point of the potential in said map being taken as a reference during the determination of said extraction area. - View Dependent Claims (19)
the extraction of said components of said detected contour line is carried out by extracting all of contour line components, which are tilted at a predetermined angle with respect to an annular direction in the complex-log mapped plane, from the contour line, which has been detected in said complex-log mapped image, and said azimuth vectors are detected by detecting azimuths and intensities of the extracted contour line components in said complex-log mapped plane.
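The complex-log (log-polar) mapping named in the dependent claim takes z = (x - cx) + i(y - cy) relative to the view-window center to w = ln z, so that concentric circles about the center become lines of constant real part and radial lines become lines of constant imaginary part; contour components tilted against the annular direction are then easy to test. A minimal sketch (coordinate conventions are an assumption):

```python
import cmath

def complex_log_map(point, center):
    """Map a Cartesian point into the complex-log plane: the real part of
    w = ln(z) is the log of the radial distance from the view-window
    center, and the imaginary part is the azimuth angle."""
    z = complex(point[0] - center[0], point[1] - center[1])
    if z == 0:
        raise ValueError("the center point itself has no complex-log image")
    w = cmath.log(z)
    return (w.real, w.imag)

# A point at unit distance, azimuth 0, maps to (0, 0)
r, theta = complex_log_map((1.0, 0.0), (0.0, 0.0))
```

In this plane, a contour segment's tilt with respect to the annular direction is just its slope, which is why the claim extracts "components tilted at a predetermined angle" there.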
20. A learning method for a neural network, comprising the steps of:
i) extracting a target object image, for which learning operations are to be carried out, from an image, ii) feeding a signal, which represents the extracted target object image, into a neural network, and iii) carrying out the learning operations of said neural network in accordance with said input target object image, wherein the extraction of said target object image is carried out by:
a) cutting out an image, which falls in a region inside of a view window having a predetermined size, from said image, b) extracting a region, which approximately coincides in color with said target object image, from said cut-out image, c) detecting an azimuth and a distance of said extracted region with respect to the center point of said view window, d) detecting said azimuth and said distance as a gradient vector of a potential field in a Cartesian plane having its origin at the center point of said view window, e) scanning the whole area of said image with said view window, thereby calculating the gradient vectors of the potential field with respect to the whole area of said image, f) creating a map of the potential field of the whole area of said image from the gradient vectors of the potential field, which have been calculated with respect to the whole area of said image, and g) determining an extraction area in accordance with the size and/or the shape of said target object image, a minimum point of the potential in said map being taken as a reference during the determination of said extraction area. - View Dependent Claims (21, 22)
a region, which exhibits a high degree of coincidence in color with said target object image, and a region, which exhibits a low degree of coincidence in color with said target object image and is located at a position spaced apart from said region exhibiting a high degree of coincidence in color with said target object image, are caused to compete with each other, said region, which exhibits a low degree of coincidence in color with said target object image, being thereby erased, regions, which exhibit a high degree of coincidence in color with said target object image and are located at positions spaced apart from each other, are caused to compete with each other, a region exhibiting a high degree of coincidence in color with said target object image, which region has a size and a shape appropriate for the region to be selected, is kept unerased, whereas a region exhibiting a high degree of coincidence in color with said target object image, which region has a size and a shape inappropriate for the region to be selected, is erased, whereby a region, which is most appropriate in the region inside of said view window, is selected as a target object image region, and an azimuth and a distance of said selected object image region are detected with respect to the center point of said view window.
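The competition described above, in which low-coincidence regions and inappropriately sized regions are erased until one target object image region survives, can be sketched as follows (the dict field names, the 0.5 coincidence threshold, the per-channel color tolerance, and "largest size wins" are all illustrative assumptions; the claim does not fix them):

```python
def color_coincidence(pixel, target_color, tolerance=30):
    """True when a pixel approximately coincides in color with the target;
    the per-channel tolerance is an illustrative assumption."""
    return all(abs(p - t) <= tolerance for p, t in zip(pixel, target_color))

def select_target_region(regions):
    """Let candidate regions 'compete': regions exhibiting a low degree of
    color coincidence are erased, and among the surviving high-coincidence
    regions the one with the most appropriate size is kept.  Returns the
    (azimuth, distance) of the selected target object image region."""
    survivors = [r for r in regions if r["coincidence"] >= 0.5]
    if not survivors:
        return None
    best = max(survivors, key=lambda r: r["size"])  # "most appropriate" = largest here
    return (best["azimuth"], best["distance"])

regions = [
    {"coincidence": 0.9, "size": 120, "azimuth": 0.3, "distance": 14.0},
    {"coincidence": 0.2, "size": 300, "azimuth": 2.0, "distance": 40.0},  # erased
    {"coincidence": 0.8, "size": 40,  "azimuth": 1.0, "distance": 25.0},
]
cue = select_target_region(regions)   # (0.3, 14.0)
```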
23. A learning method for a neural network, comprising the steps of:
i) extracting a target object image, for which learning operations are to be carried out, from an image, ii) feeding a signal, which represents the extracted target object image, into a neural network, and iii) carrying out the learning operations of said neural network in accordance with said input target object image, wherein the extraction of said target object image is carried out by:
a) setting a view window, which has a predetermined size, on said image, said image being an image including a movement, cutting out a plurality of images, which fall in a region inside of said view window, at a plurality of times having a predetermined time difference therebetween, detecting contour lines of object images, which are embedded in the plurality of said cut-out images, calculating the difference between images, which represent said detected contour lines, and detecting a movement of said image in an in-plane parallel direction in the region inside of said view window, the movement being detected from said calculated difference, b) detecting contour lines of object images, which are embedded in the plurality of said cut-out images, said contour lines extending in a radial direction with respect to the center point of said view window, calculating the difference between images, which represent said detected contour lines extending in the radial direction, and detecting a movement of said image in an in-plane rotating direction in the region inside of said view window, the movement being detected from said calculated difference, c) detecting contour lines of said object images, which are embedded in the plurality of said cut-out images, said contour lines extending in an annular direction, calculating the difference between images, which represent said detected contour lines extending in the annular direction, and detecting a movement of said image in the radial direction in the region inside of said view window, the movement being detected from said calculated difference, d) compensating for components of a movement of a background in said cut-out images, which fall in the region inside of said view window, in accordance with said detected movement of said image in the in-plane parallel direction, in the in-plane rotating direction, and/or in the radial direction, a plurality of images, in which the components of the movement of the background have been 
compensated for, being thereby obtained, e) calculating the difference between the plurality of said images, in which the components of the movement of the background have been compensated for, a contour line of an object, which shows a movement different from the movement of the background, being thereby detected, f) extracting all of components of said detected contour line of said object showing a movement different from the movement of the background, which are tilted at a predetermined angle with respect to circumferential directions of concentric circles surrounding the center point of said view window, from said detected contour line of said object showing a movement different from the movement of the background, g) detecting azimuths and intensities of said extracted components of said detected contour line of said object, which shows the movement different from the movement of the background, with respect to the center point of said view window, the azimuths and the intensities being detected as azimuth vectors, h) composing a vector from said azimuth vectors, the composed vector being taken as a gradient vector of a potential field in a Cartesian plane having its origin at the center point of said view window, i) scanning the whole area of said image with said view window, thereby calculating the gradient vectors of the potential field with respect to the whole area of said image, j) creating a map of the potential field of the whole area of said image from the gradient vectors of the potential field, which have been calculated with respect to the whole area of said image, and k) determining an extraction area in accordance with the size and/or the shape of said target object image, a minimum point of the potential in said map being taken as a reference during the determination of said extraction area. - View Dependent Claims (24, 25)
the extraction of said components of said detected contour line of said object is carried out by extracting all of contour line components, which are tilted at a predetermined angle with respect to an annular direction in the complex-log mapped plane, from the contour line, which has been detected in said complex-log mapped image, and said azimuth vectors are detected by detecting azimuths and intensities of the extracted contour line components in said complex-log mapped plane.
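Step d) of the claim, compensating for the background's movement before differencing, can be illustrated for the in-plane parallel case with a simple integer translation (rotation and radial/zoom compensation are analogous; the function names and 2-D-list image format are assumptions):

```python
def shift_image(img, dr, dc, fill=0):
    """Translate a 2-D list image by integer offsets, padding with `fill`.
    This stands in for compensating the in-plane parallel movement of the
    background detected from the contour-image differences."""
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                out[rr][cc] = img[r][c]
    return out

def contour_difference(a, b):
    """Pixelwise absolute difference of two contour images; after the
    background motion has been compensated, nonzero entries mark an
    object moving differently from the background."""
    return [[abs(x - y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

frame1 = [[0, 1, 0],
          [0, 1, 0],
          [0, 0, 0]]
frame2 = shift_image(frame1, 0, 1)          # whole background shifted right
compensated = shift_image(frame2, 0, -1)    # undo the background movement
diff = contour_difference(frame1, compensated)
moving = sum(map(sum, diff))                # 0: nothing moved independently
```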
26. A learning method for a neural network, comprising the steps of:
i) extracting a target object image, for which learning operations are to be carried out, from an image, ii) feeding a signal, which represents the extracted target object image, into a neural network, and iii) carrying out the learning operations of said neural network in accordance with said input target object image, wherein the extraction of said target object image is carried out by:
a) cutting out a first image, which falls in a region inside of a view window having a predetermined size, from said image, b) detecting a contour line of an object, which is embedded in said cut-out first image, c) after a predetermined time has elapsed, cutting out a second image, which falls in the region inside of said view window, from said image, d) detecting a contour line of an object, which is embedded in said cut-out second image, e) calculating the difference between said contour line, which has been detected from said first image, and said contour line, which has been detected from said second image, f) detecting a movement of a background from said calculated difference, g) subtracting said detected movement of said background from said image, an object, which shows a movement different from the movement of said background, being thereby detected, h) recognizing said object, which shows a movement different from the movement of said background, as said target object image, i) detecting a vector directed towards said target object image as a gradient vector of a potential field in a Cartesian plane having its origin at the center point of said view window, j) scanning the whole area of said image with said view window, thereby calculating the gradient vectors of the potential field with respect to the whole area of said image, k) creating a map of the potential field of the whole area of said image from the gradient vectors of the potential field, which have been calculated with respect to the whole area of said image, and l) determining an extraction area in accordance with the size and/or the shape of said target object image, a minimum point of the potential in said map being taken as a reference during the determination of said extraction area.
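Detecting the background's movement from the difference between the two contour images (steps e) and f)) can be sketched as a brute-force search for the translation that best aligns the frames; this is only one possible realization of the claimed step, and the search range and residual measure are assumptions:

```python
def estimate_background_shift(a, b, max_shift=2):
    """Estimate the dominant (background) translation between two contour
    images by testing small integer shifts and keeping the one that
    minimizes the residual difference.  The dominant motion is attributed
    to the background; whatever fails to align afterwards is the object
    moving differently from the background."""
    h, w = len(a), len(a[0])

    def residual(dr, dc):
        s = 0
        for r in range(h):
            for c in range(w):
                rr, cc = r + dr, c + dc
                bv = b[rr][cc] if 0 <= rr < h and 0 <= cc < w else 0
                s += abs(a[r][c] - bv)
        return s

    shifts = [(dr, dc) for dr in range(-max_shift, max_shift + 1)
                       for dc in range(-max_shift, max_shift + 1)]
    return min(shifts, key=lambda s: residual(*s))

a = [[0, 1, 0],
     [0, 1, 0],
     [0, 0, 0]]
b = [[0, 0, 1],
     [0, 0, 1],
     [0, 0, 0]]
shift = estimate_background_shift(a, b)   # background moved right by 1 pixel
```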
27. A learning method for a neural network, comprising the steps of:
i) extracting a target object image, for which learning operations are to be carried out, from an image, ii) feeding a signal, which represents the extracted target object image, into a neural network, and iii) carrying out the learning operations of said neural network in accordance with said input target object image, wherein the extraction of said target object image is carried out by:
a) cutting out an image, which falls in a region inside of a view window having a predetermined size, from said image, b) detecting a contour line of said target object image, which line extends in a predetermined direction, from said cut-out image, c) extracting all of components of said detected contour line, which are tilted at a predetermined angle with respect to circumferential directions of concentric circles surrounding the center point of said view window, from said detected contour line of said target object image, d) detecting azimuths and intensities of said extracted components with respect to the center point of said view window, the azimuths and the intensities being detected as first azimuth vectors, e) composing a vector from said first azimuth vectors, a first travel vector being thereby determined, f) extracting a region, which approximately coincides in color with said target object image, from said cut-out image, g) detecting an azimuth and a distance of said extracted region with respect to the center point of said view window, h) detecting said azimuth and said distance as a second travel vector, i) composing a vector from said first and second travel vectors, the composed vector being taken as a gradient vector of a potential field in a Cartesian plane having its origin at the center point of said view window, j) scanning the whole area of said image with said view window, thereby calculating the gradient vectors of the potential field with respect to the whole area of said image, k) creating a map of the potential field of the whole area of said image from the gradient vectors of the potential field, which have been calculated with respect to the whole area of said image, and l) determining an extraction area in accordance with the size and/or the shape of said target object image, a minimum point of the potential in said map being taken as a reference during the determination of said extraction area.
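Steps c) through i) of this claim combine a contour cue and a color cue into one gradient vector of the potential field. A sketch, assuming each extracted contour component is given as a point with an intensity and the color cue has already been reduced to a 2-D vector (all names and the example data are illustrative):

```python
import math

def azimuth_vectors(components, center):
    """Turn extracted contour-line components into azimuth vectors: for
    each component (x, y, intensity), form a unit vector from the
    view-window center toward the component, scaled by its intensity."""
    vecs = []
    for (x, y, intensity) in components:
        dx, dy = x - center[0], y - center[1]
        norm = math.hypot(dx, dy)
        if norm > 0:
            vecs.append((intensity * dx / norm, intensity * dy / norm))
    return vecs

def gradient_vector(contour_vecs, color_vec):
    """Compose the contour-derived azimuth vectors (first travel vector)
    and the color-derived travel vector (second travel vector) into one
    gradient vector of the potential field."""
    gx = sum(v[0] for v in contour_vecs) + color_vec[0]
    gy = sum(v[1] for v in contour_vecs) + color_vec[1]
    return (gx, gy)

components = [(10.0, 0.0, 2.0), (0.0, 10.0, 1.0)]   # (x, y, intensity)
contour_vecs = azimuth_vectors(components, (0.0, 0.0))
grad = gradient_vector(contour_vecs, (0.5, 0.5))    # (2.5, 1.5)
```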
28. A learning method for a neural network, comprising the steps of:
i) extracting a target object image, for which learning operations are to be carried out, from an image, ii) feeding a signal, which represents the extracted target object image, into a neural network, and iii) carrying out the learning operations of said neural network in accordance with said input target object image, wherein the extraction of said target object image is carried out by:
a) setting a view window, which has a predetermined size, on said image, said image being an image including a movement, cutting out a plurality of images, which fall in a region inside of said view window, at a plurality of times having a predetermined time difference therebetween, and detecting a contour line of said target object image, which line extends in a predetermined direction, from one of the plurality of said cut-out images, b) extracting all of components of said detected contour line, which are tilted at a predetermined angle with respect to circumferential directions of concentric circles surrounding the center point of said view window, from said detected contour line of said target object image, c) detecting azimuths and intensities of said extracted components with respect to the center point of said view window, the azimuths and the intensities being detected as first azimuth vectors, d) composing a vector from said first azimuth vectors, a first travel vector being thereby determined, e) detecting contour lines of object images, which are embedded in the plurality of said cut-out images, calculating the difference between images, which represent said detected contour lines, and detecting a movement of said image in an in-plane parallel direction in the region inside of said view window, the movement being detected from said calculated difference, f) detecting contour lines of object images, which are embedded in the plurality of said cut-out images, said contour lines extending in a radial direction with respect to the center point of said view window, calculating the difference between images, which represent said detected contour lines extending in the radial direction, and detecting a movement of said image in an in-plane rotating direction in the region inside of said view window, the movement being detected from said calculated difference, g) detecting contour lines of said object images, which are embedded in the plurality of said cut-out 
images, said contour lines extending in an annular direction, calculating the difference between images, which represent said detected contour lines extending in the annular direction, and detecting a movement of said image in the radial direction in the region inside of said view window, the movement being detected from said calculated difference, h) compensating for components of a movement of a background in said cut-out images, which fall in the region inside of said view window, in accordance with said detected movement of said image in the in-plane parallel direction, in the in-plane rotating direction, and/or in the radial direction, a plurality of images, in which the components of the movement of the background have been compensated for, being thereby obtained, i) calculating the difference between the plurality of said images, in which the components of the movement of the background have been compensated for, a contour line of an object, which shows a movement different from the movement of the background, being thereby detected, j) extracting all of components of said detected contour line of said object showing a movement different from the movement of the background, which are tilted at a predetermined angle with respect to circumferential directions of concentric circles surrounding the center point of said view window, from said detected contour line of said object showing a movement different from the movement of the background, k) detecting azimuths and intensities of said extracted components of said detected contour line of said object, which shows the movement different from the movement of the background, with respect to the center point of said view window, the azimuths and the intensities being detected as second azimuth vectors, l) composing a vector from said second azimuth vectors, a second travel vector being thereby determined, m) composing a vector from said first and second travel vectors, the composed vector being taken as a gradient 
vector of a potential field in a Cartesian plane having its origin at the center point of said view window, n) scanning the whole area of said image with said view window, thereby calculating the gradient vectors of the potential field with respect to the whole area of said image, o) creating a map of the potential field of the whole area of said image from the gradient vectors of the potential field, which have been calculated with respect to the whole area of said image, and p) determining an extraction area in accordance with the size and/or the shape of said target object image, a minimum point of the potential in said map being taken as a reference during the determination of said extraction area.
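Steps n) and o) above, scanning the image with the view window and building the potential-field map from the sampled gradient vectors, can be sketched by simple path integration of the gradient field (a crude scheme, exact only for curl-free fields; the claim does not prescribe the integration method):

```python
def potential_from_gradients(gx, gy):
    """Reconstruct a potential map from its sampled gradient field:
    integrate gy down the first column, then gx along each row.  gx and
    gy are 2-D lists holding the gradient components at each scan
    position of the view window."""
    h, w = len(gx), len(gx[0])
    p = [[0.0] * w for _ in range(h)]
    for r in range(1, h):
        p[r][0] = p[r - 1][0] + gy[r][0]
    for r in range(h):
        for c in range(1, w):
            p[r][c] = p[r][c - 1] + gx[r][c]
    return p

# Gradient of the potential p(r, c) = r + c, sampled on a 3x3 grid
gx = [[1.0] * 3 for _ in range(3)]
gy = [[1.0] * 3 for _ in range(3)]
p = potential_from_gradients(gx, gy)
# p recovers r + c up to the integration constant p[0][0] = 0
```

The minimum point of the resulting map then serves as the reference for determining the extraction area.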
29. A learning method for a neural network, comprising the steps of:
i) extracting a target object image, for which learning operations are to be carried out, from an image, ii) feeding a signal, which represents the extracted target object image, into a neural network, and iii) carrying out the learning operations of said neural network in accordance with said input target object image, wherein the extraction of said target object image is carried out by:
a) setting a view window, which has a predetermined size, on said image, said image being an image including a movement, cutting out a plurality of images, which fall in a region inside of said view window, at a plurality of times having a predetermined time difference therebetween, and detecting a contour line of said target object image, which line extends in a predetermined direction, from one of the plurality of said cut-out images, b) extracting all of components of said detected contour line, which are tilted at a predetermined angle with respect to circumferential directions of concentric circles surrounding the center point of said view window, from said detected contour line of said target object image, c) detecting azimuths and intensities of said extracted components with respect to the center point of said view window, the azimuths and the intensities being detected as first azimuth vectors, d) composing a vector from said first azimuth vectors, a first travel vector being thereby determined, e) detecting contour lines of object images, which are embedded in the plurality of said cut-out images, calculating the difference between images, which represent said detected contour lines, and detecting a movement of said image in an in-plane parallel direction in the region inside of said view window, the movement being detected from said calculated difference, f) detecting contour lines of said object images, which are embedded in the plurality of said cut-out images, said contour lines extending in a radial direction with respect to the center point of said view window, calculating the difference between images, which represent said detected contour lines extending in the radial direction, and detecting a movement of said image in an in-plane rotating direction in the region inside of said view window, the movement being detected from said calculated difference, g) detecting contour lines of said object images, which are embedded in the plurality of said 
cut-out images, said contour lines extending in an annular direction, calculating the difference between images, which represent said detected contour lines extending in the annular direction, and detecting a movement of said image in the radial direction in the region inside of said view window, the movement being detected from said calculated difference, h) compensating for components of a movement of a background in said cut-out images, which fall in the region inside of said view window, in accordance with said detected movement of said image in the in-plane parallel direction, in the in-plane rotating direction, and/or in the radial direction, a plurality of images, in which the components of the movement of the background have been compensated for, being thereby obtained, i) calculating the difference between the plurality of said images, in which the components of the movement of the background have been compensated for, thereby detecting a contour line of an object, which shows a movement different from the movement of the background, j) extracting all of components of said detected contour line of said object showing a movement different from the movement of the background, which are tilted at a predetermined angle with respect to circumferential directions of concentric circles surrounding the center point of said view window, from said detected contour line of said object showing a movement different from the movement of the background, k) detecting azimuths and intensities of said extracted components of said detected contour line of said object, which shows the movement different from the movement of the background, with respect to the center point of said view window, the azimuths and the intensities being detected as second azimuth vectors, l) composing a vector from said second azimuth vectors, a second travel vector being thereby determined, m) extracting a region, which approximately coincides in color with said target object image, from one of the 
plurality of said cut-out images, n) detecting an azimuth and a distance of said extracted region with respect to the center point of said view window, o) detecting said azimuth and said distance as a third travel vector, p) composing a vector from said first, second, and third travel vectors, the composed vector being taken as a gradient vector of a potential field in a Cartesian plane having its origin at the center point of said view window, q) scanning the whole area of said image with said view window, thereby calculating the gradient vectors of the potential field with respect to the whole area of said image, r) creating a map of the potential field of the whole area of said image from the gradient vectors of the potential field, which have been calculated with respect to the whole area of said image, and s) determining an extraction area in accordance with the size and/or the shape of said target object image, a minimum point of the potential in said map being taken as a reference during the determination of said extraction area.
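Across all of the claims above, the learning method itself is the same three-step flow: extract the target object image, feed it to the neural network, and carry out the learning operations on it. A toy end-to-end sketch of that data flow (the `extract` area format and the weight-update rule are illustrative assumptions; a real neural network would use backpropagation rather than this single nudge):

```python
def extract(image, area):
    """Cut the extraction area (r0, c0, r1, c1), inclusive, out of the image."""
    r0, c0, r1, c1 = area
    return [row[c0:c1 + 1] for row in image[r0:r1 + 1]]

def learn(network_weights, target_patch, rate=0.1):
    """One toy 'learning operation': nudge each weight toward the
    corresponding pixel of the extracted target object image.  This only
    illustrates the data flow of steps i) through iii), not an actual
    training algorithm."""
    return [[w + rate * (px - w) for w, px in zip(wr, pr)]
            for wr, pr in zip(network_weights, target_patch)]

image = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 0]]
patch = extract(image, (1, 1, 2, 2))     # the extracted target object image
weights = [[0.0, 0.0], [0.0, 0.0]]
weights = learn(weights, patch)          # each weight moves toward the patch
```

Because the extraction is automatic, the network can be trained without a human operator designating the target object image in every frame.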
Specification