Target detecting system and method
First Claim
1. A target detecting method for detecting a target from an input image from a camera, comprising the steps of:
- a shape detector configured for obtaining camera movement parameters from a movement area in an input image from the camera, and correcting an image frame using the obtained movement parameters, extracting movement candidate areas from the input image frame and a previously input image frame, extracting image feature information from the input image, and extracting a target shape based on the movement candidate areas and the image feature information;
- a tracker configured for generating blob information from the extracted shape, wherein the blob information is an image or an image portion formed by binding adjacent pixels having the same characteristics in the image;
- extracting blob information for the extracted shape; and
- checking whether the target is correct by comparing the blob information with that of the shape extracted at the previous frame regardless of background or camera movement, wherein the extracting image feature information from the input image further comprises extracting skin color areas from the input image, the skin color being extracted as follows;
wherein Ctr, Ctg, and Ctb are respectively the r, g, and b values of the t-th frame image, H and S are respectively the H(ue) and S(aturation) values obtained from Ctr, Ctg, and Ctb, α, β, γ, δ, ε, η, ω, and ζ are constants obtained by experiments, and w is a width of the image.
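The skin color formula itself (with the constants α through ζ) is not reproduced in the text above. As a rough illustration of the kind of test the claim describes, the following sketch derives H(ue) and S(aturation) from the r, g, b values of a frame and thresholds them; the threshold constants here stand in for the experimentally obtained constants and are illustrative assumptions, not the patented values.

```python
import colorsys

def skin_mask(frame, h_lo=0.0, h_hi=0.14, s_lo=0.15, s_hi=0.8):
    """Return a binary mask of likely skin pixels.

    frame: list of rows of (r, g, b) tuples with channels in 0..255.
    H and S are derived from the r, g, b values, as the claim describes;
    the H/S thresholds stand in for the experimentally determined
    constants (alpha..zeta) and are illustrative only.
    """
    mask = []
    for row in frame:
        mask_row = []
        for r, g, b in row:
            h, s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            mask_row.append(1 if (h_lo <= h <= h_hi and s_lo <= s <= s_hi) else 0)
        mask.append(mask_row)
    return mask
```

A warm, moderately saturated pixel passes the test while a pure blue pixel does not, which is the qualitative behavior the claim's skin-color extraction step is after.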
Abstract
A target detecting system and method for detecting a target from an input image is provided. When a target is detected from an input image and there are moving areas in the input image, camera movement parameters are obtained, image frames are transformed, and movement candidate areas are extracted from the image frame and the previously input image frame. In addition, image feature information is extracted from the input image, and a shape of the target is extracted based on the movement candidate areas and the image feature information. Therefore, the target can be accurately and rapidly extracted and tracked.
23 Citations
36 Claims
-
1. A target detecting method for detecting a target from an input image from a camera, comprising the steps of:
-
a shape detector configured for obtaining camera movement parameters from a movement area in an input image from the camera, and correcting an image frame using the obtained movement parameters, extracting movement candidate areas from the input image frame and a previously input image frame, extracting image feature information from the input image, and extracting a target shape based on the movement candidate areas and the image feature information; a tracker configured for generating blob information from the extracted shape, wherein the blob information is an image or an image portion formed by binding adjacent pixels having the same characteristics in the image; extracting blob information for the extracted shape; and checking whether the target is correct by comparing the blob information with that of the shape extracted at the previous frame regardless of background or camera movement, wherein the extracting image feature information from the input image further comprises extracting skin color areas from the input image, the skin color being extracted as follows; wherein Ctr, Ctg, and Ctb are respectively the r, g, and b values of the t-th frame image, H and S are respectively the H(ue) and S(aturation) values obtained from Ctr, Ctg, and Ctb, α, β, γ, δ, ε, η, ω, and ζ are constants obtained by experiments, and w is a width of the image. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 18, 19, 20, 21)
wherein ft−1, ft, and ft+1 are the (t−1)-th, t-th, and (t+1)-th frames respectively, T1 is a threshold value for a brightness difference between frames, f(x,y) is a brightness value of a pixel (x,y) at a frame, and mt+1(x,y) is a motion pixel (x,y) of the (t+1)-th frame.
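The motion-pixel expression itself is not reproduced above; it relates the brightness differences among the (t−1)-th, t-th, and (t+1)-th frames to the threshold T1. A common way to realize such a three-frame test, sketched here as an assumption rather than the patented formula, marks a pixel as a motion pixel when both consecutive brightness differences exceed T1.

```python
def motion_pixels(f_prev, f_cur, f_next, t1=15):
    """Three-frame difference sketch of the motion-pixel test.

    f_prev, f_cur, f_next: 2-D lists of brightness values for the
    (t-1)-th, t-th and (t+1)-th frames.  A pixel (x, y) is marked as a
    motion pixel of the (t+1)-th frame when both inter-frame brightness
    differences exceed T1 (here t1, with 10 <= T1 <= 20 per dependent
    claim 5).  The exact patented expression is not reproduced in the
    text, so this double-difference form is an assumption.
    """
    h, w = len(f_cur), len(f_cur[0])
    return [[1 if (abs(f_next[y][x] - f_cur[y][x]) > t1 and
                   abs(f_cur[y][x] - f_prev[y][x]) > t1) else 0
             for x in range(w)] for y in range(h)]
```

A pixel whose brightness holds steady across the three frames stays 0, while one that keeps changing by more than T1 is flagged.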
-
5. The target detecting method of claim 4, wherein the threshold value (T1) satisfies 10≦T1≦20.
-
6. The target detecting method of claim 3, wherein the motion pixel is extracted from the entire pixels of the input image.
-
7. The target detecting method of claim 3, wherein the obtaining camera movement parameters from the movement area in the input image from the camera, and correcting the image frame using the obtained movement parameters further comprises determining whether the movement area is greater than a predetermined threshold value (T2) by checking whether a difference between a maximum x value and a minimum x value among the connected components of the motion pixels is greater than the threshold value (T2) in comparison with the entire image width.
-
8. The target detecting method of claim 1, wherein the camera movement parameter is defined to correct the camera right/left movement and includes x/y-direction movement parameters.
-
9. The target detecting method of claim 8, wherein the x-direction movement parameter is extracted for a part of the image from the edge compared values between the pixels of a predetermined row i of the t-th image frame and the pixels of a predetermined row j of the (t−1)-th image frame.
-
10. The target detecting method of claim 9, wherein the x-direction movement parameter (pan) is extracted as follows:
-
wherein y is a height of the image in the range of 0 to h, k is a possible range over which the camera may move right/left, j is a range of x values, l is a y area range for which edge values are detected, Et(y+l, j) and Et−1(y+l, j+k) are the edge values of the pixels (y+l, j) and (y+l, j+k) on the t-th and (t−1)-th edge image frames, respectively, and ht(x) is a defined value to obtain an x-direction movement parameter.
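The pan formula itself is not reproduced above, but its ingredients are: edge values of the t-th and (t−1)-th edge frames are compared over candidate shifts k, and the best-matching shift becomes the x-direction movement parameter. The sketch below picks the shift minimizing the summed absolute edge difference over the overlapping columns; the matching criterion and names are assumptions, not the patented ht(x). The y-direction (tilt) parameter of claims 11 and 12 would be estimated symmetrically over vertical shifts.

```python
def estimate_pan(edges_prev, edges_cur, k_max=8):
    """Pick the horizontal shift (pan) that best aligns two edge images.

    edges_prev, edges_cur: 2-D lists of edge values for the (t-1)-th and
    t-th edge frames.  For each candidate shift k in [-k_max, k_max] the
    absolute edge differences are accumulated over the overlapping
    columns; the k with the smallest normalized total is returned.  The
    sum-of-absolute-differences criterion is an assumption standing in
    for the patent's h_t(x).
    """
    h, w = len(edges_cur), len(edges_cur[0])
    best_k, best_cost = 0, float("inf")
    for k in range(-k_max, k_max + 1):
        cost, n = 0, 0
        for y in range(h):
            for x in range(max(0, -k), min(w, w - k)):
                cost += abs(edges_cur[y][x] - edges_prev[y][x + k])
                n += 1
        if n == 0:
            continue
        cost /= n  # normalize by overlap so large shifts are not favored
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k
```

Feeding it an edge row and the same row shifted left by two columns recovers the shift of 2, which would then be used as the pan parameter in the frame correction of claim 13.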
-
-
11. The target detecting method of claim 8, wherein the y-direction movement parameter is extracted for a part of the image from the edge compared values between pixels of a predetermined row i of the t-th image frame and pixels of a predetermined row j of the (t−1)-th image frame.
-
12. The target detecting method of claim 11, wherein the y-direction movement parameter (tilt) is extracted as follows:
-
wherein x is a width of the image in the range of 0 to w, k is a possible range over which the camera may move up/down, j is a range of y values, l is an x area range for which edge values are detected, Et(i, x+l) and Et−1(i+k, x+l) are the edge values of the pixels (i, x+l) and (i+k, x+l) on the t-th and (t−1)-th edge image frames, respectively, and vt(y) is a defined value to obtain a y-direction movement parameter.
-
-
13. The target detecting method of claim 1, wherein the obtaining camera movement parameters from the movement area in the input image from the camera, and correcting the image frame using the obtained movement parameters further comprises correcting an image frame using the obtained camera movement parameters, wherein the x′ and y′ values of the t-th frame are transformed to x′=x+pan and y′=y+tilt by using the x-direction movement parameter (pan), the y-direction movement parameter (tilt), and x and y of the (t−1)-th frame.
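The correction of claim 13 is a pure coordinate shift: each pixel is moved by (pan, tilt). A minimal sketch, where the fill value for pixels shifted in from outside the image is an implementation choice not specified in the claim:

```python
def correct_frame(frame, pan, tilt, fill=0):
    """Shift a frame by the camera movement parameters (claim 13 sketch).

    Each pixel (x, y) maps to (x' , y') = (x + pan, y + tilt) in the
    corrected frame; pixels whose destination falls outside the image
    are dropped, and uncovered positions take a constant fill value
    (an assumption, not part of the claim).
    """
    h, w = len(frame), len(frame[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nx, ny = x + pan, y + tilt
            if 0 <= nx < w and 0 <= ny < h:
                out[ny][nx] = frame[y][x]
    return out
```

After this correction, the image difference of claim 14 between the corrected t-th frame and the (t−1)-th frame isolates genuine object motion rather than camera motion.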
-
14. The target detecting method of claim 1, wherein the extracting movement candidate areas from the input image frame and the previously input image frame further comprises extracting the movement candidate areas by generating an image difference between the t-th frame image and the (t−1)-th frame image.
-
15. The target detecting method of claim 1, wherein the image feature information includes at least one among skin color, shape, and corner information.
-
16. The target detecting method of claim 1, wherein the extracting image feature information from the input image further comprises extracting the shape feature information by a color clustering method of the input image.
-
18. The target detecting method of claim 1, wherein the extracting the target shape based on the movement candidate areas and the image feature information further comprises extracting skin color areas from the movement candidate areas and then extracting the target image using the shape feature information and the edge feature information.
-
19. The target detecting method of claim 1, wherein the blob information includes at least one among blob position, blob size, blob shape, and color distribution.
-
20. The target detecting method of claim 1, wherein the checking whether the target is correct by comparing the blob information with that of the shape extracted at the previous frame further comprises:
-
removing blobs corresponding to noise by matching the blob information for the extracted target shape information with the blob information of the shape extracted at the previous frame; and checking whether the target is correct by extracting a color distribution of the extracted blobs.
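Claims 19 through 21 describe the blob information (position, size, shape, color distribution) and the removal of noise blobs by size. A sketch of the underlying step, binding adjacent pixels with the same characteristic into blobs via a 4-connected flood fill; the connectivity and the minimum-size cutoff are illustrative assumptions:

```python
def extract_blobs(mask, min_size=5):
    """Group adjacent foreground pixels into blobs (claims 19-21 sketch).

    mask: 2-D list of 0/1 values.  Adjacent pixels with the same
    characteristic (here: foreground) are bound into blobs by a
    4-connected flood fill; blobs smaller than min_size are dropped as
    noise, per dependent claim 21.  Returns a list of dicts with blob
    size and bounding-box position (two of the claimed blob attributes).
    """
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                stack, pixels = [(sx, sy)], []
                seen[sy][sx] = True
                while stack:
                    x, y = stack.pop()
                    pixels.append((x, y))
                    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                        if 0 <= nx < w and 0 <= ny < h and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                if len(pixels) >= min_size:  # drop noise blobs by size
                    xs = [p[0] for p in pixels]
                    ys = [p[1] for p in pixels]
                    blobs.append({"size": len(pixels),
                                  "bbox": (min(xs), min(ys), max(xs), max(ys))})
    return blobs
```

Matching these blob records against the previous frame's records, and then comparing color distributions, is the verification step claim 20 describes.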
-
-
21. The target detecting method of claim 20, wherein the removing blobs corresponding to noise by matching the blob information for the extracted target shape information with the blob information of the shape extracted at the previous frame further comprises determining whether a blob corresponds to noise based on the size of the blob.
-
17. A target detecting method for detecting a target from an input image from a camera, comprising the steps of:
-
a shape detector configured for obtaining camera movement parameters from a movement area in an input image from the camera, and correcting an image frame using the obtained movement parameters, extracting movement candidate areas from the input image frame and a previously input image frame, extracting image feature information from the input image, and extracting a target shape based on the movement candidate areas and the image feature information; a tracker configured for generating blob information from the extracted shape, wherein the blob information is an image or an image portion formed by binding adjacent pixels having the same characteristics in the image; extracting blob information for the extracted shape; and checking whether the target is correct by comparing the blob information with that of the shape extracted at the previous frame regardless of background or camera movement, wherein the extracting image feature information from the input image further comprises extracting the edge information from the input image, the edge information, that is, the edge value (Et) of the pixel (x,y), being extracted as follows; wherein Et is the t-th edge image frame, Etv is the vertical direction edge value of the pixel (x,y), Eth is the horizontal direction edge value of the pixel (x,y), and ft(x,y) is the brightness value of the pixel (x,y).
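The edge formula of claim 17 is not reproduced above; only its structure is stated, with Et built from a vertical edge value Etv and a horizontal edge value Eth derived from the brightness ft(x,y). Assuming simple absolute forward brightness differences and a sum, which is one common realization of that structure, a sketch reads:

```python
def edge_image(f):
    """Edge value E_t sketch for claim 17.

    E_tv and E_th are taken as absolute forward brightness differences
    of f_t(x, y) in the vertical and horizontal directions, and
    E_t = E_tv + E_th; the patent's exact operators are not reproduced
    in the text, so these simple differences are an assumption.
    """
    h, w = len(f), len(f[0])
    e = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            etv = abs(f[y + 1][x] - f[y][x]) if y + 1 < h else 0
            eth = abs(f[y][x + 1] - f[y][x]) if x + 1 < w else 0
            e[y][x] = etv + eth
    return e
```

Edge images of this kind are what the pan/tilt estimation of claims 10, 12, 30, and 31 compares across frames.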
-
-
22. A target detecting system for detecting a target from an input image, comprising:
-
a camera that obtains a target image to be detected and transmits the obtained target image to a shape detector; a shape detector, operatively coupled to said camera, said shape detector extracting a shape from the obtained target image based on movement candidate areas and image feature information extracted through an image difference of temporally-consecutive frames among an image sequence transmitted from the image input unit; a tracker that extracts blob information for the extracted shape and verifies whether the target is correct by comparing the blob information with the shape extracted at the previous frame regardless of background or camera movement, wherein the blob information is an image or an image portion formed by binding adjacent pixels having the same characteristics in the image; wherein the shape detector further comprises: a movement area extractor for extracting the movement candidate area from an image frame and a previous input image frame; an image feature information extractor for extracting image feature information from the input image; and a shape extractor for extracting the shape of the target image based on the movement candidate area and the image feature information; and wherein the image feature information extractor extracts at least one among skin color, shape, and corner information from the input image, wherein: the skin color is extracted as follows, wherein Ctr, Ctg, and Ctb are respectively the r, g, and b values of the t-th frame image, H and S are the H(ue) and S(aturation) values obtained from Ctr, Ctg, and Ctb, α, β, γ, δ, ε, η, ω, and ζ are constants obtained by experiment, and w is a width of the image; the edge value (Et) of the pixel (x,y) is obtained as follows, wherein Et is the t-th edge image frame, Etv is the vertical direction edge value of the pixel (x,y), Eth is the horizontal direction edge value of the pixel (x,y), and ft(x,y) is the brightness value of the pixel (x,y); and the shape feature information is extracted through color clustering of the input image. - View Dependent Claims (23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36)
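The system of claim 22 composes a shape detector (itself built from a movement area extractor, an image feature information extractor, and a shape extractor) with a tracker. The structural sketch below mirrors that composition; the class and method names are assumptions, and the method bodies are deliberately minimal stubs that only show how the pieces connect.

```python
class ShapeDetector:
    """Shape detector of claim 22: combines the movement area extractor,
    the image feature information extractor and the shape extractor.
    Bodies are stubs; only the composition follows the claim."""

    def detect(self, frame_prev, frame_cur):
        candidates = self.extract_movement_areas(frame_prev, frame_cur)
        features = self.extract_features(frame_cur)  # skin color, shape, corners
        return self.extract_shape(candidates, features)

    def extract_movement_areas(self, frame_prev, frame_cur):
        # image difference of temporally-consecutive frames
        return [[1 if a != b else 0 for a, b in zip(rp, rc)]
                for rp, rc in zip(frame_prev, frame_cur)]

    def extract_features(self, frame):
        return {}  # stub

    def extract_shape(self, candidates, features):
        return candidates  # stub: would combine candidates with features


class Tracker:
    """Tracker of claim 22: extracts blob information for the detected
    shape and verifies the target against the previous frame's blobs."""

    def __init__(self):
        self.prev_blobs = None

    def track(self, shape):
        blobs = self.extract_blobs(shape)
        ok = self.prev_blobs is None or self.verify(blobs, self.prev_blobs)
        self.prev_blobs = blobs
        return blobs, ok

    def extract_blobs(self, shape):
        return [(x, y) for y, row in enumerate(shape)
                for x, v in enumerate(row) if v]

    def verify(self, blobs, prev_blobs):
        return bool(blobs)  # stub: would compare blob info across frames
```

Per frame, the detector's output feeds the tracker, and the tracker carries blob information forward so the next frame can be verified against it.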
wherein ft−1, ft, and ft+1 are respectively the (t−1)-th, t-th, and (t+1)-th frames, T1 is a threshold value for brightness difference between frames, f(x,y) is a brightness value of a pixel (x,y) at a frame, and mt+1(x,y) is a motion pixel (x,y) of the (t+1)-th frame.
-
-
26. The system for detecting a target of claim 25, wherein the threshold value (T1) satisfies 10≦T1≦20.
-
27. The system for detecting a target of claim 24, wherein the motion pixel is extracted from the entire pixels of the input image.
-
28. The system for detecting a target of claim 24, wherein, when there are motion pixels, the movement area extractor determines whether the movement area is greater than a predetermined threshold value (T2) by checking whether a difference between a maximum x value and a minimum x value among the connected components of the motion pixels is greater than the threshold value (T2) in comparison with the entire image width.
-
29. The target detecting system of claim 23, wherein the camera movement parameters are defined to correct the right/left movement of the camera and include an x-direction movement parameter and a y-direction movement parameter, the x-direction movement parameter being extracted from the edge compared values between pixels of a predetermined row i of the t-th image frame and pixels of a predetermined row j of the (t−1)-th image frame for a part of the image, and the y-direction movement parameter being extracted from the edge compared values between pixels of a predetermined row i of the t-th image frame and pixels of a predetermined row j of the (t−1)-th image frame for a part of the image.
-
30. The target detecting system of claim 29, wherein the x-direction movement parameter (pan) is extracted as follows:
-
wherein y is a height of the image in the range of 0 to h, k is a possible range over which the camera may move right/left, j is a range of x values, l is a y area range for checking edge values, Et(y+l, j) and Et−1(y+l, j+k) are the edge values of the pixels (y+l, j) and (y+l, j+k) on the t-th and (t−1)-th edge image frames, respectively, and ht(x) is a defined value to obtain an x-direction movement parameter.
-
-
31. The target detecting system of claim 29, wherein the y-direction movement parameter (tilt) is extracted as follows:
-
wherein x is a width of the image in the range of 0 to w, k is a possible range over which the camera may move up/down, j is a range of y values, l is an x area range for checking edge values, Et(i, x+l) and Et−1(i+k, x+l) are the edge values of the pixels (i, x+l) and (i+k, x+l) on the t-th and (t−1)-th edge image frames, respectively, and vt(y) is a defined value to obtain a y-direction movement parameter.
-
-
32. The target detecting system of claim 31, wherein the movement area extractor corrects the t-th frame image by the x′ and y′ values of the t-th frame obtained as x′=x+pan and y′=y+tilt using the x-direction movement parameter (pan), the y-direction movement parameter (tilt), and x and y of the (t−1)-th frame, and extracts the movement candidate areas by generating an image difference between the t-th frame image and the (t−1)-th frame image.
-
33. The target detecting system of claim 22, wherein the image characteristic information includes at least one among skin color, shape, and corner information.
-
34. The target detecting system of claim 22, wherein the shape extractor extracts the skin color areas from the movement candidate areas and then extracts the target area using the edge feature.
-
35. The target detecting system of claim 22, wherein the tracker includes a blob information extractor for extracting the blob information for the extracted shape of the target; and
a human shape tracker for removing blobs corresponding to noise by matching the blob information for the extracted target shape information with the blob information of the shape extracted at the previous frame and checking whether the target is correct by extracting a color distribution of the extracted blobs.
-
36. The target detecting system of claim 22, wherein the blob information includes at least one among blob position, blob size, blob shape, and color distribution.
Specification