Image processing apparatus
Abstract
In an apparatus and method for creating a three-dimensional model of an object, images of the object taken from different, unknown positions are processed to identify the points in the images which correspond to the same point on the actual object (that is, “matching” points). The matching points are used to determine the relative positions from which the images were taken, and the matching points and calculated positions are then used to calculate points in a three-dimensional space representing points on the object. A number of different techniques are used to identify the matching points, and a number of candidate solutions for the relative positions are calculated and tested, the solution consistent with the largest number of matching points being selected. In one matching technique, edges in an image are identified by first identifying corner points in the image and then identifying edges between the corner points on the basis of edge orientation values of pixels; the edges are processed in strength order to remove cross-overs; the images are sub-divided into regions by connecting points at the ends of the edges on the basis of the edge strengths; and matching points within corresponding regions in two or more images are identified.
178 Citations
122 Claims
1. In an image processing apparatus having a processor for processing input signals defining images of an object taken from a plurality of undefined camera positions, a method of processing the input signals to produce signals defining matching features in the images, the method comprising the steps of:
(a) identifying matching features in the images using a first technique;
(b) calculating the camera positions using identified matching features;
(c) determining an accuracy of the calculated camera positions; and
(d) if the accuracy of the calculated camera positions is below a threshold, identifying further matching features in the images using a second technique and matching features identified by a user.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 17, 18.
(e) calculating the camera positions using one of (i) at least some of the matching features identified by the user and (ii) at least some of the matching features calculated using the second technique;
(f) determining an accuracy of the camera positions calculated in step (e); and
(g) if the accuracy determined in step (f) is below a threshold, retrying steps (d) to (f) until the accuracy is equal to, or above, the threshold.
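The control flow of steps (a) through (g) above can be sketched as follows. This is an illustrative stand-in, not the patented implementation: the function names are invented, and the "techniques" and the accuracy score are stubbed so only the match / estimate / test / fall-back loop is shown.

```python
def first_technique(images):
    # Stub for the claimed first matching technique (e.g. automatic
    # corner matching); returns a small set of candidate matches.
    return [("p1", "q1"), ("p2", "q2")]

def second_technique(images, seed_matches):
    # Stub for the claimed second technique, seeded by matches
    # identified by the user; yields a larger match set.
    return list(seed_matches) + [("p3", "q3"), ("p4", "q4"), ("p5", "q5")]

def estimate_cameras(matches):
    # Stand-in pose solver: the accuracy score here simply grows with
    # the number of supporting matches, echoing the consistency test
    # described in the abstract.
    accuracy = min(1.0, len(matches) / 5.0)
    return {"pose": "R|t"}, accuracy

def match_and_locate(images, user_matches, threshold=0.9, max_rounds=10):
    matches = first_technique(images)               # step (a)
    cameras, acc = estimate_cameras(matches)        # steps (b), (c)
    rounds = 0
    while acc < threshold and rounds < max_rounds:  # steps (d)-(g)
        matches = second_technique(images, user_matches)
        cameras, acc = estimate_cameras(matches)
        rounds += 1
    return cameras, matches, acc
```

With two user-identified matches as seeds, the first pass fails the threshold and the fallback pass clears it.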
5. A method according to claim 4, wherein step (e) comprises:
calculating the camera positions using features from a first set of matching features; and
calculating the camera positions using features from a second set of matching features.
6. A method according to claim 5, wherein the first set of matching features comprises features identified by the user; and
the second set of matching features comprises one of (i) matching features identified using the first technique or the second technique and (ii) matching features identified using the first technique or the second technique together with matching features identified by the user.
7. A method according to claim 4, wherein a camera has an optical center and the optical center has a position when an image is taken, and wherein step (e) includes calculating relative positions of the optical center when the images were taken.
8. A method according to claim 1, wherein the features identified by the user comprise a first number of features and the further matching features identified using the second technique and using the features identified by the user comprise a second number of features, and wherein the first number is less than the second number.

9. A method according to claim 1, wherein the input signals define images of the object taken from at least three undefined camera positions.

10. A method according to claim 1, further comprising the step of processing signals defining one of at least some of the matching features identified by the user and at least some of the matching features identified by using the second technique to generate object data defining a model of the object in a three-dimensional space.

11. A method according to claim 10, further comprising the step of processing the object data to generate image data.

12. A method according to claim 11, further comprising the step of recording the image data.

13. A method according to claim 10, further comprising the step of transmitting a signal conveying the object data.

14. A method according to claim 10, further comprising the step of recording the object data.

15. A method according to claim 1, wherein a camera has an optical center and the optical center has a position when an image is taken, and wherein step (b) includes calculating relative positions of the optical center when the images were taken.

17. A storage device storing instructions for causing a programmable processing apparatus to become operable to perform a method according to claim 1 or 16.

18. A signal carrying instructions for causing a programmable processing apparatus to become operable to perform a method according to claim 1 or 16.
16. An image processing method of processing image data comprising images of an object taken from a plurality of imaging positions of undefined relationship, so as to identify corresponding object features in the images, the method comprising:
identifying corresponding features using a first technique;
determining the relationship between the imaging positions using the identified features;
testing an accuracy of the determined relationship and, if it is not sufficiently high:
(i) receiving user-input signals identifying further corresponding features; and
(ii) identifying further corresponding features using a second technique and using the features identified in the signals received in step (i).
19. An image processing apparatus for processing input signals defining images of an object taken from a plurality of undefined camera positions to produce signals defining matching features in the images, comprising:
(a) a first feature matcher for identifying matching features in the images using a first technique;
(b) a first position calculator for calculating the camera positions using identified matching features;
(c) a first accuracy calculator for determining an accuracy of the calculated camera positions; and
(d) a second feature matcher for performing processing if the accuracy of the calculated camera positions is below a threshold to identify further matching features in the images using a second technique and matching features identified by a user.

Dependent claims: 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30.
(e) a second position calculator for calculating the camera positions using one of (i) at least some of the matching features identified by the user and (ii) at least some of the matching features calculated using the second technique; and
(f) a second accuracy calculator for determining an accuracy of the camera positions calculated by the second position calculator, the apparatus being controlled such that, if the accuracy determined by the second accuracy calculator is below a threshold, the operations performed by the second feature matcher, the second position calculator and the second accuracy calculator are retried until the accuracy is equal to, or above, the threshold.
23. Apparatus according to claim 22, wherein the second position calculator is arranged to calculate the camera positions by:
calculating the camera positions using features from a first set of matching features; and
calculating the camera positions using features from a second set of matching features.
24. Apparatus according to claim 23, wherein the first set of matching features comprises features identified by the user; and
the second set of matching features comprises one of (i) matching features identified using the first technique or the second technique and (ii) matching features identified using the first technique or the second technique together with matching features identified by the user.
25. Apparatus according to claim 19, wherein the features identified by the user comprise a first number of features and wherein the second feature matcher is arranged to operate so that the further matching features identified using the second technique and using the features identified by the user comprise a second number of features such that the second number is greater than the first number.
26. Apparatus according to claim 25, wherein a camera has an optical center and the optical center has a position when an image is taken, and wherein the second position calculator is arranged to calculate relative positions of the optical center when the images were taken.

27. Apparatus according to claim 19, wherein the input signals define images of the object taken from at least three undefined camera positions.

28. Apparatus according to claim 19, further comprising an object data generator to generate object data defining a model of the object in a three-dimensional space by processing signals defining one of at least some of the matching features identified by the user and at least some of the matching features identified by using the second technique.

29. Apparatus according to claim 28, further comprising an image data generator to generate image data by processing the object data and a display to display an image of the object.

30. Apparatus according to claim 19, wherein a camera has an optical center and the optical center has a position when an image is taken, and wherein the first position calculator is arranged to calculate relative positions of the optical center when the images were taken.

31. In an image processing apparatus having a processor for processing input signals defining images of an object taken from at least three undefined camera positions, a method of processing the input signals to produce signals defining matching features in the images and the camera positions, the method comprising the steps of:
(a) identifying matching features in first and second images of the object;
(b) calculating the camera positions for the first and second images using matching features identified in step (a);
(c) identifying further matching features in the first and second images using the camera positions calculated in step (b);
(d) matching at least one of the further matching features identified in the second image in step (c) with a feature in a third image of the object; and
(e) calculating the camera position for the third image using the matching features identified in the second and third images in step (d).

Dependent claims: 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 52, 53.
(i) identifying matching features in the first and second images using a first technique;
(ii) calculating the camera positions for the first and second images using the matching features identified in step (i);
(iii) determining an accuracy of the camera positions calculated in step (ii); and
(iv) if the accuracy calculated in step (iii) is below a threshold, identifying further matching features in the first and second images using a second technique and matching features identified by a user.
40. A method according to claim 39, wherein the first technique performed in step (i) comprises processing the input signals to identify matching corners in the first and second images.

41. A method according to claim 31, wherein steps (a) and (b) comprise:
(1) receiving input signals defining matching features in the first and second images identified by a user;
(2) identifying further matching features in the first and second images using the matching features identified in the input signals in step (1);
(3) calculating the camera positions for the first and second images using one of at least some of the matching features identified in step (1) and at least some of the matching features identified in step (2);
(4) determining an accuracy of the camera positions calculated in step (3); and
(5) if the accuracy of the calculated camera positions is below a threshold, retrying steps (1) to (4) until the accuracy is equal to, or above, the threshold.
42. A method according to claim 31, wherein step (c) comprises processing the input signals to search an area of the second image to identify a feature within the area which matches a feature at a location in the first image, the area searched within the second image being dependent upon the location of the feature in the first image and the camera positions calculated in step (b).
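The location-dependent search area in claim 42 is, in practice, commonly realized as an epipolar constraint: once the relative camera poses are known, a feature at a point in the first image can only match points near the corresponding epipolar line in the second image. The sketch below assumes the poses have already been reduced to a fundamental matrix F; the band width and the example F are illustrative, not taken from the patent.

```python
import numpy as np

def epipolar_line(F, x1):
    # Line l = F @ x1 (homogeneous coordinates); points x2 in the second
    # image that match x1 satisfy x2^T @ F @ x1 = 0, i.e. lie on l.
    return F @ np.array([x1[0], x1[1], 1.0])

def in_search_band(F, x1, x2, band=2.0):
    # True if x2 lies within `band` pixels of the epipolar line of x1,
    # i.e. inside the search area that depends on x1 and the camera poses.
    a, b, c = epipolar_line(F, x1)
    dist = abs(a * x2[0] + b * x2[1] + c) / np.hypot(a, b)
    return dist <= band
```

For a pure horizontal camera translation, F maps a point to the horizontal line through it, so candidate matches must share the point's row (to within the band).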
43. A method according to claim 31, further comprising the step of processing signals defining at least some of the matching features and the camera positions to generate object data defining a model of the object in a three-dimensional space.

44. A method according to claim 43, further comprising the step of processing the object data to generate image data.

45. A method according to claim 43, further comprising the step of transmitting a signal conveying the object data.

46. A method according to claim 43, further comprising the step of recording the object data.

47. A method according to claim 44, further comprising the step of recording the image data.

48. A method according to claim 31, wherein a camera has an optical center and the optical center has a position when an image is taken, and wherein step (b) includes calculating relative positions of the optical center when the first and second images were taken.

49. A method according to claim 31, wherein a camera has an optical center and the optical center has a position when an image is taken, and wherein step (e) includes calculating relative positions of the optical center when the second and third images were taken.

52. A storage device storing instructions for causing a programmable processing apparatus to become operable to perform a method according to claim 31, 50 or 51.

53. A signal carrying instructions for causing a programmable processing apparatus to become operable to perform a method according to any of claims 31, 50 or 51.
50. An image processing method of processing image data comprising at least three images of an object taken from imaging positions of undefined relationship, and signals defining corresponding object features in first and second images of the object, so as to determine the relationship between the imaging positions, the method comprising:
(a) determining the relationship between the imaging positions of the first and second images using corresponding features defined in the input signals;
(b) identifying at least one further corresponding feature in the first and second images using the relationship determined in step (a);
(c) identifying at least one feature in a third image of the object which corresponds to a further feature identified in the second image in step (b); and
(d) determining the relationship between the imaging positions of the second and third images using the corresponding features identified in step (c).
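Claims 50 and 51 chain pairwise imaging relationships: features carried from the second image into the third let the third position be expressed relative to the first. The toy below deliberately reduces each "relationship" to a 2-D image translation (real systems solve an essential-matrix or PnP problem) so that only the chaining logic is visible; all names are illustrative.

```python
def relative_shift(matches):
    # matches: list of ((x1, y1), (x2, y2)) correspondences between two
    # images; the "pose" here is just the mean point displacement.
    n = len(matches)
    dx = sum(b[0] - a[0] for a, b in matches) / n
    dy = sum(b[1] - a[1] for a, b in matches) / n
    return (dx, dy)

def chain(shift12, matches23):
    # Compose the known image-1 -> image-2 relationship with the
    # image-2 -> image-3 relationship estimated from shared features,
    # giving image 3's position relative to image 1.
    dx23, dy23 = relative_shift(matches23)
    return (shift12[0] + dx23, shift12[1] + dy23)
```

Composing a (3, 4) shift between images 1 and 2 with correspondences implying a (1, 2) shift between images 2 and 3 places image 3 at (4, 6) relative to image 1.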
51. An image processing method of processing image data comprising at least three images of an object and input signals defining a relationship between the positions at which first and second images of the object were recorded, so as to determine a relationship between the positions at which the second image and a third image of the object were recorded, the method comprising:
(a) identifying at least one pair of corresponding object features in the first and second images using the relationship defined in the input signals;
(b) identifying at least one feature in the third image which corresponds to a feature identified in the second image in step (a); and
(c) determining the relationship between the positions at which the second and third images were recorded using the corresponding features identified in step (b).
54. An image processing apparatus for processing input signals defining images of an object taken from at least three undefined camera positions, to produce signals defining matching features in the images and the camera positions, comprising:
(a) a first feature matcher for identifying matching features in first and second images of the object;
(b) a first position calculator for calculating the camera positions for the first and second images using matching features identified by the first feature matcher;
(c) a second feature matcher for identifying further matching features in the first and second images using the camera positions calculated by the first position calculator;
(d) a third feature matcher for matching at least one of the further matching features identified in the second image by the second feature matcher with a feature in a third image of the object; and
(e) a second position calculator for calculating the camera position for the third image using the matching features identified in the second and third images by the third feature matcher.

Dependent claims: 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69.
(i) identifying matching features in the first and second images using a first technique;
(ii) calculating the camera positions for the first and second images using the matching features identified in step (i);
(iii) determining an accuracy of the camera positions calculated in step (ii); and
(iv) if the accuracy calculated in step (iii) is below a threshold, identifying further matching features in the first and second images using a second technique and matching features identified by a user.
63. Apparatus according to claim 62, wherein the first technique performed in step (i) comprises processing the input signals to identify matching corners in the first and second images.

64. Apparatus according to claim 54, wherein the first feature matcher and the first position calculator are arranged to operate by:
(1) receiving signals defining matching features in the first and second images identified by a user;
(2) identifying further matching features in the first and second images using the matching features identified in the input signals in step (1);
(3) calculating the camera positions for the first and second images using one of at least some of the matching features identified in step (1) and at least some of the matching features identified in step (2);
(4) determining an accuracy of the camera positions calculated in step (3); and
(5) if the accuracy of the calculated camera positions is below a threshold, retrying steps (1) to (4) until the accuracy is equal to, or above, the threshold.
65. Apparatus according to claim 54, wherein the second feature matcher is arranged to process the input signals to search an area of the second image to identify a feature within the area which matches a feature at a location in the first image, the area searched within the second image being dependent upon the location of the feature in the first image and the camera positions calculated by the first position calculator.
66. Apparatus according to claim 54, further comprising an object data generator to generate object data defining a model of the object in a three-dimensional space by processing signals defining at least some of the matching features and the camera positions.

67. Apparatus according to claim 66, further comprising an image data generator to generate image data by processing the object data and a display to display an image of the object.

68. Apparatus according to claim 54, wherein a camera has an optical center and the optical center has a position when an image is taken, and wherein the first position calculator includes an optical center calculator for calculating relative positions of the optical center when the first and second images were taken.

69. Apparatus according to claim 54, wherein a camera has an optical center and the optical center has a position when an image is taken, and wherein the second position calculator includes an optical center calculator for calculating relative positions of the optical center when the second and third images were taken.

70. In an image processing apparatus having a processor for processing first input signals defining images of an object taken from a plurality of undefined camera positions and second input signals defining matching features in the images, a method of processing the first and second input signals to produce signals defining further matching features in the images, the method comprising the steps of:
dividing each image into regions by connecting the matching features defined by the second input signals;
calculating a transformation of corresponding regions between images; and
identifying matching features within corresponding regions using the calculated transformations.

Dependent claims: 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 96, 97.
(i) testing the edge of highest combined strength against each edge of lower combined strength, in order of decreasing combined strength, and, if it is determined that the two edges cross, deleting the edge with the lower combined strength;
(ii) testing the edge of next highest combined strength which remains against each edge of lower combined strength which remains, in order of decreasing combined strength and, if it is determined that the two edges cross, deleting the edge with the lower combined strength; and
(iii) retrying step (ii) until the edge of next highest combined strength which remains has the lowest combined strength of the remaining edges.
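Steps (i) through (iii) above amount to a greedy pass over the edges in decreasing combined strength, where an edge survives only if it does not cross a stronger surviving edge. The sketch below implements that pass with a standard proper-intersection test (shared endpoints and collinear overlaps are not treated as crossings); the edge representation is illustrative.

```python
def _ccw(a, b, c):
    # Signed area test: positive if a, b, c turn counter-clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def _cross(p, q):
    # True if segments p = (p1, p2) and q = (q1, q2) properly intersect.
    (p1, p2), (q1, q2) = p, q
    d1 = _ccw(q1, q2, p1); d2 = _ccw(q1, q2, p2)
    d3 = _ccw(p1, p2, q1); d4 = _ccw(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def remove_crossings(edges):
    # edges: list of (combined_strength, (x1, y1), (x2, y2)).
    # Process in decreasing strength; delete any edge that crosses a
    # stronger edge that has already been kept (steps (i)-(iii)).
    kept = []
    for s, a, b in sorted(edges, key=lambda e: -e[0]):
        if not any(_cross((a, b), (c, d)) for _, c, d in kept):
            kept.append((s, a, b))
    return kept
```

Given two crossing diagonals and one disjoint vertical edge, the weaker diagonal is deleted and the other two edges survive.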
81. A method according to claim 76, wherein any three matched features having therebetween two edges having a strength greater than a threshold are connected to form a triangular region in the first image and in the second image.
82. A method according to claim 70, wherein, in the step of identifying matching features within corresponding regions, features having an approximately uniform spatial separation in a first of the images are selected for matching against features in a second of the images.
83. A method according to claim 82, wherein the features in the first image are selected by applying a grid to divide the first image into areas, and selecting features from the areas.
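Claim 83's grid selection can be sketched as keeping at most one feature per grid cell, which yields the approximately uniform spatial separation recited in claim 82. The cell size and the keep-first policy are illustrative choices, not specified by the claims.

```python
def select_spread(features, cell=32):
    # features: list of (x, y) positions in the first image.
    # Bucket features into cells of a notional grid and keep the first
    # feature seen in each cell, so the selection is roughly uniform.
    seen = {}
    for x, y in features:
        key = (int(x // cell), int(y // cell))
        seen.setdefault(key, (x, y))
    return list(seen.values())
```

Two features falling in the same 32-pixel cell collapse to one selection, while features in distinct cells are all kept.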
84. A method according to claim 70, wherein the input signals define images of the object taken from at least three undefined camera positions.
85. A method according to claim 84, wherein the step of identifying matching features within corresponding regions includes a step of trying to match at least some features in a first image of the object already matched with features in a second image of the object with features in a third image of the object.
86. A method according to claim 70, wherein the transformation calculated for corresponding regions between images is an affine transformation.
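An affine transformation between a pair of corresponding triangular regions, as in claim 86, is fully determined by the three vertex correspondences: each output coordinate is a linear function of (x, y, 1), giving a 3x3 linear system per coordinate. A minimal sketch, assuming the regions are triangles with matched vertices:

```python
import numpy as np

def affine_from_triangle(src, dst):
    # src, dst: three (x, y) vertices of corresponding triangles.
    # Solves dst_i = A @ [x_i, y_i, 1] for the 2x3 affine matrix A.
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    X = np.hstack([src, np.ones((3, 1))])  # 3x3 design matrix
    A = np.linalg.solve(X, dst).T          # 2x3 affine matrix
    return A

def warp(A, pt):
    # Map a point of the first region into the second region; candidate
    # matches can then be sought near the warped location.
    x, y = pt
    return tuple(A @ np.array([x, y, 1.0]))
```

For a triangle scaled by 2 and translated by (2, 3), the recovered transform maps the source centroid-side point (0.5, 0.5) to (3, 4), as expected.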
87. A method according to claim 70, further comprising the step of processing the first input signals to generate the second input signals.
88. A method according to claim 70, wherein the second input signals comprise signals defining matching features identified by a user from the images displayed by processing the first input signals.
89. A method according to claim 70, further comprising the step of processing signals defining at least some of the matching features to generate object data defining a model of the object in a three-dimensional space.
90. A method according to claim 89, further comprising the step of processing the object data to generate image data.

91. A method according to claim 90, further comprising the step of recording the image data.

92. A method according to claim 89, further comprising the step of transmitting a signal conveying the object data.

93. A method according to claim 89, further comprising the step of recording the object data.

94. A method according to claim 70, wherein the step of calculating the transformation of corresponding regions between images comprises calculating a transformation for each pair of corresponding regions.

96. A storage device storing instructions for causing a programmable processing apparatus to become operable to perform a method according to claim 70 or 95.

97. A signal carrying instructions for causing a programmable processing apparatus to become operable to perform a method according to claim 70 or 95.
95. An image processing method of processing image data comprising images of an object taken from a plurality of imaging positions of undefined relationship and signals defining corresponding object features in the images, so as to identify further corresponding features, the method comprising:
notionally dividing each image into triangular segments by connecting the corresponding features defined in the input signals;
determining a mapping of corresponding segments between images; and
identifying corresponding features using the determined mappings.
98. An image processing apparatus for processing first input signals defining images of an object taken from a plurality of undefined camera positions and second input signals defining matching features in the images to produce signals defining further matching features in the images, comprising:
an image divider for dividing each image into regions by connecting the matching features defined by the second input signals;
a transformation calculator for calculating a transformation of corresponding regions between images; and
a feature matcher for identifying matching features within corresponding regions using the calculated transformations.

Dependent claims: 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119.
(i) testing the edge of highest combined strength against each edge of lower combined strength, in order of decreasing combined strength, and, if it is determined that the two edges cross, deleting the edge with the lower combined strength;
(ii) testing the edge of next highest combined strength which remains against each edge of lower combined strength which remains, in order of decreasing combined strength and, if it is determined that the two edges cross, deleting the edge with the lower combined strength; and
(iii) retrying step (ii) until the edge of next highest combined strength which remains has the lowest combined strength of the remaining edges.
109. Apparatus according to claim 104, wherein the feature connector is arranged to connect any three matched features having therebetween two edges having a strength greater than a threshold to form a triangular region in the first image and in the second image.
110. Apparatus according to claim 98, wherein the feature matcher comprises a feature selector for selecting features having an approximately uniform spatial separation in a first of the images and a feature matcher for matching features selected by the feature selector against features in a second of the images.
111. Apparatus according to claim 110, wherein the feature selector is arranged to select the features in the first image by applying a grid to divide the first image into areas, and to select features from the areas.

112. Apparatus according to claim 98, wherein the input signals define images of the object taken from at least three undefined camera positions.
113. Apparatus according to claim 112, wherein the feature matcher is arranged to try to match at least some features in a first image of the object already matched with features in a second image of the object with features in a third image of the object.
114. Apparatus according to claim 98, wherein the transformation calculator is arranged to calculate an affine transformation.

115. Apparatus according to claim 98, further comprising an image data processor to process the first input signals to generate the second input signals.

116. Apparatus according to claim 98, wherein the second input signals comprise signals defining matching features identified by a user.

117. Apparatus according to claim 98, further comprising an object data generator to generate object data defining a model of the object in a three-dimensional space by processing signals defining at least some of the matching features.

118. Apparatus according to claim 117, further comprising an image data generator to generate image data by processing the object data and a display to display an image of the object.

119. Apparatus according to claim 98, wherein the transformation calculator is arranged to calculate a respective transformation for each pair of corresponding regions.

120. An image processing apparatus for processing input signals defining images of an object taken from a plurality of undefined camera positions to produce signals defining matching features in the images, comprising:
(a) means for identifying matching features in the images using a first technique;
(b) means for calculating the camera positions using identified matching features;
(c) means for determining an accuracy of the calculated camera positions; and
(d) means for performing processing if the accuracy of the calculated camera positions is below a threshold to identify further matching features in the images using a second technique and matching features identified by a user.
121. An image processing apparatus for processing input signals defining images of an object taken from at least three undefined camera positions, to produce signals defining matching features in the images and the camera positions, comprising:
(a) means for identifying matching features in first and second images of the object;
(b) means for calculating the camera positions for the first and second images using matching features identified by means (a);
(c) means for identifying further matching features in the first and second images using the camera positions calculated by means (b);
(d) means for matching at least one of the further matching features identified in the second image by means (c) with a feature in a third image of the object; and
(e) means for calculating the camera position for the third image using the matching features identified in the second and third images by means (d).
122. An image processing apparatus for processing first input signals defining images of an object taken from a plurality of undefined camera positions and second input signals defining matching features in the images to produce signals defining further matching features in the images, comprising:
means for dividing each image into regions by connecting the matching features defined by the second input signals;
means for calculating a transformation of corresponding regions between images; and
means for identifying matching features within corresponding regions using the calculated transformations.