3D positioning apparatus and method
1 Assignment
0 Petitions
Abstract
A 3D positioning apparatus is used for an object that includes feature points and a reference point. The object undergoes movement from a first to a second position. The 3D positioning apparatus includes: an image sensor for capturing images of the object; and a processor for calculating, based on the captured images, initial coordinates of each feature point when the object is in the first position, initial coordinates of the reference point, final coordinates of the reference point when the object is in the second position, and final coordinates of each feature point. The processor calculates 3D translational information of the feature points using the initial and final coordinates of the reference point, and 3D rotational information of the feature points using the initial and final coordinates of each feature point. A 3D positioning method is also disclosed.
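The split the abstract describes (3D translational information from the reference point alone, 3D rotational information from the full feature-point set) can be sketched as follows. This is a hypothetical illustration, not the patent's disclosed implementation: the use of the Kabsch algorithm for the rotation step, and the function names, are assumptions.

```python
import numpy as np

def translation_from_reference(ref_initial, ref_final):
    """3D translational information: displacement of the reference point
    between the first and second positions."""
    return np.asarray(ref_final, dtype=float) - np.asarray(ref_initial, dtype=float)

def rotation_from_features(pts_initial, pts_final):
    """3D rotational information: best-fit rotation mapping the initial
    feature points onto the final ones (Kabsch algorithm, assumed here),
    after removing each set's centroid."""
    P = np.array(pts_initial, dtype=float)  # copies, so callers' arrays are untouched
    Q = np.array(pts_final, dtype=float)
    P -= P.mean(axis=0)
    Q -= Q.mean(axis=0)
    H = P.T @ Q                       # cross-covariance of the two point sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])        # guard against a reflection solution
    return Vt.T @ D @ U.T
```

Because the rotation is estimated after centroid removal, it is independent of the translation, which is why the claims can attribute translation to the reference point and rotation to the feature points separately.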
17 Claims
1. An apparatus for positioning an object, the object undergoing movement from a first position to a second position in a three-dimensional (3D) space, said apparatus comprising:
an image sensor for capturing images of the object, wherein said object has more than two feature points and a reference point derived from said feature points, said feature points having predetermined geometric relationships thereamong in the 3D space, said reference point and said feature points being on a common plane in the 3D space; and
a processor coupled to said image sensor for receiving and processing the images captured thereby, said processor being configured to calculate, on the basis of the images and the predetermined geometric relationships among said feature points, initial coordinates of each of the feature points when the object is in the first position, initial coordinates of the reference point when the object is in the first position, final coordinates of the reference point when the object is in the second position, and final coordinates of each of the feature points when the object is in the second position;
wherein said processor is further configured to calculate 3D translational information of the feature points on the basis of the initial and final coordinates of the reference point; and
wherein said processor is further configured to calculate 3D rotational information of the feature points on the basis of the initial and final coordinates of each of the feature points;
wherein said processor calculates the initial coordinates of each of the feature points by:
calculating from at least one image of the object when it is in the first position a depth coordinate of said each of the feature points when the object is in the first position; and
calculating a horizontal coordinate and a vertical coordinate of said each of the feature points when the object is in the first position on the basis of a relation among the horizontal coordinate, the vertical coordinate, and the depth coordinate of said each of the feature points; and
wherein said processor calculates the depth coordinate of each of the feature points when the object is in the first position by:
calculating a first distance between one of the feature points and a projection center of said apparatus;
calculating a second distance between another one of the feature points and the projection center of said apparatus; and
if a difference between the first and second distances is greater than or equal to a threshold distance, calculating a third distance from the projection center to a point between said one of the feature points and said another one of the feature points according to the first distance and the second distance, and calculating the depth coordinate of said each of the feature points when the object is in the first position according to the third distance. - View Dependent Claims (2, 3, 4, 5, 6, 7)
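One plausible reading of the depth-coordinate limitation above can be sketched as follows, under stated assumptions: the "point between" the two feature points is taken to be the segment midpoint, the predetermined geometric relationship supplies the known inter-point distance L, Apollonius' median theorem gives the claimed third distance, and the depth coordinate is recovered from that distance along a calibrated unit viewing ray. None of these specifics are recited in the claim.

```python
import math

def depth_from_two_distances(r1, r2, L, unit_ray_z, threshold=1e-6):
    """Hypothetical sketch of the claimed depth computation.

    r1, r2     -- first and second distances: feature points to projection center
    L          -- known 3D distance between the two feature points (assumed
                  available from the predetermined geometric relationships)
    unit_ray_z -- z-component of the unit viewing ray through the midpoint's
                  image (assumed from a calibrated pinhole model)
    """
    if abs(r1 - r2) < threshold:
        return None  # the claim only recites the >= threshold branch
    # Apollonius' median theorem: distance from the projection center to the
    # midpoint of the segment joining the two feature points (the claimed
    # "third distance"), computed from the first and second distances.
    r3 = math.sqrt((2.0 * r1 ** 2 + 2.0 * r2 ** 2 - L ** 2) / 4.0)
    # Depth coordinate of the midpoint along the optical axis.
    return r3 * unit_ray_z
```

For example, with r1 = 5, r2 = 3, L = 4, the third distance is sqrt(13), which becomes the depth coordinate when the viewing ray lies on the optical axis (unit_ray_z = 1).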
8. A method for causing an image sensor and a processor of a 3D positioning apparatus to determine a position of an object that undergoes movement from a first position to a second position in a three-dimensional (3D) space, said method comprising:
(a) using the image sensor, capturing images of the object, the object having more than two identifiable feature points and a reference point derived from the feature points, the feature points having predetermined geometric relationships thereamong in the 3D space, the reference point and the feature points being on a common plane in the 3D space;
(b) using the processor, which is coupled to the image sensor for receiving the images captured thereby, calculating, on the basis of the images and the predetermined geometric relationships among the feature points, initial coordinates of each of the feature points when the object is in the first position, initial coordinates of the reference point when the object is in the first position, final coordinates of the reference point when the object is in the second position, and final coordinates of each of the feature points when the object is in the second position;
(c) using the processor, calculating 3D translational information of the feature points on the basis of the initial and final coordinates of the reference point; and
(d) using the processor, calculating 3D rotational information of the feature points on the basis of the initial and final coordinates of each of the feature points;
wherein, in step (b), the processor calculates the initial coordinates of each of the feature points by:
(b1) calculating from at least one image of the object when it is in the first position a depth coordinate of said each of the feature points when the object is in the first position; and
(b2) calculating a horizontal coordinate and a vertical coordinate of said each of the feature points when the object is in the first position on the basis of a relation among the horizontal coordinate, the vertical coordinate, and the depth coordinate of said each of the feature points; and
wherein, in step (b1), the processor calculates the depth coordinate of each of the feature points when the object is in the first position by:
calculating a first distance between one of the feature points and a projection center of the 3D positioning apparatus;
calculating a second distance between another one of the feature points and the projection center of the 3D positioning apparatus; and
if a difference between the first and second distances is greater than or equal to a threshold distance, calculating a third distance from the projection center to a point between said one of the feature points and said another one of the feature points according to the first distance and the second distance, and calculating the depth coordinate of said each of the feature points when the object is in the first position according to the third distance. - View Dependent Claims (9, 10, 11, 12, 13, 14, 15, 16, 17)
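The "relation among the horizontal coordinate, the vertical coordinate, and the depth coordinate" recited in step (b2) is not specified in the claim. Under a standard pinhole-camera assumption (focal length f and image-plane coordinates (u, v), both hypothetical here), the horizontal and vertical coordinates scale linearly with the depth coordinate:

```python
def xy_from_depth(u, v, z, f):
    """Sketch of step (b2) under a pinhole-camera assumption: given a
    feature point's image-plane coordinates (u, v), its depth coordinate z,
    and the focal length f, similar triangles give the horizontal and
    vertical 3D coordinates as x = z*u/f and y = z*v/f."""
    return (z * u / f, z * v / f)
```

For instance, an image point at (u, v) = (10, -5) with unit focal length and depth 2 maps to the 3D coordinates (20, -10), illustrating why the depth must be found first (as in step (b1)) before the horizontal and vertical coordinates can be resolved.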
Specification