Method for self-localization of robot based on object recognition and environment information around recognized object
Abstract
A method for self-localization of a robot is provided. The robot includes a camera unit, a database storing a map around a robot traveling path, and a position arithmetic unit estimating the position of the robot. The method includes: acquiring, in the camera unit, an image around the robot; recognizing, in the position arithmetic unit, an individual object in the acquired image to generate position values, on a camera coordinate system, of local feature points of the individual object and of local feature points of a surrounding environment including the object; and estimating, in the position arithmetic unit, the position of the robot on the basis of the stored map and the generated position values.
Claims (14)
1. A method for self-localization of a robot, the robot including a camera unit formed of a stereo video device, a database storing a map around a robot traveling path, and a position arithmetic unit estimating the position of the robot on the basis of image information acquired by the camera unit, the method comprising:

acquiring an image around the robot in the camera unit;

recognizing, in the position arithmetic unit, an individual object in the image acquired by the camera unit, to generate position values on a camera coordinate system of local feature points of the individual object and local feature points of a surrounding environment including the individual object; and

estimating, in the position arithmetic unit, the position of the robot on the basis of the map stored in the database and the position values on the camera coordinate system of the local feature points of the individual object and the local feature points of the surrounding environment including the individual object.
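The first two steps of claim 1 amount to stereo feature extraction followed by triangulation into the camera coordinate system. As a rough illustration only (not the patented method itself), the following sketch back-projects matched feature points from a rectified stereo pair; the focal lengths, principal point, baseline, and the upstream matching step are assumed placeholders.

```python
import numpy as np

def triangulate_points(pts_left, disparities, fx, fy, cx, cy, baseline):
    """Back-project matched feature points from a rectified stereo pair
    into 3-D camera-frame coordinates (hypothetical camera parameters)."""
    pts = []
    for (u, v), d in zip(pts_left, disparities):
        if d <= 0:                     # no valid stereo match for this feature
            continue
        z = fx * baseline / d          # depth from disparity
        x = (u - cx) * z / fx          # lateral offset
        y = (v - cy) * z / fy          # vertical offset
        pts.append((x, y, z))
    return np.array(pts)               # one row per local feature point
```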
2. The method of claim 1, wherein estimating the position of the robot includes:

selecting, as candidate nodes, nodes on which at least one recognized object is included from the map stored in the database, wherein the robot can be located on a candidate node;

calculating temporary positions of the robot for all the candidate nodes on the basis of the position values; and

applying particle filtering to each candidate node, with the calculated temporary position as an initial position value, to estimate the position of the robot.
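Claim 2 uses each candidate node's temporary pose as the seed of a particle filter. A minimal sketch of that initialization, assuming a simple Gaussian spread around each temporary pose (the spread values and pose layout are illustrative, not from the patent):

```python
import numpy as np

def init_particles(temporary_poses, n_samples=100, pos_sigma=0.1, ang_sigma=0.05):
    """Spawn a particle set around each candidate node's temporary pose.
    Each pose is (x, y, z, theta_x, theta_y, theta_z)."""
    particle_sets = {}
    for node_id, pose in temporary_poses.items():
        pose = np.asarray(pose, dtype=float)
        noise = np.random.normal(
            scale=[pos_sigma] * 3 + [ang_sigma] * 3, size=(n_samples, 6))
        particle_sets[node_id] = pose + noise   # samples s_t^(n) per node
    return particle_sets
```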
3. The method of claim 2, wherein the temporary positions of the robot are calculated by Equations 1 and 2,

$$
{}^{N}_{R}T =
\begin{bmatrix}
\cos\theta_z\cos\theta_y & \cos\theta_z\sin\theta_y\sin\theta_x - \sin\theta_z\cos\theta_x & \cos\theta_z\sin\theta_y\cos\theta_x + \sin\theta_z\sin\theta_x & t_x \\
\sin\theta_z\cos\theta_y & \sin\theta_z\sin\theta_y\sin\theta_x + \cos\theta_z\cos\theta_x & \sin\theta_z\sin\theta_y\cos\theta_x - \cos\theta_z\sin\theta_x & t_y \\
-\sin\theta_y & \cos\theta_y\sin\theta_x & \cos\theta_y\cos\theta_x & t_z \\
0 & 0 & 0 & 1
\end{bmatrix}
=
\begin{bmatrix}
r_{11} & r_{12} & r_{13} & t_x \\
r_{21} & r_{22} & r_{23} & t_y \\
r_{31} & r_{32} & r_{33} & t_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\quad [\text{Equation 1}]
$$

$$
{}^{N}V_i = {}^{N}_{R}T\,{}^{R}P_i \quad [\text{Equation 2}]
$$

where ${}^{R}P_i$ and ${}^{N}V_i$ are the position value on the camera coordinate system and the position value on an environment map-based coordinate system of the local feature points of the same object and the environment around the same object, respectively, ${}^{N}_{R}T$ is the homogeneous transformation matrix between the two position values, and the position of the robot can be expressed by $(x, y, z, \theta_x, \theta_y, \theta_z)$; and

wherein the position estimation step comprises: calculating a final weighted value to apply the final weighted value to particle filtering; and estimating the position of the robot by executing Equation 8 on the basis of only the samples of the node on which the samples are most converged, where $\bar{S}$ is the estimated position of the robot, $s_t^{(n)}$ is a sample of the robot position to be predicted, $\pi_t^{(n)}$ is the weighted value of the sample $s_t^{(n)}$ in a repetition step $t$, and $n_s$ is the total number of random samples.
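Equations 1 and 2 are the standard ZYX Euler-angle homogeneous transform. A small sketch of building ${}^{N}_{R}T$ and mapping a camera-frame feature point into the map frame (the function and variable names are mine, not the patent's):

```python
import numpy as np

def homogeneous_transform(tx, ty, tz, ax, ay, az):
    """Build the 4x4 matrix of Equation 1 from translation (tx, ty, tz)
    and ZYX Euler angles (ax, ay, az) = (theta_x, theta_y, theta_z)."""
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    return np.array([
        [cz * cy, cz * sy * sx - sz * cx, cz * sy * cx + sz * sx, tx],
        [sz * cy, sz * sy * sx + cz * cx, sz * sy * cx - cz * sx, ty],
        [-sy,     cy * sx,                cy * cx,                tz],
        [0.0,     0.0,                    0.0,                    1.0]])

def to_map_frame(T, p_camera):
    """Equation 2: map a camera-frame point R_P_i to the map-frame N_V_i."""
    p = np.append(np.asarray(p_camera, dtype=float), 1.0)  # homogeneous coords
    return (T @ p)[:3]
```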
4. The method of claim 1, wherein estimating the position of the robot includes:

selecting, as candidate nodes, nodes on which at least one recognized object is included from the map stored in the database, wherein the robot can be located on a candidate node;

calculating temporary positions of the robot for all the candidate nodes on the basis of the position values;

selecting a final candidate node from among the candidate nodes; and

applying particle filtering to the final candidate node, with the temporary position of the robot for the final candidate node as an initial position value, to estimate the position of the robot.
5. The method of claim 4, wherein the temporary positions of the robot are calculated by Equations 1 and 2,

$$
{}^{N}_{R}T =
\begin{bmatrix}
\cos\theta_z\cos\theta_y & \cos\theta_z\sin\theta_y\sin\theta_x - \sin\theta_z\cos\theta_x & \cos\theta_z\sin\theta_y\cos\theta_x + \sin\theta_z\sin\theta_x & t_x \\
\sin\theta_z\cos\theta_y & \sin\theta_z\sin\theta_y\sin\theta_x + \cos\theta_z\cos\theta_x & \sin\theta_z\sin\theta_y\cos\theta_x - \cos\theta_z\sin\theta_x & t_y \\
-\sin\theta_y & \cos\theta_y\sin\theta_x & \cos\theta_y\cos\theta_x & t_z \\
0 & 0 & 0 & 1
\end{bmatrix}
=
\begin{bmatrix}
r_{11} & r_{12} & r_{13} & t_x \\
r_{21} & r_{22} & r_{23} & t_y \\
r_{31} & r_{32} & r_{33} & t_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\quad [\text{Equation 1}]
$$

$$
{}^{N}V_i = {}^{N}_{R}T\,{}^{R}P_i \quad [\text{Equation 2}]
$$

where ${}^{R}P_i$ and ${}^{N}V_i$ are the position value on the camera coordinate system and the position value on an environment map-based coordinate system of the local feature points of the same object and the environment around the same object, respectively, ${}^{N}_{R}T$ is the homogeneous transformation matrix between the two position values, and the position of the robot can be expressed by $(x, y, z, \theta_x, \theta_y, \theta_z)$; and

wherein the position estimation step comprises: calculating a final weighted value to apply the final weighted value to particle filtering; and estimating the position of the robot by executing Equation 8, where $\bar{S}$ is the estimated position of the robot, $s_t^{(n)}$ is a sample of the robot position to be predicted, $\pi_t^{(n)}$ is the weighted value of the sample $s_t^{(n)}$ in a repetition step $t$, and $n_s$ is the total number of random samples.
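Equation 8 itself is not reproduced in this listing; from the where-clause (samples $s_t^{(n)}$, weights $\pi_t^{(n)}$, total $n_s$) it reads like the usual weighted sample mean, which is what the sketch below assumes:

```python
import numpy as np

def estimate_pose(samples, weights):
    """Assumed form of Equation 8: S_bar = sum_n pi_t^(n) * s_t^(n),
    with weights normalized over the n_s random samples. The linear
    average is naive for the angle components; a sketch only."""
    samples = np.asarray(samples, dtype=float)   # shape (n_s, 6)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # ensure sum(pi) == 1
    return weights @ samples                     # weighted mean pose
```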
6. The method of claim 4, wherein the final candidate node is the node having the maximum node proximity weighted value from among the node proximity weighted values calculated by Equation 12 for the candidate nodes,

$$
\pi_V(N_k) = \frac{\pi_E(N_k) + \pi_F(N_k) + \pi_D(N_k)}{\sum_{k=1}^{n_N}\bigl(\pi_E(N_k) + \pi_F(N_k) + \pi_D(N_k)\bigr)} = \frac{1}{3}\bigl(\pi_E(N_k) + \pi_F(N_k) + \pi_D(N_k)\bigr) \quad [\text{Equation 12}]
$$

where $\pi_V(N_k)$ is the node proximity weighted value of the robot for a node $N_k$, $\pi_E(N_k)$ is the object existence weighted value of the node $N_k$, $\pi_F(N_k)$ is the object-environment feature weighted value of the node $N_k$, $\pi_D(N_k)$ is the object-environment distance weighted value of the node $N_k$, and $n_N$ is the number of candidate nodes.
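The second equality in Equation 12 holds because each component weight is itself normalized over the candidate nodes, so the denominator sums to 3. A sketch under that assumption:

```python
import numpy as np

def node_proximity_weights(pi_e, pi_f, pi_d):
    """Equation 12: combine the three per-node weights. Each input array
    holds one value per candidate node and is assumed to sum to 1."""
    pi_e, pi_f, pi_d = (np.asarray(a, dtype=float) for a in (pi_e, pi_f, pi_d))
    total = pi_e + pi_f + pi_d
    return total / total.sum()     # equals total / 3 if inputs are normalized

# The final candidate node of claim 6 is then the argmax:
# best_node = int(np.argmax(node_proximity_weights(pi_e, pi_f, pi_d)))
```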
7. The method of claim 3, wherein the final weighted value is calculated by Equation 13 on the basis of the node proximity weighted value $\pi_V(N_k)$, indicating the node around which the robot is located, and a three-dimensional feature point weighted value $\pi_P^{(n)}$ of the sample $s_t^{(n)}$ according to a distance difference between the three-dimensional feature points of an object recognized by the robot and the environment around the object and the three-dimensional feature points of the same object on a node and the environment around the object, where $\pi^{(n)}$ is the final weighted value and $n_N$ is the number of candidate nodes.
8. The method of claim 5, wherein the final weighted value is calculated by Equation 13 on the basis of the node proximity weighted value $\pi_V(N_k)$, indicating the node around which the robot is located, and a three-dimensional feature point weighted value $\pi_P^{(n)}$ of the sample $s_t^{(n)}$ according to a distance difference between the three-dimensional feature points of an object recognized by the robot and the environment around the object and the three-dimensional feature points of the same object on a node and the environment around the object, where $\pi^{(n)}$ is the final weighted value and $n_N$ is the number of candidate nodes.
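Equation 13 is also not reproduced here; claims 7 and 8 only say that the final weight combines $\pi_V(N_k)$ with the per-sample 3-D feature point weight $\pi_P^{(n)}$. One plausible combination (a product followed by normalization, which is an assumption, not the patent's stated formula) would look like:

```python
import numpy as np

def final_weights(pi_v_node, pi_p_samples):
    """Hypothetical form of Equation 13: scale each sample's 3-D feature
    point weight pi_P^(n) by its node's proximity weight pi_V(N_k),
    then renormalize so the final weights pi^(n) sum to 1."""
    w = pi_v_node * np.asarray(pi_p_samples, dtype=float)
    return w / w.sum()
```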
9. The method of claim 7, wherein the node proximity weighted value $\pi_V(N_k)$ is determined by Equation 12, where $\pi_V(N_k)$ is the node proximity weighted value of the robot for a node $N_k$, $\pi_E(N_k)$ is the object existence weighted value of the node $N_k$, $\pi_F(N_k)$ is the object-environment feature weighted value of the node $N_k$, $\pi_D(N_k)$ is the object-environment distance weighted value of the node $N_k$, and $n_N$ is the number of candidate nodes.
10. The method of claim 9, wherein the object existence weighted value $\pi_E(N_k)$ is calculated by Equation 9, where $\pi_E(N_k)$ is the object existence weighted value of a node $N_k$, $E(R)$ is the collection of objects recognized by the robot, $E(N_k)$ is the collection of objects on the node $N_k$, and $n_{E(R \cap N_k)}$ is the number of same objects from among the objects recognized by the robot and the objects on the node $N_k$.
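Equation 9's body is elided in this listing; given the where-clause, a natural reading is the shared-object count $n_{E(R \cap N_k)}$ normalized across candidate nodes. A sketch under that assumption:

```python
def object_existence_weights(objects_recognized, objects_per_node):
    """Assumed form of Equation 9: weight each candidate node N_k by the
    number of objects it shares with the set E(R) the robot recognized,
    normalized so the weights sum to 1 over the candidate nodes."""
    recognized = set(objects_recognized)                     # E(R)
    counts = [len(recognized & set(objs))                    # n_E(R ∩ N_k)
              for objs in objects_per_node]                  # E(N_k) per node
    total = sum(counts)
    return [c / total if total else 0.0 for c in counts]
```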
11. The method of claim 9, wherein the object-environment feature weighted value $\pi_F(N_k)$ is calculated by Equation 10, where $\pi_F(N_k)$ is the object-environment feature weighted value of a node $N_k$, $F(R)$ is the collection of the local feature points of all objects recognized by the robot and the local feature points of the environment around the objects, $F(N_k)$ is the collection of the local feature points of objects on the node $N_k$ and the local feature points of the environment around the objects, and $n_{F(R \cap N_k)}$ is the number of matched local feature points between the local feature points of all objects recognized by the robot and the environment around the objects and the local feature points of the same objects on the node $N_k$ as the objects recognized by the robot and the environment around the objects.
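Equation 10 (body elided here) appears to be built on the matched-feature count $n_{F(R \cap N_k)}$, normalized over candidate nodes analogously to Equation 9. The sketch below shows one common way such a count could be produced; the descriptor representation and the 0.8 ratio test are illustrative assumptions, not the patent's:

```python
import numpy as np

def count_feature_matches(desc_robot, desc_node, ratio=0.8):
    """Count matched local feature points n_F(R ∩ N_k) between the robot's
    view and a node's stored object/environment features using a
    nearest-neighbour ratio test (needs at least two node descriptors)."""
    desc_node = np.asarray(desc_node, dtype=float)
    matches = 0
    for d in np.asarray(desc_robot, dtype=float):  # one descriptor per feature
        dists = np.linalg.norm(desc_node - d, axis=1)
        best, second = np.partition(dists, 1)[:2]  # two smallest distances
        if best < ratio * second:                  # unambiguous match
            matches += 1
    return matches
```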
12. The method of claim 9, wherein the object-environment distance weighted value $\pi_D(N_k)$ is calculated by Equation 11, where $\pi_D(N_k)$ is the object-environment distance weighted value of a node $N_k$, and $d_f(N_k)$ is the distance difference between the three-dimensional values of the fitted local feature points and the three-dimensional values of the same local feature points on the candidate node when the three-dimensional values of the local feature points of an object recognized by the robot and the environment around the object are fitted to the three-dimensional values of the local feature points of the same object on the candidate node and the environment around the object.
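Equation 11 is likewise elided; its where-clause describes a residual distance $d_f(N_k)$ left after fitting the robot's 3-D feature points to the node's. The sketch below computes such a residual using the Equation 1 transform; the inverse-distance weighting at the end is my assumption about how smaller residuals map to larger weights:

```python
import numpy as np

def fitting_residual(T, pts_camera, pts_node):
    """d_f(N_k): mean distance between the robot's feature points, mapped
    into the map frame by a fitted transform T, and the node's points."""
    pts = np.hstack([np.asarray(pts_camera, dtype=float),
                     np.ones((len(pts_camera), 1))])
    mapped = (T @ pts.T).T[:, :3]
    return float(np.mean(np.linalg.norm(mapped - np.asarray(pts_node), axis=1)))

def distance_weights(residuals):
    """Assumed form of Equation 11: smaller residuals d_f(N_k) yield
    larger weights, normalized over the candidate nodes."""
    inv = 1.0 / (np.asarray(residuals, dtype=float) + 1e-9)
    return inv / inv.sum()
```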
13. A method for self-localization of a robot, the robot including a camera unit formed by a stereo video device having a viewing angle smaller than 360°, a rotating device rotating the camera unit by a predetermined angle, a database storing a map around a robot traveling path, and a position arithmetic unit estimating the position of the robot on the basis of image information acquired by the camera unit, the method comprising:

acquiring, in the camera unit, an image around the robot in a predetermined direction;

when no object is recognized in the image acquired by the camera unit, rotating, in the position arithmetic unit, the camera unit and repeatedly acquiring an image in another direction until an object is recognized in the acquired image, and generating position values on a camera coordinate system of the local feature points of the recognized individual objects and the local feature points of the surrounding environment including the individual objects;

selecting, in the position arithmetic unit, nodes on which at least one recognized object is included, from the map stored in the database, as candidate nodes, wherein the robot can be located on the candidate nodes;

giving a priority to each candidate node by the position arithmetic unit;

calculating, in the position arithmetic unit, a distance between the objects on each candidate node and the robot, and a highest priority object;

confirming, in the position arithmetic unit, the highest priority object on each candidate node by rotating the camera unit; and

applying particle filtering to the respective candidate nodes having confirmed highest priority objects, thereby estimating the position of the robot, in the position arithmetic unit.
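Claim 13's acquisition step is a rotate-until-recognized loop for a camera whose viewing angle is smaller than 360°. A minimal control-loop sketch; `camera`, `recognizer`, and the 45° step are hypothetical stand-ins, not interfaces defined by the patent:

```python
def acquire_until_object_found(camera, recognizer, step_deg=45.0):
    """Rotate the camera by a predetermined angle until an object is
    recognized in the acquired image, then return the recognition result.
    Returns None after a full revolution with no recognized object."""
    rotated = 0.0
    while rotated < 360.0:
        image = camera.capture()            # image in the current direction
        objects = recognizer.detect(image)  # individual object recognition
        if objects:
            return objects                  # feature point generation follows
        camera.rotate(step_deg)             # try another direction
        rotated += step_deg
    return None
```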
14. The method of claim 13, wherein estimating the position of the robot comprises, if a highest priority object in a candidate node is not recognized, confirming a highest priority object of a next highest priority candidate node from among the candidate nodes according to the given priority.
Current Assignee: Korea Advanced Institute of Science and Technology
Original Assignee: Korea Advanced Institute of Science and Technology
Inventors: Park, Sung Kee; Park, Soon Yong
Status: Granted Patent
US Class (Current): 700/259
CPC Class Codes:
- B25J 19/023 (including video camera means)
- B25J 5/00 (Manipulators mounted on whe...)
- G05D 1/0251 (extracting 3D information f...)
- G05D 1/0274 (using mapping information s...)
- G06T 2207/10021 (Stereoscopic video; Stereos...)
- G06T 2207/30252 (Vehicle exterior; Vicinity ...)
- G06T 7/80 (Analysis of captured images...)