Visual positioning and navigation device and method thereof
Abstract
The present invention discloses a visual positioning and navigation device, comprising: a motion module, configured to drive a robot and acquire current pose information of the robot in real time; a camera module, configured to capture an environmental image during movement of the robot; an image processing module, configured to perform feature extraction and feature description for the environmental image; and a pose estimation module, configured to match the feature point description of the environmental image, build a feature database, calculate a pose correction of the robot, and obtain a corrected robot pose based on the current robot pose and the pose correction. The visual positioning and navigation device, and the method thereof, can build a scene map by detecting and tracking the feature information of ORB feature points on the indoor ceiling, so as to achieve accurate positioning and navigation of the robot.
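ORB feature points carry binary descriptors that are compared by Hamming distance. The patent does not disclose a matching algorithm, so the sketch below is a hypothetical illustration of how a pose estimation module might compare a new image's descriptors against a feature database; the descriptor width, names, and threshold are our assumptions, not the patent's.

```python
def hamming(a: int, b: int) -> int:
    # ORB descriptors are binary strings (modeled here as ints);
    # Hamming distance counts the differing bits between two descriptors.
    return bin(a ^ b).count("1")

def match_descriptors(query, database, max_dist=64):
    # Brute-force nearest-neighbour matching: for each descriptor from
    # the current environmental image, find the closest descriptor in
    # the feature database and keep it if it is within max_dist bits.
    matches = []
    for qi, q in enumerate(query):
        best = min(range(len(database)), key=lambda di: hamming(q, database[di]))
        if hamming(q, database[best]) <= max_dist:
            matches.append((qi, best))
    return matches
```

In practice a library implementation (e.g. a Hamming-norm brute-force or LSH matcher) would replace this loop, but the comparison step it performs is the same.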
12 Claims
1. A visual positioning and navigation device, comprising:

a motion module, configured to drive a robot and acquire current pose information of the robot in real time; a camera module comprising an image sensor that captures an environmental image during movement of the robot, wherein the camera module is mounted on top of the robot, and wherein the environmental image is a ceiling photo; an image processing module, configured to perform feature extraction and feature description for the environmental image captured by the image sensor of the camera module; and a pose estimation module, configured to build a feature database comprising a plurality of features, compare the feature description of the environmental image to the features in the feature database, calculate a pose correction of the robot, and obtain a corrected robot pose based on the current pose information and the pose correction, wherein the motion module is configured to obtain a deflection angle θ of the robot by a gyroscope, and obtain a movement distance d of the robot by a photoelectric pulse counter on a wheel of the robot, wherein the current pose information of the robot is calculated by the equation set:

Rθ2 = Rθ1 + θ;
Rx2 = Rx1 + d*cos(θ); and
Ry2 = Ry1 + d*sin(θ),

wherein Rθ2 is an angle in a polar coordinate system of the robot in a current pose, Rθ1 is an angle in the polar coordinate system of the robot in a previous pose, Rx2 is an x-component in the polar coordinate system of the robot in the current pose, Rx1 is an x-component in the polar coordinate system of the robot in the previous pose, Ry2 is a y-component in the polar coordinate system of the robot in the current pose, and Ry1 is a y-component in the polar coordinate system of the robot in the previous pose. - View Dependent Claims (2, 3, 4, 5, 6)
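The claimed equation set is a dead-reckoning update from the gyroscope deflection angle θ and the wheel-encoder distance d. A minimal Python sketch, implementing the claim's equations literally (note that the claim applies cos/sin to the deflection angle θ itself, not to the accumulated heading Rθ2; function and variable names are ours, not the patent's):

```python
import math

def dead_reckon(rx1, ry1, rtheta1, d, theta):
    """Update the robot pose per the claimed equation set.

    rx1, ry1, rtheta1 -- previous pose components (Rx1, Ry1, Rθ1)
    d                 -- movement distance from the photoelectric pulse counter
    theta             -- deflection angle from the gyroscope, in radians
    Returns the current pose (Rx2, Ry2, Rθ2).
    """
    rtheta2 = rtheta1 + theta            # Rθ2 = Rθ1 + θ
    rx2 = rx1 + d * math.cos(theta)      # Rx2 = Rx1 + d*cos(θ)
    ry2 = ry1 + d * math.sin(theta)      # Ry2 = Ry1 + d*sin(θ)
    return rx2, ry2, rtheta2
```

For example, a 2-unit move with a 90° deflection leaves Rx unchanged and adds 2 to Ry, matching the equations.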
7. A visual positioning and navigation method, comprising:

driving a robot and acquiring current pose information of the robot in real time during the driving;

capturing an environmental image during movement of the robot, wherein the environmental image is a ceiling photo; performing feature extraction and feature description for the captured environmental image; building a feature database comprising a plurality of features; and comparing the feature description of the environmental image to the features in the feature database that was built, calculating a pose correction of the robot, and obtaining a corrected robot pose based on the current pose information and the pose correction, wherein the step of acquiring the current pose information of the robot in real time comprises: obtaining a deflection angle θ of the robot by a gyroscope, and obtaining a movement distance d of the robot by a photoelectric pulse counter on a wheel of the robot, wherein the current pose information of the robot is calculated by the equation set:

Rθ2 = Rθ1 + θ;
Rx2 = Rx1 + d*cos(θ); and
Ry2 = Ry1 + d*sin(θ),

wherein Rθ2 is an angle in a polar coordinate system of the robot in a current pose, Rθ1 is an angle in the polar coordinate system of the robot in a previous pose, Rx2 is an x-component in the polar coordinate system of the robot in the current pose, Rx1 is an x-component in the polar coordinate system of the robot in the previous pose, Ry2 is a y-component in the polar coordinate system of the robot in the current pose, and Ry1 is a y-component in the polar coordinate system of the robot in the previous pose. - View Dependent Claims (8, 9, 10, 11, 12)
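The claims obtain the corrected robot pose "based on the current pose information and the pose correction" but do not specify how the two are combined. A minimal additive sketch, purely illustrative (component-wise summation with angle wrapping is our assumption, not the patent's disclosure):

```python
import math

def normalize_angle(a):
    # Wrap an angle to (-pi, pi] so successive headings stay comparable.
    while a <= -math.pi:
        a += 2 * math.pi
    while a > math.pi:
        a -= 2 * math.pi
    return a

def apply_correction(pose, correction):
    # pose and correction are (Rx, Ry, Rθ) tuples; the corrected pose is
    # assumed here to be a simple component-wise sum, with the heading
    # wrapped back into (-pi, pi].
    rx, ry, rth = pose
    dx, dy, dth = correction
    return rx + dx, ry + dy, normalize_angle(rth + dth)
```

A real system would typically fuse the dead-reckoned pose and the visual correction with a filter (e.g. an EKF) rather than a plain sum.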
Specification