Autonomous vehicle and motion control therefor
Abstract
There are provided an autonomous vehicle, and an apparatus and method for estimating the motion of the autonomous vehicle and detecting three-dimensional (3D) information of an object appearing in front of the moving autonomous vehicle. The autonomous vehicle measures its orientation using an acceleration sensor and a magnetic flux sensor, and extracts epipolar geometry information using the measured orientation information. Since the number of corresponding points between images required for extracting the epipolar geometry information can be reduced to two, the motion information of the autonomous vehicle and the 3D information of an object in front of it can be obtained more easily and accurately.
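The abstract's central point, that a known rotation reduces the required correspondences to two, can be sketched in a few lines. This is an illustrative reading rather than the patent's own formulation: it assumes normalized image coordinates and the standard essential-matrix form of the epipolar constraint.

```python
import numpy as np

def translation_from_two_points(R, x1a, x2a, x1b, x2b):
    """Recover the translation direction (up to sign and scale) from
    two correspondences, given a known rotation R.

    Each correspondence (x1, x2) in normalized homogeneous image
    coordinates satisfies the epipolar constraint
        x2^T [t]_x R x1 = 0,
    which is linear in t:  ((R @ x1) x x2) . t = 0.
    Two such constraints confine t to a line through the origin, so
    t is the cross product of the two constraint vectors.
    """
    a = np.cross(R @ x1a, x2a)   # first linear constraint on t
    b = np.cross(R @ x1b, x2b)   # second linear constraint on t
    t = np.cross(a, b)
    return t / np.linalg.norm(t)
```

As with any purely epipolar construction, only the direction of the translation is observable; the overall scale and sign must come from elsewhere.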
58 Citations
49 Claims
1. An autonomous vehicle, comprising:
a corresponding point detection unit for detecting corresponding points from at least two consecutive images obtained through a camera;
an orientation measuring unit for computing a rotation matrix based on orientation information of the autonomous vehicle;
an epipolar computation unit for computing epipolar geometry information based on the rotation matrix and translation information;
a motion analysis unit for analyzing motion of the autonomous vehicle based on the computed epipolar geometry information; and
a three-dimensional (3D) information analysis unit for analyzing 3D information of the object existing in front of the autonomous vehicle based on the computed epipolar geometry information,
wherein when a relationship between a coordinate system of the camera and a coordinate system of the images is represented by using the rotation matrix and the translation information, the translation information is determined by applying the rotation matrix and coordinates of at least two corresponding points to the relationship, where the orientation information represents orientation information between the two consecutive images, and the translation information represents translation information with respect to an earliest obtained image of the two consecutive images.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
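The abstract names an acceleration sensor and a magnetic flux sensor as the source of the orientation used by the orientation measuring unit. A plausible sketch of how such a sensor pair yields a rotation matrix is the standard tilt-compensated compass construction; the patent does not spell out this computation, so the following is an assumption, not the claimed method.

```python
import numpy as np

def rotation_from_sensors(accel, mag):
    """Hypothetical sketch: build a rotation matrix from an
    acceleration sensor (gravity gives roll/pitch) and a magnetic
    flux sensor (field direction gives yaw), using the common
    tilt-compensated compass construction.
    """
    # Unit "down" axis from the measured gravity vector.
    down = accel / np.linalg.norm(accel)
    # The field component orthogonal to gravity fixes the heading:
    # down x mag points east in the horizontal plane.
    east = np.cross(down, mag)
    east = east / np.linalg.norm(east)
    north = np.cross(east, down)
    # Rows are the world axes expressed in sensor coordinates,
    # so the result is an orthonormal rotation matrix.
    return np.vstack([north, east, down])
```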
9. An apparatus for estimating the motion of an autonomous vehicle, comprising:
a corresponding point detection unit for detecting corresponding points from at least two consecutive images obtained through a camera;
an orientation measuring unit for computing orientation information of the autonomous vehicle;
an epipolar computation unit for computing epipolar geometry information based on the rotation matrix and translation information; and
a motion analysis unit for analyzing the motion of the autonomous vehicle based on the computed epipolar geometry information,
wherein when a relationship between a coordinate system of the camera and a coordinate system of the images is represented by using the rotation matrix and the translation information, the translation information is determined by applying the rotation matrix and coordinates of at least two corresponding points to the relationship, where the orientation information represents orientation information between the two consecutive images, and the translation information represents translation information with respect to an earliest obtained image of the two consecutive images.
- View Dependent Claims (10, 11, 12, 13, 14, 15)
16. An apparatus for detecting three-dimensional (3D) information of an object existing in front of an autonomous vehicle, comprising:
a corresponding point detection unit for detecting corresponding points from at least two consecutive images obtained through a camera;
an orientation measuring unit for computing a rotation matrix based on orientation information of the autonomous vehicle;
an epipolar computation unit for computing epipolar geometry information based on the rotation matrix and translation information; and
a 3D information analysis unit for analyzing 3D information of the object existing in front of the autonomous vehicle based on the computed epipolar geometry information,
wherein when a relationship between a coordinate system of the camera and a coordinate system of the images is represented by using the rotation matrix and the translation information, the translation information is determined by applying the rotation matrix and coordinates of at least two corresponding points to the relationship, where the orientation information represents orientation information between the two consecutive images, and the translation information represents translation information with respect to an earliest obtained image of the two consecutive images.
- View Dependent Claims (17, 18, 19, 20, 21, 22, 23)
24. A method for controlling the motion of an autonomous vehicle comprising the steps of:
(a) detecting corresponding points from at least two consecutive images obtained through a camera;
(b) computing a rotation matrix based on orientation information of the autonomous vehicle;
(c) computing epipolar geometry information based on the rotation matrix and translation information;
(d) analyzing the motion of the autonomous vehicle based on the computed epipolar geometry information; and
(e) analyzing three-dimensional information of an object existing in front of the autonomous vehicle based on the computed epipolar geometry information,
wherein when a relationship between a coordinate system of the camera and a coordinate system of the images is represented by using the rotation matrix and the translation information, the translation information is determined by applying the rotation matrix and coordinates of at least two corresponding points to the relationship, where the orientation information represents orientation information between the two consecutive images, and the translation information represents translation information with respect to an earliest obtained image of the two consecutive images.
- View Dependent Claims (25, 26, 27, 28, 29, 30, 31, 47)
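The step of computing "epipolar geometry information" from the rotation matrix and translation information is commonly realized as the essential matrix E = [t]_x R. Equating the claimed epipolar geometry information with E is our reading, not the claim's wording, but it makes the step concrete:

```python
import numpy as np

def essential_matrix(R, t):
    """One common form of epipolar geometry information: the
    essential matrix E = [t]_x R built from a known rotation R
    and translation t. Corresponding normalized image points then
    satisfy x2^T @ E @ x1 = 0.
    """
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])  # cross-product matrix [t]_x
    return tx @ R
```

The constraint x2^T E x1 = 0 is exactly what the analysis units consume: it relates any correspondence between the two images to the vehicle's motion.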
32. A method for estimating the motion of an autonomous vehicle, comprising the steps of:
(a) detecting corresponding points from at least two consecutive images obtained through a camera;
(b) computing a rotation matrix based on orientation information of the autonomous vehicle;
(c) computing epipolar geometry information based on the rotation matrix and translation information; and
(d) analyzing motion of the autonomous vehicle based on the computed epipolar geometry information,
wherein when a relationship between a coordinate system of the camera and a coordinate system of the images is represented by using the rotation matrix and the translation information, the translation information is determined by applying the rotation matrix and coordinates of at least two corresponding points to the relationship, where the orientation information represents orientation information between the two consecutive images, and the translation information represents translation information with respect to an earliest obtained image of the two consecutive images.
- View Dependent Claims (33, 34, 35, 36, 37, 38, 48)
39. A method for detecting three-dimensional (3D) information of an object existing in front of an autonomous vehicle, comprising the steps of:
(a) detecting corresponding points from at least two consecutive images obtained through a camera;
(b) computing a rotation matrix based on orientation information of the autonomous vehicle;
(c) computing epipolar geometry information based on the rotation matrix and translation information; and
(d) analyzing 3D information of the object existing in front of the autonomous vehicle based on the computed epipolar geometry information,
wherein when a relationship between a coordinate system of the camera and a coordinate system of the images is represented by using the rotation matrix and the translation information, the translation information is determined by applying the rotation matrix and coordinates of at least two corresponding points to the relationship, where the orientation information represents orientation information between the two consecutive images, and the translation information represents translation information with respect to an earliest obtained image of the two consecutive images.
- View Dependent Claims (40, 41, 42, 43, 44, 45, 46, 49)
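One standard way to turn the computed epipolar geometry into 3D information about an object ahead of the vehicle is linear triangulation. The claims do not commit to a particular method, so the following is a hedged sketch under that assumption:

```python
import numpy as np

def triangulate(R, t, x1, x2):
    """Hypothetical sketch of the 3D analysis step: recover a 3D
    point from one correspondence once R and t are known.

    With X2 = R @ X1 + t and X1 = d1 * x1, X2 = d2 * x2 for
    normalized image points x1, x2 and unknown depths d1, d2, the
    constraint rearranges to d1 * (R @ x1) - d2 * x2 = -t, a 3x2
    linear system solved here in the least-squares sense.
    """
    A = np.stack([R @ x1, -x2], axis=1)            # 3x2 system matrix
    depths, *_ = np.linalg.lstsq(A, -t, rcond=None)
    d1, _ = depths
    return d1 * x1   # 3D point in the first camera's frame
```

Because t is only known up to scale from epipolar geometry alone, the triangulated point inherits that scale ambiguity unless the translation magnitude is supplied from another source.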
Specification