Methods and Systems for Vision-Based Motion Estimation
First Claim
1. A method for estimating motion, said method comprising:
receiving an incoming image;
performing feature detection on said received, incoming image to identify a plurality of regions in said received, incoming image, wherein each region in said plurality of regions is associated with a key point in an image coordinate frame;
computing a feature descriptor for each region in said plurality of regions, thereby producing a plurality of feature descriptors for said incoming image;
performing feature matching between said plurality of feature descriptors for said received, incoming image and a plurality of feature descriptors computed for a previous image, thereby producing a plurality of feature correspondences;
for each feature correspondence in said plurality of feature correspondences, projecting said associated key points from said image coordinate frames to a world coordinate frame, thereby producing a plurality of pairs of world coordinates;
computing a motion estimate from said plurality of pairs of world coordinates;
selecting a key pose;
generating a current camera pose in a global coordinate frame;
determining a motion trajectory from said current camera pose; and
updating said plurality of feature descriptors computed for a previous image to said plurality of feature descriptors for said received, incoming image.
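The feature-matching step recited above can be sketched as a brute-force nearest-neighbour search over descriptors. This is an illustrative assumption rather than the claimed method: the claim does not fix a particular matcher, and the ratio-test threshold below is hypothetical.

```python
import numpy as np

def match_descriptors(desc_prev, desc_cur, ratio=0.8):
    """Brute-force nearest-neighbour matching with a ratio test.

    Returns (current_index, previous_index) pairs, i.e. the feature
    correspondences between the incoming image and the previous image.
    """
    # Pairwise Euclidean distances: rows = current descriptors, cols = previous.
    dists = np.linalg.norm(desc_cur[:, None, :] - desc_prev[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(dists):
        j_best, j_second = np.argsort(row)[:2]
        # Keep the match only if the best is clearly better than the runner-up.
        if row[j_best] < ratio * row[j_second]:
            matches.append((i, int(j_best)))
    return matches
```

In a real pipeline the descriptors would come from a detector such as ORB or SIFT; here they are plain float vectors so the matching logic stands on its own.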
Abstract
Aspects of the present invention are related to methods and systems for vision-based computation of ego-motion.
20 Claims
1. A method for estimating motion, said method comprising:
receiving an incoming image;
performing feature detection on said received, incoming image to identify a plurality of regions in said received, incoming image, wherein each region in said plurality of regions is associated with a key point in an image coordinate frame;
computing a feature descriptor for each region in said plurality of regions, thereby producing a plurality of feature descriptors for said incoming image;
performing feature matching between said plurality of feature descriptors for said received, incoming image and a plurality of feature descriptors computed for a previous image, thereby producing a plurality of feature correspondences;
for each feature correspondence in said plurality of feature correspondences, projecting said associated key points from said image coordinate frames to a world coordinate frame, thereby producing a plurality of pairs of world coordinates;
computing a motion estimate from said plurality of pairs of world coordinates;
selecting a key pose;
generating a current camera pose in a global coordinate frame;
determining a motion trajectory from said current camera pose; and
updating said plurality of feature descriptors computed for a previous image to said plurality of feature descriptors for said received, incoming image.
Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14.
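The step of computing a motion estimate from pairs of world coordinates can be sketched as a least-squares rigid alignment. The Kabsch/SVD solver below is an assumption for illustration; the claims do not name a specific estimator.

```python
import numpy as np

def estimate_rigid_motion(p_prev, p_cur):
    """Least-squares rigid transform (Kabsch): R @ p_prev[i] + t ~= p_cur[i].

    p_prev, p_cur: (N, D) arrays of matched world coordinates.
    Returns rotation R (D, D) and translation t (D,).
    """
    c_prev = p_prev.mean(axis=0)
    c_cur = p_cur.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (p_prev - c_prev).T @ (p_cur - c_cur)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (H.shape[1] - 1) + [float(d)])
    R = Vt.T @ D @ U.T
    t = c_cur - R @ c_prev
    return R, t
```

In practice this solver would typically be wrapped in an outlier-rejection loop such as RANSAC, since feature correspondences are rarely all correct.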
15. A mobile agent comprising:
a rigidly mounted camera;
a processor component; and
a non-transitory computer-readable medium encoded with a computer program code for causing said processor component to execute a method for estimating motion, said method comprising:
receiving an incoming image;
performing feature detection on said received, incoming image to identify a plurality of regions in said received, incoming image, wherein each region in said plurality of regions is associated with a key point in an image coordinate frame;
computing a feature descriptor for each region in said plurality of regions, thereby producing a plurality of feature descriptors for said incoming image;
performing feature matching between said plurality of feature descriptors for said received, incoming image and a plurality of feature descriptors computed for a previous image, thereby producing a plurality of feature correspondences;
for each feature correspondence in said plurality of feature correspondences, projecting said associated key points from said image coordinate frames to a world coordinate frame, thereby producing a plurality of pairs of world coordinates;
computing a motion estimate from said plurality of pairs of world coordinates;
selecting a key pose;
generating a current camera pose in a global coordinate frame;
determining a motion trajectory from said current camera pose; and
updating said plurality of feature descriptors computed for a previous image to said plurality of feature descriptors for said received, incoming image.
Dependent claims: 16, 17.
18. A non-transitory computer-readable medium encoded with a computer program code for causing a processor to execute a method for estimating motion, said method comprising:
receiving an incoming image;
performing feature detection on said received, incoming image to identify a plurality of regions in said received, incoming image, wherein each region in said plurality of regions is associated with a key point in an image coordinate frame;
computing a feature descriptor for each region in said plurality of regions, thereby producing a plurality of feature descriptors for said incoming image;
performing feature matching between said plurality of feature descriptors for said received, incoming image and a plurality of feature descriptors computed for a previous image, thereby producing a plurality of feature correspondences;
for each feature correspondence in said plurality of feature correspondences, projecting said associated key points from said image coordinate frames to a world coordinate frame, thereby producing a plurality of pairs of world coordinates;
computing a motion estimate from said plurality of pairs of world coordinates;
selecting a key pose;
generating a current camera pose in a global coordinate frame;
determining a motion trajectory from said current camera pose; and
updating said plurality of feature descriptors computed for a previous image to said plurality of feature descriptors for said received, incoming image.
Dependent claims: 19, 20.
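The steps of generating a current camera pose in a global coordinate frame and determining a motion trajectory can be sketched by composing per-frame motion estimates as homogeneous transforms. The planar (2-D) representation below is an illustrative assumption, matching a ground-vehicle mobile agent; the same composition works in 3-D with 4x4 matrices.

```python
import numpy as np

def make_transform(R, t):
    """Pack a planar rotation R (2x2) and translation t (2,) into a 3x3 homogeneous matrix."""
    T = np.eye(3)
    T[:2, :2] = R
    T[:2, 2] = t
    return T

def chain_poses(increments):
    """Compose frame-to-frame motion estimates into global camera poses.

    increments: iterable of (R, t) motion estimates between consecutive frames.
    Returns the trajectory as a list of 3x3 global poses, starting at identity.
    """
    pose = np.eye(3)
    trajectory = [pose.copy()]
    for R, t in increments:
        pose = pose @ make_transform(R, t)  # accumulate motion into the global frame
        trajectory.append(pose.copy())
    return trajectory
```

The translation column of each pose gives the camera position along the motion trajectory in the global coordinate frame.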
Specification