Egomotion estimation of an imaging device
Abstract
Described herein are techniques and systems to determine movement of an imaging device (egomotion) using an analysis of images captured by the imaging device. The imaging device, while in a first position, may capture a first image of an environment. The image may be a depth map, a still photograph, or another type of image that enables identification of objects, reference features, and/or other characteristics of the environment. The imaging device may then capture a second image from a second position within the environment after the imaging device moves from the first position to the second position. A comparison of corresponding reference features from the first image and the second image may be used to determine the translation and rotation of the imaging device.
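The comparison of corresponding reference features described in the abstract can be sketched as a least-squares rigid alignment of matched points. The SVD-based Kabsch method below is one standard solver for recovering rotation and translation from correspondences; the patent does not prescribe a particular solver, and the function name is illustrative.

```python
import numpy as np

def rigid_transform(src, dst):
    """Recover the rotation R and translation t that best map the
    matched points src onto dst (both N x 3) in the least-squares
    sense, via the SVD-based Kabsch method. One possible way to turn
    corresponding reference features into camera translation and
    rotation; not the patent's prescribed algorithm."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)   # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```

With noiseless correspondences this recovers the motion exactly; with noisy depth-map points it gives the least-squares estimate.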
25 Claims
1. A computer-implemented method comprising:
under control of one or more processors configured with executable instructions,
creating, using information captured by a camera, depth maps of an environment, the depth maps including a first depth map at a first time and second depth map at a second time, the environment including objects;
detecting planar surfaces of stationary objects in the first depth map and the second depth map using a random sampling and consensus algorithm, the stationary objects being a subset of the objects in the environment;
associating at least a first planar surface from the first depth map to a second planar surface in the second depth map, the first planar surface being identified as corresponding to the second planar surface based at least in part on an estimated translation and rotation of the camera between the first time and the second time; and
determining actual translation and rotation of the camera between the first time and the second time using iterative closest point analysis of locations of corresponding points that are located on the first planar surface and the second planar surface.
Dependent claims: 2, 3, 4, 5 (not shown).
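The random sampling and consensus (RANSAC) step for detecting a planar surface in a depth map can be sketched as follows. The iteration count, inlier tolerance, and function name are illustrative assumptions, not values from the patent.

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.01, seed=0):
    """Fit a dominant plane (unit normal n and offset d, with
    n . p + d = 0) to an N x 3 point cloud by random sampling and
    consensus: repeatedly fit a plane to 3 random points and keep
    the model with the most inliers within tol."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # degenerate (collinear) sample; resample
            continue
        n /= norm
        d = -n.dot(p0)
        inliers = np.abs(points @ n + d) < tol   # point-to-plane distances
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (n, d), inliers
    return best_model, best_inliers
```

In practice the step would be run repeatedly, removing each detected plane's inliers, to extract several planar surfaces per depth map.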
6. A method comprising:
generating, using an imaging device, images of an environment at a first time to create a first frame and at a second time to create a second frame;
detecting, via one or more processors, one or more reference features in the first frame and the second frame;
associating a first instance of a reference feature from the first frame to a second instance of the reference feature in the second frame as corresponding to a same reference feature, wherein the first instance and the second instance of the reference feature are determined based at least in part on a predicted movement of the imaging device between the first frame and the second frame; and
determining at least one of translation or rotation of the imaging device between the first time and the second time based at least in part on locations of corresponding points that are located on the first instance of the reference feature at the first time and the second instance of the reference feature at the second time.
Dependent claims: 7, 8, 9, 10, 11, 12, 13, 14, 15 (not shown).
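The association step above, which relies on a predicted movement of the imaging device, can be sketched as warping the frame-1 features by the predicted motion and matching each to its nearest frame-2 feature. The gating distance and all names here are illustrative assumptions rather than the patent's prescribed method.

```python
import numpy as np

def associate(feats1, feats2, R_pred, t_pred, max_dist=0.1):
    """Match frame-1 feature points (N x 3) to frame-2 feature points
    (M x 3): apply the predicted camera motion (R_pred, t_pred) to
    frame 1, then pair each warped point with the nearest frame-2
    point, rejecting matches farther than max_dist."""
    warped = feats1 @ R_pred.T + t_pred
    pairs = []
    for i, p in enumerate(warped):
        dists = np.linalg.norm(feats2 - p, axis=1)
        j = int(dists.argmin())
        if dists[j] < max_dist:       # gate out implausible matches
            pairs.append((i, j))
    return pairs
```

A good motion prediction keeps the residual displacement small, which is what makes the nearest-neighbour gating reliable.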
16. A system comprising:
a projector to project light onto objects in an environment;
an imaging device to capture images of the objects in the environment; and
one or more processors to execute instructions to:
detect reference features of the captured images, the reference features being at least one of planar surfaces or edges of at least a portion of the objects in the environment;
associate at least a first reference feature from a frame to a second, corresponding, reference feature in a successive frame, the first reference feature and the second reference feature associated based at least in part on a predicted movement of the imaging device between the frame and the successive frame; and
determine at least one of translation or rotation of the imaging device between the frame and the successive frame based at least in part on information associated with the first reference feature and the second reference feature.
Dependent claims: 17, 18, 19, 20, 21 (not shown).
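The iterative closest point analysis named in claim 1, which the associate/determine steps here echo frame to frame, can be sketched as alternating nearest-neighbour association with a Kabsch alignment step. This minimal point-to-point ICP is a sketch only; a practical system would add outlier rejection and a convergence test.

```python
import numpy as np

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: repeatedly (1) pair each current
    source point with its closest destination point and (2) solve the
    resulting rigid alignment with the SVD-based Kabsch method,
    accumulating the total rotation R_tot and translation t_tot."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # (1) closest-point association
        idx = np.array([np.linalg.norm(dst - p, axis=1).argmin() for p in cur])
        matched = dst[idx]
        # (2) Kabsch alignment of cur onto its matches
        c_cur, c_m = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - c_cur).T @ (matched - c_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = c_m - R @ c_cur
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t   # compose incremental motion
    return R_tot, t_tot
```

ICP converges to the correct motion only when the inter-frame displacement is small relative to point spacing, which is why the claims seed it with an estimated (predicted) translation and rotation.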
22. One or more non-transitory computer-readable media storing computer-executable instructions that, when executed on one or more processors, perform acts comprising:
generating, via an imaging device, images of an environment to create at least a first frame and a second frame;
detecting, via one or more processors, reference features in the first frame and the second frame;
associating a first reference feature from the first frame to a second, corresponding reference feature in the second frame, the first reference feature and the second reference feature associated based at least in part on a predicted movement of the imaging device between the first frame and the second frame; and
analyzing information associated with the first reference feature and the second reference feature to determine at least one of translation or rotation of the imaging device between the first frame and the second frame.
Dependent claims: 23, 24, 25 (not shown).
Specification