Environment mapping with automatic motion model selection
First Claim
1. A method comprising:
receiving a video frame captured by a camera device into memory;
estimating a type of motion from a previously received video frame held in memory to the received video frame, the estimating of the type of motion including:
for each of at least two motion types, estimating the respective motion type;
applying a metric to each of the at least two estimated motion types; and
selecting a motion type that represents motion from the previously received video frame to the received video frame;
when the type of motion is the same as the motion type of a current keyframe group held in memory, adding the received video frame to the current keyframe group; and
when the type of motion is not the same as the motion type of the current keyframe group held in memory, creating a new keyframe group in memory and adding the received video frame to the new keyframe group.
Abstract
Various embodiments each include at least one of systems, methods, devices, and software for environment mapping with automatic motion model selection. One embodiment in the form of a method includes receiving a video frame captured by a camera device into memory and estimating a type of motion from a previously received video frame held in memory to the received video frame. When the type of motion is the same as the motion type of a current keyframe group held in memory, the method includes adding the received video frame to the current keyframe group. Conversely, when the type of motion is not the same as the motion type of the current keyframe group held in memory, the method includes creating a new keyframe group in memory and adding the received video frame to the new keyframe group.
17 Claims
1. A method comprising:
receiving a video frame captured by a camera device into memory;
estimating a type of motion from a previously received video frame held in memory to the received video frame, the estimating of the type of motion including:
for each of at least two motion types, estimating the respective motion type;
applying a metric to each of the at least two estimated motion types; and
selecting a motion type that represents motion from the previously received video frame to the received video frame;
when the type of motion is the same as the motion type of a current keyframe group held in memory, adding the received video frame to the current keyframe group; and
when the type of motion is not the same as the motion type of the current keyframe group held in memory, creating a new keyframe group in memory and adding the received video frame to the new keyframe group.
(Dependent claims: 2, 3, 4, 5, 6, 7, 8)
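The grouping loop of claim 1 can be sketched as follows. This is an illustrative sketch, not the patented implementation: the function names are invented, and the per-motion-type estimators and the scoring metric are passed in as stubs rather than implemented.

```python
# Hypothetical sketch of the claimed keyframe-grouping loop.
# `estimators` maps a motion-type label to a function estimating that
# motion between two frames; `metric` scores an estimated model
# (lower score = better fit), standing in for the claimed metric.

def select_motion_type(prev_frame, frame, estimators, metric):
    """Estimate each candidate motion type, score each estimate with the
    metric, and select the best-scoring type."""
    scored = {}
    for motion_type, estimate in estimators.items():
        model = estimate(prev_frame, frame)        # per-type estimation
        scored[motion_type] = metric(model, prev_frame, frame)
    return min(scored, key=scored.get)

def group_frames(frames, estimators, metric):
    """Extend the current keyframe group while the motion type matches;
    otherwise create a new keyframe group."""
    groups = []                                    # (motion_type, [frames])
    prev = None
    for frame in frames:
        if prev is None:
            groups.append((None, [frame]))         # first frame seeds a group
        else:
            motion = select_motion_type(prev, frame, estimators, metric)
            if groups[-1][0] in (None, motion):    # same type: extend group
                mt, members = groups[-1]
                members.append(frame)
                groups[-1] = (motion if mt is None else mt, members)
            else:                                  # type changed: new group
                groups.append((motion, [frame]))
        prev = frame
    return groups
```

With toy estimators that simply report a label carried on each frame, a rotation-to-parallax transition in the input splits the frames into two keyframe groups.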
9. A non-transitory computer readable medium, with instructions stored thereon, which, when executed by at least one processor, cause a computing device to perform data processing activities, the data processing activities comprising:
receiving a video frame captured by a camera device into memory;
identifying features in the received video frame;
estimating a type of motion from a previously received video frame held in memory to the received video frame;
when the type of motion is the same as the motion type of a current keyframe group held in memory, adding the received video frame to the current keyframe group;
when the type of motion is not the same as the motion type of the current keyframe group held in memory, creating a new keyframe group in memory and adding the received video frame to the new keyframe group; and
executing a modeling process against keyframe group data of each of one or more keyframe groups held in memory to generate and maintain keyframe group models held in memory, the modeling process including:
triangulating new features of a video frame of a particular keyframe group for which the video frame has not yet been added to a keyframe group model for the particular keyframe group and the new features are not yet present in the keyframe group model of the particular keyframe group; and
processing keyframe group models of disjoint keyframe groups to determine whether two or more keyframe group models can be joined, and joining disjoint keyframe group models that can be joined.
(Dependent claims: 10, 11, 12, 13, 14)
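The modeling process of claim 9 has two parts: triangulating features not yet in a group's model, and joining disjoint group models when possible. The sketch below uses hypothetical data structures (a model as a dict of feature id to 3-D point, with triangulation stubbed out and "can be joined" simplified to "shares a feature id"); a real system would run multi-view triangulation and align the joined models' coordinate frames.

```python
# Hypothetical sketch of the claimed modeling process over keyframe groups.

def update_group_model(model, frame_features, triangulate):
    """Triangulate only those features of a newly added frame that are
    not yet present in the keyframe group's model."""
    for feat_id, observation in frame_features.items():
        if feat_id not in model:
            model[feat_id] = triangulate(observation)  # add new 3-D point
    return model

def join_models(models):
    """Merge disjoint keyframe group models that can be joined.
    Joinability is simplified here to sharing any feature id."""
    merged = []
    for model in models:
        for target in merged:
            if set(target) & set(model):               # overlap: join them
                target.update(model)
                break
        else:
            merged.append(dict(model))                 # remains disjoint
    return merged
```

For example, a frame re-observing feature "b" does not re-triangulate it, and two group models that both contain feature "a" collapse into one joined model.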
15. A device comprising:
at least one processor;
at least one memory; and
an instruction set, stored in the at least one memory and executable by the at least one processor to perform data processing activities, the data processing activities comprising:
executing a tracking process including:
receiving a video frame into the at least one memory;
estimating a type of motion from a previously received video frame held in the at least one memory to the received video frame, the estimating including:
for each of at least parallax-inducing motion and rotation-only motion types, estimating the respective motion type;
applying a modified Geometric Robust Information Criterion metric to each of the estimated motion types; and
selecting a motion type that represents motion from the previously received video frame to the received video frame;
when the type of motion is the same as the motion type of a current keyframe group held in the at least one memory, adding the received video frame to the current keyframe group; and
when the type of motion is not the same as the motion type of the current keyframe group, creating a new keyframe group in the at least one memory and adding the received video frame to the new keyframe group.
(Dependent claims: 16, 17)
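Claim 15 names a modified Geometric Robust Information Criterion (GRIC) for choosing between parallax-inducing and rotation-only motion. The claim does not specify the modification, so the sketch below implements the standard GRIC form (Torr's model-selection score comparing a homography fit, which explains rotation-only motion, against a fundamental-matrix fit, which explains parallax-inducing motion); the parameter values and function names are illustrative.

```python
import math

# Sketch of GRIC-style model selection between motion types.
# Standard GRIC: sum of robustly capped residual terms plus penalties
# on model-manifold dimension d and parameter count k; lower is better.

def gric(residuals_sq, sigma_sq, n, d, k, r=4.0):
    """residuals_sq : squared fitting errors, one per correspondence
    sigma_sq     : assumed measurement-noise variance
    d            : model manifold dimension (2 homography, 3 fundamental)
    k            : model parameter count (8 homography, 7 fundamental)
    r            : data dimension (two 2-D points per correspondence)"""
    lam1, lam2 = math.log(r), math.log(r * n)
    rho = sum(min(e / sigma_sq, 2.0 * (r - d)) for e in residuals_sq)
    return rho + lam1 * d * n + lam2 * k

def pick_motion_type(residuals_h, residuals_f, sigma_sq=1.0):
    """Rotation-only motion is modeled by a homography; parallax-inducing
    motion by a fundamental matrix. Pick the lower-GRIC model."""
    n = len(residuals_h)
    g_h = gric(residuals_h, sigma_sq, n, d=2, k=8)
    g_f = gric(residuals_f, sigma_sq, n, d=3, k=7)
    return "rotation-only" if g_h <= g_f else "parallax-inducing"
```

A useful property of this score: when both models fit the correspondences equally well, the penalty terms favor the simpler homography (rotation-only) hypothesis; only when homography residuals are clearly worse does the parallax-inducing model win.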
Specification