Automated three dimensional model generation
First Claim
1. A method, comprising:
detecting an object within a field of view of an image capture device of a mobile computing device;
displaying a movement element in a graphical user interface together with an image of the object that is within the field of view of the image capture device, the movement element including one or more visual movement instructions for positioning the object within the field of view of the image capture device;
displaying in the graphical user interface, together with the image of the object, a position indicator, wherein a location of the position indicator is proportional to an amount of motion detected in the object, and wherein a predetermined number of key frames are identified by equally dividing a distance between an initial position and a final position of a portion of the object;
continuously moving the position indicator, between a start position and an end position, by an amount determined based on a current position of the object within the field of view and a distance of travel detected for the object;
identifying, within the predetermined number of key frames, first and second key frames corresponding respectively to first and second position changes of the object within the field of view of the image capture device relative to a starting position;
generating first and second depth maps having respectively first and second resolutions, the first depth map being generated based on the identified first key frame and the second depth map being generated based on the second key frame; and
generating a three-dimensional model of the object based on the first and second depth maps.
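The claim recites identifying a predetermined number of key frames by equally dividing the distance between an initial and a final position of the object, and a position indicator whose location is proportional to the detected motion. The patent discloses no source code, so the following is only an illustrative sketch under assumed names and a linear mapping; none of the identifiers come from the patent:

```python
def key_frame_positions(initial, final, num_key_frames):
    """Equally divide the travel distance between an initial and a final
    position, returning one position per key frame (hypothetical helper)."""
    if num_key_frames < 2:
        raise ValueError("need at least two key frames")
    step = (final - initial) / (num_key_frames - 1)
    return [initial + i * step for i in range(num_key_frames)]

def indicator_location(current, initial, final, start=0.0, end=1.0):
    """Map the object's current position to a location on the indicator
    track that is proportional to the distance traveled so far."""
    travel = final - initial
    fraction = 0.0 if travel == 0 else (current - initial) / travel
    fraction = max(0.0, min(1.0, fraction))  # clamp to the track's ends
    return start + fraction * (end - start)
```

With five key frames over a travel of 10 units, the boundaries fall at 0, 2.5, 5, 7.5, and 10, and an object halfway along its travel places the indicator halfway along its track.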
Abstract
In various example embodiments, systems and methods are presented for the generation and manipulation of three-dimensional (3D) models. The systems and methods cause presentation of an interface frame encompassing a field of view of an image capture device, detect an object of interest within the interface frame, generate a movement instruction with respect to the object of interest, and detect a first change in position and a second change in position of the object of interest. The systems and methods then generate a 3D model of the object of interest based on the first change in position and the second change in position.
18 Claims
1. A method, comprising:
detecting an object within a field of view of an image capture device of a mobile computing device;
displaying a movement element in a graphical user interface together with an image of the object that is within the field of view of the image capture device, the movement element including one or more visual movement instructions for positioning the object within the field of view of the image capture device;
displaying in the graphical user interface, together with the image of the object, a position indicator, wherein a location of the position indicator is proportional to an amount of motion detected in the object, and wherein a predetermined number of key frames are identified by equally dividing a distance between an initial position and a final position of a portion of the object;
continuously moving the position indicator, between a start position and an end position, by an amount determined based on a current position of the object within the field of view and a distance of travel detected for the object;
identifying, within the predetermined number of key frames, first and second key frames corresponding respectively to first and second position changes of the object within the field of view of the image capture device relative to a starting position;
generating first and second depth maps having respectively first and second resolutions, the first depth map being generated based on the identified first key frame and the second depth map being generated based on the second key frame; and
generating a three-dimensional model of the object based on the first and second depth maps.
View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11)
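The claim does not specify how a depth map is derived from a key frame. One conventional possibility, offered purely as an assumed illustration, is triangulation from the pixel disparity between two key frames captured at different positions, using the standard relation depth = focal length × baseline / disparity; every name below is hypothetical:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth (meters) from the disparity (pixels) of a feature
    between two key frames separated by a known baseline (assumed model)."""
    if disparity_px == 0:
        return float('inf')  # zero disparity means the point is at infinity
    return focal_px * baseline_m / disparity_px

def depth_map(disparities, focal_px, baseline_m):
    """Convert a 2D grid of disparities into a depth map of the same
    resolution; a coarser disparity grid yields a lower-resolution map."""
    return [[depth_from_disparity(d, focal_px, baseline_m) for d in row]
            for row in disparities]
```

Under this assumption, the first and second resolutions recited in the claim would simply follow from running the same computation on disparity grids of different densities.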
12. A system, comprising:
one or more processors;
an image capture device operatively coupled to the one or more processors; and
a non-transitory processor-readable storage medium storing processor-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
detecting an object within a field of view of an image capture device of a mobile computing device;
displaying a movement element in a graphical user interface together with an image of the object that is within the field of view of the image capture device, the movement element including one or more visual movement instructions for positioning the object within the field of view of the image capture device;
displaying in the graphical user interface, together with the image of the object, a position indicator, wherein a location of the position indicator is proportional to an amount of motion detected in the object, and wherein a predetermined number of key frames are identified by equally dividing a distance between an initial position and a final position of a portion of the object;
continuously moving the position indicator, between a start position and an end position, by an amount determined based on a current position of the object within the field of view and a distance of travel detected for the object;
identifying, within the predetermined number of key frames, first and second key frames corresponding respectively to first and second position changes of the object within the field of view of the image capture device relative to a starting position;
generating first and second depth maps having respectively first and second resolutions, the first depth map being generated based on the identified first key frame and the second depth map being generated based on the second key frame; and
generating a three-dimensional model of the object based on the first and second depth maps.
View Dependent Claims (13, 14, 15, 16, 17)
18. A non-transitory processor-readable storage medium storing processor-executable instructions that, when executed by one or more processors of a mobile computing device, cause the mobile computing device to perform operations comprising:
detecting an object within a field of view of an image capture device of a mobile computing device;
displaying a movement element in a graphical user interface together with an image of the object that is within the field of view of the image capture device, the movement element including one or more visual movement instructions for positioning the object within the field of view of the image capture device;
displaying in the graphical user interface, together with the image of the object, a position indicator, wherein a location of the position indicator is proportional to an amount of motion detected in the object, and wherein a predetermined number of key frames are identified by equally dividing a distance between an initial position and a final position of a portion of the object;
continuously moving the position indicator, between a start position and an end position, by an amount determined based on a current position of the object within the field of view and a distance of travel detected for the object;
identifying, within the predetermined number of key frames, first and second key frames corresponding respectively to first and second position changes of the object within the field of view of the image capture device relative to a starting position;
generating first and second depth maps having respectively first and second resolutions, the first depth map being generated based on the identified first key frame and the second depth map being generated based on the second key frame; and
generating a three-dimensional model of the object based on the first and second depth maps.
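The final step, generating a 3D model from the first and second depth maps, is likewise left open by the claims. A minimal sketch, assuming a pinhole camera model with hypothetical intrinsics, back-projects each depth sample into a 3D point and naively concatenates the coarse and fine point clouds; a real pipeline would instead fuse the maps and mesh the result:

```python
def depth_map_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (rows of depths in meters) into 3D points
    using an assumed pinhole model with intrinsics fx, fy, cx, cy."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:
                continue  # skip invalid or missing depth samples
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

def merge_point_clouds(coarse, fine):
    """Naive combination of points recovered from the low- and the
    high-resolution depth map (illustrative placeholder for fusion)."""
    return coarse + fine
```

The higher-resolution depth map contributes a denser set of points, which is one plausible reason the claim recites maps at two different resolutions.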
Specification