Optimized object scanning using sensor fusion
Abstract
Sensor fusion is utilized in an electronic device, such as a head-mounted display (HMD) device, that has a sensor package equipped with different sensors. Information that is supplemental to captured 2D images of objects or scenes in a real-world environment is utilized to determine an optimized transform of image stereo-pairs and to discard erroneous data that would otherwise prevent successful scans used for construction of a 3D model in, for example, virtual world applications. Such supplemental information can include one or more of world location, world rotation, image data from an extended field of view (FOV), or depth map data.
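The abstract's core idea of discarding erroneous data using supplemental sensor information can be sketched as follows. This is a minimal illustration, not the patented method: the `Frame` type, `select_frames` function, and the angular-velocity threshold are all assumptions chosen to show how IMU readings captured alongside each image might be used to drop frames that are likely motion-blurred.

```python
# Hypothetical sketch: use IMU angular-velocity readings recorded at capture
# time to discard frames likely to be motion-blurred before 3D reconstruction.
# Frame, select_frames, and the 0.5 rad/s threshold are illustrative only.
from dataclasses import dataclass

@dataclass
class Frame:
    image_id: int
    angular_velocity: float  # rad/s reported by the device's IMU at capture time

def select_frames(frames, max_angular_velocity=0.5):
    """Keep only frames captured while the device rotated slowly enough."""
    return [f for f in frames if f.angular_velocity <= max_angular_velocity]

frames = [Frame(0, 0.1), Frame(1, 1.2), Frame(2, 0.3)]
kept = select_frames(frames)
# Frame 1 (fast rotation, likely blurred) is discarded; frames 0 and 2 survive.
```

The same pattern extends to any of the supplemental signals the abstract lists (world location, world rotation, extended-FOV image data, depth maps): each attaches per-frame metadata that a selection pass can score against a quality criterion.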
17 Claims
1. A method of capturing images of an object from which to construct a three-dimensional model of the object, the method comprising:

using an image capture device, capturing a plurality of images of the object in a physical environment from a plurality of vantage points;

identifying points in the plurality of images that are consistently located across all images as background points;

filtering points in the plurality of images to remove background points from use in construction of the three-dimensional model of the object;

using at least one sensor associated with the image capture device, detecting information about one or more of motion or pose of the image capture device;

selecting individual ones of the captured plurality of images on the basis of the detected information; and

sending the selected individual ones of the captured plurality of images to a process for constructing the three-dimensional model of the object.

- View Dependent Claims (2, 3, 4, 5, 6, 7)
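The background-filtering step of claim 1 can be sketched as follows, under stated assumptions: points are represented as tracks of (x, y) image coordinates across all captured views, and a point that stays (nearly) fixed in every view is classified as background and excluded from model construction. The function names, data layout, and pixel tolerance are illustrative, not from the patent.

```python
# Illustrative sketch of claim 1's background filtering: a point whose image
# position is consistent across all captured images is treated as background.
# Track format ([(x, y), ...] per point, one entry per image) and the 2-pixel
# tolerance are assumptions for illustration.
def is_background(track, tol=2.0):
    """track: list of (x, y) positions of one point, one per captured image."""
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    return (max(xs) - min(xs)) <= tol and (max(ys) - min(ys)) <= tol

def filter_background(tracks, tol=2.0):
    """Return only tracks that move between views, i.e. object points."""
    return [t for t in tracks if not is_background(t, tol)]
```

For example, a point on a distant wall projects to nearly the same pixel in every image and is filtered out, while a point on the scanned object shifts with each change of vantage point and is retained.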
8. A device operative to perform object scanning using sensor fusion, comprising:

an outward-facing image sensor operative to capture images of a scene within a space;

at least one sensor operative to detect one or more of a position, motion, or orientation of the device within the space, wherein the at least one sensor comprises one of a tracking camera, an inertial sensor, a magnetic 6-degrees-of-freedom position sensor, a lighthouse-based laser-scanning system, and synchronized photodiodes on the object being tracked;

one or more processors;

a data storage system operative to store images from the outward-facing image sensor and to store position, motion, or orientation data from the at least one sensor; and

a non-transitory machine-readable memory device operative to store instructions which, when executed, cause the one or more processors to capture a plurality of images of the scene from respective positions within the space, detect a position, motion, or orientation of the device within the space associated with the capture of each of the plurality of images of the scene, and discard one or more of the plurality of captured images based on the detected position, motion, or orientation of the device at a respective capture location.

- View Dependent Claims (9, 10, 11, 12, 13, 14)
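One plausible reading of claim 8's discard step is pruning redundant captures: keep an image only when the tracked device position has moved far enough from the previously kept image, so near-duplicate vantage points are discarded. The following sketch assumes tracked positions in metres and a minimum baseline threshold; both are illustrative choices, not details from the patent.

```python
# Hedged sketch of claim 8's discard step: drop captures whose tracked device
# position is too close to the last kept capture. The (x, y, z) position
# format and the 0.10 m minimum baseline are illustrative assumptions.
import math

def prune_by_baseline(captures, min_baseline=0.10):
    """captures: list of (image_id, (x, y, z)) device positions in metres."""
    kept = []
    for image_id, pos in captures:
        if not kept or math.dist(kept[-1][1], pos) >= min_baseline:
            kept.append((image_id, pos))
    return kept
```

The same hook could instead discard on motion or orientation data, e.g. rejecting captures taken while the tracked device velocity exceeded a blur threshold, matching the claim's "position, motion, or orientation" alternatives.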
15. A non-transitory machine-readable memory device operative to store instructions which, when executed by one or more processors disposed in an electronic device, cause the electronic device to:

perform object scanning by capturing a plurality of images of an object from a respective plurality of vantage points using a first camera disposed in the electronic device;

determine object poses for the scanning using a second camera disposed in the electronic device that has an extended field of view relative to the first camera;

generate world tracking metadata for the electronic device at each vantage point; and

utilize the world tracking metadata to combine a subset of the plurality of captured images into a three-dimensional model of the object,

in which the first camera has higher angular resolution or is configured to capture an increased level of detail relative to the second camera, and the tracking metadata is generated using one or more of a tracking camera or an inertial sensor incorporated in the electronic device.

- View Dependent Claims (16, 17)
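Claim 15 splits duties between a wide-FOV tracking camera (pose) and a narrow, high-detail camera (texture/geometry). One way the world tracking metadata might gate which detail images enter the model is sketched below; the metadata fields (`image_id`, `pose_confidence`) and the confidence threshold are assumptions made for illustration and do not appear in the patent.

```python
# Illustrative sketch of claim 15's subset selection: only detail-camera
# images whose pose (estimated via the wide-FOV tracking camera / IMU) was
# tracked reliably are passed to the 3D model builder. The metadata schema
# and the 0.8 confidence threshold are hypothetical.
def select_for_reconstruction(metadata, min_confidence=0.8):
    """metadata: list of dicts with 'image_id' and 'pose_confidence' keys."""
    return [m["image_id"] for m in metadata
            if m["pose_confidence"] >= min_confidence]
```

This reflects the design rationale the claim implies: the extended-FOV camera keeps tracking robust across large device motions, while the higher-angular-resolution camera supplies the detail, and only frames with trustworthy pose metadata are combined into the model.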
Specification