Systems and methods for generating three-dimensional models using sensed position data
First Claim
1. A computer-implemented method for generating a three-dimensional (3D) model, the method comprising:
receiving, by one or more computing devices, a first set of sensed position data indicative of an orientation of a camera device used to acquire a first two-dimensional (2D) image, wherein the first set of sensed position data is provided by a sensor of the camera device used to acquire the first 2D image;
receiving, by the one or more computing devices, a second set of sensed position data indicative of an orientation of a camera device used to acquire a second two-dimensional (2D) image, wherein the second set of sensed position data is provided by a sensor of the camera device used to acquire the second 2D image;
determining, by the one or more computing devices, a sensed rotation matrix for an image pair comprising the first and second 2D images using the first and second sets of sensed position data;
identifying, by the one or more computing devices, a calculated camera transformation matrix for the image pair, the calculated transformation comprising a calculated translation vector and a calculated rotation matrix, wherein identifying the calculated camera transformation comprises: deriving a plurality of candidate calculated transformation matrices using a set of matching points of the first and second 2D images, wherein the candidate calculated transformation matrices each comprise a translation component and a calculated rotation matrix;
identifying a candidate calculated transformation matrix of the plurality of candidate calculated transformation matrices that is associated with the lowest transformation error; and
identifying the candidate calculated transformation matrix that is associated with the lowest transformation error as the calculated camera transformation;
generating, by the one or more computing devices, a sensed camera transformation matrix for the image pair, the sensed camera transformation comprising a translation vector and the sensed rotation matrix;
identifying, by the one or more computing devices, a set of matching points of the first and second 2D images;
determining, by the one or more computing devices, whether a first error associated with a transformation of the set of matching points using the sensed camera transformation is less than a second error associated with a transformation of the set of matching points using the calculated camera transformation; and
in response to determining, by the one or more computing devices, that the first error is less than the second error, generating a 3D model using the sensed camera transformation; and
storing, by the one or more computing devices, the 3D model in a 3D model repository.
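As a sketch of the "determining a sensed rotation matrix" step, the relative rotation for the image pair can be computed from the two camera orientations. The Z-Y-X (yaw-pitch-roll) Euler-angle input below is an assumption for illustration; a real device sensor may report quaternions or rotation vectors instead, and the function names are hypothetical.

```python
import numpy as np

def rotation_from_euler(yaw, pitch, roll):
    """World-to-camera rotation from Z-Y-X Euler angles (radians).

    The Euler convention here is an assumption; an actual camera
    device's sensor may use a different orientation representation.
    """
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def sensed_rotation_matrix(orientation_1, orientation_2):
    """Sensed rotation matrix for an image pair.

    Maps directions expressed in the first camera's frame into the
    second camera's frame: R_sensed = R2 @ R1^T.
    """
    R1 = rotation_from_euler(*orientation_1)
    R2 = rotation_from_euler(*orientation_2)
    return R2 @ R1.T
```

For identical orientations this yields the identity, and the result is always a proper rotation (orthonormal, determinant +1).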
Abstract
Embodiments include a computer-implemented method for generating a three-dimensional (3D) model. The method includes receiving first and second sets of sensed position data indicative of a position of a camera device at or near the time it is used to acquire the first and second images of an image pair, respectively; determining a sensed rotation matrix and/or a sensed translation vector for the image pair using the first and second sets of sensed position data; identifying a calculated camera transformation including a calculated translation vector and a calculated rotation matrix; generating a sensed camera transformation including the sensed rotation matrix and/or the sensed translation vector; and, if the sensed camera transformation is associated with a lower error than the calculated camera transformation, using the sensed camera transformation to generate the 3D model.
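The claims derive a plurality of candidate calculated transformation matrices from matching points and keep the one with the lowest transformation error. One standard way to do this (an assumption, since the claims do not mandate a particular construction) is to decompose an essential matrix into its four candidate (R, t) pairs and treat the number of matches that triangulate behind a camera as the error:

```python
import numpy as np

def candidate_transformations(E):
    """Decompose an essential matrix into the four candidate (R, t) pairs.

    Standard SVD construction (E = U diag(1, 1, 0) V^T); whether the
    claimed method derives its candidates this way is an assumption.
    """
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations: flip signs so det(U) = det(Vt) = +1.
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    t = U[:, 2]  # translation direction; scale is unrecoverable from E
    return [(U @ W @ Vt, t), (U @ W @ Vt, -t),
            (U @ W.T @ Vt, t), (U @ W.T @ Vt, -t)]

def in_front_count(R, t, pts1, pts2):
    """Number of matches that triangulate in front of both cameras."""
    count = 0
    for p1, p2 in zip(pts1, pts2):
        x1 = np.array([p1[0], p1[1], 1.0])
        x2 = np.array([p2[0], p2[1], 1.0])
        # Solve z1 * (R @ x1) - z2 * x2 = -t for the two depths.
        A = np.column_stack([R @ x1, -x2])
        z, *_ = np.linalg.lstsq(A, -t, rcond=None)
        if z[0] > 0 and z[1] > 0:
            count += 1
    return count

def select_transformation(E, pts1, pts2):
    """Candidate with the lowest error, i.e. fewest behind-camera matches."""
    return max(candidate_transformations(E),
               key=lambda cand: in_front_count(*cand, pts1, pts2))
```

All four candidates satisfy the same epipolar constraint, so the cheirality (in-front) test is what actually disambiguates them; points in `pts1` and `pts2` are assumed to be in normalized image coordinates.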
19 Claims
1. A computer-implemented method for generating a three-dimensional (3D) model (recited in full above as the First Claim). Dependent claims: 2-9.
10. A non-transitory computer-readable medium comprising program instructions stored thereon that are executable by a processor to cause the following steps for generating a three-dimensional (3D) model:
receiving a first set of sensed position data indicative of an orientation of a camera device used to acquire a first two-dimensional (2D) image, wherein the first set of sensed position data is provided by a sensor of the camera device used to acquire the first 2D image;
receiving a second set of sensed position data indicative of an orientation of a camera device used to acquire a second two-dimensional (2D) image, wherein the second set of sensed position data is provided by a sensor of the camera device used to acquire the second 2D image;
determining a sensed rotation matrix for an image pair comprising the first and second 2D images using the first and second sets of sensed position data;
identifying a calculated camera transformation matrix for the image pair, the calculated transformation comprising a calculated translation vector and a calculated rotation matrix, wherein identifying the calculated camera transformation comprises: deriving a plurality of candidate calculated transformation matrices using a set of matching points of the first and second 2D images, wherein the candidate calculated transformation matrices each comprise a translation component and a calculated rotation matrix; identifying a candidate calculated transformation matrix of the plurality of candidate calculated transformation matrices that is associated with the lowest transformation error; and identifying the candidate calculated transformation matrix that is associated with the lowest transformation error as the calculated camera transformation;
generating a sensed camera transformation matrix for the image pair, the sensed camera transformation comprising a translation vector and the sensed rotation matrix;
identifying a set of matching points of the first and second 2D images;
determining whether a first error associated with a transformation of the set of matching points using the sensed camera transformation is less than a second error associated with a transformation of the set of matching points using the calculated camera transformation;
in response to determining that the first error is less than the second error, generating a 3D model using the sensed camera transformation; and
storing the 3D model in a 3D model repository.
Dependent claims: 11-18.
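The comparison the claims recite, a first error from transforming the matching points with the sensed camera transformation versus a second error from the calculated one, could for instance use the Sampson (first-order epipolar) error. The metric choice is an assumption; the claims only require that the two errors be comparable.

```python
import numpy as np

def essential_from_rt(R, t):
    """E = [t]_x R, the essential matrix induced by a camera transformation."""
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])
    return tx @ R

def sampson_error(R, t, pts1, pts2):
    """Mean Sampson (first-order epipolar) error over normalized matches.

    pts1, pts2: (N, 2) arrays of matching points in normalized image
    coordinates. The specific error metric is an illustrative choice.
    """
    E = essential_from_rt(R, t)
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones])  # homogeneous coordinates
    x2 = np.hstack([pts2, ones])
    Ex1 = x1 @ E.T                # row i is E @ x1_i
    Etx2 = x2 @ E                 # row i is E.T @ x2_i
    num = np.sum(x2 * Ex1, axis=1) ** 2
    den = Ex1[:, 0]**2 + Ex1[:, 1]**2 + Etx2[:, 0]**2 + Etx2[:, 1]**2
    return float(np.mean(num / den))

def choose_transformation(sensed, calculated, pts1, pts2):
    """Return whichever (R, t) transformation has the lower error."""
    e_sensed = sampson_error(*sensed, pts1, pts2)
    e_calculated = sampson_error(*calculated, pts1, pts2)
    return sensed if e_sensed < e_calculated else calculated
```

For noise-free matches the true transformation's error is essentially zero, so a sensed transformation that matches the geometry better than the calculated one wins the comparison.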
19. A computer-implemented method for generating a three-dimensional (3D) model, the method comprising:
receiving, by one or more computing devices, a first set of sensed position data indicative of a position of a camera device used to acquire a first two-dimensional (2D) image, wherein the first set of sensed position data is provided by an integrated sensor of the camera device used to acquire the first 2D image;
receiving, by the one or more computing devices, a second set of sensed position data indicative of a position of a camera device used to acquire a second two-dimensional (2D) image, wherein the second set of sensed position data is provided by an integrated sensor of the camera device used to acquire the second 2D image;
determining, by the one or more computing devices, a sensed rotation matrix and/or a sensed translation vector between the first and second 2D images using the first and second sets of sensed position data;
identifying, by the one or more computing devices, a calculated camera transformation comprising a calculated translation vector and a calculated rotation matrix, wherein identifying the calculated camera transformation comprises: deriving a plurality of candidate calculated transformation matrices using a set of matching points of the first and second 2D images, wherein the candidate calculated transformation matrices each comprise a translation component and a calculated rotation matrix; identifying a candidate calculated transformation matrix of the plurality of candidate calculated transformation matrices that is associated with the lowest transformation error; and identifying the candidate calculated transformation matrix that is associated with the lowest transformation error as the calculated camera transformation;
generating, by the one or more computing devices, a sensed camera transformation comprising the sensed rotation matrix and/or the sensed translation vector;
identifying, by the one or more computing devices, a set of matching points of the first and second 2D images;
determining, by the one or more computing devices, whether a first error associated with a transformation of the set of matching points using the sensed camera transformation is less than a second error associated with a transformation of the set of matching points using the calculated camera transformation; and
in response to determining, by the one or more computing devices, that the first error is less than the second error, generating a 3D model using the sensed camera transformation; and
storing, by the one or more computing devices, the 3D model in a 3D model repository.
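Once a transformation is selected, "generating a 3D model using the sensed camera transformation" can be read as reconstructing 3D points from the image pair. A minimal sketch, assuming linear (DLT) triangulation of the matching points with the first camera as the reference frame (the claims do not fix the reconstruction method):

```python
import numpy as np

def triangulate(R, t, pts1, pts2):
    """Linear (DLT) triangulation of matched normalized image points.

    Camera 1 is the reference frame (P1 = [I | 0]); camera 2 uses the
    selected transformation (P2 = [R | t]). Returns an (N, 3) array of
    model points, one per match.
    """
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, t.reshape(3, 1)])
    points = []
    for (u1, v1), (u2, v2) in zip(pts1, pts2):
        # Each observation contributes two linear constraints on X.
        A = np.vstack([u1 * P1[2] - P1[0],
                       v1 * P1[2] - P1[1],
                       u2 * P2[2] - P2[0],
                       v2 * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]                    # null vector of A (homogeneous point)
        points.append(X[:3] / X[3])
    return np.array(points)
```

The resulting point set (optionally meshed or textured downstream) is what would be stored in the 3D model repository; with a sensed translation vector in real units, the reconstruction is metric rather than up-to-scale.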
Specification