Altering Automatically-Generated Three-Dimensional Models Using Photogrammetry
Abstract
Embodiments enable alteration of automatically-generated three-dimensional models using photogrammetry. In an embodiment, a method creates a three-dimensional model using a two-dimensional photographic image. An automatically generated three-dimensional model geocoded within a field of view of a camera that took the two-dimensional photographic image is received. A perspective of the camera that took the photographic image is represented by a set of camera parameters for the two-dimensional photographic image. A user input constraint indicating that a feature of the automatically generated three-dimensional model corresponds to a position on the two-dimensional photographic image is also received. In response to the user input constraint, the three-dimensional model is altered, using photogrammetry, according to the user input constraint and the set of camera parameters.
27 Claims
1. A computer-implemented method for creating a three-dimensional model, comprising:
receiving, by one or more computing devices, an automatically generated three-dimensional model geocoded within a first field of view of a first camera that took a first two-dimensional photographic image and geocoded within a second field of view of a second camera that took a second two-dimensional photographic image, wherein a first perspective of the first camera that took the first two-dimensional photographic image is represented by a first set of camera parameters for the first two-dimensional photographic image and a second perspective of the second camera that took the second two-dimensional photographic image is represented by a second set of camera parameters for the second two-dimensional photographic image, wherein the first set of camera parameters includes at least a first focal length associated with the first two-dimensional photographic image and a first capture location at which the first two-dimensional photographic image was captured, wherein the second set of camera parameters includes at least a second focal length associated with the second two-dimensional photographic image and a second capture location at which the second two-dimensional photographic image was captured, wherein the first capture location and the second capture location comprise locations in three-dimensional space, and wherein each of the one or more computing devices comprises one or more processors;
receiving, by the one or more computing devices, a first user input constraint indicating that a feature of the automatically generated three-dimensional model corresponds to a first position on the first two-dimensional photographic image;
receiving, by the one or more computing devices, a second user input constraint indicating that the feature of the automatically generated three-dimensional model corresponds to a second position on the second two-dimensional photographic image;
determining, by the one or more computing devices, a first point in three-dimensional space by extending a first ray from the first capture location through the first position on the first two-dimensional photographic image as indicated by the first user input constraint, the first ray having the first focal length;
determining, by the one or more computing devices, a second point in three-dimensional space by extending a second ray from the second capture location through the second position on the second two-dimensional photographic image as indicated by the second user input constraint, the second ray having the second focal length; and
when the first point in three-dimensional space and the second point in three-dimensional space are located at a same position, altering the automatically generated three-dimensional model such that the feature is located at the same position in three-dimensional space. - View Dependent Claims (2, 3, 4, 5, 6, 7, 25)
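The ray-extension and intersection test recited in this claim is standard two-view triangulation: each user constraint defines a ray from a capture location through a marked image position, and where the rays meet is the feature's position in three-dimensional space. A minimal sketch in Python, assuming a simplified pinhole camera that looks down the world +z axis (a real implementation would also apply each camera's rotation, which the patent folds into its camera parameters):

```python
import math

def pixel_ray(capture_loc, pixel_xy, focal_length):
    """Ray from the capture location through a pixel on the image plane.

    Simplified model: the camera looks down +z and its image plane sits
    at `focal_length` along that axis, matching the claim's "the first
    ray having the first focal length".
    """
    dx, dy, dz = pixel_xy[0], pixel_xy[1], focal_length
    n = math.sqrt(dx * dx + dy * dy + dz * dz)
    return capture_loc, (dx / n, dy / n, dz / n)

def _dot(a, b):   return sum(x * y for x, y in zip(a, b))
def _sub(a, b):   return tuple(x - y for x, y in zip(a, b))
def _add(a, b):   return tuple(x + y for x, y in zip(a, b))
def _scale(v, s): return tuple(x * s for x in v)

def closest_points(o1, d1, o2, d2):
    """Points of closest approach on two rays o + t*d (d unit length)."""
    b = _dot(d1, d2)
    w = _sub(o1, o2)
    denom = 1.0 - b * b
    if denom < 1e-12:                      # parallel rays never meet
        return o1, _add(o2, _scale(d2, -_dot(d2, w)))
    t1 = (b * _dot(d2, w) - _dot(d1, w)) / denom
    t2 = (_dot(d2, w) - b * _dot(d1, w)) / denom
    return _add(o1, _scale(d1, t1)), _add(o2, _scale(d2, t2))

def triangulate_feature(ray1, ray2, tol=1e-6):
    """Return the shared 3-D point when the two rays meet, else None.

    The tolerance stands in for the claim's "located at a same
    position": user clicks are never exact, so "same" means "within
    tol of each other".
    """
    p1, p2 = closest_points(ray1[0], ray1[1], ray2[0], ray2[1])
    if math.dist(p1, p2) < tol:
        return tuple((a + b) / 2.0 for a, b in zip(p1, p2))
    return None
```

With two cameras at (0, 0, 0) and (10, 0, 0) both marking a feature at world point (5, 0, 10), `triangulate_feature` recovers that point; if the user's two marks are inconsistent, the rays miss each other and the function returns `None` instead of altering the model.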
8. (canceled)
9. A system for creating a three-dimensional model, the system comprising:
a request module that receives an automatically generated three-dimensional model geocoded within a first field of view of a first camera that took a first two-dimensional photographic image and geocoded within a second field of view of a second camera that took a second two-dimensional photographic image, wherein a first perspective of the first camera that took the first photographic image is represented by a first set of camera parameters for the first two-dimensional photographic image, and wherein a second perspective of the second camera that took the second photographic image is represented by a second set of camera parameters for the second two-dimensional photographic image, wherein the first set of camera parameters comprises at least a first focal length and a first capture location at which the first two-dimensional photographic image was captured and the second set of camera parameters comprises at least a second focal length and a second capture location at which the second two-dimensional photographic image was captured, and wherein the first capture location and the second capture location comprise locations in three-dimensional space;
a user constraint module that receives a first user input constraint indicating that a feature of the automatically generated three-dimensional model corresponds to a first position on the first two-dimensional photographic image and receives a second user input constraint indicating that the feature of the automatically generated three-dimensional model corresponds to a second position on the second two-dimensional photographic image; and
a photogrammetry module that, in response to the first and second user input constraints, determines a first point in three-dimensional space by extending a first ray from the first capture location through the first position on the first two-dimensional photographic image as indicated by the first user input constraint, the first ray having the first focal length; determines a second point in three-dimensional space by extending a second ray from the second capture location through the second position on the second two-dimensional photographic image as indicated by the second user input constraint, the second ray having the second focal length; and when the first point in three-dimensional space and the second point in three-dimensional space are located at a same position, alters the automatically generated three-dimensional model such that the feature is located at the same position in three-dimensional space. - View Dependent Claims (10, 11, 12, 13, 14, 15, 26)
16. (canceled)
17. A non-transitory computer readable storage medium having instructions tangibly stored thereon that, when executed by a computing device, cause the computing device to execute a method for creating a three-dimensional model, the method comprising:
receiving an automatically generated three-dimensional model geocoded within a first field of view of a first camera that took a first two-dimensional photographic image and geocoded within a second field of view of a second camera that took a second two-dimensional photographic image, wherein a first perspective of the first camera that took the first two-dimensional photographic image is represented by a first set of camera parameters for the first two-dimensional photographic image and a second perspective of the second camera that took the second two-dimensional photographic image is represented by a second set of camera parameters for the second two-dimensional photographic image, wherein the first set of camera parameters includes at least a first focal length associated with the first two-dimensional photographic image and a first capture location at which the first two-dimensional photographic image was captured, wherein the second set of camera parameters includes at least a second focal length associated with the second two-dimensional photographic image and a second capture location at which the second two-dimensional photographic image was captured, and wherein the first capture location and the second capture location comprise locations in three-dimensional space;
receiving a first user input constraint indicating that a feature of the automatically generated three-dimensional model corresponds to a first position on the first two-dimensional photographic image;
receiving a second user input constraint indicating that the feature of the automatically generated three-dimensional model corresponds to a second position on the second two-dimensional photographic image;
determining a first point in three-dimensional space by extending a first ray from the first capture location through the first position on the first two-dimensional photographic image as indicated by the first user input constraint, the first ray having the first focal length;
determining a second point in three-dimensional space by extending a second ray from the second capture location through the second position on the second two-dimensional photographic image as indicated by the second user input constraint, the second ray having the second focal length; and
when the first point in three-dimensional space and the second point in three-dimensional space are located at a same position, altering the automatically generated three-dimensional model such that the feature is located at the same position in three-dimensional space. - View Dependent Claims (18, 19, 20, 21, 22, 23, 27)
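The final "altering" step recited across these claims can be sketched as relocating the constrained feature to the triangulated point. A minimal illustration, assuming a hypothetical model layout in which the model is a list of (x, y, z) vertex tuples and the feature is identified by its index (real models also carry mesh topology and textures, which the claims leave unspecified):

```python
def alter_model(vertices, feature_index, triangulated_point):
    """Move the user-selected feature to the point where the two rays
    met, leaving every other vertex of the model unchanged.

    `vertices`: list of (x, y, z) tuples -- a hypothetical layout
    chosen only for illustration.
    `feature_index`: index of the constrained feature in `vertices`.
    `triangulated_point`: the "same position" found by ray intersection.
    """
    updated = list(vertices)            # shallow copy; input is not mutated
    updated[feature_index] = triangulated_point
    return updated
```

Returning a new vertex list rather than mutating in place keeps the automatically generated model available for comparison or undo after the user-constrained alteration.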
24. (canceled)
Specification