System and method for structural inspection and construction estimation using an unmanned aerial vehicle
First Claim
1. An image and information capturing and processing system, comprising:
- a mobile computing device configured to:
receive user input data and/or third party data at the mobile computing device;
create unmanned aerial vehicle control data based at least in part on the user input data and/or the third party data;
create a flight plan based at least in part on the unmanned aerial vehicle control data comprising a generally crude outline of a structure area of interest to ensure images and data capturing are taken at optimal distances and intervals for three-dimensional reconstruction and visual inspection;
transmit the flight plan to an unmanned aerial vehicle via a communication link;
execute the flight plan at least in part by issuing commands to flight and camera controllers of the unmanned aerial vehicle, wherein the commands comprise an orbit at calculated ranges with a specified minimum depression angle to ensure complete image coverage of the structure area of interest from each perspective, omnidirectional orbital imaging capable of reducing obstructions for inspection and three-dimensional reconstruction of the structure of interest in order to allow three-dimensional point cloud reconstruction;
receive unmanned aerial vehicle output data from the unmanned aerial vehicle; and
transmit the unmanned aerial vehicle output data to a server via a wireless or wired communication link, wherein the unmanned aerial vehicle output data comprises highly redundant imagery with full generality of structural shape, height, obstructions, and operator error, which generally requires no topographical aerial image data, that can be used to generate a three-dimensional structural reconstruction that is accurately scaled in three dimensions with less than one-percent systematic relative error with or without GPS/GNSS;
wherein the unmanned aerial vehicle output data comprises information used to create a point cloud density over an entire structure area of interest, which is substantially uniform ranging from 100-10,000 points per square meter while retaining at least two centimeters vertical precision, that can be converted into a regularized vector model of the structure area of interest.
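The claimed "orbit at calculated ranges with a specified minimum depression angle" can be read as a simple geometry problem: the camera must stay close enough that its line of sight to the structure top dips at least the minimum angle. The following is a hypothetical sketch only, not the patented implementation; the function names (`orbit_radius`, `orbit_waypoints`) and the flat-terrain, single-altitude geometry are assumptions made for illustration.

```python
import math

def orbit_radius(structure_height_m, camera_altitude_m, min_depression_deg):
    """Maximum horizontal range at which the depression angle from the
    camera down to the structure top still meets the specified minimum.
    tan(depression) = vertical_drop / horizontal_range."""
    drop = camera_altitude_m - structure_height_m  # camera height above rooftop
    return drop / math.tan(math.radians(min_depression_deg))

def orbit_waypoints(center, radius_m, altitude_m, n_views=36):
    """Evenly spaced orbital camera stations around the structure center
    (x, y in meters), one per viewing perspective."""
    cx, cy = center
    return [
        (cx + radius_m * math.cos(2 * math.pi * i / n_views),
         cy + radius_m * math.sin(2 * math.pi * i / n_views),
         altitude_m)
        for i in range(n_views)
    ]
```

For example, a 10 m structure imaged from 30 m altitude with a 45-degree minimum depression angle yields a 20 m maximum orbit radius; spacing the stations evenly around that circle gives the omnidirectional coverage the claim describes.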
Abstract
An automated image capturing and processing system and method may allow a field user to operate a UAV via a mobile computing device to capture images of a structure area of interest (AOI). The mobile computing device receives user and/or third party data and creates UAV control data and a flight plan. The mobile computing device executes the flight plan by issuing commands to the UAV's flight and camera controllers, allowing for complete coverage of the structure AOI.
After data acquisition, the mobile computing device then transmits the UAV output data to a server for further processing. At the server, the UAV output data can be used for a three-dimensional reconstruction process. The server then generates a vector model from the images that precisely represents the dimensions of the structure. The server can then generate a report for inspection and construction estimation.
20 Claims
1. An image and information capturing and processing system (set forth in full under First Claim above). - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)
13. A method of capturing and processing automated images, comprising:
receiving the transmitted unmanned aerial vehicle output data by a server, wherein the unmanned aerial vehicle output data comprises reliable geotagged images of a structure area of interest and global positioning system information, comprising highly redundant imagery with full generality of structural shape, height, obstructions, and operator errors, which requires no a priori topographic or aerial image data, that can be used to generate a regularized vector model accurately scaled in three dimensions with less than one-percent systematic relative error;
storing the modified unmanned aerial vehicle data in an image database on the server;
generating a three-dimensional photogrammetric point cloud density by the server over the entire structure area of interest, which is substantially uniform ranging from 100-10,000 points per square meter while retaining at least two centimeters vertical precision, that can be converted into a regularized vector model of the structure area of interest, wherein the three-dimensional photogrammetric point cloud density is generated based at least in part on the received modified unmanned aerial vehicle data;
reconstructing surface points using dense matching algorithms on a server to correlate neighboring images with a sufficiently low angular separation and minimal point cloud voids, even with limited surface texture, which allows three-dimensional reconstruction and image inspection to be complete despite obstructions occluding portions of the structural area of interest in at least some of the fully redundant set of geotagged images; and
creating three-dimensional regularized vector models using the point cloud model and regularization algorithms. - View Dependent Claims (14, 15, 16, 17, 18, 19)
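The claimed point cloud density, "substantially uniform ranging from 100-10,000 points per square meter," can be checked by simple spatial binning. This is a minimal sketch under stated assumptions: the cloud is given as planimetric (x, y) coordinates in meters, the 1 m² cell size is a choice made here, and `density_per_sq_meter` and `meets_claim` are hypothetical helper names, not functions from the patent.

```python
import math
from collections import Counter

def density_per_sq_meter(points_xy, cell_m=1.0):
    """Bin points into square cells and report the min and max density
    (points per square meter) over the occupied cells."""
    counts = Counter(
        (math.floor(x / cell_m), math.floor(y / cell_m)) for x, y in points_xy
    )
    densities = [c / (cell_m * cell_m) for c in counts.values()]
    return min(densities), max(densities)

def meets_claim(points_xy):
    """True if every occupied cell falls in the claimed 100-10,000
    points-per-square-meter band."""
    lo, hi = density_per_sq_meter(points_xy)
    return lo >= 100 and hi <= 10_000
```

A regular 0.1 m grid of points, for instance, yields exactly 100 points per square meter in every cell, the bottom of the claimed range.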
20. A non-transitory computer readable medium with instructions stored thereon which, if executed by a processor, cause the processor to:
receive the transmitted unmanned aerial vehicle output data by a server, wherein the unmanned aerial vehicle output data comprises reliable geotagged images of an area of interest and global positioning system information, comprising highly redundant imagery with full generality of structural shape, height, obstructions, and operator errors, which requires no a priori topographic or aerial image data, that can be used to generate a reconstruction accurately scaled in three dimensions with less than one-percent systematic relative error;
store the modified unmanned aerial vehicle data in an image database on the server;
generate a three-dimensional photogrammetric point cloud by the server over the entire area of interest, which is substantially uniform ranging from 100-10,000 points per square meter while retaining at least two centimeters vertical precision, that can be converted into a regularized vector model of the area of interest, wherein the three-dimensional photogrammetric point cloud density is generated based at least in part on the received modified unmanned aerial vehicle data;
reconstruct surfaces of the area of interest using dense matching algorithms on a server to correlate neighboring images with a sufficiently low angular separation and minimal point cloud voids, even with limited surface texture, which allows three-dimensional reconstruction and image inspection to be complete despite obstructions occluding portions of the structural area of interest in at least some of the geotagged images; and
create three-dimensional regularized vector models using the point cloud model and regularization algorithms.
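The dense-matching step correlates only "neighboring images with a sufficiently low angular separation," since wide angular baselines degrade dense stereo matching. The pairing rule below is a hypothetical heuristic, not the patent's actual algorithm: the bearing-to-target criterion, the 15-degree default, and the names `view_angle_deg` and `neighbor_pairs` are all assumptions for illustration.

```python
import math

def view_angle_deg(cam_xy, target_xy):
    """Bearing from a camera station to the structure center, in [0, 360)."""
    dx = target_xy[0] - cam_xy[0]
    dy = target_xy[1] - cam_xy[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def neighbor_pairs(stations, target_xy, max_sep_deg=15.0):
    """Select image pairs whose viewing directions toward the target
    differ by less than max_sep_deg (accounting for 360-degree wraparound)."""
    pairs = []
    for i in range(len(stations)):
        for j in range(i + 1, len(stations)):
            a = view_angle_deg(stations[i], target_xy)
            b = view_angle_deg(stations[j], target_xy)
            sep = min(abs(a - b), 360.0 - abs(a - b))
            if sep < max_sep_deg:
                pairs.append((i, j))
    return pairs
```

With orbital stations 10 degrees apart, each image is matched against its immediate neighbors, while stations on the far side of the orbit (which image occluded, opposite faces of the structure) are excluded from direct matching.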
Specification