Methods, apparatus, and systems for super resolution of LIDAR data sets
First Claim
1. A light detection and ranging (LIDAR) imaging method comprising:
obtaining data sets of cloud points representing each of a plurality of views of an object, the plurality of views having at least one view shift;
enhancing the plurality of views by duplicating cloud points within each of the obtained data sets;
compensating for the at least one view shift using the enhanced plurality of views by:
shifting one of the enhanced plurality of views with respect to another one of the enhanced plurality of views by a plurality of different shift amounts,
determining focus measurements between the one of the enhanced plurality of views and the other one of the enhanced plurality of views for each of the plurality of different shift amounts,
registering a compensation shift amount from the plurality of different shift amounts corresponding to a maximum one of the determined focus measurements, and
shifting the one of the enhanced plurality of views by the registered compensation shift amount;
identifying valid cloud points; and
generating a super-resolved image of the object from the compensated, enhanced plurality of views with a processor by integrating the valid cloud points located at each of a plurality of spatial coordinates within the compensated, enhanced plurality of views.
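The shift-compensation sub-steps recited in the claim — shifting one view against another by a set of candidate amounts, determining a focus measurement for each overlay, and registering the maximizing shift — can be sketched as a brute-force search. This is an illustrative reading only, assuming the views are rasterized into 2-D range images and using a gradient-energy focus measure; the patent does not fix a particular measure or search radius:

```python
import numpy as np

def focus_measure(img):
    """Gradient-energy focus measure: sum of squared finite differences.
    Sharper overlays score higher. (An illustrative choice; the claims
    do not specify a particular focus measurement.)"""
    gy, gx = np.gradient(img)
    return float(np.sum(gx**2 + gy**2))

def register_shift(ref, view, max_shift=3):
    """Search integer (dy, dx) shifts of `view` against `ref` and return
    the shift whose overlay maximizes the focus measure."""
    best_shift, best_focus = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(view, dy, axis=0), dx, axis=1)
            f = focus_measure(ref + shifted)
            if f > best_focus:
                best_focus, best_shift = f, (dy, dx)
    return best_shift
```

Scoring the sum of the two views rewards overlays whose edges reinforce rather than blur, so the focus measurement peaks at the shift that cancels the view shift.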
3 Assignments
0 Petitions
Abstract
Light detection and ranging (LIDAR) imaging systems, methods, and computer-readable media for generating super-resolved images are described. Super-resolved images are generated by obtaining data sets of cloud points representing multiple views of an object, where the views have a view shift; enhancing the views by duplicating cloud points within each of the data sets; compensating for the view shift using the enhanced views; identifying valid cloud points; and generating a super-resolved image of the object by integrating valid cloud points within the compensated, enhanced views.
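The enhancement step summarized in the abstract — duplicating cloud points within each data set — can be read as zero-order-hold upsampling of a rasterized view; a minimal sketch, with the duplication factor as an assumed parameter (the claims only require duplication):

```python
import numpy as np

def enhance_view(view, factor=2):
    """Enhance a rasterized view by duplicating each cloud point into a
    factor x factor block (zero-order-hold upsampling)."""
    return np.repeat(np.repeat(view, factor, axis=0), factor, axis=1)
```

Duplication adds no new information by itself; it puts every view on a finer common grid so that sub-pixel view shifts become integer shifts on the enhanced grid.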
42 Citations
19 Claims
1. A light detection and ranging (LIDAR) imaging method comprising:
obtaining data sets of cloud points representing each of a plurality of views of an object, the plurality of views having at least one view shift;
enhancing the plurality of views by duplicating cloud points within each of the obtained data sets;
compensating for the at least one view shift using the enhanced plurality of views by:
shifting one of the enhanced plurality of views with respect to another one of the enhanced plurality of views by a plurality of different shift amounts,
determining focus measurements between the one of the enhanced plurality of views and the other one of the enhanced plurality of views for each of the plurality of different shift amounts,
registering a compensation shift amount from the plurality of different shift amounts corresponding to a maximum one of the determined focus measurements, and
shifting the one of the enhanced plurality of views by the registered compensation shift amount;
identifying valid cloud points; and
generating a super-resolved image of the object from the compensated, enhanced plurality of views with a processor by integrating the valid cloud points located at each of a plurality of spatial coordinates within the compensated, enhanced plurality of views.
View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11)
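The final two steps of claim 1 — identifying valid cloud points and integrating the valid points at each spatial coordinate — might look as follows. Modeling validity as a finite range value (with NaN marking a dropout) is an assumption for illustration; the claims leave the validity test open:

```python
import numpy as np

def integrate_views(views):
    """Integrate compensated, enhanced views into a super-resolved image
    by averaging the valid cloud points at each spatial coordinate.
    Validity is modeled here as a finite value (NaN marks a dropout)."""
    stack = np.stack(views)          # (n_views, H, W)
    valid = np.isfinite(stack)       # per-point validity mask
    sums = np.where(valid, stack, 0.0).sum(axis=0)
    counts = valid.sum(axis=0)
    # Coordinates with no valid point in any view stay NaN in the output.
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
```

Averaging only the valid points keeps dropouts in one view from corrupting coordinates that other views measured correctly.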
12. A light detection and ranging (LIDAR) imaging system comprising:
a memory including data sets of cloud points representing each of a plurality of views of an object, the plurality of views having at least one view shift;
a display; and
a processor coupled to the memory and the display, the processor configured to:
obtain the data sets from the memory,
enhance the plurality of views by duplicating cloud points within each of the obtained data sets,
compensate for the at least one view shift using the enhanced plurality of views,
identify valid cloud points,
generate a super-resolved image of the object from the compensated, enhanced plurality of views by integrating the valid cloud points located at each of a plurality of spatial coordinates within the compensated, enhanced plurality of views, and
present the super-resolved image on the display;
wherein the processor is configured to:
shift one of the enhanced plurality of views with respect to another one of the enhanced plurality of views by a plurality of different shift amounts,
determine focus measurements between the one of the enhanced plurality of views and the other one of the enhanced plurality of views for each of the plurality of different shift amounts,
register a compensation shift amount from the plurality of different shift amounts corresponding to a maximum one of the determined focus measurements, and
shift the one of the enhanced plurality of views by the registered compensation shift amount in order to compensate for the at least one view shift.
View Dependent Claims (13, 14, 15)
16. A non-transitory computer readable medium including instructions for configuring a computer to perform a light detection and ranging (LIDAR) imaging method, the method comprising:
obtaining data sets of cloud points representing each of a plurality of views of an object, the plurality of views having at least one view shift;
enhancing the plurality of views by duplicating cloud points within each of the obtained data sets;
compensating for the at least one view shift using the enhanced plurality of views by:
shifting one of the enhanced plurality of views with respect to another one of the enhanced plurality of views by a plurality of different shift amounts,
determining focus measurements between the one of the enhanced plurality of views and the other one of the enhanced plurality of views for each of the plurality of different shift amounts,
registering a compensation shift amount from the plurality of different shift amounts corresponding to a maximum one of the determined focus measurements, and
shifting the one of the enhanced plurality of views by the registered compensation shift amount;
identifying valid cloud points; and
generating a super-resolved image of the object from the compensated, enhanced plurality of views with a processor by integrating the valid cloud points located at each of a plurality of spatial coordinates within the compensated, enhanced plurality of views.
View Dependent Claims (17, 18, 19)
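The three independent claims share the same four steps (enhance, compensate, identify, integrate); an end-to-end sketch of that pipeline could look like the following. The duplication factor, integer search radius, gradient-energy focus measure, and NaN-based validity test are all illustrative assumptions, not elements taken from the patent:

```python
import numpy as np

def super_resolve(views, factor=2, max_shift=3):
    """End-to-end sketch: duplicate cloud points to enhance each view,
    compensate view shifts by maximizing a focus measure over candidate
    shifts, then average the valid points at each coordinate."""
    # 1. Enhance: duplicate each cloud point into a factor x factor block.
    enhanced = [np.repeat(np.repeat(v, factor, axis=0), factor, axis=1)
                for v in views]

    def focus(img):  # gradient-energy focus measure (assumed choice)
        gy, gx = np.gradient(img)
        return np.sum(gx**2 + gy**2)

    # 2. Compensate: register each view against the first by searching
    #    integer shifts and keeping the one with maximum focus.
    ref, aligned = enhanced[0], [enhanced[0]]
    for v in enhanced[1:]:
        best = max(((dy, dx)
                    for dy in range(-max_shift, max_shift + 1)
                    for dx in range(-max_shift, max_shift + 1)),
                   key=lambda s: focus(ref + np.roll(v, s, axis=(0, 1))))
        aligned.append(np.roll(v, best, axis=(0, 1)))

    # 3. Identify valid points (finite values) and integrate by averaging.
    stack = np.stack(aligned)
    valid = np.isfinite(stack)
    sums = np.where(valid, stack, 0.0).sum(axis=0)
    counts = valid.sum(axis=0)
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
```

With shifted copies of the same scene as input, the registration step cancels the synthetic view shift and the integrated output matches the enhanced reference view.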
Specification