3D visualization of light detection and ranging data
Abstract
In accordance with particular embodiments, a method includes receiving LIDAR data associated with a geographic area and generating a three-dimensional image of the geographic area based on the LIDAR data. The method further includes presenting at least a first portion of the three-dimensional image to a user based on a camera at a first location. The first portion of the three-dimensional image is presented from a walking perspective. The method also includes navigating the three-dimensional image based on a first input received from the user. The first input is used to direct the camera to move along a path in the walking perspective based on the first input and the three-dimensional image. The method further includes presenting at least a second portion of the three-dimensional image to the user based on navigating the camera to a second location. The second portion of the three-dimensional image is presented from the walking perspective.
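The abstract's first step, presenting the portion of the point cloud that falls within a camera's viewing area at a given location, can be illustrated with a minimal sketch. The patent does not specify an implementation; the pinhole-projection model, the function name `project_points`, and the synthetic point cloud below are assumptions for illustration only.

```python
import math

def project_points(points, cam_pos, yaw, fov_deg=90.0):
    """Project 3D LIDAR points onto a 2D view plane for a camera at
    cam_pos looking along `yaw` (radians, in the x-y ground plane).
    Returns (u, v) image-plane coordinates for points inside the
    camera's viewing area; points behind the camera are culled."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)  # focal length
    cx, cy, cz = cam_pos
    out = []
    for x, y, z in points:
        # Translate into the camera frame, then rotate by -yaw.
        dx, dy, dz = x - cx, y - cy, z - cz
        fwd = dx * math.cos(yaw) + dy * math.sin(yaw)    # depth
        right = -dx * math.sin(yaw) + dy * math.cos(yaw)
        if fwd <= 0:              # behind the camera: not in viewing area
            continue
        u, v = f * right / fwd, f * dz / fwd  # pinhole projection
        if abs(u) <= 1.0 and abs(v) <= 1.0:   # inside the field of view
            out.append((u, v))
    return out

# A few synthetic "reflections" standing in for LIDAR data points.
cloud = [(5.0, 0.0, 1.0), (5.0, 1.0, 0.5), (-3.0, 0.0, 2.0)]
visible = project_points(cloud, cam_pos=(0.0, 0.0, 1.5), yaw=0.0)
```

Only the first two points project into the viewing area; the third lies behind the camera, so only a "portion" of the three-dimensional image is presented, as the claims describe.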
66 Citations
21 Claims
1. A method comprising:
receiving light detection and ranging (LIDAR) data comprising a plurality of data points created from reflections from a LIDAR system, the plurality of data points arranged within a three-dimensional space associated with a geographic area;
receiving a selection of a first perspective for presenting a three-dimensional image;
generating the three-dimensional image of the geographic area based on the plurality of data points of the LIDAR data arranged within a three-dimensional space and the selection of the first perspective for presenting the three-dimensional image;
presenting at least a first portion of the three-dimensional image to a user based on the selection of the first perspective and a viewing area of a camera arranged to correspond to the user positioned at a first location in the geographic area, the first portion of the three-dimensional image presented from the selection of the first perspective;
navigating the three-dimensional image based on a first input received from the user, the first input directing the camera to move along a path in the selection of the first perspective to simulate movement of the user in the geographic area based on the first input and the three-dimensional image; and
presenting at least a second portion of the three-dimensional image to the user based on the selection of a second perspective and navigating the camera to a second location in the geographic area, the second portion of the three-dimensional image presented from the selection of the second perspective to simulate movement of the user to the second location in the geographic area as seen from the second perspective.
Dependent Claims: 2, 3, 4, 5, 6, 7
8. A system comprising:
an interface configured to receive light detection and ranging (LIDAR) data comprising a plurality of data points created from reflections from a LIDAR system, the plurality of data points arranged within a three-dimensional space associated with a geographic area, and to receive a selection of a first perspective for presenting a three-dimensional image; and
a processor coupled to the interface and configured to:
generate the three-dimensional image of the geographic area based on the plurality of data points of the LIDAR data arranged within a three-dimensional space and the selection of the first perspective for presenting the three-dimensional image;
present at least a first portion of the three-dimensional image to a user based on the selection of the first perspective and a viewing area of a camera arranged to correspond to the user positioned at a first location in the geographic area, the first portion of the three-dimensional image presented from the selection of the first perspective;
navigate the three-dimensional image based on a first input received from the user, the first input directing the camera to move along a path in the selection of the first perspective to simulate movement of the user in the geographic area based on the first input and the three-dimensional image; and
present at least a second portion of the three-dimensional image to the user based on the selection of a second perspective and navigating the camera to a second location in the geographic area, the second portion of the three-dimensional image presented from the selection of the second perspective to simulate movement of the user to the second location in the geographic area as seen from the second perspective.
Dependent Claims: 9, 10, 11, 12, 13, 14
15. A non-transitory computer readable medium comprising instructions that, when executed by a processor, cause the processor to:
receive light detection and ranging (LIDAR) data comprising a plurality of data points created from reflections from a LIDAR system, the plurality of data points arranged within a three-dimensional space associated with a geographic area;
receive a selection of a first perspective for presenting a three-dimensional image;
generate the three-dimensional image of the geographic area based on the plurality of data points of the LIDAR data arranged within a three-dimensional space and the selection of the first perspective for presenting the three-dimensional image;
present at least a first portion of the three-dimensional image to a user based on the selection of the first perspective and a viewing area of a camera arranged to correspond to the user positioned at a first location in the geographic area, the first portion of the three-dimensional image presented from the selection of the first perspective;
navigate the three-dimensional image based on a first input received from the user, the first input directing the camera to move along a path in the selection of the first perspective to simulate movement of the user in the geographic area based on the first input and the three-dimensional image; and
present at least a second portion of the three-dimensional image to the user based on the selection of a second perspective and navigating the camera to a second location in the geographic area, the second portion of the three-dimensional image presented from the selection of the second perspective to simulate movement of the user to the second location in the geographic area as seen from the second perspective.
Dependent Claims: 16, 17, 18, 19, 20, 21
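The navigation step recited across the independent claims, user input directing the camera along a path to simulate a person moving from a first location to a second location, can be sketched as follows. The claims do not prescribe any input scheme; the `walk` function, the `EYE_HEIGHT` constant, and the command vocabulary are hypothetical choices made for this illustration.

```python
import math

# Assumed input-to-motion mapping for a "walking perspective": the
# camera stays at a fixed eye height above the ground and moves along
# the heading the user steers, simulating a person in the area.
EYE_HEIGHT = 1.7  # metres above ground; an assumed constant

def walk(camera, inputs, step=1.0):
    """camera: dict with 'x', 'y', 'yaw'. inputs: iterable of
    'forward' | 'back' | 'left' | 'right' commands. Returns the
    path of (x, y, z) positions visited, z pinned to eye height."""
    path = []
    for cmd in inputs:
        if cmd == 'left':
            camera['yaw'] += math.pi / 2   # turn in place
        elif cmd == 'right':
            camera['yaw'] -= math.pi / 2
        else:
            d = step if cmd == 'forward' else -step
            camera['x'] += d * math.cos(camera['yaw'])
            camera['y'] += d * math.sin(camera['yaw'])
        path.append((camera['x'], camera['y'], EYE_HEIGHT))
    return path

cam = {'x': 0.0, 'y': 0.0, 'yaw': 0.0}
route = walk(cam, ['forward', 'forward', 'left', 'forward'])
```

The camera's final entry in `route` is the "second location"; re-rendering the point cloud from there would yield the second portion of the three-dimensional image, as seen from the second perspective.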
Specification