METHOD FOR RENDERING 2D AND 3D DATA WITHIN A 3D VIRTUAL ENVIRONMENT
First Claim
1. A method for augmenting 3D depth map data with 2D color image data comprising:
accessing a first 2D color image recorded at a first time via a 2D color camera arranged on an autonomous vehicle;
accessing a first 3D point cloud recorded at approximately the first time via a 3D depth sensor arranged on the autonomous vehicle, the 3D depth sensor and the 2D color camera defining intersecting fields of view and facing outwardly from the autonomous vehicle;
detecting a first cluster of points in the first 3D point cloud representing a first continuous surface approximating a first plane;
isolating a first cluster of color pixels in the first 2D color image depicting the first continuous surface;
projecting the first cluster of color pixels onto the first plane to define a first set of synthetic 3D color points in the first 3D point cloud, the first cluster of points and the first set of synthetic 3D color points representing the first continuous surface; and
rendering points in the first 3D point cloud and the first set of synthetic 3D color points on a display.
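The projecting step above — mapping the isolated cluster of color pixels onto the detected plane to define synthetic 3D color points — can be sketched under a standard pinhole-camera model. This is an illustrative assumption, not the patent's implementation: the function `project_pixels_to_plane`, the intrinsics matrix `K`, and the plane parameterization `n . x = offset` are hypothetical names introduced for this sketch.

```python
import numpy as np

def project_pixels_to_plane(pixels_uv, colors_rgb, K, plane_normal, plane_offset):
    """Hypothetical sketch: back-project 2D color pixels onto a 3D plane in the
    camera frame, yielding synthetic 3D color points (x, y, z, r, g, b).

    Assumes a pinhole camera with intrinsics K and a plane n . x = offset,
    with the camera center at the origin."""
    ones = np.ones((len(pixels_uv), 1))
    # Ray direction through each pixel: d = K^-1 [u, v, 1]^T
    rays = (np.linalg.inv(K) @ np.hstack([pixels_uv, ones]).T).T
    denom = rays @ plane_normal          # n . d for each ray
    t = plane_offset / denom             # scale at which each ray meets the plane
    points_xyz = rays * t[:, None]       # ray/plane intersection points
    return np.hstack([points_xyz, colors_rgb])
```

Each pixel thus becomes a 3D point at the intersection of its camera ray with the fitted plane, carrying the pixel's RGB value — consistent with the claim's "synthetic 3D color points" representing the same continuous surface as the measured cluster.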
Abstract
One variation of a method includes: accessing a 2D color image recorded by a 2D color camera and a 3D point cloud recorded by a 3D depth sensor at approximately a first time, the 2D color camera and the 3D depth sensor defining intersecting fields of view and facing outwardly from an autonomous vehicle; detecting a cluster of points in the 3D point cloud representing a continuous surface approximating a plane; isolating a cluster of color pixels in the 2D color image depicting the continuous surface; projecting the cluster of color pixels onto the plane to define a set of synthetic 3D color points in the 3D point cloud, the cluster of points and the set of synthetic 3D color points representing the continuous surface; and rendering points in the 3D point cloud and the set of synthetic 3D color points on a display.
97 Citations
20 Claims
1. A method for augmenting 3D depth map data with 2D color image data comprising:
accessing a first 2D color image recorded at a first time via a 2D color camera arranged on an autonomous vehicle;
accessing a first 3D point cloud recorded at approximately the first time via a 3D depth sensor arranged on the autonomous vehicle, the 3D depth sensor and the 2D color camera defining intersecting fields of view and facing outwardly from the autonomous vehicle;
detecting a first cluster of points in the first 3D point cloud representing a first continuous surface approximating a first plane;
isolating a first cluster of color pixels in the first 2D color image depicting the first continuous surface;
projecting the first cluster of color pixels onto the first plane to define a first set of synthetic 3D color points in the first 3D point cloud, the first cluster of points and the first set of synthetic 3D color points representing the first continuous surface; and
rendering points in the first 3D point cloud and the first set of synthetic 3D color points on a display.
View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17)
18. A method for augmenting 3D depth map data with 2D color image data comprising:
accessing a first 2D color image recorded at a first time via a 2D color camera arranged on an autonomous vehicle;
accessing a first 3D point cloud recorded at approximately the first time via a 3D depth sensor arranged on the autonomous vehicle, the 3D depth sensor and the 2D color camera defining intersecting fields of view and facing outwardly from the autonomous vehicle;
detecting a first cluster of points in the first 3D point cloud representing a first continuous surface approximating a first plane;
isolating a first cluster of color pixels in the first 2D color image depicting the first continuous surface;
projecting the first cluster of color pixels onto the first plane to define a first set of synthetic 3D color points;
compiling the first cluster of points and the first set of synthetic 3D color points, representing the first continuous surface, into a first 3D frame;
detecting characteristics of an object, comprising the continuous surface, based on the first cluster of points and the first set of synthetic 3D color points in the first 3D frame;
based on characteristics of the object, electing a next navigational action; and
autonomously executing the next navigational action.
View Dependent Claims (19, 20)
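Both independent claims recite "detecting a first cluster of points in the first 3D point cloud representing a first continuous surface approximating a first plane," which presupposes some plane-fitting routine. A common choice for this kind of detection — an assumption here, not necessarily the patented method — is RANSAC plane fitting; `ransac_plane` below is a hypothetical name for such a sketch.

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.05, rng=None):
    """Hypothetical sketch: find the dominant plane in an (N, 3) point cloud
    by RANSAC. Returns (unit normal n, offset, inlier mask) for n . x = offset."""
    rng = np.random.default_rng(rng)
    best_mask = np.zeros(len(points), bool)
    best = (None, None)
    for _ in range(iters):
        # Fit a candidate plane to 3 random points
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # degenerate (collinear) sample; skip
            continue
        n /= norm
        offset = n @ sample[0]
        # Keep the plane with the most points within tol of it
        mask = np.abs(points @ n - offset) < tol
        if mask.sum() > best_mask.sum():
            best_mask, best = mask, (n, offset)
    return best[0], best[1], best_mask
```

The inlier mask returned here would correspond to the claim's "first cluster of points," and the fitted normal and offset to the "first plane" onto which color pixels are then projected.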
Specification