High-precision multi-layer visual and semantic map by autonomous units
First Claim
1. A method for automatically building, by a first autonomous unit, multi-layer maps of roads for other autonomous units, wherein the first autonomous unit includes at least a quad camera visual sensor and at least one selected from a global positioning system and an inertial measurement unit, the method including:
receiving a proto-roadmap including only roads;
in the autonomous unit, capturing, using four cameras of the quad camera visual sensor, a set of keyrigs, each keyrig being a set of quad images with a pose generated using combinations of global positioning system, inertial measurement unit, and visual information of a scene as captured by the quad camera visual sensor of the first autonomous unit during travel along one of the roads in the proto-roadmap;
determining a ground perspective view, the ground perspective view including at least road marking information for at least one of the roads in the proto-roadmap from the visual information captured;
determining a spatial perspective view, the spatial perspective view including objects along at least one of the roads in the proto-roadmap from the visual information captured;
classifying objects from the spatial perspective view into moving objects and non-moving objects;
building at least one multi-layer map including a stationary portion comprising the proto-roadmap, the non-moving objects from the spatial perspective view and the road markings from the ground perspective view, wherein the at least one multi-layer map is accurate to within centimeters; and
providing the multi-layer map via a communications link to a map server that stores and distributes multi-layer maps to guide the autonomous unit at a future time and at least one other autonomous unit.
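The data flow the claim recites can be sketched in code. Everything below is a hypothetical illustration, not the patent's implementation: the names `Keyrig`, `MultiLayerMap`, and `build_multilayer_map` are invented, image capture and GPS/IMU/visual pose fusion are stubbed out, and per-keyrig detections are assumed to arrive pre-computed.

```python
from dataclasses import dataclass, field

@dataclass
class Keyrig:
    # One keyrig: four simultaneous frames from the quad camera visual
    # sensor plus a pose fused from GPS, IMU, and visual information.
    images: tuple   # four camera frames (placeholders in this sketch)
    pose: tuple     # (x, y, heading) in map coordinates

@dataclass
class MultiLayerMap:
    # Stationary portion of the claimed multi-layer map, one layer each
    # for the proto-roadmap, ground-view road markings, and static objects.
    proto_roadmap: list
    road_markings: list = field(default_factory=list)
    static_objects: list = field(default_factory=list)

def build_multilayer_map(proto_roadmap, keyrigs, detections):
    """Assemble the stationary layers from per-keyrig detections.

    `detections` maps a keyrig index to (road_markings, objects), where
    each object is (label, position, is_moving).  Only non-moving objects
    enter the stationary portion, mirroring the classifying step."""
    m = MultiLayerMap(proto_roadmap=list(proto_roadmap))
    for idx, _rig in enumerate(keyrigs):
        markings, objects = detections.get(idx, ([], []))
        m.road_markings.extend(markings)          # ground perspective view
        m.static_objects.extend(                  # spatial perspective view
            obj for obj in objects if not obj[2])
    return m
```

The resulting `MultiLayerMap` would be what the final step uploads to the map server over the communications link.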
Abstract
Roughly described, a three-dimensional, multi-layer map is built employing sensory data gathering and analysis. The sensory data are gathered from multiple operational cameras and one or more auxiliary sensors. The multi-layer maps are stored in a map store to be distributed to one or more autonomous vehicles and robots in the future. The techniques herein are described with reference to specific example implementations that improve navigation in autonomous vehicles and robots.
18 Claims
1. A method for automatically building, by a first autonomous unit, multi-layer maps of roads for other autonomous units, wherein the first autonomous unit includes at least a quad camera visual sensor and at least one selected from a global positioning system and an inertial measurement unit, the method including:
receiving a proto-roadmap including only roads;
in the autonomous unit, capturing, using four cameras of the quad camera visual sensor, a set of keyrigs, each keyrig being a set of quad images with a pose generated using combinations of global positioning system, inertial measurement unit, and visual information of a scene as captured by the quad camera visual sensor of the first autonomous unit during travel along one of the roads in the proto-roadmap;
determining a ground perspective view, the ground perspective view including at least road marking information for at least one of the roads in the proto-roadmap from the visual information captured;
determining a spatial perspective view, the spatial perspective view including objects along at least one of the roads in the proto-roadmap from the visual information captured;
classifying objects from the spatial perspective view into moving objects and non-moving objects;
building at least one multi-layer map including a stationary portion comprising the proto-roadmap, the non-moving objects from the spatial perspective view and the road markings from the ground perspective view, wherein the at least one multi-layer map is accurate to within centimeters; and
providing the multi-layer map via a communications link to a map server that stores and distributes multi-layer maps to guide the autonomous unit at a future time and at least one other autonomous unit.
(Dependent claims: 2–16)
17. A system, including:
a map server to store multi-layer maps of roads for autonomous units using information sourced by one or more autonomous units; and
one or more autonomous units, including a first autonomous unit, each autonomous unit including at least a quad camera visual sensor and at least one selected from a global positioning system and an inertial measurement unit;
each autonomous unit configured to perform:
receiving a proto-roadmap including only roads;
in the autonomous unit, capturing, using four cameras of the quad camera visual sensor, a set of keyrigs, each keyrig being a set of quad images with a pose generated using combinations of global positioning system, inertial measurement unit, and visual information of a scene as captured by the quad camera visual sensor of the first autonomous unit during travel along one of the roads in the proto-roadmap;
determining a ground perspective view including at least road marking information for at least one of the roads in the proto-roadmap from the visual information captured;
determining a spatial perspective view including objects along at least one of the roads in the proto-roadmap from the visual information captured;
classifying objects from the spatial perspective view into moving objects and non-moving objects;
building at least one multi-layer map including a stationary portion comprising the proto-roadmap, the non-moving objects from the spatial perspective view and the road markings from the ground perspective view, wherein the at least one multi-layer map is accurate to within centimeters; and
providing the multi-layer map via a communications link to a map server that stores and distributes multi-layer maps to guide the autonomous unit at a future time and at least one other autonomous unit.
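Claim 17's map server role, storing maps sourced from units and distributing them to units at a future time, might be sketched as follows. The class and method names are invented for illustration; the claim does not prescribe any particular storage scheme or keying.

```python
class MapServer:
    """Hypothetical map server: stores multi-layer maps keyed by road id
    and distributes them to requesting autonomous units."""

    def __init__(self):
        self._maps = {}

    def upload(self, road_id, multilayer_map):
        # An autonomous unit provides a freshly built map over a comms link.
        self._maps[road_id] = multilayer_map

    def download(self, road_id):
        # Another unit (or the same unit at a future time) retrieves the
        # stored map for guidance; None if the road has not been mapped.
        return self._maps.get(road_id)
```

In this sketch the first autonomous unit would call `upload` at the end of its mapping run, and any unit travelling the same road later would call `download` before departure.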
18. A non-transitory computer readable medium storing instructions for automatically building, by an autonomous unit, multi-layer maps of roads for other autonomous units, which instructions when executed by a processor perform:
receiving a proto-roadmap including only roads;
in the autonomous unit, capturing, using four cameras of a quad camera visual sensor, a set of keyrigs, each keyrig being a set of quad images with a pose generated using combinations of global positioning system, inertial measurement unit, and visual information of a scene as captured by the quad camera visual sensor of the autonomous unit during travel along one of the roads in the proto-roadmap;
determining a ground perspective view including at least road marking information for at least one of the roads in the proto-roadmap from the visual information captured;
determining a spatial perspective view including objects along at least one of the roads in the proto-roadmap from the visual information captured;
classifying objects from the spatial perspective view into moving objects and non-moving objects;
building at least one multi-layer map including a stationary portion comprising the proto-roadmap, the non-moving objects from the spatial perspective view and the road markings from the ground perspective view, wherein the at least one multi-layer map is accurate to within centimeters; and
providing the multi-layer map via a communications link to a map server that stores and distributes multi-layer maps to guide the autonomous unit at a future time and at least one other autonomous unit.
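The claims leave the moving/non-moving classification step unspecified. One plausible sketch, assuming objects have already been tracked across keyrigs in common map coordinates (so the unit's own motion is removed), is a simple displacement threshold; the function name, the track representation, and the 0.5 m threshold are all assumptions, not the patent's method.

```python
def classify_objects(tracks, threshold=0.5):
    """Split tracked objects into moving and non-moving.

    `tracks` maps an object id to a list of (x, y) positions observed
    across successive keyrigs, already expressed in map coordinates.
    An object whose maximum displacement from its first observation
    exceeds `threshold` (metres) is treated as moving."""
    moving, static = [], []
    for obj_id, positions in tracks.items():
        x0, y0 = positions[0]
        # Largest Euclidean distance from the first observation.
        disp = max(((x - x0) ** 2 + (y - y0) ** 2) ** 0.5
                   for x, y in positions)
        (moving if disp > threshold else static).append(obj_id)
    return moving, static
```

Under such a scheme, only the ids returned in `static` would contribute objects to the stationary portion of the multi-layer map.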
Specification