Vehicle sensor system and method of use
First Claim
1. A system for facilitating autonomous, semi-autonomous, and remote operation of a vehicle, comprising:
- a first forward camera, arranged exterior to a cabin of the vehicle superior to a windshield of the vehicle and aligned along a centerline of a longitudinal axis of the cabin, that outputs a first video stream, wherein the first forward camera defines a first angular field of view (AFOV) and a first focal length (FL);
- a second forward camera, arranged interior to the cabin proximal an upper portion of the windshield and aligned along the centerline, that outputs a second video stream, wherein the second forward camera defines a second AFOV narrower than the first AFOV;
- a forward radar module, coupled to an exterior surface of the vehicle and defining a first ranging length greater than the first FL, that outputs a first object signature;
- an onboard computing subsystem, arranged at the vehicle, comprising:
  - a central processing unit (CPU) that continuously processes the first video stream and outputs a first object localization dataset according to a set of explicitly programmed rules,
  - a graphical processing unit (GPU) cluster that continuously processes the first video stream and the second video stream, in parallel to and simultaneously with the CPU, and outputs a second object localization dataset based on a trained machine-learning model, and
  - a scoring module that generates a first comparison between the first object localization dataset and the second object localization dataset, and outputs a confidence metric based on the comparison and the first object signature, wherein the confidence metric is indicative of the usability of the first and second video streams for at least one of localization, mapping, and control of the vehicle;
- wherein the onboard computing subsystem controls at least one of the first forward camera, the second forward camera, and the forward radar module based on the confidence metric.
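The scoring-module element above can be made concrete with a minimal sketch. Everything here is an assumption for illustration: the claim does not specify how the comparison or the confidence metric is computed, so an IoU-based agreement score and a fixed radar weighting stand in for the claimed operations, and all function and parameter names are hypothetical.

```python
# Minimal sketch of the claimed scoring module. The IoU comparison, the
# 0.8/0.2 weighting, and all names are assumptions, not the patented method.

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def confidence_metric(cpu_boxes, gpu_boxes, radar_sees_object):
    """Compare the rule-based (CPU) and learned (GPU) object localization
    datasets, then fold in the forward radar's object signature."""
    if not cpu_boxes or not gpu_boxes:
        agreement = 0.0
    else:
        # Agreement: for each CPU detection, take its best IoU match among
        # the GPU detections, then average over the CPU detections.
        agreement = sum(max(iou(a, b) for b in gpu_boxes)
                        for a in cpu_boxes) / len(cpu_boxes)
    # Assumed fusion rule: radar corroboration contributes a fixed share.
    return 0.8 * agreement + 0.2 * (1.0 if radar_sees_object else 0.0)
```

A downstream controller might, for example, disable or recalibrate a camera whenever this metric stays below a threshold, which is one way to read the final "controls at least one of" limitation.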
Abstract
A system and method for collecting and processing sensor data for facilitating and/or enabling autonomous, semi-autonomous, and remote operation of a vehicle, including: collecting surroundings data at one or more sensors, and determining, at a computing system, properties of the surroundings of the vehicle and/or the behavior of the vehicle based on the surroundings data.
20 Claims
1. A system for facilitating autonomous, semi-autonomous, and remote operation of a vehicle (set out in full above under First Claim). Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9.
10. A method for facilitating autonomous, semi-autonomous, and remote operation of a vehicle, comprising:
- collecting a first video stream at a first camera arranged external to a cabin of the vehicle, the first camera comprising a first angular field of view (AFOV) oriented toward a forward direction relative to the vehicle;
- collecting a second video stream at a second camera arranged within the cabin along a longitudinal centerline of the cabin, the second camera comprising a second AFOV oriented toward the forward direction, wherein the second AFOV is narrower than the first AFOV;
- processing the first video stream at a central processing unit (CPU) arranged within the cabin to extract a first object localization dataset according to an explicitly programmed set of rules;
- processing a combination of the first and second video streams at a graphical processing unit (GPU) cluster arranged within the cabin to extract a second object localization dataset according to a trained machine-learning model, simultaneously with processing the first video stream;
- generating a comparison between the first object localization dataset and the second object localization dataset;
- generating a confidence metric based on the comparison, wherein the confidence metric is indicative of the usability of the first and second video streams for at least one of localization, mapping, and control of the vehicle; and
- controlling at least one of the first camera and the second camera based on the confidence metric.

Dependent claims: 11, 12, 13, 14, 15, 16, 17, 18, 19, 20.
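The steps of claim 10 can be sketched as one perception cycle. The detector callables, the set-overlap confidence, and the threshold-based control action are placeholders chosen for illustration; the claim prescribes none of them.

```python
# Illustrative sketch of one cycle of the claimed method. The detectors,
# the overlap-based confidence, and the control actions are assumptions.

def run_perception_cycle(wide_frame, narrow_frame,
                         rule_based_detector, learned_detector,
                         threshold=0.5):
    """Collect both streams, run the two processing pipelines, compare
    their outputs, derive a confidence metric, and pick a control action."""
    cpu_dets = rule_based_detector(wide_frame)             # first dataset
    gpu_dets = learned_detector(wide_frame, narrow_frame)  # second dataset
    # Comparison / confidence: fraction of detections the two pipelines
    # share (an assumed stand-in for the claimed comparison step).
    matched = len(set(cpu_dets) & set(gpu_dets))
    confidence = matched / max(len(cpu_dets), len(gpu_dets), 1)
    # Control step: keep using the streams, or trigger camera recalibration.
    action = "use_streams" if confidence >= threshold else "recalibrate_cameras"
    return confidence, action
```

In this toy form, the rule-based and learned detectors are just callables returning object labels; in the claimed system they would be the CPU and GPU pipelines operating on the two video streams.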
Specification