Sensor fusion of camera and V2V data for vehicles
First Claim
1. A method for fusing sensor information detected by a host vehicle and at least one remote vehicle-to-vehicle (V2V) communication equipped vehicle, the method comprising:
collecting visual data from an optical sensor of a vision sub-system, and generating a base lane model and a base confidence level from the visual data;
collecting V2V data from a receiver of a V2V sub-system;
fusing together the V2V data, the base lane model, and the base confidence level;
generating a final lane model with a final confidence level from the fused together V2V data, the base lane model and the base confidence level;
assigning a priority to the final lane model; and
determining a location of an object in the final lane model relative to the host vehicle and assigning a high priority to the object when the object is in a lane also occupied by the host vehicle, and sending a command to at least one advanced driver assistance system (ADAS), and wherein the at least one ADAS performs a function to avoid the object to which a high priority has been assigned.
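The claimed method steps can be sketched as a minimal Python pipeline. Everything here is an illustrative assumption rather than the patent's disclosed implementation: the function names (`generate_base_lane_model`, `fuse_v2v`, `prioritize`), the lane-index representation, and the simple additive confidence-update rule are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    lane_index: int        # lane the object occupies in the lane model
    priority: str = "low"

@dataclass
class LaneModel:
    lanes: int             # number of modeled lanes
    confidence: float      # confidence level, 0.0 .. 1.0
    objects: list = field(default_factory=list)

def generate_base_lane_model(visual_data):
    # First step of the claim: base lane model + base confidence from
    # the vision sub-system's optical sensor (fields are hypothetical).
    return LaneModel(lanes=visual_data["lanes"], confidence=visual_data["quality"])

def fuse_v2v(base, v2v_messages):
    # Fuse V2V data with the base lane model and base confidence.
    # Assumed rule: each corroborating V2V message raises confidence,
    # and each remote vehicle becomes a tracked object in the model.
    fused = LaneModel(lanes=base.lanes, confidence=base.confidence)
    for msg in v2v_messages:
        fused.confidence = min(1.0, fused.confidence + 0.1)
        fused.objects.append(TrackedObject(lane_index=msg["lane"]))
    return fused

def prioritize(final, host_lane):
    # Final steps of the claim: an object in the lane also occupied by
    # the host vehicle gets high priority, and a command is produced
    # for the ADAS to perform an avoidance function.
    commands = []
    for obj in final.objects:
        if obj.lane_index == host_lane:
            obj.priority = "high"
            commands.append(("avoid_object", obj))
    return commands
```

A cycle would then be `prioritize(fuse_v2v(generate_base_lane_model(visual), v2v_messages), host_lane)`, with the returned commands handed to the ADAS.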
Abstract
A method for fusing sensor information detected by a host vehicle and at least one remote vehicle-to-vehicle (V2V) communication equipped vehicle includes collecting visual data from an optical sensor of a vision sub-system, and collecting V2V data from remote vehicles. The method further includes executing a control logic including a first control logic for generating a base lane model and a base confidence level from the visual data, a second control logic that fuses together the V2V data, the base lane model, and the base confidence level, and a third control logic that generates, from the fused V2V data, base lane model, and base confidence level, a final lane model and a final confidence level, and assigns a priority to the final lane model.
10 Claims
Claims 2-5 depend from claim 1, which is set forth above under First Claim.
6. A system for fusing sensor information detected by a host vehicle and at least one remote vehicle-to-vehicle (V2V) communication equipped vehicle, the system comprising:
a vision sub-system having an optical sensor;
a V2V sub-system having a receiver; and
a controller in communication with the vision sub-system and the V2V sub-system, the controller having memory for storing control logic and a processor configured to execute the control logic,
the control logic including a first control logic for collecting visual data from the vision sub-system, and for generating a base lane model and a base confidence level from the visual data;
the processor including a second control logic for collecting V2V data from the V2V sub-system, and for fusing together the V2V data, the base lane model, and the base confidence level;
the processor including a third control logic for generating, from the fused V2V data, base lane model, and base confidence level, a final lane model with a final confidence level; and
the processor including a fourth control logic for assigning a priority to the final lane model, and for determining a location of an object in the final lane model relative to the host vehicle and assigning a high priority to the object when the object is in a lane also occupied by the host vehicle,
wherein information about the object that has been assigned a high priority is passed to at least one advanced driver assistance system (ADAS), and the at least one ADAS performs a function to avoid the object.
Claims 7-10 depend from claim 6.
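The system claim's structure, with two sub-systems feeding a controller that executes the four control logics in order, might be sketched as below. The sub-system stubs, the dictionary-based fused model, and the fixed confidence increment are hypothetical stand-ins, not the patented controller:

```python
class VisionSubSystem:
    """Stand-in for the vision sub-system's optical sensor."""
    def read(self):
        return {"lanes": 3, "quality": 0.7}

class V2VSubSystem:
    """Stand-in for the V2V sub-system's receiver."""
    def receive(self):
        return [{"lane": 0}]

class Controller:
    """Executes the four claimed control logics in sequence."""
    def __init__(self, vision, v2v):
        self.vision = vision
        self.v2v = v2v

    def run_cycle(self, host_lane):
        # First control logic: base lane model and base confidence
        # from the vision sub-system's visual data.
        visual = self.vision.read()
        base_lanes, base_conf = visual["lanes"], visual["quality"]
        # Second control logic: collect V2V data and fuse it with the
        # base lane model and base confidence.
        messages = self.v2v.receive()
        fused = {
            "lanes": base_lanes,
            "confidence": base_conf,
            "objects": [m["lane"] for m in messages],
        }
        # Third control logic: final lane model with final confidence
        # (assumed rule: confidence grows with corroborating messages).
        fused["confidence"] = min(1.0, fused["confidence"] + 0.1 * len(messages))
        # Fourth control logic: objects in the host vehicle's lane get
        # high priority and are passed to the ADAS for avoidance.
        high_priority = [lane for lane in fused["objects"] if lane == host_lane]
        return fused, high_priority
```

Splitting the pipeline across injected sub-system objects mirrors the claim's separation of the optical sensor, the V2V receiver, and the controller's stored control logic.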