DRIVING SKILL RECOGNITION BASED ON STOP-AND-GO DRIVING BEHAVIOR

0 Associated Cases
0 Associated Defendants
0 Accused Products
48 Forward Citations
0 Petitions
7 Assignments
Abstract
A skill characterization processor classifies driver skill based on stop-and-go maneuvers. A maneuver identification processor determines whether the vehicle is in a braking maneuver, and the system determines the vehicle longitudinal deceleration, the brake pedal position and the brake pedal rate from the brake pedal position. The skill characterization processor then classifies the driver's driving skill based on the brake pedal rate, the brake pedal position and the vehicle longitudinal deceleration, generally under normal driving conditions. In one embodiment, the processor classifies driver skill by performing frequency analysis on the brake pedal rate using a discrete Fourier transform to find a frequency component of the brake pedal rate to obtain a power spectrum density.
76 Citations
Using telematics data including position data and vehicle analytics to train drivers to improve efficiency of vehicle use  
Patent #
US 10,223,935 B2
Filed 07/24/2018

Current Assignee
Zonar Systems Incorporated

Sponsoring Entity
Zonar Systems Incorporated

System and method for driver distraction determination  
Patent #
US 10,246,014 B2
Filed 11/07/2017

Current Assignee
Nauto Inc.

Sponsoring Entity
Nauto Inc.

Method and apparatus for matching vehicle ECU programming to current vehicle operating conditions  
Patent #
US 10,241,966 B2
Filed 07/21/2017

Current Assignee
Zonar Systems Incorporated

Sponsoring Entity
Zonar Systems Incorporated

Method and apparatus for matching vehicle ECU programming to current vehicle operating conditions  
Patent #
US 10,289,651 B2
Filed 07/21/2017

Current Assignee
Zonar Systems Incorporated

Sponsoring Entity
Zonar Systems Incorporated

Apparatus and method for determining driving state  
Patent #
US 10,279,814 B2
Filed 05/25/2017

Current Assignee
Kia Motors Corporation, Hyundai Motor Company

Sponsoring Entity
Kia Motors Corporation, Hyundai Motor Company

Systems and methods for classifying driver skill level  
Patent #
US 10,029,697 B1
Filed 01/23/2017

Current Assignee
GM Global Technology Operations LLC

Sponsoring Entity
GM Global Technology Operations LLC

Systems and methods for classifying driver skill level and handling type  
Patent #
US 10,124,807 B2
Filed 01/23/2017

Current Assignee
GM Global Technology Operations LLC

Sponsoring Entity
GM Global Technology Operations LLC

Controlling transmissions of vehicle operation information  
Patent #
US 10,053,108 B2
Filed 12/05/2016

Current Assignee
XL Hybrids Inc.

Sponsoring Entity
XL Hybrids Inc.

Controlling Transmissions of Vehicle Operation Information  
Patent #
US 20170174222A1
Filed 12/05/2016

Current Assignee
XL Hybrids Inc.

Sponsoring Entity
XL Hybrids Inc.

Method and apparatus for changing vehicle behavior based on current vehicle location and zone definitions created by a remote user  
Patent #
US 10,099,706 B2
Filed 11/18/2016

Current Assignee
Zonar Systems Incorporated

Sponsoring Entity
Zonar Systems Incorporated

Motion analysis unit  
Patent #
US 9,846,011 B1
Filed 10/21/2016

Current Assignee
Raytheon Company

Sponsoring Entity
Raytheon Company

Method and apparatus for GPS based Z-axis difference parameter computation
Patent #
US 10,102,096 B2
Filed 07/01/2016

Current Assignee
Zonar Systems Incorporated

Sponsoring Entity
Zonar Systems Incorporated

Vehicle performance based on analysis of drive data  
Patent #
US 9,527,515 B2
Filed 01/25/2016

Current Assignee
Zonar Systems Incorporated

Sponsoring Entity
Zonar Systems Incorporated

COMMUNICATION OF CLOUD-BASED CONTENT TO A DRIVER
Patent #
US 20160063005A1
Filed 08/27/2014

Current Assignee
Ibaraki Toyota Jidosha Kabushiki Kaisha

Sponsoring Entity
Ibaraki Toyota Jidosha Kabushiki Kaisha

DRIVING DIAGNOSIS DEVICE, DRIVING DIAGNOSIS SYSTEM AND DRIVING DIAGNOSIS METHOD  
Patent #
US 20140365070A1
Filed 06/03/2014

Current Assignee
Fujitsu Limited

Sponsoring Entity
Fujitsu Limited

Driving diagnosis device, driving diagnosis system and driving diagnosis method  
Patent #
US 9,430,886 B2
Filed 06/03/2014

Current Assignee
Fujitsu Limited

Sponsoring Entity
Fujitsu Limited

Automatic incorporation of vehicle data into documents captured at a vehicle using a mobile computing device  
Patent #
US 9,563,869 B2
Filed 05/27/2014

Current Assignee
Zonar Systems Incorporated

Sponsoring Entity
Zonar Systems Incorporated

VEHICLE AND METHOD OF TUNING PERFORMANCE OF SAME  
Patent #
US 20140236385A1
Filed 04/28/2014

Current Assignee
Ford Global Technologies LLC

Sponsoring Entity
Ford Global Technologies LLC

Vehicle and method of tuning performance of same  
Patent #
US 9,045,145 B2
Filed 04/28/2014

Current Assignee
Ford Global Technologies LLC

Sponsoring Entity
Ford Global Technologies LLC

SYSTEM AND METHOD FOR REDUCING DRIVING SKILL ATROPHY  
Patent #
US 20140222245A1
Filed 04/09/2014

Current Assignee
Honda Motor Company

Sponsoring Entity
Honda Motor Company

System and method for reducing driving skill atrophy  
Patent #
US 9,174,652 B2
Filed 04/09/2014

Current Assignee
Honda Motor Company

Sponsoring Entity
Honda Motor Company

Using telematics data including position data and vehicle analytics to train drivers to improve efficiency of vehicle use  
Patent #
US 10,056,008 B1
Filed 03/14/2014

Current Assignee
Zonar Systems Incorporated

Sponsoring Entity
Zonar Systems Incorporated

Controlling transmissions of vehicle operation information  
Patent #
US 9,511,778 B1
Filed 02/12/2014

Current Assignee
XL Hybrids Inc.

Sponsoring Entity
XL Hybrids Inc.

System, method, and computer-readable recording medium for lane keeping control
Patent #
US 9,469,343 B2
Filed 01/30/2014

Current Assignee
Mando Corporation

Sponsoring Entity
Mando Corporation

VEHICLE DRIVE ASSIST SYSTEM, AND DRIVE ASSIST IMPLEMENTATION METHOD  
Patent #
US 20160003630A1
Filed 01/10/2014

Current Assignee
DENSO Corporation

Sponsoring Entity
DENSO Corporation

Vehicle drive assist system, and drive assist implementation method  
Patent #
US 9,638,532 B2
Filed 01/10/2014

Current Assignee
DENSO Corporation

Sponsoring Entity
DENSO Corporation

METHOD AND SYSTEM FOR MONITORING ROAD CONDITIONS  
Patent #
US 20150260614A1
Filed 10/18/2013

Current Assignee
Roadroid AB

Sponsoring Entity
Roadroid AB

Mobile computing device for fleet telematics  
Patent #
US 10,185,455 B2
Filed 10/04/2013

Current Assignee
Zonar Systems Incorporated

Sponsoring Entity
Zonar Systems Incorporated

Optical neuroinformatics  
Patent #
US 9,380,976 B2
Filed 03/11/2013

Current Assignee
Sync Think Inc.

Sponsoring Entity
Sync Think Inc.

Motor vehicle and method of control of a motor vehicle  
Patent #
US 10,054,065 B2
Filed 01/24/2013

Current Assignee
Jaguar Land Rover Automotive PLC

Sponsoring Entity
Jaguar Land Rover Automotive PLC

MOTOR VEHICLE AND METHOD OF CONTROL OF A MOTOR VEHICLE  
Patent #
US 20150006064A1
Filed 01/24/2013

Current Assignee
Jaguar Land Rover Automotive PLC

Sponsoring Entity
Jaguar Land Rover Automotive PLC

METHOD AND APPARATUS FOR 3D ACCELEROMETER BASED SLOPE DETERMINATION, REAL-TIME VEHICLE MASS DETERMINATION, AND VEHICLE EFFICIENCY ANALYSIS
Patent #
US 20130184964A1
Filed 12/21/2012

Current Assignee
Zonar Systems Incorporated

Sponsoring Entity
Zonar Systems Incorporated

Method and apparatus for 3D accelerometer based slope determination, real-time vehicle mass determination, and vehicle efficiency analysis
Patent #
US 9,170,913 B2
Filed 12/21/2012

Current Assignee
Zonar Systems Incorporated

Sponsoring Entity
Zonar Systems Incorporated

Using social networking to improve driver performance based on industry sharing of driver performance data  
Patent #
US 9,412,282 B2
Filed 12/21/2012

Current Assignee
Zonar Systems Incorporated

Sponsoring Entity
Zonar Systems Incorporated

Method and apparatus for 3D accelerometer based slope determination, real-time vehicle mass determination, and vehicle efficiency analysis
Patent #
US 9,489,280 B2
Filed 12/21/2012

Current Assignee
Zonar Systems Incorporated

Sponsoring Entity
Zonar Systems Incorporated

Method and apparatus for GPS based slope determination, real-time vehicle mass determination, and vehicle efficiency analysis
Patent #
US 9,384,111 B2
Filed 12/18/2012

Current Assignee
Zonar Systems Incorporated

Sponsoring Entity
Zonar Systems Incorporated

Application of smooth pursuit cognitive testing paradigms to clinical drug development  
Patent #
US 9,265,458 B2
Filed 12/04/2012

Current Assignee
Sync Think Inc.

Sponsoring Entity
Sync Think Inc.

Vehicle monitoring system with automatic driver identification  
Patent #
US 9,855,919 B2
Filed 08/09/2012

Current Assignee
Intelligent Mechatronic Systems Inc.

Sponsoring Entity
Intelligent Mechatronic Systems Inc.

SYSTEM AND METHOD FOR ADJUSTING SMOOTHNESS FOR LANE CENTERING STEERING CONTROL  
Patent #
US 20120283913A1
Filed 05/05/2011

Current Assignee
GM Global Technology Operations LLC

Sponsoring Entity
GM Global Technology Operations LLC

SYSTEM AND METHOD FOR REDUCING DRIVING SKILL ATROPHY  
Patent #
US 20120215375A1
Filed 02/22/2011

Current Assignee
Honda Motor Company

Sponsoring Entity
Honda Motor Company

System and method for reducing driving skill atrophy  
Patent #
US 8,731,736 B2
Filed 02/22/2011

Current Assignee
Honda Motor Company

Sponsoring Entity
Honda Motor Company

NAVIGATION SYSTEM HAVING MANEUVER ATTEMPT TRAINING MECHANISM AND METHOD OF OPERATION THEREOF  
Patent #
US 20120191343A1
Filed 01/20/2011

Current Assignee
TeleNav Incorporated

Sponsoring Entity
TeleNav Incorporated

Navigation system having maneuver attempt training mechanism and method of operation thereof  
Patent #
US 9,086,297 B2
Filed 01/20/2011

Current Assignee
TeleNav Incorporated

Sponsoring Entity
TeleNav Incorporated

Vehicle Movement Controller  
Patent #
US 20120209489A1
Filed 08/10/2010

Current Assignee
Hitachi Automotive Systems Limited

Sponsoring Entity
Hitachi Automotive Systems Limited

VEHICLE AND METHOD OF TUNING PERFORMANCE OF SAME  
Patent #
US 20110106381A1
Filed 10/30/2009

Current Assignee
Ford Global Technologies LLC

Sponsoring Entity
Ford Global Technologies LLC

Vehicle and method of tuning performance of same  
Patent #
US 8,738,228 B2
Filed 10/30/2009

Current Assignee
Ford Global Technologies LLC

Sponsoring Entity
Ford Global Technologies LLC

Vehicle and method of advising a driver therein  
Patent #
US 9,493,171 B2
Filed 10/30/2009

Current Assignee
Ford Global Technologies LLC

Sponsoring Entity
Ford Global Technologies LLC

Vehicle and method for advising driver of same  
Patent #
US 9,707,975 B2
Filed 10/30/2009

Current Assignee
Ford Global Technologies LLC

Sponsoring Entity
Ford Global Technologies LLC

ADAPTIVE VEHICLE CONTROL SYSTEM WITH DRIVING STYLE RECOGNITION BASED ON VEHICLE STOPPING  
Patent #
US 20100152950A1
Filed 12/15/2008

Current Assignee
GM Global Technology Operations LLC

Sponsoring Entity
GM Global Technology Operations LLC

BRAKE CONTROL SYSTEM AND METHOD  
Patent #
US 20080309156A1
Filed 09/12/2007

Current Assignee
KDS CONTROLS

Sponsoring Entity
KDS CONTROLS

Driver Input Analysis and Feedback System  
Patent #
US 20080120175A1
Filed 11/20/2006

Current Assignee
Ford Global Technologies LLC

Sponsoring Entity


System and method for providing driving insurance  
Patent #
US 20070005404A1
Filed 06/09/2006

Current Assignee
DRIVE DIAGNOSTICS LTD.

Sponsoring Entity
DRIVE DIAGNOSTICS LTD.

HYBRID ELECTRIC VEHICLE POWERTRAIN WITH TORQUE TRANSFER CASE  
Patent #
US 20070034428A1
Filed 03/06/2006

Current Assignee
Ford Global Technologies LLC

Sponsoring Entity
Ford Global Technologies LLC

Vehicle operation control device  
Patent #
US 20060095195A1
Filed 11/03/2005

Current Assignee
DENSO Corporation

Sponsoring Entity
DENSO Corporation

Sensor Assemblies  
Patent #
US 20050192727A1
Filed 05/02/2005

Current Assignee
Automotive Technologies International Incorporated

Sponsoring Entity
Automotive Technologies International Incorporated

Method for ascertaining a critical driving behavior  
Patent #
US 20050116829A1
Filed 08/04/2004

Current Assignee
Robert Bosch GmbH

Sponsoring Entity
Robert Bosch GmbH

System and method for vehicle driver behavior analysis and evaluation  
Patent #
US 20050131597A1
Filed 07/20/2004

Current Assignee
GREENROAD DRIVING TECHNOLOGIES LTD

Sponsoring Entity
GREENROAD DRIVING TECHNOLOGIES LTD

Motor vehicle operating data collection and analysis  
Patent #
US 20040225557A1
Filed 04/27/2004

Current Assignee
Allstate Insurance Company

Sponsoring Entity


Motor vehicle operating data collection and analysis  
Patent #
US 6,931,309 B2
Filed 04/27/2004

Current Assignee
Allstate Insurance Company

Sponsoring Entity
INNOSURANCE INC.

Vehicle control apparatus, vehicle control method, and computer program  
Patent #
US 20040193347A1
Filed 03/22/2004

Current Assignee
Fujitsu Ten Limited

Sponsoring Entity
Fujitsu Ten Limited

Driving assist system for vehicle  
Patent #
US 7,006,917 B2
Filed 10/14/2003

Current Assignee
Nissan Motor Co. Ltd.

Sponsoring Entity
Nissan Motor Co. Ltd.

Vehicle drive control apparatus, vehicle drive control method and program therefor  
Patent #
US 6,968,260 B2
Filed 10/07/2003

Current Assignee
Aisin AW Corporation Limited

Sponsoring Entity
Aisin AW Corporation Limited

Driving assist system  
Patent #
US 6,832,157 B2
Filed 09/08/2003

Current Assignee
Nissan Motor Co. Ltd.

Sponsoring Entity
Nissan Motor Co. Ltd.

Driving assist system for vehicle  
Patent #
US 20030236602A1
Filed 06/17/2003

Current Assignee
Nissan Motor Co. Ltd.

Sponsoring Entity
Nissan Motor Co. Ltd.

Driving assist system for vehicle  
Patent #
US 6,982,647 B2
Filed 06/17/2003

Current Assignee
Nissan Motor Co. Ltd.

Sponsoring Entity
Nissan Motor Co. Ltd.

Method and system for vehicle operator assistance improvement  
Patent #
US 6,873,911 B2
Filed 02/03/2003

Current Assignee
Nissan Motor Co. Ltd.

Sponsoring Entity
Nissan Motor Co. Ltd.

Module for monitoring vehicle operation through onboard diagnostic port  
Patent #
US 20040083041A1
Filed 10/25/2002

Current Assignee
Davis Instruments Corporation

Sponsoring Entity
Davis Instruments Corporation

Method and apparatus for improving vehicle operator performance  
Patent #
US 20020091473A1
Filed 10/12/2001

Current Assignee
Continental Automotive Systems US Incorporated

Sponsoring Entity
Continental Automotive Systems US Incorporated

System and method for driver performance improvement  
Patent #
US 20020120374A1
Filed 10/12/2001

Current Assignee
Motorola Solutions Inc.

Sponsoring Entity
Motorola Solutions Inc.

System and method for driver performance improvement  
Patent #
US 6,909,947 B2
Filed 10/12/2001

Current Assignee
Motorola Solutions Inc.

Sponsoring Entity
Motorola Inc.

GPS vehicle collision avoidance warning and control system and method  
Patent #
US 6,487,500 B2
Filed 08/02/2001

Current Assignee
Jerome H. Lemelson, Pedersen Robert D.

Sponsoring Entity
Jerome H. Lemelson, Pedersen Robert D.

Acceleration monitoring and safety data accounting system for motor vehicles and other types of equipment  
Patent #
US 6,771,176 B2
Filed 03/15/2001

Current Assignee
Wilkerson William Jude

Sponsoring Entity
Wilkerson William Jude

Operation control system capable of analyzing driving tendency and its constituent apparatus  
Patent #
US 6,438,472 B1
Filed 10/13/2000

Current Assignee
TOKIO MARINE RISK CONSULTING CO. LTD.THE, DATA TEC CO. LTD. 50

Sponsoring Entity
TOKIO MARINE RISK CONSULTING CO. LTD.THE, DATA TEC CO. LTD. 50

GPS vehicle collision avoidance warning and control system and method  
Patent #
US 6,275,773 B1
Filed 11/08/1999

Current Assignee
Jerome H. Lemelson, Pedersen Robert D.

Sponsoring Entity
Jerome H. Lemelson, Pedersen Robert D.

GPS vehicle collision avoidance warning and control system and method  
Patent #
US 5,983,161 A
Filed 09/24/1996

Current Assignee
Jerome H. Lemelson, Pedersen Robert D.

Sponsoring Entity
Jerome H. Lemelson, Pedersen Robert D.

Load variation detector  
Patent #
US 5,675,094 A
Filed 01/11/1996

Current Assignee
SENSORTECH L.P.

Sponsoring Entity
SENSORTECH L.P.

20 Claims
 1. A method for determining a vehicle driver's driving skill based on driver stop and go driving behavior, said method comprising:
 determining whether the vehicle is in a braking maneuver;
determining a vehicle longitudinal deceleration if the vehicle is in the braking maneuver;
determining a brake pedal position if the vehicle is in the braking maneuver;
determining a brake pedal rate from the brake pedal position; and
determining the driver's driving skill based on the brake pedal rate, the brake pedal position and the vehicle longitudinal deceleration.  View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)
 13. A method for determining a vehicle driver's driving skill based on driver stop and go driving behavior, said method comprising:
 determining whether the vehicle is in a normal braking maneuver;
determining a vehicle longitudinal deceleration only if the vehicle is in the normal braking maneuver;
determining a brake pedal position only if the vehicle is in the normal braking maneuver;
determining a brake pedal rate from the brake pedal position; and
determining the driver's driving skill based on the brake pedal rate, the brake pedal position and the vehicle longitudinal deceleration, wherein determining the driver's driving skill further includes determining a braking force on the vehicle, and wherein determining the driver's driving skill further includes performing frequency analysis on the brake pedal rate using a discrete Fourier transform to find a frequency component of the brake pedal rate to obtain a power spectrum density.  View Dependent Claims (14, 15, 16, 17, 18, 19, 20)
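The claimed method can be sketched, step by step, in a few lines of Python. The threshold, sampling logic, and scoring rule below are invented for illustration only; the claims deliberately leave the classifier design open.

```python
import numpy as np

# Hypothetical threshold; the patent does not specify numeric values.
DECEL_THRESHOLD = 0.05  # longitudinal deceleration (in g) that indicates braking


def in_braking_maneuver(decel_g: float, brake_pedal_pos: float) -> bool:
    """Step 1: decide whether the vehicle is in a braking maneuver."""
    return brake_pedal_pos > 0.0 and decel_g > DECEL_THRESHOLD


def brake_pedal_rate(pedal_positions: np.ndarray, dt: float) -> np.ndarray:
    """Step 2: derive the brake pedal rate from sampled pedal positions."""
    return np.gradient(pedal_positions, dt)


def classify_skill(rate: np.ndarray, pos: np.ndarray, decel: np.ndarray) -> str:
    """Step 3: map the three signals to a coarse skill label.

    The scoring rule here is illustrative only; the specification
    discusses several classifier designs (decision trees, fuzzy
    clustering, neural networks) without fixing one.
    """
    roughness = np.std(rate)            # jerky pedal work suggests lower skill
    peak_decel = np.max(np.abs(decel))  # harsh stops suggest lower skill
    score = roughness + peak_decel
    return "low-skill" if score > 1.0 else "skilled"
```

A real implementation would gate steps 2 and 3 on step 1, evaluating the signals only while a braking maneuver is in progress, as claim 13 requires.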
1 Specification
1. Field of the Invention
This invention relates generally to an adaptive vehicle control system that provides driver skill recognition and, more particularly, to an adaptive vehicle control system that provides driver assistance by classifying driving skill based on stop-and-go driving behavior.
2. Discussion of the Related Art
Driver assistance systems and vehicle active safety systems are becoming an integral part of vehicle design and development in an effort to reduce driving stress and enhance vehicle and roadway safety. For example, adaptive cruise control (ACC) systems are known to relieve drivers from routine longitudinal vehicle control by keeping the vehicle a safe distance away from a preceding vehicle. Also, lane departure warning systems are known to alert the vehicle driver whenever the vehicle tends to depart from the traveling lane.
These systems employ various sensors and detectors that monitor vehicle parameters, and controllers that control vehicle systems, such as active front and rear wheel steering and differential braking. Although such systems have the potential to enhance driver comfort and safety, their success depends not only on their reliability, but also on driver acceptance. For example, considering an ACC system, studies have shown that although shortening headway distances between vehicles can increase traffic flow, it can also cause stress to some drivers because of the proximity to a preceding vehicle. Therefore, it may be desirable to enhance such systems by adapting the vehicle control in response to a driver's driving skill to meet the needs of different drivers.
Although modeling of human-machine interaction dynamics has been studied for a few decades, primarily in the field of fighter pilot modeling, modeling of driver behavior is relatively new. Modeling of driver behavior is typically focused on modeling an ideal driver, similar to a well-trained fighter pilot possessing high maneuvering skills.
While state-of-the-art characterization of driving skill using a comprehensive model has proven feasible for offline simulation and for controller design and refinement, it does not provide the high level of confidence, particularly across various driving environments and scenarios, that is required for vehicle control adaptation. Evidently, attributes of the driver beyond simply the time factor of driving skill are needed to effectively classify driving skill.
SUMMARY OF THE INVENTION

In accordance with the teachings of the present invention, an adaptive vehicle control system is disclosed that classifies a driver's driving skill. The system includes a plurality of vehicle sensors that detect various vehicle parameters. A maneuver identification processor receives the sensor signals to identify a characteristic maneuver of the vehicle and provides a maneuver identifier signal of the maneuver. The system also includes a data selection processor that receives the sensor signals, the maneuver identifier signals and the traffic and road condition signals, and stores data for each of the characteristic maneuvers and the traffic and road conditions. A skill characterization processor receives the maneuver identifier signals, the stored data from the data selection processor and possibly traffic and road condition signals, and classifies driving skill based on the received signals and data.
In one embodiment, the skill characterization processor classifies driver skill based on stop-and-go maneuvers. The maneuver identification processor determines whether the vehicle is in a braking maneuver, and the system determines the vehicle longitudinal deceleration, the brake pedal position and the brake pedal rate from the brake pedal position. The skill characterization processor then classifies the driver's driving skill based on the brake pedal rate, the brake pedal position and the vehicle longitudinal deceleration, generally under normal driving conditions. In one embodiment, the processor classifies driver skill by performing frequency analysis on the brake pedal rate using a discrete Fourier transform to find a frequency component of the brake pedal rate to obtain a power spectrum density.
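As a rough sketch of the frequency analysis described above, a one-sided power spectrum density of the brake pedal rate can be estimated with a discrete Fourier transform. The sampling rate and periodogram scaling below are standard signal-processing choices, not values taken from the specification.

```python
import numpy as np


def brake_rate_psd(pedal_rate: np.ndarray, fs: float):
    """Estimate the power spectrum density of the brake pedal rate via DFT.

    The patent states only that a discrete Fourier transform is used to
    find the frequency components of the brake pedal rate; the mean
    removal and one-sided periodogram scaling here are conventional
    assumptions.
    """
    n = len(pedal_rate)
    spectrum = np.fft.rfft(pedal_rate - np.mean(pedal_rate))
    psd = (np.abs(spectrum) ** 2) / (fs * n)   # one-sided periodogram
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)     # frequency axis in Hz
    return freqs, psd
```

Intuitively, a smooth pedal application concentrates spectral power at low frequencies, while a jittery, corrective pedal style spreads power into higher-frequency bins, which is the kind of behavioral difference the classifier exploits.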
Additional features of the present invention will become apparent from the following description and appended claims, taken in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a representation of a vehicle dynamic system;
FIG. 2 is a plan view of a vehicle employing various vehicle sensors, cameras and communications systems;
FIG. 3 is a block diagram of a system providing in-vehicle characterization of driving skill, according to an embodiment of the present invention;
FIG. 4 is a block diagram of a system providing in-vehicle characterization of driving skill, according to another embodiment of the present invention;
FIG. 5 is a block diagram of a system providing in-vehicle characterization of driving skill, according to another embodiment of the present invention;
FIG. 6 is a flow chart diagram showing a process for determining a steering-engaged maneuver in the maneuver identification processor shown in the systems of FIGS. 3, 4 and 5, according to an embodiment of the present invention;
FIG. 7 is a block diagram of a system for integrating road condition signals in the traffic/road condition recognition processor in the systems shown in FIGS. 3, 4 and 5, according to an embodiment of the present invention;
FIG. 8 is a flow chart diagram showing a process for identifying roadway type for use in the traffic/road condition recognition processor in the systems of FIGS. 3, 4 and 5, according to an embodiment of the present invention;
FIG. 9 is a flow chart diagram showing a process for providing data selection in the data selection processor in the systems shown in FIGS. 3, 4 and 5, according to an embodiment of the present invention;
FIG. 10 is a flow chart diagram showing a process for providing skill classification in the skill characterization processor of the systems shown in FIGS. 3, 4 and 5, according to an embodiment of the present invention;
FIG. 11 is a flow chart diagram showing a method for processing content of a feature extractor that can be used in the skill classification processor shown in FIGS. 3, 4 and 5, according to an embodiment of the present invention;
FIG. 12 is a block diagram of a skill characterization processor that can be used in the systems shown in FIGS. 3, 4 and 5, according to an embodiment of the present invention;
FIG. 13 is a flow chart diagram showing a method for processing content of a fuzzy-clustering-based data partition, according to an embodiment of the present invention;
FIG. 14 is a flow chart showing a method for processing content of a decision fuser, according to an embodiment of the present invention;
FIG. 15 is a block diagram of a skill characterization processor that can be used in the systems shown in FIGS. 3, 4 and 5, according to an embodiment of the present invention;
FIG. 16 is a block diagram of a skill classification processor that can be used in the systems shown in FIGS. 3, 4 and 5, according to another embodiment of the present invention;
FIG. 17 is a block diagram of a skill classification processor that can be used in the systems shown in FIGS. 3, 4 and 5, according to another embodiment of the present invention;
FIG. 18 is a block diagram of a skill classification processor that can be used in the systems shown in FIGS. 3, 4 and 5, according to another embodiment of the present invention;
FIG. 19 is a block diagram of a process maneuver model system that can be employed in the skill characterization processor of the systems shown in FIGS. 3, 4 and 5 for providing headway control, according to an embodiment of the present invention;
FIG. 20 is a block diagram of the driving skill diagnosis processor shown in the system of FIG. 19, according to an embodiment of the present invention;
FIG. 21 is a graph with frequency on the horizontal axis and magnitude on the vertical axis illustrating behavioral differences of various drivers;
FIG. 22 is a block diagram of a single-level discrete wavelet transform;
FIG. 23 is a graph showing a histogram of retained energy for an expert driver, an average driver and a lowskill driver;
FIG. 24 is a graph with vehicle speed on the horizontal axis and throttle percentage on the vertical axis showing shift-error distance;
FIG. 25 is a graph with vehicle speed on the horizontal axis and throttle percentage on the vertical axis showing a delayed shift;
FIG. 26 is a graph with time on the horizontal axis and shaft torque on the vertical axis showing transmission shift duration;
FIG. 27 is a graph with time on the horizontal axis and input shaft speed on the vertical axis showing throttle and transmission shift relationships;
FIG. 28 is a system showing driver dynamics;
FIG. 29 is a system showing a vehicle-driver crossover model;
FIG. 30 is a flow chart diagram showing a process that can be used by the maneuver identification processor in the systems of FIGS. 3, 4 and 5 for identifying a passing maneuver, according to an embodiment of the present invention;
FIG. 31 is a block diagram of a vehicle system including a vehicle stability enhancement system;
FIG. 32 is a block diagram of a command interpreter in the vehicle system shown in FIG. 31;
FIG. 33 is a block diagram of a feedback control processor used in the vehicle system shown in FIG. 31;
FIG. 34 is a flow chart diagram showing a process for generating a desired yaw rate signal in the yaw rate command generator and a desired vehicle sideslip velocity signal in the sideslip command generator;
FIG. 35 is a graph with vehicle speed on the horizontal axis and natural frequency on the vertical axis showing three graph lines for different driver skill levels;
FIG. 36 is a graph with vehicle speed on the horizontal axis and damping ratio on the vertical axis including three graph lines for different driver skill levels;
FIG. 37 is a flow chart diagram showing a process for providing a yaw rate feedback multiplier and a lateral dynamic feedback multiplier in the control gain adaptation processor;
FIG. 38 is a flow chart diagram showing a process that can be used by the maneuver identification processor in the systems of FIGS. 3, 4 and 5 for identifying a left/right turn maneuver, according to an embodiment of the present invention;
FIG. 39 is a diagram of a classification decision tree that can be used by the skill characterization processor in the systems of FIGS. 3, 4 and 5, according to an embodiment of the present invention;
FIG. 40 is a flow chart diagram showing a process that can be used by the maneuver identification processor in the systems of FIGS. 3, 4 and 5 for detecting a lane-changing maneuver, according to an embodiment of the present invention;
FIGS. 41A and 41B are flow chart diagrams showing a process that can be used by the maneuver identification processor in the systems of FIGS. 3, 4 and 5 for identifying a vehicle highway on/off-ramp maneuver, according to an embodiment of the present invention;
FIG. 42 is a flow chart diagram showing a process that can be used by the maneuver identification processor in the systems of FIGS. 3, 4 and 5 for detecting a backup maneuver, according to an embodiment of the present invention;
FIG. 43 is a flow chart diagram showing a process for providing data selection in the data selection processor in the systems shown in FIGS. 3, 4 and 5, according to an embodiment of the present invention;
FIG. 44 is a plan view of a neural network that can be used in the skill characterization processor of the systems shown in FIGS. 3, 4 and 5, according to an embodiment of the present invention;
FIG. 45 is a block diagram of a driving skill characterization system based on data-driven approaches;
FIG. 46 is a block diagram of a skill characterization system that uses the same signals and measurements, but employs different skill classifiers;
FIG. 47 is a block diagram of a skill characterization system that employs an ultimate classifier combination scheme using only two skill classification modules;
FIG. 48 is a block diagram of a skill characterization system that employs a combination of multiple skill characterization modules based on different signals and measurements;
FIG. 49 is a block diagram of a skill characterization processor that can be used in the systems of FIGS. 3, 4 and 5 that includes a level-1 combination, according to an embodiment of the present invention; and
FIG. 50 is a block diagram of a decision fusion processor that can be used in the systems of FIGS. 3, 4 and 5, according to another embodiment of the present invention.
DETAILED DESCRIPTION OF THE EMBODIMENTS

The following discussion of the embodiments of the invention directed to an adaptive vehicle control system that considers a driver's driving skill based on stop-and-go driving behavior is merely exemplary in nature, and is in no way intended to limit the invention or its applications or uses.
The present invention provides various embodiments of an adaptive vehicle control system that adapts to the driving environment, the driver's driving skill, or both. Typical adaptive control systems are built around control adaptation algorithms. The present invention addresses both the driving environment and the driver's driving characteristics, recognizing a driver's driving skill from his/her driving behavior and adapting the vehicle control to the recognized skill to provide the most desirable vehicle performance to the driver. In order to provide a vehicle driver with the most desirable performance tailored to a specific driving characteristic, vehicle control adaptation can be realized in various ways. For example, these techniques include using differential braking or rear-wheel steering to augment the vehicle's dynamic response during various maneuvers. In the present invention, the control adaptation of an active front steering (AFS) variable gear ratio (VGR) system can be used.
In one non-limiting embodiment, the invention provides an adaptive control system for VGR steering, where the vehicle steering ratio varies not only with vehicle speed, but also with driving conditions as typically indicated by the vehicle handwheel angle. Further, the control adaptation takes into account the driver's driving skill or characteristics. The resulting adaptive VGR provides tailored vehicle performance to suit a wide range of driving conditions and driver's driving characteristics.
To enable control adaptation for driving characteristics, the present invention provides an innovative process that recognizes a driver's driving characteristics based on his/her driving behavior. In particular, the present invention shows how driving skill can be characterized based on the driver's control input and vehicle motion during various vehicle maneuvers. The driving skill recognition provides an assessment of a driver's driving skill, which can be incorporated in various vehicle control and driver assistance systems, including the adaptive AFS VGR system.
A vehicle and its driver are an integral part of a dynamic system manifested by the performance of the vehicle. This is represented by a dynamic vehicle system 780 shown in FIG. 1 including a vehicle 782 and its driver 788. The driver 788 controls the vehicle 782 using vehicle control 784 and vehicle dynamics 786 that act to cause the vehicle 782 to perform in the desired manner. While the vehicle 782, as a mechanical system possessing various dynamic characteristics understandable through common physics, can be used to deliver certain performance measures, such as speed, yaw rate, acceleration and position, these performance measures can be affected by the control 784 equipped in the vehicle 782 to alter its commands. Further, the vehicle 782 and the control 784 both receive driver commands, whether through mechanical or electrical interfaces, to decide the desired actions that the vehicle will perform. As a result, the driver 788 holds the ultimate key to the performance of the vehicle 782 through the way various commands are generated in response to the driver's need for the desired vehicle maneuvers. Therefore, given the same vehicle and the same desired maneuver, the performance will vary from one driver to another. The difference in each driver's capability to command the vehicle 782 in its dynamic sense reflects the difference in driving skill, which can be observed and analyzed through the vehicle performance during given maneuvers.
The process of driving skill recognition contains two parts, namely, identification of driving maneuvers and processing of sensor data collected during the relevant maneuvers. While driving skill can be assessed through data from specific maneuvers, it can also be assessed without relying on any specific maneuver. Because lower-skilled drivers apparently lack certain vehicle handling capabilities that expert drivers possess, it is logical to treat an expert driver as an ideal driving machine that performs every part of a driving maneuver correctly. An average driver or a low-skill driver will deviate from this ideal to varying degrees, much like a less-than-perfect driving machine. Therefore, a driving diagnosis process can be employed to analyze the behavior of a driver and compare it with a template of an expert driver. As a result, driving skill can also be characterized successfully using this approach.
In order to facilitate the control adaptation based on driving skill, the present invention provides a system and method for achieving in-vehicle characterization of a driver's driving skill using behavioral diagnosis in various driving maneuvers. The characterization result can be used in various vehicle control algorithms that adapt to a driver's driving skill. However, such control algorithms are neither prerequisites nor components for the in-vehicle characterization system.
The steering gear ratio of a vehicle represents a proportional factor between the steering wheel angle and the road wheel angle. Conventional steering systems have a fixed steering gear ratio that remains substantially constant except for minor variations due to vehicle suspension geometry. To improve vehicle handling, VGR steering systems have been developed. With a VGR steering system, the gear ratio varies with vehicle speed so that the number of steering wheel turns is reduced at low speeds and the high-speed steering sensitivity is suppressed. However, current AFS VGR systems mainly focus on on-center handling where the steering wheel angle is relatively small and the tires are in their linear region. Moreover, the design is a compromise to meet the needs of all types of drivers with one single speed-VGR curve.
The AFS VGR adaptive control system of the invention includes an enhanced VGR that alters the steering ratio according to vehicle speed and the steering angle to suit different driving conditions, and an adaptive VGR that adjusts the steering ratio based on a driver's skill level.
As mentioned above, known VGR systems alter the steering ratio based on vehicle speed only. However, the corresponding steady-state vehicle yaw rate gain is mainly suitable for on-center handling where the vehicle tires are operating in their linear region. When the handwheel angle gets relatively large, the steady-state yaw rate gain drops due to tire nonlinearity.
To compensate for the effects of tire nonlinearity and to provide an approximately uniform yaw rate gain at each vehicle speed, the present invention proposes an enhanced VGR that is extended to be a function of both vehicle speed v and the vehicle handwheel angle δ<sub>HWA</sub>. The enhanced VGR has the same value as a conventional VGR if the handwheel angle δ<sub>HWA</sub> is smaller than a threshold δ<sub>th</sub>, and decreases as the handwheel angle δ<sub>HWA</sub> increases beyond the threshold δ<sub>th</sub>. The threshold δ<sub>th</sub> is the critical steering angle; steering angles larger than the threshold δ<sub>th</sub> result in vehicle tires operating in their nonlinear region.
To accommodate the various needs of different drivers, the adaptive VGR system of the present invention incorporates the driving skill level S, together with the vehicle speed v and the handwheel angle δ<sub>HWA</sub>, to determine the variable gear ratio. The adaptive VGR r<sub>adaptive</sub> can be expressed as:
<FORM>r<sub>adaptive</sub>=f<sub>adaptive</sub>(v, δ<sub>HWA</sub>, S) (1)</FORM>
Where S represents the driving skill level, such as S=1-5, where 1 represents a low-skill driver and 5 represents a high-skill driver.
Adaptive VGR r<sub>adaptive </sub>can be further derived from the enhanced VGR as:
<FORM>r<sub>adaptive</sub>=f<sub>adaptive</sub>(v, δ<sub>HWA</sub>, S)=k(v, δ<sub>HWA</sub>, S)×f<sub>enhanced</sub>(v, δ<sub>HWA</sub>) (2)</FORM>
Where k(v, δ<sub>HWA</sub>, S) is a scaling factor.
The vehicle speed v and the handwheel angle δ<sub>HWA</sub> can be measured by in-vehicle sensors, such as wheel speed sensors and a steering angle sensor. Driving skill level can be set by the driver or characterized by algorithms based on vehicle sensor information.
Because skilled drivers typically prefer the vehicle to be more responsive, a lower gear ratio is preferred to yield a higher yaw rate gain. On the other hand, drivers need to have the capability to control the vehicle as it becomes more sensitive with a lower gear ratio, especially at higher speeds. In other words, a low gear ratio at higher speeds will only be available to skillful drivers. Therefore, the scaling factor k is smaller for drivers with a higher skill level.
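The gear-ratio adaptation described above can be sketched as follows. This is a minimal illustration, not the patent's calibration: the threshold value, the reduction law beyond the threshold, and the linear form of the scaling factor k are all illustrative assumptions.

```python
def enhanced_vgr(v, delta_hwa, base_vgr, delta_th=0.6, c=0.3):
    """Sketch of f_enhanced(v, delta_HWA): equal to the conventional
    speed-based VGR below the steering threshold delta_th (rad), and
    decreasing as the handwheel angle grows beyond it, compensating
    for tire nonlinearity. delta_th and c are illustrative values."""
    r = base_vgr(v)
    if abs(delta_hwa) > delta_th:
        # reduce the ratio in proportion to the excess over the threshold
        r *= 1.0 / (1.0 + c * (abs(delta_hwa) - delta_th))
    return r

def adaptive_vgr(v, delta_hwa, skill, base_vgr):
    """Equation (2): r_adaptive = k(v, delta_HWA, S) * f_enhanced(v, delta_HWA).
    The scaling factor k decreases with skill level S (1=low ... 5=high),
    giving skilled drivers a lower ratio and a more responsive vehicle."""
    k = 1.0 - 0.05 * (skill - 1)          # illustrative: k in [0.8, 1.0]
    return k * enhanced_vgr(v, delta_hwa, base_vgr)
```

A conventional speed-dependent VGR curve can be supplied as `base_vgr`, for example `lambda v: 14.0 + 0.08 * v`; the sketch then reproduces the stated behavior that a higher skill level yields a smaller ratio and hence a higher yaw rate gain.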
In order to facilitate control adaptation based on driving skill, the present invention further proposes a method and system for achieving an in-vehicle characterization of a driver's driving skill. The characterization result can be used in various vehicle control algorithms that adapt to a driver's driving skill. However, such control algorithms are neither prerequisites nor components for the in-vehicle characterization system of the invention.
FIG. 2 is a plan view of a vehicle 10 including various sensors, vision systems, controllers, communications systems, etc., one or more of which may be applicable for the adaptive vehicle control systems discussed below. The vehicle 10 includes mid-range sensors 12, 14 and 16 at the back, front and sides, respectively, of the vehicle 10. A front vision system 20, such as a camera, provides images towards the front of the vehicle 10 and a rear vision system 22, such as a camera, provides images towards the rear of the vehicle 10. A GPS or a differential GPS system 24 provides GPS coordinates, and a vehicle-to-infrastructure (V2I) or vehicle-to-vehicle (V2V), which can be collectively referred to as V2X, communications system 26 provides communications between the vehicle 10 and other structures, such as other vehicles, roadside systems, etc., as is well understood by those skilled in the art. The vehicle 10 also includes an enhanced digital map (EDMAP) 28 and an integration controller 30 that provides surround sensing data fusion.
FIG. 3 is a block diagram of an adaptive control system 40 that provides in-vehicle characterization of a driver's driving skill, according to an embodiment of the present invention. The system 40 has application for characterizing a driver's driving skill based on various types of characteristic maneuvers, such as curve-handling maneuvers, vehicle launching maneuvers, left/right turns, U-turns, highway on/off-ramp maneuvers, lane changes, etc.
The system 40 employs various known vehicle sensors identified as an in-vehicle sensor suite 42. The sensor suite 42 is intended to include one or more of a handwheel angle sensor, a yaw rate sensor, a vehicle speed sensor, wheel speed sensors, a longitudinal accelerometer, a lateral accelerometer, headway distance sensors, such as a forward-looking radar/lidar or a camera, a throttle opening sensor, a brake pedal position/force sensor, etc., all of which are well known to those skilled in the art. The sensor signals from the sensor suite 42 are provided to a signal processor 44 that processes the sensor measurements to reduce sensor noise and sensor biases. Various types of signal processing can be used by the processor 44, many of which are well known to those skilled in the art.
The processed sensor signals from the signal processor 44 are provided to a maneuver identification processor 46, a data selection processor 48 and a traffic/road condition recognition processor 50. The maneuver identification processor 46 identifies various types of characteristic maneuvers performed by the driver. Such characteristic maneuvers include, but are not limited to, vehicle headway control, vehicle launching, highway on/off-ramp maneuvers, and steering-engaged maneuvers, which may be further separated into curve-handling maneuvers, lane changes, left/right turns, U-turns, etc. Details of using those types of characteristic maneuvers for skill characterization will be discussed below. Maneuver identification is provided because the specific methodologies used in skill characterization may differ from one type of characteristic maneuver to another. For example, characterization based on headway control behaviors during vehicle following uses headway distance and closing speed from a forward-looking radar, while characterization based on curve-handling maneuvers involves yaw rate and lateral acceleration. Therefore, the types of maneuvers conducted by the driver need to be identified. When the maneuver identification processor 46 identifies a particular type of maneuver of the vehicle 10, it will output a corresponding identification value to the data selection processor 48.
Not all maneuvers can be easily identified from in-vehicle motion sensor measurements. Further, some maneuvers reveal driving skill better than others. Such maneuvers that help distinguish driving skill are referred to as characteristic maneuvers. Consequently, only data corresponding to characteristic maneuvers is selected and stored for the skill characterization. The maneuver identification processor 46 identifies characteristic maneuvers based on any combination of in-vehicle sensors, such as a vehicle speed sensor, a longitudinal acceleration sensor, a steering wheel angle sensor, a steering angle sensor at the wheels, a yaw rate sensor, a lateral acceleration sensor, a brake pedal position sensor, a brake pedal force sensor, an acceleration pedal position sensor, an acceleration pedal force sensor, a throttle opening sensor, a suspension travel sensor, a roll rate sensor, a pitch rate sensor, as well as long-range and short-range radars or lidars and ultrasonic sensors, cameras, GPS or DGPS map information, and vehicle-to-infrastructure/vehicle communication. The maneuver identification processor 46 may further utilize any combination of information processed from the measurements from those sensors, including the derivatives and integrated signals. Once the maneuver identification processor 46 detects a characteristic maneuver, it informs the data selection processor 48 to start recording data. The maneuver identification processor 46 also identifies the end of the maneuver so that the data selection processor 48 stops recording. The traffic information from the recognition processor 50 may also be incorporated in the recording process to determine whether the maneuver contains adequate information for skill characterization.
The traffic/road condition recognition processor 50 uses the sensor signals to recognize traffic and road conditions. Traffic conditions can be evaluated based on traffic density. Roadway conditions include at least two types of conditions, specifically, roadway type, such as freeway/highway, city streets, winding roads, etc., and ambient conditions, such as dry/wet road surfaces, foggy, rainy, etc. Systems that recognize road conditions based on sensor input are well known to those skilled in the art, and need not be described in detail herein.
A skill characterization processor 52 receives information of a characteristic maneuver from the maneuver identification processor 46, the traffic and road condition information from the traffic/road condition recognition processor 50 and the recorded data from the data selection processor 48, and classifies driving skill based on the information. As the maneuver identifier processor 46 determines the beginning and the end of a maneuver, the data selection processor 48 stores the corresponding data segment based on the variables Start_flag, End_flag, t<sub>start </sub>and t<sub>end</sub>.
The output from the skill characterization processor 52 is a value that identifies a driving skill over a range of values, such as a one for a low-skill driver up to a five for a high-skill driver. The particular skill characterization value is stored in a skill profile trip-logger 54 for each particular characteristic maneuver identified by the identification processor 46. The trip-logger 54 can be a simple data array where each array entry contains a time index, the maneuver information, such as maneuver identifier M<sub>id</sub>, traffic/road condition information, such as traffic index and road index, and the corresponding characterization result. To enhance the accuracy and robustness of the characterization, a decision fusion processor 56 integrates recent results with previous results stored in the trip-logger 54.
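The trip-logger data array described above can be sketched minimally as follows; the field names and the averaging used as a stand-in for decision fusion are illustrative assumptions, not the patent's implementation.

```python
from collections import namedtuple

# Hypothetical trip-logger entry matching the described fields: a time index,
# the maneuver identifier M_id, traffic/road condition indices, and the
# skill characterization result for that maneuver.
LogEntry = namedtuple("LogEntry", "t m_id traffic_index road_index skill")

class TripLogger:
    """Simple data-array trip-logger: one entry per characterized maneuver."""
    def __init__(self):
        self.entries = []

    def log(self, t, m_id, traffic_index, road_index, skill):
        self.entries.append(LogEntry(t, m_id, traffic_index, road_index, skill))

    def fused_skill(self):
        """Naive stand-in for the decision fusion processor 56: average the
        stored characterization results; the real fusion may weight by
        maneuver type, traffic and road conditions."""
        if not self.entries:
            return None
        return sum(e.skill for e in self.entries) / len(self.entries)
```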
FIG. 4 is a block diagram of an adaptive control system 60 that provides in-vehicle characterization of driving skill, according to another embodiment of the present invention, where like elements to the system 40 are identified by the same reference numeral. In the system 60, a vehicle positioning processor 62 is included that receives the processed sensor measurement signals from the signal processor 44. In addition, the system 60 includes a global positioning system (GPS) or differential GPS 64, such as the GPS 24, and an enhanced digital map 66, such as the EDMAP 28. Information from the vehicle positioning processor 62 is provided to the traffic/road condition recognition processor 50 to provide vehicle location information. Additionally, the system 60 includes a surround sensing unit 68, which comprises long-range and short-range radars/lidars at the front of the vehicle 10, short-range radars/lidars on the sides and/or at the back of the vehicle 10, or cameras around the vehicle 10, and a vehicle-to-vehicle/infrastructure communication system 70 that also provides information to the traffic/road condition recognition processor 50 for additional information concerning traffic and road conditions.
The vehicle positioning processor 62 processes the GPS/DGPS information, as well as information from vehicle motion sensors, to derive absolute vehicle positions in earth inertial coordinates. Other information, such as vehicle heading angle and vehicle speed, may also be derived. The vehicle positioning processor 62 further determines vehicle location with regard to the EDMAP 66 and retrieves relevant local road/traffic information, such as road curvature, speed limit, number of lanes, etc. Various techniques for GPS/DGPS based positioning and vehicle locating are well known to those skilled in the art. Similarly, techniques for surround sensing fusion and vehicle-to-vehicle/infrastructure (V2X) communications are also well known to those skilled in the art. Thus, by using this information, the traffic/road condition recognition processor 50 has a stronger capability of more accurately recognizing traffic and road conditions.
FIG. 5 is a block diagram of an adaptive control system 80 similar to the control system 60, where like elements are identified by the same reference numeral, according to another embodiment of the present invention. In this embodiment, the system 80 is equipped with a driver identification unit 82, a skill profile database 84 and a trend analysis processor 86 to enhance system functionality. The driver identification unit 82 can identify the driver by any suitable technique, such as by pressing a key fob button. Once the driver is identified, his or her skill profile during each trip can be stored in the skill profile database 84. Further, a separate skill profile history can be built up for each driver over multiple trips, and can be readily retrieved to be fused with information collected during the current vehicle trip. Further, a deviation of the skill exhibited in the current trip from that in the profile history may imply a change in driver state. For example, a high-skill driver driving poorly may indicate that he or she is in a hurry or under stress.
As mentioned above, various characteristic maneuvers can be used in the skill characterization, such as vehicle headway control, vehicle launching, highway on/off-ramp maneuvers, and steering-engaged maneuvers, which refer to maneuvers that involve a relatively large steering angle and/or a relatively large vehicle yaw rate. The steering-engaged maneuvers may be further broken down into subcategories, such as lane changes, left/right turns, U-turns and curve-handling maneuvers where a vehicle is negotiating a curve. Further discussion of identifying these specific subcategories of steering-engaged maneuvers is included below together with the corresponding illustrations.
In one embodiment, the steering-engaged maneuvers are treated as one type of characteristic maneuver. Accordingly, the reliable indicators of a steering-engaged maneuver include a relatively large vehicle yaw rate and/or a relatively large steering angle. In one embodiment, the yaw rate is used to describe the operation of the maneuver identification processor 46, where a steering-angle-based data selector would work in a similar manner. To maintain the data integrity of the associated steering-engaged maneuver, a certain period, such as T=2 s, of data before and after the steering-engaged maneuver is also desired.
FIG. 6 is a flow chart diagram 280 showing a process that can be used by the maneuver identification processor 46 to determine steering-engaged maneuvers. The maneuver identifier value M<sub>id</sub> is used to identify the type of the characteristic maneuver, as will be discussed in further detail below. Each of these discussions will use a maneuver identifier value M<sub>id</sub> of 0, 1 or 2 to identify the maneuver. This is merely for illustration purposes in that a system that incorporated maneuver detection for all of the various maneuvers would use a different value for the maneuver identifier value M<sub>id</sub> for each separate maneuver based on the type of specific characteristic maneuver.
At box 282, the maneuver identification algorithm begins by reading the filtered yaw rate signal ω from the signal processor 44. The algorithm then proceeds according to its operation states denoted by two Boolean variables Start_flag and End_flag, where Start_flag is initialized to zero and End_flag is initialized to one. At block 284, the algorithm determines whether Start_flag is zero.
If Start_flag is zero, meaning that the vehicle 10 is not in a steering-engaged maneuver, the algorithm determines if the vehicle 10 has started a steering-engaged maneuver based on the yaw rate signal ω at decision diamond 286 by determining whether ω(t)≧ω<sub>med</sub>, where ω<sub>med</sub> is 5° per second in one non-limiting embodiment. If this condition is met, meaning that the vehicle 10 has started a steering-engaged maneuver, the algorithm sets Start_flag to one and End_flag to zero at box 288, and starts a timer t<sub>start</sub>=t−T at box 290. If the condition of the decision diamond 286 has not been met, meaning that the vehicle 10 has not started a steering-engaged maneuver, then the algorithm returns and waits for the next sensor measurement at block 292.
If Start_flag is not zero at the block 284, meaning that the vehicle 10 is in a steering-engaged maneuver, the algorithm determines whether the steering-engaged maneuver is completed by determining whether the yaw rate signal ω has been reduced to near zero at block 294 by max(ω(t−T:t))≦ω<sub>small</sub>, where ω<sub>small</sub> is 2° per second in one non-limiting embodiment. If this condition is not met, meaning that the vehicle 10 is still in the steering-engaged maneuver, the algorithm returns to the block 292 to collect the next cycle of data. If the condition of the block 294 has been met, meaning that the vehicle 10 has completed the steering-engaged maneuver, the algorithm sets Start_flag to zero, End_flag to one and the timer t<sub>end</sub>=t−T at box 296. The algorithm then sets the maneuver identifier value M<sub>id</sub> to one at box 298, meaning that a steering-engaged maneuver has just occurred and is ready to be classified.
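The Start_flag/End_flag state machine of FIG. 6 can be sketched as a scan over a sampled yaw-rate trace. This is a simplified offline sketch of the described logic, assuming a fixed sample period and the non-limiting threshold values from the text; the batch scan replaces the per-cycle sensor reads of the real system.

```python
def detect_steering_maneuvers(yaw_rate, dt=0.01, T=2.0, w_med=5.0, w_small=2.0):
    """Scan a yaw-rate trace (deg/s, sampled every dt seconds) and return
    (t_start, t_end) pairs for detected steering-engaged maneuvers.
    A maneuver starts when |omega| >= w_med, and ends once |omega| has
    stayed at or below w_small for the trailing window of length T;
    T seconds of data before and after the maneuver are retained."""
    maneuvers = []
    start_flag, t_start = False, 0.0
    win = max(1, int(T / dt))            # samples in the trailing window
    for i, w in enumerate(yaw_rate):
        t = i * dt
        if not start_flag:
            if abs(w) >= w_med:          # decision diamond 286: maneuver begins
                start_flag = True
                t_start = t - T          # box 290: keep T s of pre-maneuver data
        else:
            # block 294: yaw rate near zero over the last T seconds
            if max(abs(x) for x in yaw_rate[max(0, i - win):i + 1]) <= w_small:
                maneuvers.append((t_start, t - T))   # box 296: t_end = t - T
                start_flag = False
    return maneuvers
```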
The traffic/road condition recognition processor 50 detects traffic conditions. The traffic conditions can be classified based on traffic density, for example, by using a traffic density condition index Traffic<sub>index</sub>. The higher the index Traffic<sub>index</sub>, the higher the traffic density. Such a traffic index can also be derived based on measurements from sensors, such as radar/lidar, camera and DGPS with inter-vehicle communication.
As an example, the processor 50 can be based on a forward-looking radar as follows. The detection process involves two steps, namely, inferring the number of lanes and computing the traffic index Traffic<sub>index</sub>. Usually, radar measurements are processed to establish and maintain individual tracks for moving objects. When such information is stored in a buffer for a short period of time, such as five seconds, the current road geometry can be estimated by fitting the individual tracks with polynomials of the same structure and parameters except for their offsets. The estimated offsets can be used to infer the number of lanes, as well as the relative position of the lane occupied by the subject vehicle.
With the estimate of the number of lanes, the traffic index Traffic<sub>index </sub>can be determined as:
<FORM>Traffic<sub>index</sub>=f(N<sub>lane</sub>, N<sub>track</sub>, R, v) (3)</FORM>
Where N<sub>lane </sub>is the number of lanes, N<sub>track </sub>is the number of vehicles being tracked, R is the range to the preceding vehicle and v is the speed of the subject vehicle.
An alternative and also more objective choice is to use the average range between vehicles in the same lane and the average speed on the road. However, the computation of such variables would be more complicated.
An example of the function of equation (3) can be given as:
<FORM>Traffic<sub>index</sub>=a(N<sub>track</sub>/N<sub>lane</sub>)+b(v/R) if N<sub>track</sub>>0; Traffic<sub>index</sub>=0 if N<sub>track</sub>=0 (4)</FORM>
Thus, the larger N<sub>track</sub>/N<sub>lane</sub> and v/R, the larger the traffic index Traffic<sub>index</sub>, i.e., the density of traffic. For the situation where there is no preceding or forward vehicle, i.e., N<sub>track</sub> equals zero, the traffic index Traffic<sub>index</sub> is set to zero.
It is noted that in the cases where there are multiple lanes, but no vehicles in the adjacent lanes, the number of lanes will be estimated as one, which is incorrect. However, in such cases, the driver has more freedom to change lanes instead of following close to the preceding vehicle. Consequently v/R should be small and so should the traffic index Traffic<sub>index</sub>.
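Equation (4) can be sketched directly; the weights a and b are illustrative tuning parameters, as the text leaves their values open.

```python
def traffic_index(n_track, n_lane, v, R, a=1.0, b=1.0):
    """Equation (4): the index grows with the number of tracked vehicles
    per lane (N_track/N_lane) and with the speed-to-range ratio v/R.
    The weights a and b are illustrative, not calibrated values."""
    if n_track == 0:
        return 0.0          # no preceding vehicle: index defined as zero
    return a * (n_track / n_lane) + b * (v / R)
```

For example, nine tracked vehicles across three lanes with a short 15 m range at 30 m/s yields a much larger index than two tracked vehicles with a 60 m range, matching the density interpretation in the text.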
A second embodiment for recognizing traffic conditions in terms of traffic density is based on DGPS with inter-vehicle communication. With the position and motion information of surrounding vehicles from inter-vehicle communication, the subject vehicle can assess the number of surrounding vehicles within a certain distance, as well as the average speed of those vehicles. Further, the subject vehicle can determine the number of lanes based on the lateral distance between itself and its surrounding vehicles. To avoid counting vehicles and lanes for opposing traffic, the moving direction of the surrounding vehicles should be taken into consideration. With this type of information, the traffic index Traffic<sub>index</sub> can be determined by equation (4).
While equations (3) and (4) use the vehicle's headway distance R<sub>hwd</sub> to the preceding vehicle as the range value R, it can be more accurate to use a weighted range variable based on the longitudinal gaps between vehicles in the same lane as the range variable R when situations permit. With a side-view sensor to detect a passing vehicle, the relative speed Δv between the passing vehicle and the subject vehicle can be detected to provide the timing ΔT between one vehicle and another. Therefore, the ith occurrence of the gap R<sub>gap</sub> between vehicles in the adjacent lane can be estimated as:
<FORM>R<sub>gap</sub>(i)=Δv*ΔT (5)</FORM>
The range variable R can be estimated as a weighted average between the headway distance R<sub>hwd </sub>and the running average of the adjacent lane vehicle gaps as:
<FORM>R=aR<sub>hwd</sub>+(1−a)(Σ<sub>i=1</sub><sup>N</sup>R<sub>gap</sub>(i))/N (6)</FORM>
Where a is a weighting parameter between 0 and 1.
When a rear-looking sensor is available, the trailing vehicle distance R<sub>trail</sub> can be measured. This measurement can further be incorporated for range calculation, such as:
<FORM>R=(a/2)(R<sub>hwd</sub>+R<sub>trail</sub>)+(1−a)(Σ<sub>i=1</sub><sup>N</sup>R<sub>gap</sub>(i))/N (7)</FORM>
Traffic density can further be assessed using vehicle-to-vehicle (V2V) communications with GPS location information communicated among the vehicles. While the penetration of V2V-equipped vehicles is not 100%, the average distances between vehicles can be estimated based on the geographic locations provided by the GPS sensor. However, the information obtained through V2V communications needs to be qualified for further processing. First, a map system can be used to check whether the location of an object vehicle is along the same route as the subject vehicle by comparing the GPS-detected location of the object vehicle with the map database. Second, the relative speed of the object vehicle and the subject vehicle is assessed to make sure the object vehicle is not traveling in the opposite lane. Similar information about object vehicles relayed through multiple stages of V2V communications can be analyzed the same way. As a result, a collection of distances to each of the V2V-equipped vehicles can be obtained. The average distance D<sub>V2V</sub> of these vehicles can be computed as an indication of traffic density.
The traffic index Traffic<sub>index </sub>can further be improved by:
<FORM>Traffic<sub>index</sub>=pC<sub>1</sub>D<sub>V2V</sub>+C<sub>2</sub>Traffic<sub>index_raw</sub> (8)</FORM>
Where Traffic<sub>index_raw</sub> is based on equation (4), p is the percentage penetration of V2V-communications-equipped vehicles in a certain locale, determined by a database and GPS sensing information, and C<sub>1</sub> and C<sub>2</sub> are weighting factors.
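Equation (8) can be sketched as follows; the weight values are purely illustrative, and the sign and magnitude of C<sub>1</sub> in particular are calibration choices the text leaves open.

```python
def blended_traffic_index(p, d_v2v, raw_index, c1=0.01, c2=1.0):
    """Equation (8): Traffic_index = p * C1 * D_V2V + C2 * Traffic_index_raw.
    p is the local penetration of V2V-equipped vehicles, d_v2v the average
    V2V-reported inter-vehicle distance, and raw_index the equation (4)
    result. c1 and c2 are illustrative weighting factors."""
    return p * c1 * d_v2v + c2 * raw_index
```

Note that with zero V2V penetration (p=0) the blended index falls back to the raw equation (4) index, which matches the intent of weighting the V2V term by penetration.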
The traffic index Traffic<sub>index</sub> can be computed using any of the above-mentioned approaches. However, it can be further rationalized for its intended purpose of gauging a driver's behavior to assess driving skill in light of the traffic conditions. For this purpose, the traffic index Traffic<sub>index</sub> can further be modified based on its geographic location, reflecting the norm of physical traffic density as well as the average driving behavior.
Statistics can be established offline to provide the average unscaled traffic indices, based on any of the above calculations, for specific locations, for example, a crowded city as opposed to a metropolitan area or a campus, or anywhere else in the world. This information can be stored in an offsite installation or infrastructure accessible through vehicle-to-infrastructure communications. When such information is available, the traffic index Traffic<sub>index </sub>can be normalized against the statistical mean for the specific location, providing a more accurate assessment of driving skill based on specific behavior over certain detected maneuvers.
The traffic/road condition recognition processor 50 also recognizes road conditions. Road conditions of interest include roadway type, road surface conditions and ambient conditions. Accordingly, three indexes can be provided to reflect these three aspects of the road conditions, namely road<sub>type</sub>, road<sub>surface </sub>and road<sub>ambient</sub>, respectively.
FIG. 7 is a block diagram of a system 300 that can be used to recognize and integrate these three aspects of the road condition. The system 300 includes a road type determination processor 302 that receives sensor information from various sensors in the vehicle 10 that are suitable to provide roadway type. The output of the road type determination processor 302 is the roadway condition index road<sub>type</sub>. The roadway types can be categorized in many different ways. For driving characterization, the interest is in how much freedom the roadway provides to a driver. Therefore, it is preferable to categorize roadways according to their speed limit, the typical throughput of the roadway, the number of lanes in each travel direction, the width of the lanes, etc. For example, the present invention categorizes roadways in four types, namely, urban freeway, urban local, rural freeway and rural local. The two freeway types have a higher speed limit than the two local roadway types. The urban freeway typically has at least three lanes in each direction of travel and the rural freeway typically has one to two lanes in each direction. The urban local roadways have wider lanes and more traffic-controlled intersections than the rural local roadways. Accordingly, the roadway type can be recognized based on the following road characteristics, namely, the speed limit, the number of lanes, the width of the lanes and the throughput of the road, if available.
For systems of this embodiment of the invention, the images from a forward-looking camera can be processed to determine the current speed limit based on traffic sign recognition, the number of lanes and the lane width. In other embodiments, the vehicles can be equipped with a GPS or DGPS with an enhanced digital map (EDMAP), or a GPS or DGPS with vehicle-to-infrastructure communications, or both. If an EDMAP is available, the EDMAP directly contains the road characteristics information. The EDMAP may even contain the roadway type, which can be used directly. If vehicle-to-infrastructure communications are available, the vehicle will be able to receive those road characteristics and/or the roadway type in the communication packets from the infrastructure.
With this information, the processor 302 categorizes the roadway type based on the road characteristics, or the vehicle may directly use the roadway type from the EDMAP 28 with the communications.
FIG. 8 is a flow chart diagram 320 showing a process to provide roadway type recognition in the processor 302, according to one nonlimiting embodiment of the present invention. In this example, the roadway type condition index road<sub>type </sub>is identified as 1 at box 322, as 2 at box 324, as 3 at box 326 and as 4 at box 328, where index 1 is for an urban freeway, index 2 is for a rural freeway, index 3 is for an urban local road and index 4 is for a rural local road. The roadway type recognition starts with reading the four characteristics. If the current speed limit is above 55 mph at block 330, the roadway is regarded to be either an urban freeway or a rural freeway. The process then determines whether the number of lanes is greater than two at block 332, and if so, the roadway is of type 1 for an urban freeway at the box 322, otherwise the roadway is a rural freeway of type 2 having two or fewer lanes at the box 324. If the speed limit is not above 55 mph at the block 330, the algorithm determines whether the number of lanes is greater than or equal to two at block 334. If the number of lanes is at least two, the road is considered to be an urban local roadway of type 3 at the box 326, otherwise it is a rural local roadway of type 4 at the box 328.
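The decision logic of the FIG. 8 flow chart can be sketched in a few lines of Python; the function name is hypothetical, and the branch conditions follow the text:

```python
def roadway_type(speed_limit_mph, num_lanes):
    """Classify roadway type per the FIG. 8 flow chart (illustrative sketch).

    Returns: 1 = urban freeway, 2 = rural freeway,
             3 = urban local,   4 = rural local.
    """
    if speed_limit_mph > 55:                 # block 330: freeway branch
        return 1 if num_lanes > 2 else 2     # block 332
    return 3 if num_lanes >= 2 else 4        # block 334
```

Lane width and throughput, mentioned in the text as additional characteristics, would refine this classification when available.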
The roadway surface affects the ease of control of a vehicle. For example, a low-coefficient surface has limited capability in providing longitudinal and lateral tire forces. As a result, a driver needs to be more careful driving on a low coefficient of friction surface than on a high coefficient of friction surface. Similarly, the disturbance generated by a rough surface makes the ride less comfortable and puts a higher demand on the driver's control over the vehicle. Such factors usually cause a driver to be more conservative. Because both the detection of the friction coefficient of a road surface and the detection of rough roads using in-vehicle sensors are well-known to those skilled in the art, a more detailed discussion is not needed herein.
The present invention uses the detection results to generate the road surface condition index road<sub>surface </sub>to reflect the condition of the road surface. For example, a road surface condition index road<sub>surface </sub>of zero represents a good surface that has a high coefficient of friction and is not rough, a road surface condition index road<sub>surface </sub>of one represents a moderate-condition surface that has a medium coefficient of friction and is not rough, and a road surface condition index road<sub>surface </sub>of two represents a bad surface that has a low coefficient of friction or is rough. Returning to FIG. 7, the system 300 includes a road surface condition processor 304 that receives the sensor information, and determines whether the road surface condition index road<sub>surface </sub>indicates a moderate coefficient of friction surface at box 308 or a rough surface at box 310.
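The three-level surface mapping can be sketched as follows; the friction-coefficient thresholds (0.7 and 0.4) are assumed values for illustration only, as the text does not specify them, and only the 0/1/2 index semantics come from the text:

```python
def road_surface_index(mu, is_rough, mu_high=0.7, mu_med=0.4):
    """Map an estimated friction coefficient and roughness flag to road_surface.

    0 = good (high mu, not rough), 1 = moderate (medium mu, not rough),
    2 = bad (low mu or rough). Thresholds mu_high/mu_med are assumptions.
    """
    if is_rough or mu < mu_med:   # low friction or rough surface: bad
        return 2
    if mu >= mu_high:             # high friction, smooth: good
        return 0
    return 1                      # medium friction, smooth: moderate
```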
The ambient conditions mainly concern factors that affect visibility, such as the light condition (day or night) and the weather condition, such as fog, rain, snow, etc. The system 300 includes an ambient condition processor 306 that provides the road ambient condition index road<sub>ambient</sub>. The ambient condition processor 306 includes a light level detection box 312 that provides an indication of the light level, a rain/snow detection box 314 that provides a signal of the rain/snow condition and a fog detection box 316 that provides a detection of whether fog is present, all of which are combined to provide the road ambient condition index road<sub>ambient</sub>.
The sensing of the light condition by the box 312 can be achieved by a typical twilight sensor that senses the light level as seen by a driver for automatic headlight control. Typically, the light level output is a current that is proportional to the ambient light level. Based on this output, the light level can be computed and the light condition can be classified into several levels, such as 0-2, where zero represents bright daylight and two represents a very dark condition. For example, light<sub>level</sub>=0 if the computed light level is higher than the threshold L<sub>high</sub>, where L<sub>high</sub>=300 lux, light<sub>level</sub>=1 if the light level is between the thresholds L<sub>high </sub>and L<sub>low</sub>, where L<sub>low </sub>can be the headlight activation threshold or 150 lux, and light<sub>level</sub>=2 if the light level is lower than the threshold L<sub>low</sub>.
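The light-level classification above can be written directly in Python; the default thresholds are the exemplary 300 lux and 150 lux values from the text, and the function name is hypothetical:

```python
def light_level(lux, l_high=300.0, l_low=150.0):
    """Classify ambient light into levels 0-2 from a twilight-sensor reading.

    0 = bright daylight (above L_high), 1 = between L_high and L_low,
    2 = very dark (below L_low). Defaults follow the exemplary text values.
    """
    if lux > l_high:
        return 0
    if lux >= l_low:
        return 1
    return 2
```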
The rain/snow condition can be detected by the box 314 using an automatic rain sensor that is typically mounted on the inside surface of the windshield and is used to support the automatic mode of the windshield wipers. The most common rain sensor transmits an infrared light beam at a 45° angle into the windshield from the inside near the lower edge, and if the windshield is wet, less light makes it back to the sensor. Some rain sensors are also capable of sensing the degree of the rain so that the wipers can be turned on at the right speed. Therefore, the rain/snow condition can be directly recognized based on the rain sensor detection. Moreover, the degree of the rain/snow can be determined by either the rain sensor or the windshield wiper speed. Alternatively, the rain/snow condition can be detected solely based on whether the windshield wiper has been on for a certain period of time, such as 30 seconds. The rain/snow condition can be categorized into 1+N levels with rain<sub>level</sub>=0 representing no rain and rain<sub>level</sub>=i, with i indicating the speed level of the windshield wiper, since most windshield wipers operate at discrete speeds. Alternatively, if the vehicle is equipped with GPS or DGPS and vehicle-to-infrastructure communications, the rain/snow condition can also be determined based on rain/snow warnings broadcast from the infrastructure.
The fog condition can be detected by the box 316 using a forward-looking camera or lidar. The images from the camera can be processed to measure the visibility distance, such as the meteorological visibility distance defined by the International Commission on Illumination as the distance beyond which a black object of an appropriate dimension is perceived with a contrast of less than 5%. A lidar sensor detects fog by sensing the microphysical and optical properties of the ambient environment. Based on its received fields of view, the lidar sensor is capable of computing the effective radius of the fog droplets in foggy conditions and calculating the extinction coefficients at visible and infrared wavelengths. The techniques for fog detection based on a camera or lidar are well-known to those skilled in the art, and therefore need not be discussed in significant detail herein. This invention takes results from those systems, such as the visibility distance from a camera-based fog detector or, equivalently, the extinction coefficients at visible wavelengths from a lidar-based fog detection system, and classifies the fog condition accordingly. For example, the foggy condition can be classified into four levels 0-3 with 0 representing no fog and 3 representing a high-density fog. The determination of the fog density level based on the visibility distance can be classified as:
<FORM>fog<sub>level</sub>=0 if visibility ≥ visibility<sub>high</sub>; 1 if visibility<sub>med</sub> ≤ visibility < visibility<sub>high</sub>; 2 if visibility<sub>low</sub> ≤ visibility < visibility<sub>med</sub>; 3 if visibility < visibility<sub>low</sub> (9)</FORM>
Where exemplary values of the thresholds can be visibility<sub>high</sub>=140 m, visibility<sub>med</sub>=70 m and visibility<sub>low</sub>=35 m. Alternatively, if the vehicle 10 is equipped with GPS or DGPS and vehicle-to-infrastructure communications, the foggy condition may also be determined based on the fog warnings broadcast from the infrastructure.
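Equation (9) with the exemplary thresholds can be expressed as a short Python function; only the function name is an assumption:

```python
def fog_level(visibility_m, high=140.0, med=70.0, low=35.0):
    """Classify fog density 0-3 from visibility distance per equation (9).

    Default thresholds are the exemplary values from the text:
    visibility_high=140 m, visibility_med=70 m, visibility_low=35 m.
    """
    if visibility_m >= high:
        return 0       # no fog
    if visibility_m >= med:
        return 1
    if visibility_m >= low:
        return 2
    return 3           # high-density fog
```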
The road ambient condition index Road<sub>ambient </sub>then combines the detection results of the light condition, the rain/snow condition, and the foggy condition. The simplest way is to let Road<sub>ambient</sub>=[light<sub>level </sub>rain<sub>level </sub>fog<sub>level</sub>]<sup>T</sup>.
Alternatively, the road ambient condition index Road<sub>ambient </sub>could be a function of the detection results such as:
<FORM>Road<sub>ambient</sub>=f<sub>ambient</sub>(light<sub>level</sub>, rain<sub>level</sub>, fog<sub>level</sub>)=α<sub>1</sub>×light<sub>level</sub>+α<sub>2</sub>×rain<sub>level</sub>+α<sub>3</sub>×fog<sub>level</sub> (10)</FORM>
Where α<sub>1</sub>, α<sub>2</sub>, and α<sub>3 </sub>are weighting factors that are greater than zero. Note that the larger each individual detection result is, the worse the ambient condition is for driving. Consequently, the larger the road ambient condition index Road<sub>ambient</sub>, the worse the ambient condition is for driving.
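The scalar form of equation (10) is a simple weighted sum; in this sketch the default weights of 1.0 are illustrative assumptions, since the text only requires them to be positive:

```python
def road_ambient(light_level, rain_level, fog_level, a1=1.0, a2=1.0, a3=1.0):
    """Scalar ambient index per equation (10): a weighted sum of the three
    detection results. Larger values mean a worse ambient condition.
    Default weights are illustrative, not from the text."""
    return a1 * light_level + a2 * rain_level + a3 * fog_level
```

The vector form Road<sub>ambient</sub>=[light<sub>level </sub>rain<sub>level </sub>fog<sub>level</sub>]<sup>T </sup>would instead simply keep the three detection results as a tuple.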
The three road condition indexes, Road<sub>type</sub>, Road<sub>surface </sub>and Road<sub>ambient</sub>, are then combined by the system 300 to reflect the road condition. The combination can be a simple combination, such as Road<sub>index</sub>=[road<sub>type </sub>road<sub>surface </sub>road<sub>ambient</sub>]<sup>T</sup>, or a function, such as Road<sub>index</sub>=f<sub>road</sub>(road<sub>type</sub>, road<sub>surface</sub>, road<sub>ambient</sub>), which could be a lookup table.
Thus, recognized traffic/road conditions can be used in the skill characterization processor 52 in two ways. First, the data selection processor 48 determines the portion of data to be recorded for skill classification based on the maneuver identifier value M<sub>id </sub>and the recognized traffic/road conditions. Second, the skill classification processor 52 classifies driving skill based on driver inputs and vehicle motion, as well as the traffic/road conditions. That is, the traffic/road condition indexes are part of the discriminant features (discussed below) used in the skill classification.
Not all data measured during driving are useful. In fact, it would be unnecessary and uneconomical to record all the data. In this embodiment, information regarding the maneuver type and the traffic/road conditions helps determine whether the current driving behavior is valuable for the characterization. If so, the data is recorded. For example, if the traffic is jammed (e.g., traffic<sub>index</sub>>traffic<sub>th</sub>), it may be meaningless to characterize the skill based on headway distance. In such cases, the data should not be stored. On the other hand, if the traffic is moderate, the data should be recorded if the maneuver is a characteristic maneuver. To maintain the completeness of the recording, a short period (e.g., 1 second) of data is always recorded and refreshed. Once the maneuver identifier detects the beginning of a characteristic maneuver, the data selection module retains the short period of data and starts recording new data until the maneuver identifier detects the end of the maneuver. The recorded data is then used for skill classification.
FIG. 9 is a flow chart diagram 130 showing a process used by the data selection processor 48 for storing the data corresponding to a particular characteristic maneuver. This process for the data selection processor 48 can be employed for various characteristic maneuvers, including, but not limited to, a vehicle passing maneuver, a left/right-turn maneuver, a lane-changing maneuver, a U-turn maneuver, a vehicle launching maneuver and an on/off-ramp maneuver, all discussed in more detail below. At start block 132, the algorithm used by the data selection processor 48 reads the Boolean variables Start_flag and End_flag from the maneuver identifier processor 46. If Start_flag is zero or the traffic index Traffic<sub>index </sub>is greater than the traffic threshold δ<sub>th </sub>at decision diamond 134, the data selection processor 48 simply keeps refreshing its data storage to prepare for the next characteristic maneuver at block 136.
If either of the conditions of the decision diamond 134 is not met, then the algorithm determines whether a variable old_Start_flag is zero at block 138. If old_Start_flag is zero at the block 138, the algorithm sets old_Start_flag to one, and starts recording by storing the data between time t<sub>start </sub>and the current time t at box 140. The data can include vehicle speed, longitudinal acceleration, yaw rate, steering angle, throttle opening, range, range rate and processed information, such as traffic index and road condition index.
If old_Start_flag is not zero at the block 138, the data selection processor 48 is already in the recording mode, so it then determines whether the maneuver has been completed. Particularly, the algorithm determines whether End_flag is one at block 142 and, if so, the maneuver has been completed. The algorithm then resets old_Start_flag to zero at box 144, and determines whether the maneuver identifier value M<sub>id </sub>is zero at decision diamond 146. If the maneuver value M<sub>id </sub>is not zero at the decision diamond 146, then the data selection processor 48 outputs the recorded data, including the value M<sub>id</sub>, and increases the maneuver sequence index M<sub>seq</sub>=M<sub>seq</sub>+1 at box 148. The data selection processor 48 also stores the data between the time t<sub>start </sub>and the time t<sub>end </sub>together with the values M<sub>seq </sub>and M<sub>id</sub>, and sets a variable data_ready=1 to inform the skill characterization processor 52 that the recorded data is ready. The algorithm then begins a new session of data recording at box 150.
If End_flag is not one at the block 142, the maneuver has not been completed, and the data selection processor 48 continues storing the new data at box 152.
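The recording logic of the flow chart diagram 130 can be sketched as a small state machine in Python. The class name, the buffer handling and the sample format are assumptions for illustration; the flag and variable names (Start_flag, End_flag, old_Start_flag, M<sub>id</sub>, M<sub>seq</sub>, data_ready) follow the text:

```python
class DataSelector:
    """Illustrative sketch of the FIG. 9 data-selection logic."""

    def __init__(self, traffic_threshold):
        self.traffic_threshold = traffic_threshold
        self.old_start_flag = 0
        self.buffer = []      # samples recorded for the current maneuver
        self.records = []     # completed (m_seq, m_id, data) entries
        self.m_seq = 0
        self.data_ready = 0

    def step(self, start_flag, end_flag, traffic_index, m_id, sample):
        # Diamond 134: no maneuver, or traffic too dense -> just refresh the
        # short history buffer (block 136).
        if start_flag == 0 or traffic_index > self.traffic_threshold:
            self.buffer = [sample]
            self.old_start_flag = 0
            return
        if self.old_start_flag == 0:      # block 138/140: recording begins
            self.old_start_flag = 1
        self.buffer.append(sample)        # block 152: keep storing new data
        if end_flag == 1:                 # block 142: maneuver completed
            self.old_start_flag = 0       # box 144
            if m_id != 0:                 # diamond 146 / box 148
                self.m_seq += 1
                self.records.append((self.m_seq, m_id, list(self.buffer)))
                self.data_ready = 1       # inform the skill characterization
            self.buffer = []              # box 150: start a new session
```

A real implementation would store timestamped sensor vectors (speed, yaw rate, steering angle, etc.) rather than opaque samples.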
The collected data is then used to determine the driving skill, where the Boolean variable data_ready will be used by the skill characterization processor 52 to initiate the classification process.
Curve-handling maneuvers are one type of characteristic maneuver that can be used to characterize a driver's driving skill. Various other types of characteristic maneuvers include straight-line driving, left and right turns, vehicle launching and stopping, lane changes, and so on. Generally, the signals or measurements that most reveal the driving skill can differ from one maneuver to another. As a result, the corresponding original features, transformed features, final features, and skill classifiers will also be different. Each of the skill characterization modules is designed to classify a specific type of characteristic maneuver. Whenever a characteristic maneuver is detected, the in-vehicle measurements are collected accordingly, and these signals/measurements are input to the skill characterization module that is designed for that type of characteristic maneuver. The chosen skill characterization module then classifies the input pattern, i.e., the newly detected characteristic maneuver, and outputs the corresponding skill level. For example, upon the detection of a curve-handling maneuver, the in-vehicle measurements are collected until the vehicle exits the curve. The newly collected measurements are input to the skill characterization module corresponding to curve-handling maneuvers. Accordingly, that module derives original features from those measurements, extracts and selects final features, and classifies the pattern (represented by the final features) to generate a new classification result of skill level. While the output of that specific skill characterization module is updated, all other skill characterization modules maintain their existing results, which were generated based on previous characteristic maneuvers. The decision fusion module then combines the new result with the existing results and updates its final decision.
In the real world, factors such as traffic conditions and road/environmental conditions can affect a driver's driving performance. If such factors are left untreated, the driving skill characterization will reflect their influence. In other words, a driver who is characterized as a typical driver in normal weather may be characterized as a low-skill driver in bad weather. This invention describes means to incorporate the traffic and road/environmental conditions into the skill characterization so as to provide robust skill characterization.
According to one embodiment of the present invention, the skill characterization processor 52 classifies a driver's driving skill based on discriminant features. Although various classification techniques, such as fuzzy logic, clustering, neural networks (NN), self-organizing maps (SOM), and even simple threshold-based logic can be used, it is an innovation of the present invention to utilize such techniques to characterize a driver's driving skill. To illustrate how the skill characterization processor 52 works, an example of skill classification based on fuzzy C-means (FCM) clustering can be employed.
FIG. 10 is a flow chart diagram 160 showing such a fuzzy C-means process used by the skill characterization processor 52. However, as will be appreciated by those skilled in the art, any of the aforementioned classification techniques can be used for the skill classification. Alternatively, the discriminant features can be further separated into smaller sets and classifiers can be designed for each set in order to reduce the dimension of the discriminant features handled by each classifier.
Data is collected at box 162, and the algorithm employed in the skill characterization processor 52 determines whether the variable data_ready is one at decision diamond 164, and if not, the process ends at block 166. If data_ready is one at the decision diamond 164, the algorithm reads the recorded data from the data selection processor 48 at box 168 and changes data_ready to zero at box 170. The algorithm then selects discriminant features for the identified maneuver at box 172. The process to select discriminant features can be broken down into three steps, namely, deriving/generating original features from the collected data, extracting features from the original features, and selecting the final discriminant features from the extracted features. The algorithm then selects the classifier for the particular maneuver and uses the selected classifier to classify the maneuver at box 174. The processor then outputs the time or temporal index N, the skill (N) value of the assessed skill level at the Nth maneuver, the traffic index Traffic<sub>index</sub>, the road condition index Road<sub>index </sub>and the maneuver identifier value M<sub>id </sub>at box 176.
The skill characterization processor 52 can employ characterizers that determine the driving skill of the driver based on different features and different classification algorithms. In one nonlimiting embodiment, there are two characterizers, each having specific feature extractors and classifiers. FIG. 11 is a flow chart diagram 600 showing a method for processing content of a feature extractor in a characterizer in the skill characterization processor 52. The process starts at box 602, and a first characterizer identifies driver driving skill based on the autoregressive (AR) coefficients of sensor signals collected during a steering-engaged maneuver at box 604. For example, given the speed during a steering-engaged maneuver as a finite set of data, for example, v<sub>x</sub>(t<sub>k</sub>), k=1, 2, . . . N, the speed can be approximated by a qth-order AR model such that v<sub>x</sub>(t<sub>k</sub>)=α<sub>1</sub>v<sub>x</sub>(t<sub>k−1</sub>)+α<sub>2</sub>v<sub>x</sub>(t<sub>k−2</sub>)+ . . . +α<sub>q</sub>v<sub>x</sub>(t<sub>k−q</sub>), where α<sub>1</sub>, α<sub>2</sub>, . . . , α<sub>q </sub>are the coefficients of the AR model. Usually, the order of the AR model is much smaller than the length of the data, i.e., q<<N, and therefore the characteristics of the speed can be represented by a few AR coefficients. AR models can be built for each of the sensor signals, and the derived AR coefficients are used as the feature data for the characterizer. For example, if 10th-order AR models are used for the yaw rate, the speed, the longitudinal acceleration and a throttle opening signal, the total number of the feature data, i.e., the AR coefficients, will be 10×4=40. In cases where an even smaller number of the feature data is desired, data reduction can be performed on the coefficients at box 606. Data reduction methods, such as principal component analysis (PCA), are well-known to those skilled in the art and do not need to be described in detail herein.
The process returns at box 608 to collect data.
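The AR-coefficient feature extraction can be sketched as a least-squares fit in Python. The estimation method (ordinary least squares over lagged samples) is an assumption for illustration, since the text does not specify how the coefficients are derived, and the function name is hypothetical:

```python
import numpy as np

def ar_features(signal, order=10):
    """Least-squares fit of an AR(q) model
    v(t_k) = a1*v(t_{k-1}) + ... + aq*v(t_{k-q}),
    returning the q coefficients used as feature data (a sketch)."""
    v = np.asarray(signal, dtype=float)
    n = len(v)
    # Regression matrix: column j holds the samples lagged by j+1 steps.
    X = np.column_stack([v[order - j - 1 : n - j - 1] for j in range(order)])
    y = v[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs
```

Concatenating the coefficients of, e.g., 10th-order fits of yaw rate, speed, longitudinal acceleration and throttle opening would give the 10×4=40 features mentioned above.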
A more straightforward feature extraction that can be used in the second characterizer in the processor 52 is to extract signature values of the data, for example, the maximum yaw rate, the entering speed, the minimum speed, the speed drop, and how much time the driver applied a certain percentage of throttle, such as 80%, 70% and 60%, during the steering-engaged maneuver. The advantages of this type of feature extraction include a low requirement on the computation power and a small set of feature data ready to be used by the processor 52.
Various classification methods can be used by the skill characterization processor 52. For example, a neural network can be designed to identify the driver's driving skill. Once designed, the processing is straightforward: the feature data is input into the neural network and the neural network outputs the driver's driving skill. However, the design of the classifier usually needs both the input data and the desired output. With the feature data from the feature extractor, the derivation of the desired output becomes a major issue in the classifier design.
FIG. 12 is a block diagram of a classifier 610 that can be used in the skill characterization processor 52 based on such a design. For each steering-engaged maneuver there is a set of feature data, and there needs to be a corresponding driving skill that can be used as the desired output for the neural network training. Since the driving skill for each steering-engaged maneuver is not available, the classification problem is treated as an unsupervised pattern recognition problem and the driving skill associated with each steering-engaged maneuver is derived using data partitioning methods, such as FCM clustering. Thus, the classifier 610 includes a fuzzy clustering process at box 612 that receives a set of features, and those features with a cluster label are trained at box 614.
FIG. 13 is a flow chart diagram 620 showing a method for processing content in the fuzzy-clustering-based data partition of the classifier 610. The sample feature data is organized in an M-by-N matrix X as:
<FORM>X=[x<sub>11</sub> x<sub>12</sub> … x<sub>1N</sub>; x<sub>21</sub> x<sub>22</sub> … x<sub>2N</sub>; ⋮ ; x<sub>M1</sub> x<sub>M2</sub> … x<sub>MN</sub>] (11)</FORM>
Where M represents the number of steering-engaged maneuvers and N is the size of the feature data. Each row, [x<sub>i1 </sub>x<sub>i2 </sub>. . . x<sub>iN</sub>] (1≦i≦M), contains the feature data from steering-engaged maneuver i.
The process starts at box 622 with reading the feature data matrix X at box 624, and then sets an initial value for the partition number C (e.g., C=2) and an initial value for the validity measure E (e.g., E=∞, a very large number) at box 626. The process then continues with an iteration to determine the optimal number of partitions C<sub>opt</sub>, the optimal value of the validity measure E<sub>opt </sub>and the optimal output matrix Y<sub>opt </sub>at box 628 to box 636.
In each iteration, the feature data matrix X is partitioned into C clusters at the box 628, where the FCM clustering outputs the partition matrix Y and the corresponding validity measure E. The process then determines whether E is less than E<sub>opt </sub>at decision diamond 630, and if so, sets C<sub>opt</sub>=C, Y<sub>opt</sub>=Y and E<sub>opt</sub>=E at box 632, otherwise these values stay the same. The algorithm then increases C by 1 at box 634 and determines whether C<10 at decision diamond 636. If C is less than 10 at the decision diamond 636, then the algorithm returns to the box 628 to perform FCM clustering. Otherwise, the algorithm outputs Y<sub>opt </sub>and C<sub>opt </sub>at box 638 and returns to collecting data at box 640.
The optimal partition matrix Y<sub>opt </sub>is then used as the desired output for the classifier design. Alternatively, the optimal partition matrix Y<sub>opt </sub>can be hardened before it is used in the classifier design. The hardening process assigns each steering-engaged maneuver to the class that has the highest y<sub>ik</sub>, i.e., forcing y<sub>ij</sub>=1 if j=arg(max<sub>k=1 . . . C<sub>opt</sub></sub>(y<sub>ik</sub>)), otherwise y<sub>ij</sub>=0.
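A compact Python illustration of the FCM partitioning and the hardening step follows. This is a minimal sketch with an assumed fuzziness exponent m=2 and random initialization, not the patent's specific implementation, and it omits the validity-measure sweep over C from FIG. 13:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy C-means: returns (centers, U), where U has shape (c, M)
    and column i holds maneuver i's membership degrees in the c clusters."""
    rng = np.random.default_rng(seed)
    M = X.shape[0]
    U = rng.random((c, M))
    U /= U.sum(axis=0)                       # memberships sum to 1 per sample
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))          # standard FCM membership update
        U /= U.sum(axis=0)
    return centers, U

def harden(U):
    """Hardening: force each maneuver to its highest-membership class."""
    Y = np.zeros_like(U)
    Y[U.argmax(axis=0), np.arange(U.shape[1])] = 1.0
    return Y
```

In the FIG. 13 loop, `fcm` would be called for each candidate C, a validity measure E computed from (centers, U), and the best partition kept as Y<sub>opt</sub>.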
If there are multiple characterizers in the processor 52, their decisions will be fused together, and also with the decisions from previous steering-engaged maneuvers. The decision fusion conducts three tasks, namely, computing a traffic factor for the current decision, keeping a record of the decision history, which contains the decisions for all or recent steering-engaged maneuvers, and fusing the current decision with the decisions in the history. The traffic factor is used to account for the influence of the traffic condition on the driver's driving behavior. For example, a rough stop-and-go vehicle following behavior may be present for a high-skilled driver due to the bad behavior of the lead vehicle. Since a short headway distance/time can indicate traffic constraints that limit the driver to a less than normal maneuver, the headway distance/time can be used to calculate the traffic factor. A general rule is to decrease the traffic factor if the headway distance/time is relatively short, and vice versa. The traffic factor is used as a form of weighting factor in the decision fusion.
FIG. 14 is a flow chart diagram 650 showing a method for processing content of the decision fuser in the decision fusion processor 56. The process starts at box 652 and reads decisions D=[D<sub>1 </sub>D<sub>2 </sub>. . . D<sub>N</sub>], with D<sub>i</sub>=[p<sub>ki</sub>], (1≦k≦C, 0≦p<sub>ki</sub>≦1) at box 654, where D<sub>i </sub>is the decision of classifier i and p<sub>ki </sub>is the membership degree of the current steeringengaged maneuver in class k, according to classifier i. The fusion process then determines the traffic factor T<sub>f </sub>at box 656 and modifies the decision by multiplying it with the traffic factor D<sub>m</sub>=D×T<sub>f </sub>at box 658. The modified decisions D<sub>m </sub>are stored in a decision history matrix at box 660 before they are fused with decisions in the history. The process then provides fusion with previous decisions at box 662, such as majority vote, fuzzy integral and decision template. The process then outputs the fused decisions at box 664 and returns at box 666.
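The steps of boxes 652-666 can be sketched as below, assuming numpy. The headway-time thresholds inside `traffic_factor` are hypothetical values, and simple averaging stands in for the majority vote, fuzzy integral or decision template options:

```python
import numpy as np

def traffic_factor(headway_time, t_short=1.0, t_normal=2.5):
    """Heuristic traffic factor (box 656): down-weight decisions made under
    short headway times (constrained traffic); saturate at 1 otherwise.
    The thresholds t_short and t_normal are assumed values in seconds."""
    return float(np.clip((headway_time - t_short) / (t_normal - t_short), 0.0, 1.0))

class DecisionFuser:
    """Boxes 652-666: modify each decision by the traffic factor, keep a
    decision history, and fuse across classifiers and time by averaging."""
    def __init__(self):
        self.history = []                  # box 660: decision history

    def step(self, D, headway_time):
        D = np.asarray(D, dtype=float)     # box 654: N classifiers x C classes
        Tf = traffic_factor(headway_time)  # box 656
        Dm = D * Tf                        # box 658: modified decisions
        self.history.append(Dm)
        fused = np.mean(np.vstack(self.history), axis=0)  # box 662
        return fused                       # box 664: fused decision
```

A decision made under very short headway (heavy traffic) contributes nothing to the fused result, reflecting the rule that constrained maneuvers should count less.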
The traffic and road conditions can be incorporated in the skill characterization processor 52 using three different incorporation schemes. These schemes include tightly-coupled incorporation, which includes the traffic and road conditions as part of the features used for skill classification; select/switch incorporation, where multiple classifiers come together with feature extraction/selection designed for different traffic and road conditions, and classifiers are selected based on the traffic and road conditions associated with the maneuver to be identified; and decoupled-scaling incorporation, where generic classifiers are designed regardless of traffic and road conditions and the classification results are adjusted by multiplying scaling factors. Tightly-coupled incorporation and select/switch incorporation are carried out in the skill characterization processor 52, and decoupled-scaling incorporation can be included in either the skill characterization processor 52 or the decision fusion processor 56.
FIG. 15 is a block diagram of the skill characterization processor 52, according to another embodiment of the present invention. The maneuver identifier value M<sub>id </sub>from the maneuver identification processor 46 is applied to a switch 380 along with the recorded data from the data selection processor 48, and the traffic condition index Traffic<sub>index </sub>and the road condition index Road<sub>index </sub>from the traffic/road condition recognition processor 50. The switch 380 identifies a particular maneuver value M<sub>id</sub>, and applies the recorded data, the traffic index Traffic<sub>index </sub>and the road condition index Road<sub>index </sub>to a skill classification processor 382 for that particular maneuver. Each skill classification processor 382 provides the classification for one particular maneuver. An output switch 384 selects the classification from the processor 382 for the maneuver being classified and provides the skill classification value to the skill profile trip-logger 54 and the decision fusion processor 56, as discussed above.
FIG. 16 is a block diagram of a skill classification processor 390 that employs the tightly-coupled incorporation, and can be used for the skill classification processors 382, according to an embodiment of the present invention. In this maneuver classifying scheme, the traffic index Traffic<sub>index </sub>and the road condition index Road<sub>index </sub>are included as part of the original feature vector. The processor 390 includes an original feature processor 392 that receives the recorded data from the data selection processor 48 and identifies the original features from the recorded data. The original features, the traffic index Traffic<sub>index </sub>and the road condition index Road<sub>index </sub>are sent to a feature extraction processor 394 that extracts the features. When the features are extracted for the particular maneuver, certain of the features are selected by a feature selection processor 396 and the selected features are classified by a classifier 398 to identify the skill.
FIG. 17 is a block diagram of a skill classification processor 400 similar to the classification processor 390, which can be used as the skill classification processors 382, where like elements are identified by the same reference numerals, according to another embodiment of the present invention. In this embodiment, the traffic index Traffic<sub>index </sub>and the road condition index Road<sub>index </sub>are applied directly to the classifier 398 and not to the feature extraction processor 394. The difference between the classification processor 390 and the classification processor 400 lies in whether the traffic index Traffic<sub>index </sub>and the road condition index Road<sub>index </sub>are processed through feature extraction and selection. The design process of the feature extraction/selection in the classifiers remains the same regardless of whether the traffic index Traffic<sub>index </sub>and the road condition index Road<sub>index </sub>are included or not. However, the resulting classifiers are different, and so is the feature extraction/selection if those indexes are added to the original feature vector.
According to one embodiment of the present invention, the driver skill recognition is determined based on behavioral diagnosis. The maneuver identification processor 46 recognizes certain maneuvers carried out by the driver. In this embodiment, the maneuver of vehicle headway control is used as an illustration of the general notion that driver behavioral diagnosis can be used to detect the driving skill. Maneuvers related to driver headway control behaviors include: no preceding vehicle; vehicle following, where the subject vehicle maintains a certain distance from the preceding vehicle; another vehicle cutting in; the preceding vehicle changing lane; and the subject vehicle changing lane. Among these five maneuvers, every maneuver but the first will be used to characterize a driver's driving skill.
The aforementioned five maneuvers can be identified based on measurements of in-vehicle motion sensors (e.g., speed sensors) and measurements from a forward-looking radar, and/or a forward-looking camera, and/or DGPS with inter-vehicle communication. As an example, this invention describes maneuver identification with a forward-looking radar. The forward-looking radar is usually mounted at the front bumper of the vehicle. The radar detects objects in front and measures the range, range rate, and azimuth angle of each object. Such objects include the preceding vehicle, which shares the same lane with the subject vehicle, forward vehicles in the adjacent lanes, and other objects, such as a road curb or guard rails. The radar measurements can be processed to accurately track multiple vehicles (each labeled with an individual track ID), and a primary target is assigned to the preceding vehicle, i.e., primary target ID=track ID of the preceding vehicle. Various tracking and data association methods have been developed for this purpose. Such methods are well known to those skilled in the art and are not included in this invention.
The maneuver identification processor 46 first excludes the fifth type of maneuver, for example, by detecting the lane change of the subject vehicle through the detection of lane crossing. Given that the subject vehicle does not change lanes, the first four maneuvers can be identified based on information of multiple tracks and the primary target ID. If the primary target ID is null, there is no preceding vehicle. If the primary target ID does not change, or the range corresponding to the primary target ID does not change much, the maneuver is identified as vehicle following. If the primary target ID changes to another track ID that has a noticeably smaller range, another vehicle has cut in. On the other hand, if the primary target ID changes to another track ID or a new track ID with a noticeably larger range, or the primary target ID changes to null, the preceding vehicle has moved out of the lane.
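The rule set above can be sketched as a single function. The `range_tol` threshold, the handling of a newly acquired target, and the exact comparisons are illustrative assumptions, not the patent's calibration:

```python
def identify_maneuver(prev_id, prev_range, cur_id, cur_range, range_tol=5.0):
    """Rule set of the maneuver identification processor 46, assuming the
    subject vehicle is not changing lanes. IDs are radar track IDs, ranges
    are in meters, and range_tol is an assumed 'noticeable change' threshold."""
    if cur_id is None and prev_id is None:
        return "no preceding vehicle"
    if cur_id is None:
        return "preceding vehicle changing lane"   # primary target ID -> null
    if prev_id is None:
        return "vehicle following"                 # new preceding vehicle acquired
    if prev_id == cur_id and abs(cur_range - prev_range) < range_tol:
        return "vehicle following"                 # same target, range steady
    if prev_id != cur_id and cur_range < prev_range - range_tol:
        return "another vehicle cutting in"        # new, noticeably closer track
    if prev_id != cur_id and cur_range > prev_range + range_tol:
        return "preceding vehicle changing lane"   # new, noticeably farther track
    return "vehicle following"
```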
In addition, the maneuver identification processor 46 also determines the beginning and the end of a maneuver. For example, if a vehicle cuts in, the primary target ID will change to a track ID with a smaller range, and the time it changes is marked as the beginning of the maneuver. Since the subject vehicle usually decelerates to increase the range to a level comfortable to its driver, the end of the maneuver is then determined based on the settling time of the range and the deceleration.
The skill classification based on headway control behaviors utilizes the data corresponding to three of the five maneuvers, namely, vehicle following, another vehicle cutting in, and the preceding vehicle changing lane. The other two maneuvers, no preceding vehicle and the subject vehicle changing lane, are either of little use or involve more complicated analysis. Therefore, no further processing is performed.
During steady-state vehicle following, the driver's main purpose in headway control is to maintain his or her desired headway distance or headway time (the time to travel the headway distance). Therefore, the acceleration and deceleration of the subject vehicle mainly depend on the acceleration and deceleration of the preceding vehicle, while the headway distance/time is a better reflection of the driver's skill. Hence, the average headway distance (or headway time), the average velocity of the vehicle, the traffic index, and the condition index (including the road type index and the ambient condition index) are used as discriminants in the classification. A neural network can be designed for the classification. The net has an input layer with five input neurons (corresponding to the five discriminants), a hidden layer, and an output layer with one neuron. The output of the net ranges from 1 to 5, with 1 indicating a low-skill driver, 3 a typical driver and 5 a high-skill driver. The design and training of the neural network is based on vehicle test data with a number of drivers driving under various traffic and road conditions.
During the closing-in period, the signals used for classification are the range rate, the time to close the following distance (i.e., range divided by range rate), vehicle acceleration/deceleration, and vehicle speed. The decrease of the following distance may be due to the deceleration of the preceding vehicle or the acceleration of the subject vehicle. Therefore, the skill index should be larger if it is due to the acceleration of the subject vehicle. Since all these signals are time-domain series, data reduction is necessary in order to reduce the complexity of the classifier. One selection of discriminants includes the minimum value of the headway distance, the minimum value of the range rate (since the range rate is now negative), the minimum value of the time to close the gap (min(headway distance/range rate)), average speed, the sign of the acceleration (1 for acceleration, −1 for deceleration), and the traffic and road indexes. Similarly, a neural network is designed, with six neurons in the input layer and one in the output layer. Again, the design and training are based on vehicle test data with drivers driving under various traffic and road conditions.
FIG. 18 shows a system 330 illustrating an example of such a process maneuver model. Vehicle data from a vehicle 332 is collected to be qualified and identified by a maneuver qualification and identification processor 334. Once the data is qualified and the maneuver is identified, a maneuver index and parameter processor 336 creates an index and further identifies relevant parameters for the purpose of reconstruction of the intended path. These parameters can include the range of yaw rate and lateral acceleration the vehicle experienced through the maneuver, vehicle speed, steering excursion and the traffic condition index Traffic<sub>index</sub>. The maneuver index processor 336 selects the appropriate maneuver algorithm 338 in a path reconstruction processor 340 to reproduce the intended path of the maneuver without considering the specificities of driver character reflected by unusual steering agility or excessive oversteer or understeer incompatible with the intended path. The one or more maneuvers are summed by a summer 342 and sent to a maneuver model processor 344. Driver control command inputs, including steering, braking and throttle controls, are processed by a driver input data processor 346 to be synchronized with the output of the maneuver model processor 344, which generates the corresponding control commands of steering, braking and throttle controls of an average driver. The control signals from the maneuver model processor 344 and the driver input data processor 346 are then processed by a driver skill diagnosis processor 348 to detect the driving skill at box 350.
FIG. 19 is a block diagram of a skill classification processor 410 that employs the select/switch incorporation process, and can be used for the skill classification processors 382, according to another embodiment of the present invention. In this embodiment, the classifier used for feature extraction/selection is not only maneuver-type specific, but also traffic/road condition specific. For example, the traffic conditions can be separated into two levels, light traffic and moderate traffic, and the road conditions can be separated into good condition and moderate condition. Accordingly, four categories are created for the traffic and road conditions, and a specific skill classification is designed for each combination of the maneuver type and the four traffic-road condition categories. Once the maneuver has been identified, the skill classification processor 410 selects the appropriate classification based on the traffic/road conditions. The classification includes the selection of the original features, feature extraction/selection and classifiers to classify the recorded maneuver.
In the skill classification processor 410, the traffic index Traffic<sub>index</sub>, the road condition index Road<sub>index </sub>and the recorded data from the data selection processor 48 for a particular maneuver are sent to an input switch 412. The recorded data is switched to a particular channel 414 depending on the traffic and road index combination. Particularly, the combination of the traffic index Traffic<sub>index </sub>and the road condition index Road<sub>index </sub>applied to the input switch 412 will select one of four separate channels 414, including a channel for light traffic and good road conditions, light traffic and moderate road conditions, moderate traffic and good road conditions, and moderate traffic and moderate road conditions. For each traffic/road index combination, an original features processor 416 derives original features from the data associated with the maneuver, which is collected by the data selection processor 48, a feature extraction processor 418 extracts the features from these original features, a feature selection processor 420 further selects the features and a classifier 422 classifies the driving skill based on the selected features. An output switch 424 selects the skill classification for the particular combination of the traffic/road index.
In the select/switch incorporation scheme, the design of the skill characterization processor 52 is both maneuver-type specific and traffic/road condition specific. Therefore, the maneuvers used for the design, which are collected from vehicle testing, are first grouped according to both the maneuver type and the traffic/road condition. For each group of maneuvers, i.e., maneuvers of the same type and with the same traffic/road condition, the skill classification, including the selection of original features, feature extraction/selection and the classifiers, is designed. Since the skill classification is designed for specific traffic/road conditions, the traffic and road information is no longer included in the features. Consequently, the design process would be exactly the same as the generic design that does not take traffic/road conditions into consideration. However, the resulting classification will be different because the maneuvers are traffic/road condition specific. Moreover, the number of classifiers is four times that of the generic classifiers. As a result, the select/switch incorporation would require a larger memory to store the classifiers.
For the decoupled-scaling incorporation, the skill classification design does not take traffic and road conditions into consideration. In other words, maneuvers of the same type are classified using the same original features, the same feature extraction/selection and the same classifiers. The original features do not include traffic/road conditions; that is, the skill classification is generic to traffic/road conditions. The classification results are then adjusted using scaling factors that are functions of the traffic/road conditions. For example, if the skill classification of the Nth maneuver is skill(N), where skill(N) is a number representing the skill level, the adjusted skill can be:
<FORM>skill<sub>adjust</sub>(N)=skill(N)κ(Traffic<sub>index</sub>(N), Road<sub>index</sub>(N)) (12)</FORM>
Where κ (Traffic<sub>index</sub>, Road<sub>index</sub>) is the scaling factor related to traffic/road conditions.
Alternatively, the effects of the traffic and road conditions may be decoupled, for example, by:
<FORM>κ(Traffic<sub>index</sub>, Road<sub>index</sub>)=α(Traffic<sub>index</sub>)β(Road<sub>index</sub>) (13)</FORM>
The adjusted skill is:
<FORM>skill<sub>adjust</sub>(N)=skill(N)α(Traffic<sub>index</sub>(N))β(Road<sub>index</sub>(N)) (14)</FORM>
The scaling factors are designed so that the skill level is increased for maneuvers under a heavier traffic and/or worse road condition. For example, if the skill is divided into five levels with 1 representing a low driving skill and 5 representing a high driving skill, then skill(N)∈{0,1,2,3,4,5} with 0 representing hard-to-decide patterns. Therefore, one possible choice for the scaling factors can be:
<FORM>α(Traffic<sub>index</sub>)={1, for Traffic<sub>index</sub>≤Traffic<sub>light</sub>; 1.5×(Traffic<sub>index</sub>−Traffic<sub>light</sub>)/(Traffic<sub>heavy</sub>−Traffic<sub>light</sub>), for Traffic<sub>light</sub>&lt;Traffic<sub>index</sub>&lt;Traffic<sub>heavy</sub>} (15)</FORM>
<FORM>β(Road<sub>index</sub>)={1, for Road<sub>index</sub>≥Road<sub>good</sub>; 1.5×(Road<sub>good</sub>−Road<sub>index</sub>)/(Road<sub>good</sub>−Road<sub>bad</sub>), for Road<sub>bad</sub>&lt;Road<sub>index</sub>&lt;Road<sub>good</sub>} (16)</FORM>
Note that if skill(N)=0, skill<sub>adjust</sub>(N) remains zero.
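Equations (14)-(16) can be sketched as below. The threshold values standing in for Traffic<sub>light</sub>, Traffic<sub>heavy</sub>, Road<sub>good </sub>and Road<sub>bad </sub>are assumed for illustration:

```python
def alpha(traffic_index, traffic_light=0.3, traffic_heavy=0.8):
    """Equation (15): traffic scaling factor. The threshold values
    traffic_light and traffic_heavy are assumed for illustration."""
    if traffic_index <= traffic_light:
        return 1.0
    return 1.5 * (traffic_index - traffic_light) / (traffic_heavy - traffic_light)

def beta(road_index, road_good=0.8, road_bad=0.3):
    """Equation (16): road scaling factor; thresholds are assumed."""
    if road_index >= road_good:
        return 1.0
    return 1.5 * (road_good - road_index) / (road_good - road_bad)

def adjust_skill(skill, traffic_index, road_index):
    """Equation (14); skill(N)=0 (a hard-to-decide pattern) stays zero."""
    if skill == 0:
        return 0.0
    return skill * alpha(traffic_index) * beta(road_index)
```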
Equation (14) or (15) will also work if the skill characterization of the Nth maneuver outputs a confidence vector instead of a scalar: skill(N)=[conf(0) conf(1) . . . conf(k)]<sup>T</sup>, where conf(i) is the confidence the classifier has that the input pattern belongs to the class c<sub>i</sub>. In this case, the scaling factors in equations (14) and (15) are no longer scalars, but matrices.
The skill characterization processor 52 can also use headway control behaviors to utilize the data corresponding to three of the five maneuvers, particularly, vehicle following, another vehicle cutting in, and preceding vehicle changing lanes. The other two maneuvers, no preceding vehicle and the subject vehicle changing lanes, are either of little concern or involve more complicated analysis.
The vehicle following maneuver can be broken down into three types of events based on the range rate, i.e., the rate of change of the following distance, which can be directly measured by a forward-looking radar or processed from visual images from a forward-looking camera. The three types of events are steady-state vehicle following, where the range rate is small, closing in, where the range rate is negative and relatively large, and falling behind, where the range rate is positive and relatively large. Thus, the data for vehicle following can be partitioned accordingly based on the range rate.
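The range-rate partitioning can be sketched with a single threshold; the 0.5 m/s value for a "small" range rate is an assumption:

```python
def label_following_events(range_rates, small=0.5):
    """Partition vehicle-following samples by range rate (m/s): near zero
    -> steady-state, large negative -> closing in, large positive ->
    falling behind. The 'small' threshold is an assumed calibration."""
    labels = []
    for rr in range_rates:
        if abs(rr) <= small:
            labels.append("steady-state")
        elif rr < 0:
            labels.append("closing in")
        else:
            labels.append("falling behind")
    return labels
```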
During steady-state vehicle following, the driver's main purpose in headway control is to maintain his or her headway distance or headway time, i.e., the time to travel the headway distance. Therefore, the acceleration and deceleration of the subject vehicle mainly depend on the acceleration and deceleration of the preceding vehicle, while the headway distance/time is a better reflection of the driver's driving skill. Hence, the average headway distance, or headway time, the average velocity of the vehicle, the traffic index Traffic<sub>index </sub>and the road condition index Road<sub>index</sub>, including the road type index and ambient condition index, are used as the original features in the classification. With these original features, various feature extraction and feature selection techniques can be applied so that the resulting features can best separate patterns of different classes. Various techniques can be used for feature extraction/selection and are well known to those skilled in the art. Since the original features, and thus the extracted features, consist of only five features, all features can be selected in the feature selection process. A neural network can be designed for the classification, where the network has an input layer with five input neurons corresponding to the five discriminants, a hidden layer and an output layer with one neuron. The output of the net ranges from 1 to 5, with 1 indicating a low-skill driver, 3 a typical driver and 5 a high-skill driver. The design and training of the neural network is based on vehicle test data with a number of drivers driving under various traffic and road conditions.
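The five-input, one-output network can be sketched as a plain forward pass. The tanh hidden layer, the sigmoid scaling of the output into the 1 to 5 range, and the random weights in the example below are assumptions; real weights would come from training on vehicle test data:

```python
import numpy as np

def skill_net(features, W1, b1, W2, b2):
    """Forward pass of a 5-input, single-output skill classifier.
    features: the five discriminants; W1/b1: hidden layer; W2/b2: output
    neuron (W2 is a 1-D weight vector, b2 a scalar)."""
    h = np.tanh(W1 @ features + b1)          # hidden layer (assumed tanh)
    y = W2 @ h + b2                          # single output neuron
    return float(1.0 + 4.0 / (1.0 + np.exp(-y)))  # squash into 1..5
```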
During the closing-in period, the signals used for classification are the range rate, the time to close the following distance, i.e., the range divided by the range rate, vehicle acceleration/deceleration and vehicle speed. The decrease of the following distance may be due to the deceleration of the preceding vehicle or the acceleration of the subject vehicle. Therefore, the skill index should be larger if it is due to the acceleration of the subject vehicle. Because all of these signals are time-domain series, data reduction is necessary in order to reduce the complexity of the classifier. One selection of original features includes the minimum value of the headway distance, the minimum value of the range rate because the range rate is now negative, the minimum value of the time to close the gap, i.e., the minimum headway distance/range rate, the average speed, the average longitudinal acceleration, and the traffic and road indexes. Similarly, a neural network can be designed with six neurons in the input layer and one in the output layer. Again, the design and training of the neural network is based on vehicle test data with drivers driving under various traffic and road conditions.
The falling-behind event usually occurs when the subject vehicle has not responded to the acceleration of the preceding vehicle or the subject vehicle simply chooses to decelerate to allow a larger following distance. The former case may not reflect the driver's skill, while the latter case may not add much value since the larger following distance will be used in vehicle following. Hence, no further processing is necessary for this event.
Another vehicle cutting in and preceding vehicle changing lanes are two maneuvers that induce a sudden change in the headway distance/time where the driver accelerates or decelerates so that the headway distance/time returns to his or her desired value. The acceleration and deceleration during such events can reflect driving skill.
When another vehicle cuts in, the subject vehicle usually decelerates until the headway distance/time reaches the steady-state headway distance/time preferred by the driver. A lower-skilled driver usually takes a longer time to get back to his/her comfort level, while a skilled driver makes such an adjustment faster. Factors that contribute to the driver's decision of how fast/slow to decelerate include the difference between the new headway distance/time and his/her preferred headway distance/time, as well as vehicle speed and road conditions. An exemplary selection of original features consists of the difference between the new headway time, which is the headway time at the instant the cut-in occurs, and the driver-preferred headway time, i.e., an average value from the vehicle-following maneuver, the time to reach the preferred headway time, which can be determined by the settling of the headway time and range rate, the maximum magnitude of the range rate, the maximum braking force, the maximum variation in speed ((average speed−minimum speed)/average speed), the average speed and the road condition index. Similarly, neural networks can be used for the classification.
When the preceding vehicle changes lanes, the following distance suddenly becomes larger. A skilled driver may accelerate quickly and close the gap faster and more smoothly, while a lower-skilled driver accelerates slowly and gradually closes the gap with a certain degree of gap fluctuation. Similar to the case above, the original features include the difference between the new headway time, which is the headway time at the instant the preceding vehicle changes out of the lane, and the driver's preferred headway time, the time to reach the preferred headway time, the maximum magnitude of the range rate, the maximum throttle, the maximum variation in speed ((maximum speed−average speed)/average speed), the average speed, and the road condition index Road<sub>index</sub>. Again, neural networks can be designed for this classification.
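As an illustration, the original features for the cut-in maneuver might be computed as below. The function name, the per-sample signal layout and the settling test are assumptions:

```python
import numpy as np

def cutin_features(headway_times, range_rates, brake_force, speeds,
                   preferred_headway, road_index, settle_tol=0.1):
    """Sketch of the original-feature vector for the cut-in maneuver.
    Signals are per-sample arrays over the maneuver; preferred_headway is
    the driver's average headway time from vehicle following."""
    new_ht = headway_times[0]          # headway time at the cut-in instant
    dt_pref = new_ht - preferred_headway
    # samples until the headway settles near the preferred value (assumed test)
    settled = np.flatnonzero(np.abs(np.asarray(headway_times) - preferred_headway)
                             <= settle_tol)
    t_settle = int(settled[0]) if settled.size else len(headway_times)
    avg_speed = float(np.mean(speeds))
    return [dt_pref,
            t_settle,
            float(np.max(np.abs(range_rates))),   # max magnitude of range rate
            float(np.max(brake_force)),           # max braking force
            (avg_speed - float(np.min(speeds))) / avg_speed,  # speed variation
            avg_speed,
            road_index]
```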
It is noted that although neural networks can be used as the classification technique, the skill characterization processor 52 can easily employ other techniques, such as fuzzy logic, clustering, simple threshold-based logic, etc.
The maneuvers related to the driver's headway control behavior show that the characteristic maneuvers can be properly identified given various in-vehicle measurements, including speed, yaw rate, lateral acceleration, steering profile and vehicle track using GPS sensors. Once a characteristic maneuver is identified, key parameters can be established to describe such a maneuver, and the intended path can be reconstructed. With this information available, the intended path can be provided to a process maneuver model where human commands of a typical driver can be generated. The maneuver model can be constructed based on a dynamic model of a moderate driver. One example of the construction and use of such a dynamic model is disclosed in U.S. patent application Ser. No. 11/398,952, titled Vehicle Stability Enhancement Control Adaptation to Driving Skill, filed Apr. 6, 2006, assigned to the assignee of this application and herein incorporated by reference.
FIG. 20 is a block diagram of a system 360 showing one embodiment as to how the driving skill diagnosis processor 348 identifies the differences between the driver's behavior and an average driver. The maneuver model command inputs at box 362 from the maneuver model processor 344 are sent to a frequency spectrum analysis processor 364, and the driver command inputs at box 366 from the driver input data processor 346 are sent to a frequency spectrum analysis processor 368. The inputs are converted to the frequency domain by the frequency spectrum analysis processors 364 and 368, and are then sent to a frequency content discrepancy analysis processor 370 to determine the difference therebetween. However, it is noted that methodologies other than frequency domain analysis can also be applied to identify the difference between the model and the driver commands.
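The spectrum-and-discrepancy pipeline of processors 364, 368 and 370 can be sketched with a discrete Fourier transform. The summed absolute magnitude difference used here is one assumed discrepancy measure among many:

```python
import numpy as np

def spectrum(signal, fs):
    """Single-sided magnitude spectrum of a command signal, as produced by
    the frequency spectrum analysis processors 364/368."""
    mag = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs, mag

def frequency_discrepancy(driver_cmd, model_cmd, fs):
    """Processor 370 sketch: aggregate discrepancy between the driver's
    command spectrum and the average-driver model spectrum (assumed metric:
    summed absolute magnitude difference)."""
    _, m_driver = spectrum(driver_cmd, fs)
    _, m_model = spectrum(model_cmd, fs)
    return float(np.sum(np.abs(m_driver - m_model)))
```

Identical command signals yield zero discrepancy; a driver whose brake commands concentrate energy at different frequencies than the model yields a large one.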
FIG. 21 is a graph with frequency on the horizontal axis and magnitude on the vertical axis illustrating a situation where behavioral differences are identified through the variation of the frequency spectrum. Given a headway control maneuver, the driver may apply the brake in different ways according to a specific driving skill. While an average driver produces a spectrum with one distribution, another driver, such as driver-A, shows a higher magnitude in the low-frequency area and a lower magnitude in the high-frequency area. Driver-B shows the opposite trend. The differences in these signal distributions can be used to determine the driving skill of the specific driver.
The difference in the frequency spectrum distribution can be used as the input to a neural network that, properly trained, can identify the skill of the driver. The art of using neural networks to identify driving skill given the differences in the frequency spectrum distribution is well known to those skilled in the art, and need not be discussed in further detail here. In this illustration, a properly trained neural network classifier can successfully characterize driver-A as low-skill and driver-B as high-skill if the difference in the spectrum distribution is determined to have exceeded a predetermined threshold.
The skill characterization processor 52 classifies driving skill based on every single characteristic maneuver, and the classification results are stored in a data array in the skill profile trip-logger 54. In addition, the data array also contains information such as the time index of the maneuver M<sub>seq</sub>, the type of maneuver identified by the identifier value M<sub>id</sub>, the traffic condition index Traffic<sub>index </sub>and the road condition index Road<sub>index</sub>. The results stored in the trip-logger 54 can be used to enhance the accuracy and the robustness of the characterization. To fulfill this task, the decision fusion processor 56 is provided. Whenever a new classification result is available, the decision fusion processor 56 integrates the new result with previous results in the trip-logger 54. Various decision fusion techniques, such as Bayesian fusion and Dempster-Shafer fusion, can be used and applied in the decision fusion processor 56. To demonstrate how this works, a simple example of weighted-average based decision fusion is given below.
The decision fusion based on a simple weighted average can be given as:
<FORM>skill<sub>fused</sub>(N)=Σ<sub>i=N−k</sub><sup>N</sup>α(Traffic<sub>index</sub>(i))β(Road<sub>index</sub>(i))γ(M<sub>—</sub>ID(i))λ<sup>N−i</sup>skill(i) (17)</FORM>
Or equivalently:
<FORM>skill<sub>fused</sub>(N)=α(Traffic<sub>index</sub>(N))β(Road<sub>index</sub>(N))γ(M<sub>—</sub>ID(N))skill(N)+λskill<sub>fused</sub>(N−1) (18)</FORM>
Where N is the time index of the most recent maneuver, skill(i) is the skill classification result based on the ith maneuver, i.e., M_seq=i, α(Traffic<sub>index</sub>(i)) is a traffic-related weighting, β(Road<sub>index</sub>(i)) is a road condition related weighting, γ(M_ID(i)) is a maneuver-type related weighting, λ is a forgetting factor (0<λ≦1) and k is the length of the time index window for the decision fusion.
In one embodiment, where traffic and road conditions have already been considered in the skill classification process, the decision fusion may not need to incorporate their effect explicitly. Therefore, α(Traffic<sub>index</sub>(i)) and β(Road<sub>index</sub>(i)) can be chosen as 1. Moreover, if the classification results from different maneuvers are compatible with one another, γ(M_ID(i)) can also be chosen as 1. The decision fusion can then be simplified as:
<FORM>skill<sub>fused</sub>(N)=skill(N)+λskill<sub>fused</sub>(N−1) (19)</FORM>
Recommended values for the forgetting factor λ are between 0.9 and 1, depending on how much previous results are valued. Of course, the decision fusion can also take into consideration traffic, road and maneuver types and use the form of equation (18).
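The recursive fusion above can be sketched as follows. This is a minimal illustration of the weighted-average form of equation (18); the weight values and forgetting factor used here are illustrative assumptions, not values prescribed by the invention.

```python
# Recursive decision fusion of per-maneuver skill classifications.
# alpha, beta, gamma are the traffic-, road- and maneuver-type
# weightings; lam is the forgetting factor (0 < lam <= 1).

def fuse_skill(new_skill, prev_fused, lam=0.95,
               alpha=1.0, beta=1.0, gamma=1.0):
    """Integrate a new per-maneuver skill result with the prior fused value."""
    return alpha * beta * gamma * new_skill + lam * prev_fused

# With alpha = beta = gamma = 1 this reduces to the simplified
# fusion of equation (19).
fused = 0.0
for skill in [1.0, 2.0, 1.0, 3.0]:   # skill results from four maneuvers
    fused = fuse_skill(skill, fused)
```

With a forgetting factor near 1, older maneuvers retain most of their influence; lowering it makes the fused result track the most recent classifications more closely.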
According to another embodiment of the invention, when the vehicle is in a stop-and-go maneuver, the driving skill can be characterized based on two approaches, namely, braking characteristics during a vehicle stopping maneuver and transmission shift characteristics during vehicle acceleration.
Driving skill can be characterized based on the characteristics of the braking maneuver under normal driving conditions. Using this approach, the process first identifies the normal-driving braking condition, and then processes the brake pedal data to extract the discriminating features for characterization of driving skill.
Vehicle braking during normal driving conditions may vary over a wide range, and may be initiated by the driver's own selection or forced by the traffic condition in front of the vehicle. In order to characterize driving skill based on a braking maneuver, it is better to select those conditions most common to the majority of drivers to avoid aberrations. One method is to select those braking maneuvers with a vehicle deceleration level among those most likely to occur during normal driving, for example, in a metropolitan area during rush hours, the preferred range can be set between 0.2 g and 0.3 g, during a straight-line driving condition. The condition of straight-line driving can be detected with existing art, and the design of that process is not within the scope of this invention. For a vehicle equipped with a global positioning system (GPS), the location of the vehicle can be determined to provide a more refined qualifier for the braking maneuver selection depending on the vehicle location. If the vehicle is equipped with a forward distance sensing device to detect the distance and relative velocity of the front lead vehicle, then the method for determining the braking maneuver can further incorporate a condition where the vehicle headway distance to the front vehicle is larger than a predetermined threshold, say, at least one car length away. If the vehicle is further equipped with driving style recognition, then the vehicle headway distance threshold can further be determined based on the headway distance characterized under the driver's normal driving style behavior.
With the qualified normal-driving braking maneuver identified, the time traces of the related data can be processed. The braking data can be brake pedal position, vehicle longitudinal deceleration, total braking force exerted on the vehicle, front axle braking force and rear axle braking force. Each individual signal can be processed independently following the feature extraction method described below, or the signals can be processed jointly with weighting factors attached thereto.
The most preferred signals for the process are brake pedal position and vehicle longitudinal deceleration. For the purpose of explaining the process without losing generality, the brake pedal position will be used in the following description.
The brake pedal position is first processed to form its time derivative, the brake pedal rate. In the second step, frequency analysis is performed on the brake pedal rate. A typical discrete Fourier transform process can be conducted to find the frequency components of the signal from its DC component, i.e., zero frequency, up to the data sampling frequency.
In order to understand the characteristics of each type of drivers, the brake pedal rate is further processed to obtain its power spectrum density (PSD) across the frequency range. The PSD is then processed through discrete wavelet transform (DWT) for various predetermined frequency bands to uncover the distinctive characteristics of the DWT in each frequency band.
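The differentiation and spectral steps above can be sketched as follows; this is a minimal illustration, and the sampling rate and synthetic pedal trace are assumptions introduced for the example.

```python
import numpy as np

# Differentiate the brake pedal position to obtain the pedal rate,
# then estimate its power spectral density with a DFT (periodogram).

fs = 50.0                                  # sampling rate (Hz), assumed
t = np.arange(0, 4.0, 1.0 / fs)
# hypothetical brake pedal position: a smooth ramp plus a small 6 Hz tremor
pedal_pos = np.clip(t / 2.0, 0, 1) + 0.02 * np.sin(2 * np.pi * 6.0 * t)

pedal_rate = np.gradient(pedal_pos, 1.0 / fs)   # time derivative

spectrum = np.fft.rfft(pedal_rate)
freqs = np.fft.rfftfreq(pedal_rate.size, d=1.0 / fs)
psd = (np.abs(spectrum) ** 2) / (fs * pedal_rate.size)  # periodogram PSD

# the tremor appears as a peak near 6 Hz in the PSD
peak_freq = freqs[np.argmax(psd[1:]) + 1]   # skip the DC bin
```

In practice the pedal position would come from a CAN-bus signal rather than a synthetic trace, and the PSD would then be passed to the wavelet stage described next.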
FIG. 22 is a block diagram of a single-level DWT 800 including filters 802, namely a low-pass filter 804 and a high-pass filter 806, for this purpose. The filters 802 receive a signal 810 and provide approximations 812 and details 814.
In a multi-level DWT, similar calculations are performed by treating the upper-level approximations as signals. Thus, the deeper the decomposition, the higher the level index and the lower the frequency band associated with the approximations 812 at that level.
At a certain level, the approximations 812 lose high-frequency information with respect to their upper-level counterpart. The amount of lost energy varies from one driver to another. In addition, these variations differ across DWT levels. Thus, according to the invention, the characterization of driving skill can be associated with these variations.
In order to compare energy (L2 norm) calculated from data covering different frequency ranges, it is necessary to normalize the energy at each level with respect to the energy of the original signal. For example, a 5-level DWT can be applied to the pedal rate signal of a driver. The energy of the approximations is calculated at each level and normalized with respect to the energy of the original signal. The result is a descending sequence of numbers starting from 1.00. Each member of this sequence is then an energy coefficient of the driver at the corresponding level. The histogram index at each energy coefficient subrange in various levels of the DWT can be used as a discriminating feature to recognize driving skill.
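The energy-coefficient computation can be sketched as follows. This is a minimal illustration using a Haar wavelet, which is an assumption; the specification does not fix the wavelet family.

```python
import numpy as np

# Multi-level Haar DWT: keep the approximation at each level and
# normalize its energy (squared L2 norm) by the original signal's
# energy, yielding a descending sequence that starts at 1.00.

def haar_energy_coefficients(signal, levels=5):
    x = np.asarray(signal, dtype=float)
    e0 = np.sum(x ** 2)
    coeffs = [1.0]
    approx = x
    for _ in range(levels):
        if approx.size % 2:                       # pad to even length
            approx = np.append(approx, approx[-1])
        # Haar analysis low-pass: orthonormal pairwise averages
        approx = (approx[0::2] + approx[1::2]) / np.sqrt(2.0)
        coeffs.append(np.sum(approx ** 2) / e0)
    return coeffs

# hypothetical pedal-rate trace: smooth component plus high-frequency jitter
t = np.linspace(0, 1, 256)
rate = np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 60 * t)
coeffs = haar_energy_coefficients(rate)
# each level discards some high-frequency energy, so coeffs is non-increasing
```

Histograms of these coefficients collected over many stops would then supply the discriminating features of Table 1.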
A typical histogram shown in FIG. 23 depicts how the histogram data is used to recognize driving skill. It is clear that an energy coefficient between 0.75~0.85 is more likely to be associated with an average or low-skill driver. In this case, non-expert drivers (average and low-skill) prevail in this range. Similarly, an energy coefficient larger than 0.9 is more likely to be associated with an expert driver. The indication of an expert driver is stronger when the energy coefficient at this level is higher. The underlying physical meaning is that at the corresponding frequency band, expert drivers tend to have less energy loss, i.e., fewer high-frequency maneuvers, when performing a stop action.
Examining each level of the DWT at various ranges of energy coefficient, there are areas where useful information can be extracted to distinguish an expert driver from a non-expert driver, as illustrated in Table 1. Therefore, discriminating features are identified accordingly.
<tables id="TABLEUS00001" num="00001"><table frame="none" colsep="0" rowsep="0"><tgroup align="left" colsep="0" rowsep="0" cols="5"><colspec colname="1" colwidth="49pt" align="left"/><colspec colname="2" colwidth="35pt" align="left"/><colspec colname="3" colwidth="28pt" align="center"/><colspec colname="4" colwidth="56pt" align="center"/><colspec colname="5" colwidth="49pt" align="left"/><thead><row><entry namest="1" nameend="5" rowsep="1">TABLE 1</entry></row><row><entry namest="1" nameend="5" align="center" rowsep="1"/></row><row><entry/><entry>Feature</entry><entry>DWT</entry><entry>Range of Energy</entry><entry>Prevail Driver</entry></row><row><entry>Data Source</entry><entry>Name</entry><entry>level</entry><entry>Coefficient</entry><entry>Type</entry></row><row><entry namest="1" nameend="5" align="center" rowsep="1"/></row></thead><tbody valign="top"><row><entry>Pedal Rate</entry><entry>PF1L3</entry><entry>3</entry><entry>0.71~0.85</entry><entry>Nonexpert</entry></row><row><entry/><entry>PF2L4</entry><entry>4</entry><entry>0.77~0.99</entry><entry>Expert</entry></row><row><entry/><entry>PF2L5</entry><entry>5</entry><entry>0.71~0.99</entry><entry>Expert</entry></row><row><entry namest="1" nameend="5" align="center" rowsep="1"/></row></tbody></tgroup></table></tables>
If the system applies the same process to other signals, such as vehicle deceleration, more features can be identified. With a collection of the discriminating features, classification of driving skill to distinguish expert and non-expert drivers can be made. There are many classification methods available, such as neural networks or fuzzy C-means clustering. Each one is able to render a reasonable outcome.
After separation of expert and non-expert drivers, the same process can be applied to classify whether a non-expert driver falls into the category of an average driver or a low-skill driver. Consequently, driving skill can be characterized into three types with a two-tier process as described above.
In another embodiment, the driving skill recognition is based on straight-line driving behavior. This process for driving skill recognition includes two parts, namely, identification of driving maneuvers and processing of sensor data collected during the relevant maneuvers. The straight-line maneuver can be identified through various techniques, such as the magnitude of the vehicle yaw rate, steering angle and rate, digital map information of the driving environment, etc. There are known techniques for recognition of straight-line driving, and thus, it need not be discussed in any further detail here.
When the vehicle is in a straight-line maneuver, the driving skill can be characterized based on three approaches, namely, a lane-position approach, a steering characteristics approach, and a traffic-environment reaction approach. These approaches are described below.
For the lane-position approach, the vehicle is equipped with a lane-position sensing device. The lane position of a vehicle may be determined through a forward-looking imaging device, such as a camera, that detects the lane marks of the road. For a vehicle equipped with a high-resolution GPS sensor and an enhanced digital map with lane information, the vehicle lane position may also be determined through the GPS sensor output relative to the map information.
Three variables are first determined as inputs to the process, namely, lane center deviation Cd(t), lane width Lw(t) and road type Rt(t). The time trace of the lane center deviation Cd(t) is processed to determine the driver's lane deviation in various frequency components. This can be achieved using a commonly exercised power spectrum analysis using a discrete Fourier transform (DFT). An ideal expert driver will result in zero components in every frequency sample, and any deviation, especially in the non-zero frequency components, signifies the degree of a lower driving skill. For example, a low-skill driver will not be able to maintain straight-line driving, and will wander around the center of the lane. This back-and-forth deviation of the vehicle is revealed by the non-zero frequency components CD(f) after processing the lane center deviation Cd(t) data through the DFT.
A driving skill index according to the dynamic part of driver performance SI<sub>D </sub>can be generated by a weighted sum of the frequency components CD(f) data as:
<FORM>SI<sub>D</sub>=Σ<sub>i=1</sub><sup>N</sup>CD(f<sub>i</sub>)K<sub>LP</sub>(i) (20)</FORM>
Where N is the number of frequencies sampled in DFT and K<sub>LP</sub>(i) is a series of weights.
The series of weights K<sub>LP</sub>(i) is determined to maximize the differentiation among the desired classes of driving skill based on test data of test subjects with well recognized driving skills. For example, if it is desired to classify drivers into three levels of driving skill, the process can take the CD(f) data of high-skill, average-skill and low-skill drivers, and use any of the well established artificial intelligence tools, such as a neural network process, to determine the optimal series of weights K<sub>LP</sub>(i).
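The dynamic index of equation (20) can be sketched as follows; this is a minimal illustration in which the uniform weights K_LP, the sampling rate and the synthetic deviation traces are all assumptions.

```python
import numpy as np

# SI_D: weighted sum of the non-DC DFT magnitude components CD(f)
# of the lane center deviation Cd(t), per equation (20).

def dynamic_skill_index(cd, k_lp):
    """Weighted sum of non-DC DFT magnitudes of lane center deviation."""
    CD = np.abs(np.fft.rfft(cd)) / cd.size        # magnitude spectrum
    return float(np.dot(CD[1:], k_lp[:CD.size - 1]))

fs = 10.0                                         # sampling rate (Hz), assumed
t = np.arange(0, 60, 1.0 / fs)
k_lp = np.ones(t.size // 2)                       # uniform weights (assumed)

rng = np.random.default_rng(0)
expert = 0.01 * rng.standard_normal(t.size)       # steady driver, small noise
wanderer = expert + 0.4 * np.sin(2 * np.pi * 0.3 * t)  # lane wandering at 0.3 Hz

si_expert = dynamic_skill_index(expert, k_lp)
si_low = dynamic_skill_index(wanderer, k_lp)      # larger index: more wander
```

In a deployed system the weights would be trained, as described above, rather than uniform; the example only shows that wandering inflates the index.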
Since not all roads are of the same type, driver performance may differ on various types of road, especially for lower skilled drivers. Roads can be single lane or multiple lanes, one way or bidirectional travel, and lanes can be of different widths. Therefore, the road type information and lane width information can be used to further enhance the accuracy of the driving skill recognition. In this process, the algorithm first determines whether the data belongs to the same type of road, and a skill index based on the static part of driver performance SI<sub>S </sub>is computed within the set of data collected on the same type.
The computation of the index SI<sub>S </sub>starts from determining the time-average lane center deviation at each corresponding section of the road where the road type and the lane width are the same. Once such a section of road is identified, as the driver has driven it from t=T(i) to t=T(i+1), a component of this index SI<sub>s</sub>(i) can be computed, where it is assumed that this is the ith section of the road the driver has traversed, by first computing the time average of the lane center deviation Cd<sub>—</sub>0, then multiplying by a weighting factor as:
<maths id="MATHUS00009" num="00009"><math overflow="scroll"><mtable><mtr><mtd><mrow><mrow><mi>Cd_</mi><mo></mo><mn>0</mn><mo></mo><mrow><mo>(</mo><mi>i</mi><mo>)</mo></mrow></mrow><mo>=</mo><mrow><mfrac><mn>1</mn><mrow><mrow><mi>T</mi><mo></mo><mrow><mo>(</mo><mrow><mi>i</mi><mo>+</mo><mn>1</mn></mrow><mo>)</mo></mrow></mrow><mo></mo><mrow><mi>T</mi><mo></mo><mrow><mo>(</mo><mi>i</mi><mo>)</mo></mrow></mrow></mrow></mfrac><mo></mo><mrow><msubsup><mo>∫</mo><mrow><mi>T</mi><mo></mo><mrow><mo>(</mo><mi>i</mi><mo>)</mo></mrow></mrow><mrow><mi>T</mi><mo></mo><mrow><mo>(</mo><mrow><mi>i</mi><mo>+</mo><mn>1</mn></mrow><mo>)</mo></mrow></mrow></msubsup><mo></mo><mrow><mrow><mi>Cd</mi><mo></mo><mrow><mo>(</mo><mi>t</mi><mo>)</mo></mrow></mrow><mo></mo><mstyle><mspace width="0.2em" height="0.2ex"/></mstyle><mo></mo><mrow><mo></mo><mi>t</mi></mrow></mrow></mrow></mrow></mrow></mtd><mtd><mrow><mo>(</mo><mn>21</mn><mo>)</mo></mrow></mtd></mtr><mtr><mtd><mrow><mrow><msub><mi>SI</mi><mi>S</mi></msub><mo></mo><mrow><mo>(</mo><mi>i</mi><mo>)</mo></mrow></mrow><mo>=</mo><mrow><mrow><msub><mi>K</mi><mi>R</mi></msub><mo></mo><mrow><mo>(</mo><mi>i</mi><mo>)</mo></mrow></mrow><mo></mo><mi>Cd_</mi><mo></mo><mn>0</mn><mo></mo><mrow><mo>(</mo><mi>i</mi><mo>)</mo></mrow></mrow></mrow></mtd><mtd><mrow><mo>(</mo><mn>22</mn><mo>)</mo></mrow></mtd></mtr></mtable></math></maths>
Where K<sub>R </sub>is a weighting factor as a function of the road type.
The values of this weighting factor are designed to signify the behavior of a lower skilled driver. For example, when a low-skill driver is driving in the leftmost lane of a multiple-lane undivided highway, the driver tends to have an average right deviation from the center, and when the same driver drives in the rightmost lane of the road, he/she tends to have a left deviation from the center. Therefore, the sign of this weighting factor is designed to produce a positive value of the index for lower skilled drivers. Assuming right deviation is considered positive deviation, typical values of the weighting factor K<sub>R </sub>on various types of road are illustrated in Table 2.
<tables id="TABLEUS00002" num="00002"><table frame="none" colsep="0" rowsep="0"><tgroup align="left" colsep="0" rowsep="0" cols="3"><colspec colname="offset" colwidth="28pt" align="left"/><colspec colname="1" colwidth="112pt" align="left"/><colspec colname="2" colwidth="77pt" align="center"/><thead><row><entry/><entry namest="offset" nameend="2" rowsep="1">TABLE 2</entry></row><row><entry/><entry namest="offset" nameend="2" align="center" rowsep="1"/></row><row><entry/><entry>Road type</entry><entry>K<sub>R</sub></entry></row><row><entry/><entry namest="offset" nameend="2" align="center" rowsep="1"/></row></thead><tbody valign="top"><row><entry/></row></tbody></tgroup><tgroup align="left" colsep="0" rowsep="0" cols="3"><colspec colname="offset" colwidth="28pt" align="left"/><colspec colname="1" colwidth="112pt" align="left"/><colspec colname="2" colwidth="77pt" align="char" char="."/><tbody valign="top"><row><entry/><entry>Single lane, undivided</entry><entry>1.0</entry></row><row><entry/><entry>Single lane, divided</entry><entry>−0.3</entry></row><row><entry/><entry>Multiple Lane, undivided, left most</entry><entry>1.0</entry></row><row><entry/><entry>Multiple Lane, undivided, right most</entry><entry>−1.0</entry></row><row><entry/><entry>Multiple Lane, undivided, middle</entry><entry>0</entry></row><row><entry/><entry>Multiple Lane, divided, left most</entry><entry>−0.3</entry></row><row><entry/><entry>Multiple Lane, divided, right most</entry><entry>0.5</entry></row><row><entry/><entry>Multiple Lane, divided, middle</entry><entry>0</entry></row><row><entry/><entry namest="offset" nameend="2" align="center" rowsep="1"/></row></tbody></tgroup></table></tables>
After the SI<sub>s </sub>index components for each section are computed, the algorithm selects only the significant ones, that is, discarding those indices below a predetermined threshold SI<sub>s</sub><sub><sub2>th</sub2></sub>, which is a positive number. An aggregated static index SI<sub>s </sub>is calculated based on the average of those significant components as:
<maths id="MATHUS00010" num="00010"><math overflow="scroll"><mtable><mtr><mtd><mrow><msub><mi>SI</mi><mi>S</mi></msub><mo>=</mo><mrow><mfrac><mrow><msub><mi>K</mi><mi>LW</mi></msub><mo></mo><mrow><mo>(</mo><mi>i</mi><mo>)</mo></mrow></mrow><mi>M</mi></mfrac><mo></mo><mrow><munderover><mo>∑</mo><mrow><mi>i</mi><mo>=</mo><mn>1</mn></mrow><mi>M</mi></munderover><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle><mo></mo><mrow><msub><mi>SI</mi><mi>S</mi></msub><mo></mo><mrow><mo>(</mo><mi>i</mi><mo>)</mo></mrow></mrow></mrow></mrow></mrow></mtd><mtd><mrow><mo>(</mo><mn>23</mn><mo>)</mo></mrow></mtd></mtr></mtable></math></maths>
Where K<sub>LW </sub>is a factor for lanewidth multiplication. This factor is larger for narrow lanes and smaller for wider lanes. In one example, it can be a constant divided by the lane width.
A driving skill index based on lane position SI<sub>LP </sub>can then be computed as:
<FORM>SI<sub>LP</sub>=Kd*SI<sub>D</sub>+Ks*SI<sub>S </sub> (24)</FORM>
Where Kd and Ks are predetermined weighting factors.
Driving skill can be recognized using the lane position skill index and established thresholds SI<sub>LP</sub><sub><sub2>—</sub2></sub>1 and SI<sub>LP</sub><sub><sub2>—</sub2></sub>2 as:
 Good driving skill when SI<sub>LP</sub><SI<sub>LP</sub><sub><sub2>—</sub2></sub>1
 Average driving skill when SI<sub>LP</sub><sub><sub2>—</sub2></sub>1<SI<sub>LP</sub><SI<sub>LP</sub><sub><sub2>—</sub2></sub>2
 Low driving skill when SI<sub>LP</sub><sub><sub2>—</sub2></sub>2<SI<sub>LP </sub>
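The static-index aggregation and the lane-position classification above can be sketched as follows. This is a minimal illustration of equations (22)-(24) and the thresholds; the mean deviations, weights K_R, K_LW, Kd, Ks and thresholds are all illustrative assumptions, and the time-average integral of equation (21) is assumed to have been computed already.

```python
# Per-section static components SI_S(i) = K_R(i) * Cd_0(i), keep only
# significant components, aggregate, then combine with the dynamic
# index and classify against two thresholds.

def static_index(sections, k_lw, si_th=0.05):
    """sections: list of (mean deviation Cd_0(i), road-type weight K_R(i))."""
    comps = [k_r * cd0 for cd0, k_r in sections]
    significant = [c for c in comps if c >= si_th]   # drop small components
    if not significant:
        return 0.0
    return k_lw * sum(significant) / len(significant)

def classify(si_lp, th1=0.1, th2=0.3):
    if si_lp < th1:
        return "good"
    if si_lp < th2:
        return "average"
    return "low"

# driver with a consistent rightward drift on undivided leftmost lanes
sections = [(0.25, 1.0), (0.30, 1.0), (-0.02, -1.0)]
si_s = static_index(sections, k_lw=1.0)
si_d = 0.12                                  # assumed dynamic index SI_D
si_lp = 0.5 * si_d + 0.5 * si_s              # Kd = Ks = 0.5 (assumed)
label = classify(si_lp)
```

Note the third section contributes a small component that falls below the significance threshold and is discarded, as the specification describes.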
For the steering characteristics approach, the vehicle is equipped with a steering position sensor, and the steering wheel angle and steering rate can be determined as steering wheel position Sw(t) and steering rate Sr(t).
Then, the time trace of the steering wheel position Sw(t) is processed to determine its frequency components SW(f), and the time trace of the steering rate Sr(t) is processed to determine its frequency components SR(f). This can be achieved using a commonly exercised power spectrum analysis using a discrete Fourier transform (DFT). An ideal expert driver will result in zero components in all frequency samples of the steering wheel position and steering rate when driving on a straight line. Therefore, a non-zero frequency component signifies the degree of a lower driving skill. For example, a lower skilled driver will not be able to maintain straight-line driving without noticeable, if not significant, adjustment of the steering wheel, thus wandering around the center of the lane. Consistent with the same behavior in the lane center deviation, this back-and-forth deviation from the steering center is then detected by the non-zero frequency components SW(f) and SR(f) after processing the Sw(t) and Sr(t) data through the DFT.
A driving skill index according to the steering wheel position SI<sub>SW </sub>can be generated by a weighted sum of the SW(f) data as:
<FORM>SI<sub>SW</sub>=Σ<sub>i=1</sub><sup>N</sup>SW(f<sub>i</sub>)K<sub>SW</sub>(i) (25)</FORM>
Where N is the number of frequency samples in the DFT.
The series of weights K<sub>SW</sub>(i) is determined to maximize the differentiation among the desired classes of driving skill based on test data of test subjects with well recognized driving skills. For example, if it is desired to classify drivers into three levels of driving skill, the process can take the SW(f) data of high-skill, average-skill and low-skill drivers, and use any of the well established artificial intelligence tools, such as a neural network process, to determine the optimal series of weights K<sub>SW</sub>(i).
Likewise, an index according to steering rate can be established using the SR(f) data as:
<FORM>SI<sub>SR</sub>=Σ<sub>i=1</sub><sup>N</sup>SR(f<sub>i</sub>)K<sub>SR</sub>(i) (26)</FORM>
Where N is the number of frequency samples in the DFT.
The series of weights K<sub>SR</sub>(i) is determined to maximize the differentiation among the desired classes of driving skill based on test data of test subjects with well recognized driving skills. For example, if it is desired to classify drivers into three levels of driving skill, the process can take the SR(f) data of high-skill, average-skill and low-skill drivers, and use any of the well established artificial intelligence tools, such as a neural network process, to determine the optimal series of weights K<sub>SR</sub>(i).
A driving skill index based on steering characteristics SI<sub>ST </sub>can then be computed as:
<FORM>SI<sub>ST</sub>=Kd*SI<sub>SW</sub>+Ks*SI<sub>SR </sub> (27)</FORM>
Where Kd and Ks are predetermined weighting factors.
Driving skill can be recognized using the steering characteristics skill index and established thresholds SI<sub>ST</sub><sub><sub2>—</sub2></sub>1 and SI<sub>ST</sub><sub><sub2>—</sub2></sub>2 as:
 Good driving skill when SI<sub>ST</sub><SI<sub>ST</sub><sub><sub2>—</sub2></sub>1
 Average driving skill when SI<sub>ST</sub><sub><sub2>—</sub2></sub>1<SI<sub>ST</sub><SI<sub>ST</sub><sub><sub2>—</sub2></sub>2
 Low driving skill when SI<sub>ST</sub><sub><sub2>—</sub2></sub>2<SI<sub>ST </sub>
For the traffic-environment reaction approach, the driving skill is recognized by using a traffic environment sensor to detect the condition of side objects, either static or moving, and correlating such detection with the driver's reaction. When driving on a road, while the lane width is designed to be sufficient for safe driving without the risk of collision with objects outside the lane, drivers with lower driving skill have a tendency to move away from side objects even when there is no possibility of collision. Therefore, a vehicle equipped with side object sensing means, such as short-range radar or ultrasound sensors, can use the sensor information, which indicates the distance to the side objects, to correlate with the driver's steering response.
The algorithm first reads the steering rate information Sr(t) and the lane center information Cd(t), as well as the established average lane center deviation Cd<sub>—</sub>0, and computes:
<FORM>ΔCd(t)=Cd(t)−Cd<sub>—</sub>0 (28)</FORM>
An index for the traffic environment reaction I<sub>TER </sub>is established as:
<FORM>I<sub>TER</sub>(t)=K<sub>SRR</sub>Sr(t)+K<sub>CdR</sub>ΔCd(t) (29)</FORM>
Where K<sub>SRR </sub>and K<sub>CdR </sub>are predetermined weighting factors.
When the magnitude of I<sub>TER</sub>(t) has exceeded a predetermined threshold I<sub>TER</sub><sub><sub2>—</sub2></sub>th, the algorithm continues to fetch the sensor output data of the side target object detection of the left and right sides, T0<sub>—</sub>l(t) and T0_r(t), respectively. For convenience without losing generality, it is assumed that the right-side information is given a positive sign; then a target object index can be established as:
<FORM>I<sub>T0</sub>(t)=K<sub>T0</sub>(T0<sub>—</sub>r(t)−T0<sub>—</sub>l(t)) (30)</FORM>
Where K<sub>T0 </sub>is a predetermined scale factor.
A skill index based on the traffic environment reaction SI<sub>TER </sub>can be established based on the correlation between the two time series of data, I<sub>TER </sub>and I<sub>T0</sub>.
Driving skill can be recognized using the traffic environment reaction skill index SI<sub>TER </sub>and established thresholds SI<sub>TER</sub><sub><sub2>—</sub2></sub>1 and SI<sub>TER</sub><sub><sub2>—</sub2></sub>2 as:
 Good driving skill when SI<sub>TER</sub><SI<sub>TER</sub><sub><sub2>—</sub2></sub>1
 Average driving skill when SI<sub>TER</sub><sub><sub2>—</sub2></sub>1<SI<sub>TER</sub><SI<sub>TER</sub><sub><sub2>—</sub2></sub>2
 Low driving skill when SI<sub>TER</sub><sub><sub2>—</sub2></sub>2<SI<sub>TER </sub>
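The correlation step of this approach can be sketched as follows; all gains, the synthetic sensor traces, and the use of a Pearson correlation coefficient as the correlation measure are illustrative assumptions.

```python
import numpy as np

# Build I_TER from steering rate and lane-deviation change (equation 29)
# and I_T0 from the left/right side-range sensors (equation 30), then
# correlate the two time series.

rng = np.random.default_rng(1)
n = 500
t0_r = 3.0 + rng.normal(0, 0.05, n)          # right-side range (m), assumed
t0_l = 3.0 + rng.normal(0, 0.05, n)
t0_l[200:260] = 1.0                          # a close object passes on the left

i_t0 = 1.0 * (t0_r - t0_l)                   # K_T0 = 1 (assumed)

# a low-skill driver steers away from the close object, so the reaction
# signals track the object index
sr = 0.5 * i_t0 + rng.normal(0, 0.05, n)     # steering rate response
dcd = 0.3 * i_t0 + rng.normal(0, 0.05, n)    # lane-deviation response
i_ter = 1.0 * sr + 1.0 * dcd                 # K_SRR = K_CdR = 1 (assumed)

corr = np.corrcoef(i_ter, i_t0)[0, 1]        # high correlation: lower skill
```

An expert driver's steering would be largely uncorrelated with the side-object index, driving this coefficient toward zero.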
For vehicles equipped with a manual transmission, driving skill can be classified through the consistency of the transmission shifts. In this process, an ideal transmission shift map based on throttle position and vehicle speed, such as illustrated in FIG. 24, can be employed. According to the invention, the process to recognize the driving skill includes monitoring the actual transmission shift point exercised by the driver, then comparing it to the transmission shift map to identify the shift-error E<sub>k </sub>relative to the ideal shift line at the kth shift action detected. The shift-error distance can be obtained by first identifying the actual shift point Ps as combined data of vehicle speed and throttle position, as illustrated in FIG. 24. This shift point is then projected onto the shift curve to find its projection Psp, from which the difference in speed ΔS and the difference in throttle ΔT can be found. The error is computed as:
<FORM>E=√(ΔT<sup>2</sup>+ΔS<sup>2</sup>) (31)</FORM>
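The projection and distance computation can be sketched as follows. This is a minimal illustration that approximates the ideal shift line of the map as a straight segment between two assumed endpoints; a production shift map would be a curve, and speed and throttle would normally be normalized to comparable units before taking the distance.

```python
import math

# Distance from the actual shift point Ps (speed, throttle) to its
# perpendicular projection Psp on an idealized straight shift line.

def shift_error(ps, line_a, line_b):
    """Euclidean distance from point ps to the segment line_a-line_b."""
    ax, ay = line_a
    bx, by = line_b
    px, py = ps
    dx, dy = bx - ax, by - ay
    # parameter of the perpendicular projection, clamped to the segment
    u = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    u = max(0.0, min(1.0, u))
    psp = (ax + u * dx, ay + u * dy)          # projection point Psp
    ds, dt = px - psp[0], py - psp[1]         # speed error and throttle error
    return math.hypot(ds, dt)

# assumed ideal 1-2 upshift line: speed 10 at 0 throttle to speed 30 at full
err = shift_error(ps=(25.0, 0.5), line_a=(10.0, 0.0), line_b=(30.0, 1.0))
```

Each detected shift produces one such error E<sub>k</sub>, which feeds the cumulative assessment below.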
The effect of the cumulative errors can be assessed through various means, including using a running window over a fixed number of data points, or a low-pass filter using a weighted sum of the new data and the cumulated past effect:
<FORM>C<sub>k+1</sub>=αC<sub>k</sub>+(1−α)E<sub>k </sub> (32)</FORM>
Where C is the cumulated effect and E is the present error detected.
The number C can be used to distinguish the driver's skill level:
 If C<Cth1→Expert driver
 If Cth1<C<Cth2→Average driver
 If C>Cth2→Low-skill driver
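The filter of equation (32) and the three-way decision can be sketched as follows; the smoothing factor α and the thresholds Cth1 and Cth2 are illustrative assumptions.

```python
# Low-pass filter the sequence of shift errors E_k into the cumulated
# effect C (equation 32), then classify against two thresholds.

def classify_shift_skill(errors, alpha=0.8, cth1=0.2, cth2=0.5):
    c = 0.0
    for e in errors:                        # C_{k+1} = alpha*C_k + (1-alpha)*E_k
        c = alpha * c + (1.0 - alpha) * e
    if c < cth1:
        return c, "expert"
    if c < cth2:
        return c, "average"
    return c, "low-skill"

# a driver whose shift points consistently miss the ideal line by ~0.3
c, label = classify_shift_skill([0.3, 0.35, 0.25, 0.3, 0.32, 0.28, 0.3, 0.31])
```

With a long run of shifts, C converges toward the typical per-shift error, so the thresholds effectively partition drivers by their steady-state shift accuracy.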
If the vehicle is equipped with driving style recognition, the assessment of the driver can be further refined. Usually a sporty driver prefers a delayed first gear shift, as illustrated in FIG. 25 showing a normal first gear shift line 820 and a sporty first gear shift line 822. If a driver is assessed to be a sporty driver, the transmission shift map used for driving skill recognition should also reflect the driver's tendency in the first gear to second gear upshift for a more accurate assessment.
Alternatively, the transmission gear shift can be used to recognize driving style. If there is a consistently delayed first-to-second upshift compared with the ideal transmission shift map, the driver can be identified as a sporty driver. In this case, even without a separate process of driving style assessment for the purpose of driving skill classification, the transmission shift map can be adjusted for the specific driver for the refined computation of the shift errors.
During a transmission upshift, the transmission output shaft starts out at a higher torque and ends at a lower torque. FIG. 26 is a graph with time on the horizontal axis and shaft torque on the vertical axis showing the beginning and end of a shift. However, the transition of the torque level from high to low usually is not smooth. At the beginning of the shift, when the clutch is disengaged, the output shaft torque has a temporary drop as the driver shifts from one gear to another. As the upshift gear is being engaged, the driver also engages the clutch to transmit the input shaft torque to the output. The relative timing of full clutch engagement and gear engagement can be used to differentiate the driver's manual shift skill. A skillful driver can have these two actions take place simultaneously, reducing the transmission shift duration while completing both actions at the same time. Under this ideal condition, the transmission shift is smooth at the end of the shift. If the timings are off from each other, the output shaft will experience a torque excursion commonly known as “transmission shift shock”. The degree of the transmission shift shock can be detected and utilized to characterize the driver's driving skill. If ΔT is the level of shift shock, then the driver skill can be classified as:
 For ΔT<ΔTth1 driving skill is high
 For ΔTth1<ΔT<ΔTth2 driving skill is average
 For ΔTth2<ΔT driving skill is low
Multiple samples can be aggregated for a more accurate estimation of a driver's driving skill based on this approach.
Transmission shaft torque can be measured using any torque sensor commonly available for automotive applications. Alternatively, the torque can be measured at the wheel axle.
In another embodiment, transient driven wheel acceleration at the end of transmission shift can be measured as an alternative to the transmission output shaft torque for the purpose of driving skill characterization.
In another embodiment, transient vehicle longitudinal acceleration at the end of a transmission shift can be measured as an alternative to the transmission output shaft torque for the purpose of driving skill characterization.
During a manual transmission shift, the clutch is first disengaged. While the clutch is disengaged, the driver drops the engine throttle, makes the shift of the gear, and subsequently engages the clutch and engine throttle again. In a well balanced manual transmission gear shift, the driver provides just enough engine torque for the clutch engagement. If the engine torque is insufficient, the engine will stall, and the driver can be determined to be a low-skilled driver. On the other hand, when the engine torque is excessively high, the transmission input shaft exhibits a speed excursion, reaching a higher speed than its target speed at the end of the shift, as illustrated in FIG. 27.
If ΔS is the level of a transmission input shaft speed excursion, which can be computed based on the speed profile recorded during the shift, as illustrated in FIG. 27, then the driver skill can be classified as:
 For ΔS<ΔS<sub>th1</sub> driving skill is high
 For ΔS<sub>th1</sub><ΔS<ΔS<sub>th2</sub> driving skill is average
 For ΔS<sub>th2</sub><ΔS driving skill is low
Multiple samples can be aggregated for a more accurate estimation of the driver's driving skill based on this approach.
The time duration of a manual transmission gear shift can also be used as a measure of driving skill. A more skillful driver can complete the shift in a shorter time period, as opposed to a lower-skilled driver who takes a longer time to complete the shift under the same situation.
If ΔP is the period of time for a transmission shift, the driver skill can be classified as:
 For ΔP<ΔP<sub>th1</sub> driving skill is high
 For ΔP<sub>th1</sub><ΔP<ΔP<sub>th2</sub> driving skill is average
 For ΔP<sub>th2</sub><ΔP driving skill is low
Multiple samples can be aggregated for a more accurate estimation of a driver's driving skill based on this approach.
While each transmission shift, in general, involves different engine speed and torque requirements, characterization of driving skill using this approach can be implemented in various ways, as follows.
First, data of each upshift is used independently as:
 For ΔP<sub>ui</sub><ΔP<sub>th1ui</sub> driving skill is high
 For ΔP<sub>th1ui</sub><ΔP<sub>ui</sub><ΔP<sub>th2ui</sub> driving skill is average
 For ΔP<sub>th2ui</sub><ΔP<sub>ui</sub> driving skill is low
 Where ΔP<sub>ui</sub> denotes the period of time for the ith upshift. Similarly, data of each downshift is used independently as:
 For ΔP<sub>di</sub><ΔP<sub>th1di</sub> driving skill is high
 For ΔP<sub>th1di</sub><ΔP<sub>di</sub><ΔP<sub>th2di</sub> driving skill is average
 For ΔP<sub>th2di</sub><ΔP<sub>di</sub> driving skill is low
 Where ΔP<sub>di</sub> denotes the period of time for the ith downshift.
In one embodiment, the period of time for a transmission shift can be an aggregated parameter from the upshift and downshift maneuvers. For example, a weighted linear combination of the upshift and downshift time period can be used as a single parameter to represent the average transmission shift time as:
<FORM>ΔP<sub>ave</sub>=Σ<sub>i</sub>c<sub>ui</sub>ΔP<sub>ui</sub>+Σ<sub>i</sub>c<sub>di</sub>ΔP<sub>di </sub> (33)</FORM>
Where c<sub>ui </sub>and c<sub>di </sub>are weighting factors for the upshift and downshift time periods, respectively.
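Equation (33) can be sketched directly; the weighting factors in the usage example are arbitrary placeholders:

```python
def average_shift_time(up_times, down_times, c_up, c_down):
    """Weighted linear combination of upshift and downshift durations (Eq. 33):
    ΔP_ave = Σ_i c_ui ΔP_ui + Σ_i c_di ΔP_di."""
    return (sum(c * dp for c, dp in zip(c_up, up_times)) +
            sum(c * dp for c, dp in zip(c_down, down_times)))
```

For example, two upshifts of 1.0 s and 2.0 s each weighted 0.25, plus one downshift of 3.0 s weighted 0.5, yield ΔP<sub>ave</sub>=2.25 s.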
Using each of the above-mentioned four approaches to characterize driving skill, the system may encounter, from time to time, different determinations of the driver's driving skill among these approaches. Even the same approach may produce different determinations from time to time. Therefore, it is also one purpose of the invention to improve the consistency of the driving skill characterization by processing this information through data fusion.
As discussed, the dynamics of a driver maneuvering a vehicle can be described by a closed-loop system 830, depicted in FIG. 28. The system 830 includes driver dynamics 832 and vehicle dynamics 834. In this arrangement, the closed-loop system starts out with a desired command C, such as the desired path, the desired yaw angle of the vehicle or the desired yaw rate of the vehicle, to name a few. The vehicle under control responds with an output Y, which is sensed, detected or “felt” by the driver. The driver then detects or estimates the discrepancy between the desired command and the vehicle output, forming a perceived error E at comparator 836. Based on the perceived error between the desired command and the vehicle response, the driver “calculates” a corrective measure U. This corrective measure is the input the driver applies to the vehicle, for example, the steering angle during a vehicle maneuver. With the updated input U and the vehicle's existing inherent state, the vehicle response output Y is updated according to the predetermined vehicle dynamics V(s).
The central issue in the driver-vehicle interaction described above is how to characterize the driver behavior so that the total driver-vehicle dynamic behavior and response can be better understood, allowing a better vehicle dynamic control design to become an integral part of vehicle control enhancement. One approach is illustrated in FIG. 28, where the vehicle dynamics are described apart from the driver's model, and the driver's model contains various parameters to potentially characterize the driver's behavior.
A driver dynamic model, such as depicted by the system 830, may contain many variables and processes, potentially addressing all aspects of the driver's behavior. These variables can be included based on a fundamental understanding of the driver's physiological and psychological capabilities and limitations. Such variables and processes may include, for example, the driver's attention span ahead of the vehicle to preview the road and traffic condition, the driver's capability to plan a vehicle path, the driver's ability to sense the vehicle position along the path, and the driver's decision process to determine the steering command. Some of these processes may require many variables and parameters to describe in mathematical terms. Those skilled in the art of dynamic modeling can understand the magnitude of effort required to resolve all the variables and parameters through parameter identification and optimization before the model is complete, if it ever can be completed.
Nevertheless, this type of model has made headway in the art of driver modeling. By examining the driver's preview time and transport delay, it yields useful information correlating these two parameters across various types of drivers.
Another school of thought on driver modeling is to treat the driver-vehicle system as one integral dynamic system without trying to separate the individual contributions, as depicted in FIG. 29. FIG. 29 shows a system 840 including a vehicle-driver crossover model 842 and a comparator 844. This type of model is the so-called “crossover model”. The crossover model 842 is represented in a simple form described by two major parameters, namely, crossover frequency ω<sub>c</sub> and time delay τ, as shown below:
<FORM>G(s)=(ω<sub>c</sub>/s)e<sup>−τs</sup> (34)</FORM>
This form is well recognized by those skilled in the art of driver modeling. With only two parameters to be identified, developing the driver's model with representative parameters is viable using commonly accepted processes of optimization.
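As an illustration of the two-parameter crossover model of Equation (34), the sketch below evaluates G(s)=(ω<sub>c</sub>/s)e<sup>−τs</sup> at s=jω; the parameter values in the comment are arbitrary:

```python
import cmath

def crossover_response(omega: float, omega_c: float, tau: float) -> complex:
    """Frequency response of the crossover model G(s) = (omega_c / s) * e^(-tau*s),
    evaluated on the imaginary axis at s = j*omega."""
    s = 1j * omega
    return (omega_c / s) * cmath.exp(-tau * s)

# By construction, the open-loop gain magnitude |G(j*omega_c)| equals 1 at the
# crossover frequency, regardless of the time delay tau.
```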
While it has been shown to be viable to model a specific driver using the approaches depicted in FIG. 28 or FIG. 29, the question remains whether these models can be used to characterize the driver's skill level based on the driving and vehicle performance. It is therefore a purpose of this invention to design a method to recognize the driver's skill level that is not based solely on the concept of separated dynamics between the driver and vehicle, nor solely on the concept of totally combined dynamics. In this invention, driving skill characterization is achieved based on data collected from the driver's input and command to the vehicle, reflecting the individual dynamics of the driver alone, and also based on data collected from the vehicle as the result of the integrated dynamics of the driver and vehicle.
In another embodiment of the invention, the skill characterization is based on a driver's passing maneuvers, which refers to maneuvers where the driver is passing a vehicle. Passing maneuvers can be identified based on steering activity, vehicle yaw motion, the change in vehicle heading direction, lateral and longitudinal accelerations, speed control coordination, and lane position characteristics.
At the beginning of a vehicle passing maneuver, the subject vehicle (SV), or passing vehicle, approaches and follows a slower preceding object vehicle (OV), which later becomes the vehicle being passed. If the driver of the SV decides to pass the slower OV and an adjacent lane is available for passing, the driver initiates the first lane change to the adjacent lane and then passes the OV in the adjacent lane. If there is enough clearance between the SV and the OV, the driver of the SV may initiate a second lane change back to the original lane. Because the skill characterization based on vehicle headway control behavior already includes the vehicle approaching maneuver, the vehicle approaching before the first lane change is not included as part of the passing maneuver. As a result, the passing maneuver starts with the first lane change and ends with the completion of the second lane change. Accordingly, a passing maneuver can be divided into three phases, namely, phase one consists of the first lane change to an adjacent lane, phase two is passing in the adjacent lane and phase three is the second lane change back to the original lane. In some cases, the second phase may be too short to be regarded as an independent phase, and in other cases, the second phase may last so long that it may be more appropriate to regard the passing maneuver as two independent lane changes. This embodiment focuses on those passing maneuvers where a second phase is not too long, such as less than T<sub>th </sub>seconds.
The detection of a passing maneuver then starts with the detection of a first lane change. The lane changes can be detected using the vehicle steering angle or yaw rate together with the vehicle heading angle from GPS, as described above for the embodiment identifying lane-change maneuvers. Alternatively, a lane change can be detected based on image processing from a forward-looking camera, well-known to those skilled in the art.
The end of the first lane change is the start of the second phase, i.e., passing in the adjacent lane. The second phase ends when a second lane change is detected. If the SV changes back to its original lane within a certain time period, such as T<sub>th</sub> seconds, the complete maneuver including all three of the phases is regarded as a vehicle passing maneuver. If the SV changes to a lane other than its original lane, the complete maneuver may be divided and marked as individual lane-change maneuvers for the first and third phases. If a certain time passes and the SV does not initiate a second lane change, the maneuver is regarded as uncompleted; however, the first phase may still be used as an individual lane-change maneuver.
Based on the discussion above, FIG. 30 is a flow chart diagram 220 showing a process for identifying a vehicle passing maneuver, according to an embodiment of the present invention. To keep the integrity of the data associated with an identified maneuver, the system keeps recording and refreshing a certain period, such as T=2 s, of data.
The maneuver identifying algorithm begins with reading the filtered vehicle speed signal v and the filtered vehicle yaw rate signal ω from the signal processor 44 at box 222. The maneuver identifying algorithm then proceeds using the Boolean variables Start_flag and End_flag, where Start_flag is initialized to zero and End_flag is initialized to one. The algorithm then determines whether Start_flag is zero at block 224 to determine whether the vehicle 10 is in a passing maneuver. If Start_flag is zero at the block 224, then the algorithm determines whether a lane change has started at decision diamond 226 to determine whether the passing maneuver has started, and if not, returns at box 228 for collecting data. If the algorithm determines that a lane change has started at the decision diamond 226, which may be the first lane change in a passing maneuver, the algorithm sets Start_flag to one, End_flag to zero, the phase to one and timer T<sub>start</sub>=t at box 470.
If Start_flag is not zero at the block 224 meaning that the maneuver has begun, then the algorithm determines whether the maneuver is in the first phase at decision diamond 472. If the maneuver is in the first passing phase at the decision diamond 472, then the algorithm determines whether a lane change has been aborted at block 474. If the lane change has not been aborted at the block 474, the algorithm determines whether the lane change has been completed at block 476, and if not returns to the block 228 for collecting data. If the lane change has been completed at the block 476, the algorithm sets the phase to two, the time t<sub>1end</sub>=t and the time t<sub>2start</sub>=t+Δt at box 478. If the lane change has been aborted at the block 474, meaning that the passing maneuver has been aborted, then the algorithm sets the maneuver identifier value M<sub>id </sub>to zero at box 480, and sets Start_flag to zero, End_flag to one and the phase to zero at box 482.
If the passing maneuver is not in the first phase at the decision diamond 472, then the algorithm determines whether the passing maneuver is in the second phase at decision diamond 484. If the passing maneuver is not in the second phase at the decision diamond 484, the passing maneuver is already in its third phase, i.e., the lane change back to the original lane. Therefore, the algorithm determines whether this lane change has been aborted at the decision diamond 486, and if so, sets the maneuver identifier value M<sub>id</sub> to zero at the box 480, and Start_flag to zero, End_flag to one and the phase to zero at the box 482.
If the lane change back has not been aborted at the decision diamond 486, the algorithm determines whether the lane change has been completed at decision diamond 488, and if not, returns to box 228 for collecting data. If the lane change has been completed at the decision diamond 488, the algorithm sets the maneuver identifier value M<sub>id </sub>to one, time t<sub>3end</sub>=t, time t<sub>start</sub>=t<sub>1start </sub>and time t<sub>end</sub>=t<sub>3end </sub>at box 490, and sets Start_flag to zero, End_flag to one and the phase to zero at the box 482.
If the passing maneuver is in the second phase at the decision diamond 484, the algorithm determines whether a lane change back to the original lane has started at decision diamond 492, and if so, sets the passing maneuver phase to three, time t<sub>2end</sub>=t and time t<sub>3start</sub>=t+Δt at box 494. If a lane change back has not started at the decision diamond 492, then the algorithm determines whether the condition t−t<sub>2start</sub>>T<sub>th</sub> has been met at decision diamond 496, and if not, returns to the box 228. If the condition of the decision diamond 496 has been met, then too much time has passed for a passing maneuver, and the algorithm sets the maneuver identifier value M<sub>id</sub> to zero at box 498, and sets Start_flag to zero, End_flag to one and the phase to zero at the box 482.
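The three-phase flow of FIG. 30 can be summarized as a small state machine; the event names and the time threshold here are simplified assumptions, since the patent derives lane-change start, completion and abort from steering, yaw rate and GPS heading signals:

```python
T_TH = 10.0  # s, hypothetical maximum duration for the in-lane passing phase

class PassingTracker:
    """Minimal sketch of the three-phase passing-maneuver identification."""
    def __init__(self):
        self.phase = 0        # 0 = idle, 1..3 = passing phases
        self.t2_start = None  # start time of phase two
        self.m_id = None      # 1 = passing maneuver identified, 0 = rejected

    def update(self, t: float, event: str) -> None:
        if self.phase == 0 and event == "lane_change_start":
            self.phase = 1                     # first lane change begins
        elif self.phase == 1:
            if event == "lane_change_abort":
                self.phase, self.m_id = 0, 0   # maneuver aborted
            elif event == "lane_change_done":
                self.phase, self.t2_start = 2, t
        elif self.phase == 2:
            if event == "lane_change_back_start":
                self.phase = 3                 # second lane change begins
            elif t - self.t2_start > T_TH:
                self.phase, self.m_id = 0, 0   # phase two lasted too long
        elif self.phase == 3:
            if event == "lane_change_abort":
                self.phase, self.m_id = 0, 0
            elif event == "lane_change_done":
                self.phase, self.m_id = 0, 1   # complete passing maneuver
```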
As the maneuver identifier value M<sub>id </sub>determines the beginning and the end of a maneuver, the data selector 48 stores that data corresponding to the maneuver based on the variables Start_flag, End_flag, M<sub>id</sub>, t<sub>start </sub>and t<sub>end</sub>. When the maneuver identifier value M<sub>id </sub>is set for a vehicle passing maneuver, the data collected is sent to the skill characterization processor 52, and the driver's driving skill for that maneuver is classified. The first and third phases of a vehicle passing maneuver are lane changes. During a lane change, the higher skill driver is more likely to exhibit larger values in vehicle steering angle, yaw rate, lateral acceleration and lateral jerk. Similarly, from the perspective of a longitudinal motion, a higher skill driver usually completes a lane change in a shorter distance and exhibits a larger speed variation and deceleration/acceleration, a shorter distance to its preceding vehicle before the lane change, and a shorter distance to the following vehicle after the lane change. The second phase of a vehicle passing maneuver, passing in the adjacent lane, involves mostly longitudinal control. A driver's driving skill can be revealed by how fast he/she accelerates, the distance the vehicle traveled during the second phase or the time duration, and the speed difference between the subject vehicle and the object vehicle.
Accordingly, a number of discriminants for classifying a passing maneuver can be selected based on this information. For the first phase, i.e., the first lane change, the original discriminant features can be defined as:
 1. The maximum value of the yaw rate max(ω(t<sub>start</sub>:t<sub>end</sub>));
 2. The maximum value of the lateral acceleration max(α<sub>y</sub>(t<sub>start</sub>:t<sub>end</sub>));
 3. The maximum value of the lateral jerk max({dot over (α)}<sub>y</sub>(t<sub>start</sub>:t<sub>end</sub>));
 4. The distance for the lane change to be completed ∫<sub>t</sub><sub><sub2>start</sub2></sub><sup>t</sup><sup><sub2>end</sub2></sup>v<sub>x</sub>(t)dt;
 5. The average speed mean(v<sub>x</sub>(t<sub>start</sub>:t<sub>end</sub>));
 6. The maximum speed variation max(v<sub>x</sub>(t<sub>start</sub>:t<sub>end</sub>))−min(v<sub>x</sub>(t<sub>start</sub>:t<sub>end</sub>));
 7. The maximum braking pedal force/position (or the maximum deceleration);
 8. The maximum throttle percentage (or the maximum acceleration);
 9. The minimum distance (or headway time) to its preceding vehicle (e.g., from a forward-looking radar/lidar or camera, or from GPS with V2V communications);
 10. The maximum range rate to its preceding vehicle, if available (e.g., from a forward-looking radar/lidar or camera, or from GPS together with V2V communications); and
 11. The minimum distance (or distance over speed) to the following vehicle in the lane the vehicle changes to, if available (e.g., from a forward-looking radar/lidar or camera, or from GPS with V2V communications).
For the second phase, the original discriminant features can be:
 1. The maximum throttle percentage max(throttle(t<sub>2start</sub>:t<sub>2end</sub>)) (or longitudinal acceleration max(α<sub>x</sub>(t<sub>2start</sub>:t<sub>2end</sub>));
 2. The average throttle percentage;
 3. The distance traveled ∫<sub>t</sub><sub><sub2>2start</sub2></sub><sup>t</sup><sup><sub2>2end</sub2></sup>v<sub>x</sub>(t)dt; and
 4. The maximum speed variation max(v<sub>x</sub>(t<sub>2start</sub>:t<sub>2end</sub>))−min(v<sub>x</sub>(t<sub>2start</sub>:t<sub>2end</sub>)).
For the third phase, i.e., the second lane change, the original features are similar to those for the first phase with t<sub>1start </sub>and t<sub>1end </sub>replaced with t<sub>3start </sub>and t<sub>3end</sub>. In addition, the total distance the subject vehicle traveled during a passing maneuver can also be added as a discriminant. In summary, the total number of discriminants for one passing maneuver can be n=10+4+10+1=25, or n=11+4+11+1=27 if the distance to the following vehicle is available.
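A few of the phase-one discriminants above can be computed from sampled signals as follows; the signal names and the trapezoidal integration for the lane-change distance are illustrative assumptions:

```python
def phase_one_features(t, yaw_rate, lat_accel, speed):
    """Compute a subset of the phase-one discriminants from equally indexed
    samples: max yaw rate, max lateral acceleration, distance traveled
    (trapezoidal integral of longitudinal speed over time), and the maximum
    speed variation max(v_x) - min(v_x)."""
    dist = sum(0.5 * (speed[i] + speed[i + 1]) * (t[i + 1] - t[i])
               for i in range(len(t) - 1))
    return {
        "max_yaw_rate": max(yaw_rate),
        "max_lat_accel": max(lat_accel),
        "distance": dist,
        "speed_variation": max(speed) - min(speed),
    }
```

The phase-two and phase-three discriminants follow the same pattern over their respective time windows.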
For each recognized vehicle passing maneuver, one set of the original features is derived. This set of original features can be represented as an original feature vector x, an ndimension vector with each dimension representing one specific feature. This original feature vector serves as the input for further feature extraction and feature selection processing.
As mentioned above, various feature extraction methods can be used for classifying a passing maneuver, such as principal component analysis (PCA), linear discriminant analysis (LDA), kernel PCA, generalized discriminant analysis (GDA), etc. In one non-limiting embodiment, LDA is used, which is a linear transformation where y=U<sup>T</sup>x, and where U is an n-by-n matrix and y is an n-by-1 vector with each row representing the value of a new feature. The matrix U is determined offline during the design phase.
To further reduce the feature dimension for improved classification efficiency and effectiveness, feature selection techniques are applied, and the subset that yields the best performance is chosen as the final features to be used for classification. For example, the resulting subset may consist of m features corresponding to rows {i<sub>1 </sub>i<sub>2 </sub>. . . i<sub>m</sub>}(1≦i<sub>1</sub>≦i<sub>2</sub>≦ . . . ≦i<sub>m</sub>≦n) of the feature vector y. Writing the matrix U as U=[u<sub>1 </sub>u<sub>2 </sub>. . . u<sub>n</sub>], with each u<sub>i </sub>being an n-by-1 vector, and then selecting only the vectors corresponding to the best subset yields W=[u<sub>i1 </sub>u<sub>i2 </sub>. . . u<sub>im</sub>], an n-by-m matrix. Combining the feature extraction and the feature selection, the final features corresponding to the original feature vector x can be derived as z=W<sup>T</sup>x.
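The combined projection z=W<sup>T</sup>x amounts to keeping only the selected columns of U; a minimal sketch, with plain lists standing in for matrices:

```python
def project_features(U, x, selected):
    """Project the original feature vector x with the selected columns of U.

    U        -- n-by-n extraction matrix, as a list of n rows
    x        -- length-n original feature vector
    selected -- indices {i1..im} of the best feature subset
    Returns z = W^T x, a length-m discriminant feature vector, where
    W = [u_i1 ... u_im] keeps only the selected columns of U.
    """
    return [sum(U[i][j] * x[i] for i in range(len(x))) for j in selected]
```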
The skill characterization processor 52 then classifies the driver's driving skill based on the discriminant feature vector z. Classification techniques, such as fuzzy logic, clustering, neural networks (NN), support vector machines (SVM), and simple threshold-based logic can be used for skill classification. In one embodiment, an SVM-based classifier is used. Because the skill classification involves more than two classes, a multi-class SVM can be employed to design the classifier. A K-class SVM consists of K hyperplanes: f<sub>k</sub>(z)=w<sub>k</sub>z+b<sub>k</sub>, k=1, 2, . . . , K, where w<sub>k </sub>and b<sub>k </sub>are determined during the design phase based on the test data. The class label c for any testing data is the class whose decision function yields the largest output as:
<FORM>c=arg max<sub>k</sub>f<sub>k</sub>(z)=arg max<sub>k</sub>(w<sub>k</sub>z+b<sub>k</sub>), k=1, 2, . . . , K (35)</FORM>
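Equation (35) is an argmax over the K linear decision functions; a minimal sketch, assuming the weight vectors w<sub>k</sub> and biases b<sub>k</sub> have already been trained:

```python
def svm_classify(z, W, b):
    """Multi-class linear SVM decision (Eq. 35): return the index k of the
    hyperplane f_k(z) = w_k . z + b_k with the largest output.

    z -- discriminant feature vector (length m)
    W -- list of K weight vectors w_k (each length m)
    b -- list of K bias terms b_k
    """
    scores = [sum(wk_i * z_i for wk_i, z_i in zip(wk, z)) + bk
              for wk, bk in zip(W, b)]
    return max(range(len(scores)), key=lambda k: scores[k])
```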
The feature extraction, feature selection and the Kclass SVM are designed offline based on vehicle test data. A number of drivers were asked to drive several instrumented vehicles under various traffic conditions and the sensor measurements were collected for the classification design. For every vehicle passing maneuver, an original feature vector x can be constructed. All of the feature vectors corresponding to vehicle passing maneuvers are put together to form a training matrix X=[x<sub>1 </sub>x<sub>2 </sub>. . . x<sub>L</sub>], where L is the total number of vehicle passing maneuvers. Each row of the matrix X represents the values of one feature variable while each column represents the feature vector of a training pattern. The training matrix X is then used for the design of the skill classification based on vehicle passing maneuvers.
The feature extraction is based on LDA, a supervised feature extraction technique. Its goal is to train the linear data projection Y=U<sup>T</sup>X such that the ratio of the between-class variance to the within-class variance is maximized, where X is an n-by-L matrix and U is an n-by-n matrix. Accordingly, Y=[y<sub>1 </sub>y<sub>2 </sub>. . . y<sub>L</sub>] is an n-by-L matrix, where each new feature vector y<sub>i </sub>still consists of n features. Commercial or open-source algorithms that compute the matrix U are available and well-known to those skilled in the art. The inputs to those algorithms include the training matrix X and the corresponding class labels. In one embodiment, the class labels can be 1-5, with 1 indicating a low-skill driver, 3 indicating a typical driver and 5 indicating a high-skill driver. In addition, a class label 0 can be added to represent those hard-to-decide patterns. The class labels are determined based on expert opinions formed by observing the test data. The outputs of the LDA algorithms include the matrix U and the new feature matrix Y.
The feature selection is conducted on the feature matrix Y. In this particular application, because the dimension of the extracted features is relatively small, an exhaustive search can be used to evaluate the classification performance of each possible combination of the extracted features. The new features still consist of n features, so there are Σ<sub>i=1</sub><sup>n</sup>C<sub>n</sub><sup>i</sup> possible combinations of the n features. The exhaustive search evaluates the classification performance of each possible combination by designing an SVM based on the combination and deriving the corresponding classification error. The combination that yields the smallest classification error is regarded as the best combination, where the corresponding features {i<sub>1 </sub>i<sub>2 </sub>. . . i<sub>m</sub>} determine the matrix [u<sub>i1 </sub>u<sub>i2 </sub>. . . u<sub>im</sub>]. Conveniently, the SVM corresponding to the best feature combination is the SVM classifier. Since commercial or open-source algorithms for SVM designs are well-known to those skilled in the art, a detailed discussion is not necessary herein.
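The exhaustive search over feature subsets can be sketched as follows; here `error_fn` is a stand-in for training an SVM on a candidate subset and measuring its classification error:

```python
from itertools import combinations

def best_feature_subset(n_features, error_fn):
    """Exhaustively evaluate every non-empty combination of n_features
    extracted features and return (subset, error) for the combination with
    the smallest classification error.

    error_fn -- callable mapping a tuple of feature indices to an error value;
                a placeholder for the SVM design-and-evaluate step.
    """
    best, best_err = None, float("inf")
    for m in range(1, n_features + 1):
        for subset in combinations(range(n_features), m):
            err = error_fn(subset)
            if err < best_err:
                best, best_err = subset, err
    return best, best_err
```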
It is noted that although SVM is used as the classification technique in this embodiment for classifying passing maneuvers, the present invention can easily employ other techniques, such as fuzzy logic, clustering or simple thresholdbased logic. Similarly, other feature extraction and feature selection techniques can be easily employed instead of the LDA and exhaustive search.
Reliable indicators of passing maneuvers include a relatively large vehicle yaw rate and/or a relatively large steering angle. Although a relatively large yaw rate (or steering angle) can also be associated with other maneuvers, such as curve handling maneuvers, additional algorithms to distinguish them are not necessary since the characterization algorithm is also effective with those other maneuvers. In this embodiment, the yaw rate is used to describe the operation of the data selector; a steering-angle-based data selector works in a similar way. To maintain the integrity of the data associated with an identified maneuver, the system keeps recording and refreshing a certain period (for example, T=2 s) of data.
This process can be implemented using an onboard vehicle controller containing a microcomputer that takes measurements of the vehicle dynamic information and the driver's actions, such as steering angle, vehicle speed, vehicle yaw rate, vehicle lateral acceleration and any signal those skilled in the art of vehicle dynamics understand and commonly use. For those vehicles equipped with GPS, the vehicle path and heading angle can also be measured to improve the accuracy of driving skill recognition.
FIG. 31 is a block diagram of a vehicle system 900 including a vehicle stability enhancement (VSE) system 902. The VSE system 902 includes a command interpreter 904 and a feedback control processor 912. Both the command interpreter 904 and the feedback control processor 912 receive a driver workload estimate (DWE) index from a driver workload estimator 908, where the DWE index is a representation of the driving skill level based on the driving skill characterization discussed above or in the discussions to follow. As will be discussed in detail below, the command interpreter 904 receives certain driver-based signals from a driver 906 and provides a desired yaw rate signal r* and a desired sideslip velocity signal V*<sub>y</sub>. The feedback control processor 912 provides a VSE control signal that controls the desired systems in a vehicle 910, such as differential braking, active front steering, vehicle suspension, etc. The measured yaw rate signal r from a yaw rate sensor and the measured sideslip velocity signal V<sub>y </sub>from a lateral acceleration sensor are fed back to the feedback control processor 912 to provide a yaw rate error signal, being the difference between the desired yaw rate and the measured yaw rate, and a sideslip error signal, being the difference between the desired sideslip velocity and the measured sideslip velocity. The yaw rate error signal and the sideslip velocity error signal are used by the feedback control processor 912 to generate the VSE control signal.
FIG. 32 is a block diagram of the command interpreter 904. The command interpreter 904 includes a yaw rate command generator 920 that outputs the desired yaw rate signal r* based on the driver intent and a sideslip velocity command generator 922 that outputs the desired vehicle sideslip velocity signal V*<sub>y </sub>based on the driver intent. The yaw rate command generator 920 includes a steadystate yaw rate computation processor 924 and the sideslip velocity command generator 922 includes a steadystate sideslip computation processor 926 that receive a handwheel angle (HWA) signal from a handwheel angle sensor and the vehicle speed signal Vx from a vehicle speed sensor. The yaw rate computation processor 924 includes a lookup table that provides a steadystate yaw rate signal based on the handwheel angle signal and the vehicle speed signal Vx and the sideslip computation processor 926 includes a lookup table that provides a steadystate sideslip signal based on the handwheel angle signal and the vehicle speed signal Vx. Those skilled in the art will readily recognize how to generate the lookup tables for this purpose.
The steadystate yaw rate signal is processed by a damping filter 928 in the generator 920 and the steadystate sideslip signal is processed by a damping filter 930 in the generator 922, where the damping filters 928 and 930 are second order filters characterized by a damping ratio ξ and a natural frequency ω<sub>n</sub>. In the known command interpreters for vehicle stability systems, the damping ratio ξ and the natural frequency ω<sub>n </sub>are typically a function of vehicle speed. According to the invention, the damping filter 928 and the damping filter 930 receive a control command adaptation signal from a control command adaptation processor 932 that identifies the damping ratio ξ and the natural frequency ω<sub>n </sub>for a particular DWE index determined by the estimator 908. Particularly, the present invention proposes adapting the damping ratio ξ and the natural frequency ω<sub>n </sub>in the filters 928 and 930 to the workload of the driver so that the VSE system 902 can better control the vehicle 910. As will be discussed in more detail below, lookup tables can be used to identify the damping ratio ξ and the natural frequency ω<sub>n </sub>based on the DWE index and the vehicle speed signal Vx.
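A minimal sketch of a second-order damping filter parameterized by the damping ratio ξ and natural frequency ω<sub>n</sub>, using a simple Euler time step; the patent does not specify a discretization, so this is only illustrative of how the two DWE-adapted parameters shape the filtered command:

```python
class DampingFilter:
    """Second-order low-pass filter y'' + 2*xi*w_n*y' + w_n^2*y = w_n^2*u,
    a simplified stand-in for the damping filters 928 and 930."""
    def __init__(self, xi: float, w_n: float, dt: float):
        self.xi, self.w_n, self.dt = xi, w_n, dt
        self.y = 0.0   # filter output (filtered steady-state command)
        self.dy = 0.0  # output rate

    def step(self, u: float) -> float:
        """Advance one sample with input u and return the filtered output."""
        ddy = self.w_n ** 2 * (u - self.y) - 2.0 * self.xi * self.w_n * self.dy
        self.dy += ddy * self.dt
        self.y += self.dy * self.dt
        return self.y
```

A larger ω<sub>n</sub> makes the filtered command track the steady-state command faster; a smaller ξ lets it overshoot more, which is the behavior the control command adaptation processor 932 tunes per DWE index.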
The control command adaptation processor 932 also generates a desired yaw rate multiplier M_r* and a desired sideslip multiplier M_V*<sub>y</sub>. The filtered steadystate yaw rate signal from the damping filter 928 is multiplied by the yaw rate multiplier M_r* in a yaw rate command multiplier 934 to provide the desired yaw rate signal r* that has been influenced by the DWE index. Likewise, the filtered steadystate sideslip signal from the damping filter 930 is multiplied by the sideslip multiplier M_V*<sub>y </sub>in a sideslip command multiplier 936 to provide the desired sideslip velocity signal V*<sub>y </sub>that has been influenced by the DWE index.
FIG. 33 is a block diagram of the feedback control processor 912 that receives the desired yaw rate signal r* and the desired vehicle sideslip velocity signal V*<sub>y </sub>from the generators 920 and 922, respectively. The desired yaw rate signal r* and the measured yaw rate signal r are compared in a subtractor 940 to generate the yaw rate error signal Δr. The yaw rate error signal Δr and the vehicle speed signal Vx are applied to a lookup table 942 that provides a yaw rate control gain signal. The yaw rate control gain signal is multiplied by the yaw rate error signal Δr in a multiplier 944 to generate a yaw rate vehicle stability signal VSE<sub>r</sub>. Likewise, the desired sideslip signal V*<sub>y </sub>and the measured sideslip signal V<sub>y </sub>are compared in a subtractor 946 to generate the sideslip error signal ΔV<sub>y</sub>. The sideslip error signal ΔV<sub>y </sub>and the vehicle speed signal Vx are applied to a lookup table 948 that provides a sideslip control gain signal. The sideslip control gain signal and the sideslip error signal ΔV<sub>y </sub>are multiplied by a multiplier 950 to generate a sideslip vehicle stability signal VSE<sub>Vy</sub>.
In the known vehicle stability systems, the yaw rate vehicle stability signal VSE<sub>r </sub>and the sideslip vehicle stability signal VSE<sub>Vy </sub>were added to provide the VSE control component. According to the invention, the DWE index is applied to a control gain adaptation processor 952 that determines a yaw rate multiplier factor K<sub>Ar </sub>and a sideslip multiplier factor K<sub>AVy</sub>. The yaw rate stability signal VSE<sub>r </sub>and the multiplier factor K<sub>Ar </sub>are multiplied by a multiplier 954 to generate a modified yaw rate stability signal VSE<sub>rmod</sub>, and the sideslip stability signal VSE<sub>Vy </sub>and the multiplier factor K<sub>AVy </sub>are multiplied by a multiplier 956 to generate a modified sideslip stability signal VSE<sub>Vymod</sub>. The modified yaw rate stability signal VSE<sub>rmod </sub>and the modified sideslip stability signal VSE<sub>Vymod </sub>are then added by an adder 958 to provide the VSE control signal that controls the various stability enhancement components in the vehicle 910, such as differential braking and active steering, as discussed above.
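Putting the feedback path together, the following sketch combines the yaw-rate and sideslip error terms with the DWE-derived multiplier factors. The function name and the gain values are illustrative placeholders, not values from the patent.

```python
# Sketch of the feedback path (FIG. 33) with the DWE gain scaling applied;
# gains and multiplier factors here are illustrative, not calibrated values.
def vse_control(r_des, r_meas, vy_des, vy_meas, k_r, k_vy, ka_r, ka_vy):
    """Sum the DWE-scaled yaw-rate and sideslip feedback terms."""
    vse_r = k_r * (r_des - r_meas)        # yaw rate stability signal VSE_r
    vse_vy = k_vy * (vy_des - vy_meas)    # sideslip stability signal VSE_Vy
    return ka_r * vse_r + ka_vy * vse_vy  # modified signals summed (adder 958)
```

With a yaw rate error of 0.2, a sideslip error of −0.1, gains of 10 and 5, and multiplier factors 1.2 and 1.3, the control signal is 1.2·2.0 + 1.3·(−0.5) = 1.75.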
FIG. 34 is a flow chart diagram 960 showing a process for generating the desired yaw rate signal r* in the yaw rate command generator 920 and the desired vehicle sideslip velocity signal V*<sub>y </sub>in the sideslip command generator 922. The control command adaptation processor 932 reads the DWE index from the driver workload estimator 908 at box 962. The algorithm in the control command adaptation processor 932 uses the DWE index and a lookup table to provide the natural frequency ω<sub>n </sub>at box 964 and the damping ratio ξ at box 966.
FIG. 35 is a graph with vehicle speed on the horizontal axis and natural frequency ω<sub>n </sub>on the vertical axis that includes three graph lines 970, 972 and 974. The graph can be used to determine the natural frequency ω<sub>n </sub>based on vehicle speed and the DWE index, where the graph line 970 is for a low DWE index, the graph line 972 is for a medium DWE index and the graph line 974 is for a high DWE index.
FIG. 36 is a graph with vehicle speed on the horizontal axis and damping ratio ξ on the vertical axis that includes three graph lines 976, 978 and 980. The graph can be used to determine the damping ratio ξ based on vehicle speed and the DWE index, where the graph line 976 is for a low DWE index, the graph line 978 is for a medium DWE index and the graph line 980 is for a high DWE index.
The algorithm then uses a lookup table to identify the desired yaw rate multiplier M_r* and the desired sideslip multiplier M_V*<sub>y </sub>at boxes 982 and 984, respectively. Table 3 below gives representative examples of these multipliers for the three DWE indexes, where the DWE index 1 is for a low driver workload, the DWE index 2 is for an average driver workload and the DWE index 3 is for a high driver workload. The algorithm then outputs the natural frequency ω<sub>n </sub>and the damping ratio ξ to the damping filters 928 and 930 at box 982. The algorithm then outputs the desired yaw rate multiplier M_r* to the yaw rate command multiplier 934 at box 984 and the desired sideslip multiplier M_V*<sub>y </sub>to the sideslip command multiplier 936 at box 990.
<tables id="TABLEUS00003" num="00003"><table frame="none" colsep="0" rowsep="0"><tgroup align="left" colsep="0" rowsep="0" cols="5"><colspec colname="offset" colwidth="35pt" align="left"/><colspec colname="1" colwidth="28pt" align="left"/><colspec colname="2" colwidth="70pt" align="center"/><colspec colname="3" colwidth="14pt" align="center"/><colspec colname="4" colwidth="70pt" align="center"/><thead><row><entry/><entry namest="offset" nameend="4" rowsep="1">TABLE 3</entry></row><row><entry/><entry namest="offset" nameend="4" align="center" rowsep="1"/></row></thead><tbody valign="top"><row><entry/><entry>M_r*</entry><entry>1</entry><entry>0.9</entry><entry>0.8</entry></row><row><entry/><entry namest="offset" nameend="4" align="center" rowsep="1"/></row><row><entry/><entry>DWE</entry><entry>1</entry><entry>2</entry><entry>3</entry></row><row><entry/><entry namest="offset" nameend="4" align="center" rowsep="1"/></row><row><entry/><entry>M_V<sub>y</sub>*</entry><entry>1</entry><entry>0.8</entry><entry>0.6</entry></row><row><entry/><entry namest="offset" nameend="4" align="center" rowsep="1"/></row><row><entry/><entry>DWE</entry><entry>1</entry><entry>2</entry><entry>3</entry></row><row><entry/><entry namest="offset" nameend="4" align="center" rowsep="1"/></row></tbody></tgroup></table></tables>
FIG. 37 is a flow chart diagram 1000 showing a process for providing the yaw rate feedback multiplier K<sub>Ar </sub>and the lateral dynamic feedback multiplier K<sub>AVy </sub>from the control gain adaptation processor 952. The control gain adaptation algorithm reads the DWE index from the estimator processor 908 at box 1002. The algorithm then determines the vehicle understeer/oversteer coefficient at box 1004. The algorithm then determines whether the vehicle is in an understeer condition at decision diamond 1006, and if so, sets the yaw rate feedback multiplier K<sub>Ar </sub>to 1 at box 1008. If there is no understeer condition, then the algorithm goes to a lookup table to provide the yaw rate feedback multiplier K<sub>Ar </sub>at box 1010 based on the DWE index. Table 4 below gives representative values of the multiplier K<sub>Ar </sub>for the three DWE indexes referred to above. The algorithm then goes to a lookup table to determine the lateral dynamics feedback multiplier K<sub>AVy </sub>at box 1012 based on the DWE index, which can also be obtained from Table 4. The algorithm then outputs the multipliers K<sub>Ar </sub>and K<sub>AVy </sub>to the multipliers 954 and 956, respectively, at box 1014.
<tables id="TABLEUS00004" num="00004"><table frame="none" colsep="0" rowsep="0"><tgroup align="left" colsep="0" rowsep="0" cols="5"><colspec colname="offset" colwidth="35pt" align="left"/><colspec colname="1" colwidth="28pt" align="left"/><colspec colname="2" colwidth="70pt" align="center"/><colspec colname="3" colwidth="14pt" align="center"/><colspec colname="4" colwidth="70pt" align="center"/><thead><row><entry/><entry namest="offset" nameend="4" rowsep="1">TABLE 4</entry></row><row><entry/><entry namest="offset" nameend="4" align="center" rowsep="1"/></row></thead><tbody valign="top"><row><entry/><entry>K<sub>Ar</sub></entry><entry>1</entry><entry>1.2</entry><entry>1.5</entry></row><row><entry/><entry namest="offset" nameend="4" align="center" rowsep="1"/></row><row><entry/><entry>DWE</entry><entry>1</entry><entry>2</entry><entry>3</entry></row><row><entry/><entry namest="offset" nameend="4" align="center" rowsep="1"/></row><row><entry/><entry>K<sub>AV</sub><sub><sub2>y</sub2></sub></entry><entry>1</entry><entry>1.3</entry><entry>1.6</entry></row><row><entry/><entry namest="offset" nameend="4" align="center" rowsep="1"/></row><row><entry/><entry>DWE</entry><entry>1</entry><entry>2</entry><entry>3</entry></row><row><entry/><entry namest="offset" nameend="4" align="center" rowsep="1"/></row></tbody></tgroup></table></tables>
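The gain adaptation flow of FIG. 37 reduces to a table lookup with an understeer override. The following minimal sketch uses the representative Table 4 values; the function name and the Boolean understeer input are assumptions.

```python
# Sketch of the control gain adaptation of FIG. 37 using Table 4 values;
# the function signature is an assumption for illustration.
K_AR_TABLE = {1: 1.0, 2: 1.2, 3: 1.5}    # yaw rate feedback multiplier K_Ar
K_AVY_TABLE = {1: 1.0, 2: 1.3, 3: 1.6}   # lateral dynamics multiplier K_AVy

def gain_adaptation(dwe_index, understeer):
    """Return (K_Ar, K_AVy); K_Ar is forced to 1 in an understeer condition."""
    k_ar = 1.0 if understeer else K_AR_TABLE[dwe_index]
    k_avy = K_AVY_TABLE[dwe_index]
    return k_ar, k_avy
```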
According to another embodiment, when the vehicle is under a left or right turn maneuver, the driving skill can be characterized from four aspects, namely, vehicle yaw and lateral motion during a turn, vehicle speed control coordination in and out of the turn, driver's steering characteristics during the turn, and characteristics of turning trajectories.
FIG. 38 is a flow chart diagram 180 showing a process performed by the maneuver identification processor algorithm to identify a left/right-turn maneuver. In this non-limiting example, left/right turns are regarded as a special type of steering-engaged maneuver where a left/right turn is accompanied by a relatively large maximum yaw rate or steering angle and an approximately 90° change in vehicle heading direction. To keep the integrity of the data associated with the maneuver, the system keeps recording and refreshing a certain period of data, for example T=2 s.
In FIG. 38, the maneuver identifier algorithm begins with reading the filtered vehicle speed signal v and the filtered yaw rate signal ω from the signal processor 44 at block 182. The algorithm then proceeds according to its operation states denoted by the two Boolean variables Start_flag and End_flag, where Start_flag is initialized to zero and End_flag is initialized to one. If Start_flag is zero, then the vehicle 10 is not performing a steering-engaged maneuver. The algorithm determines whether Start_flag is zero at block 184 and, if so, determines whether ω(t)≧ω<sub>med </sub>at decision diamond 186, where ω<sub>med </sub>is 2° per second in one non-limiting embodiment. If this condition is met, then the vehicle 10 is likely entering a curve or starting a turn, so Start_flag is set to one and End_flag is set to zero at box 188. The algorithm then sets timer t<sub>start</sub>=t−T, and computes the heading angle Φ=ω(t)×Δt at box 190, where Δt is the sampling time.
If Start_flag is not zero at the block 184, meaning that the vehicle 10 is in a steering-engaged maneuver, the algorithm then determines whether the maneuver has been completed. Upon completion of the steering-engaged maneuver, the algorithm determines whether the steering-engaged maneuver was a left/right turn or a curve-handling maneuver at block 192 by determining whether max(ω(t−T:t))≦ω<sub>small</sub>, where ω<sub>small </sub>is 1° per second in one non-limiting embodiment. If this condition has been met, the steering-engaged maneuver has been completed, so the algorithm sets Start_flag to zero, End_flag to one and time t<sub>end</sub>=t−T at box 194.
The algorithm then determines whether max(ω(t<sub>start</sub>:t<sub>end</sub>))≧ω<sub>large </sub>at block 196 and, if not, sets the identifier value M<sub>id </sub>to zero at box 198 because the yaw rate is too small, indicating either that the curve is too mild or that the vehicle 10 is turning very slowly. Thus, the corresponding data may not reveal much about driving skill, so the data is discarded. In one non-limiting embodiment, ω<sub>large </sub>is 7° per second. If the condition of the block 196 is met, meaning that the curve is significant enough, the algorithm determines whether 75°≦Φ≦105° and determines whether time t<sub>end</sub>−t<sub>start</sub><t<sub>th </sub>at the decision diamond 200. In one non-limiting embodiment, the time threshold t<sub>th </sub>is 15 seconds. If both of these conditions are met, then the algorithm determines that a left/right turn has been made and sets the maneuver value M<sub>id </sub>to 2 at box 202.
If either of these conditions has not been met at the decision diamond 200, then the algorithm determines that the maneuver is a curve-handling maneuver and not a left/right-turn maneuver, and thus sets the maneuver value M<sub>id </sub>to 1 at box 204 indicating the curve-handling maneuver.
If the condition of block 192 has not been met, the vehicle 10 is still in the middle of a relatively large yaw motion or turn, and thus, the algorithm updates the heading angle at box 206 as Φ=Φ+ω(t)×Δt. As the maneuver identification processor 46 determines the beginning and end of the maneuver, the data selection processor 48 stores the corresponding data segment based on the variables Start_flag, End_flag, t<sub>start </sub>and t<sub>end</sub>.
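The identification logic of FIG. 38 can be summarized as a small state machine. The sketch below uses the non-limiting thresholds from the text (ω<sub>med</sub>=2°/s, ω<sub>small</sub>=1°/s, ω<sub>large</sub>=7°/s, t<sub>th</sub>=15 s), but the T=2 s buffer handling is simplified away and the class and method names are assumptions.

```python
# State-machine sketch of the FIG. 38 left/right-turn identifier. Thresholds
# follow the non-limiting values in the text; the T = 2 s buffer refresh is
# omitted and the class/method names are assumptions.
class TurnManeuverIdentifier:
    def __init__(self, dt, w_med=2.0, w_small=1.0, w_large=7.0, t_th=15.0):
        self.dt = dt                      # sampling time, s
        self.w_med = w_med                # deg/s, start-of-turn threshold
        self.w_small = w_small            # deg/s, end-of-maneuver threshold
        self.w_large = w_large            # deg/s, significant-turn threshold
        self.t_th = t_th                  # s, max duration of a left/right turn
        self.start_flag = False
        self.heading = 0.0
        self.t_start = 0.0
        self.max_w = 0.0

    def step(self, t, w):
        """Feed one yaw-rate sample (deg/s); return M_id when a maneuver ends."""
        if not self.start_flag:
            if abs(w) >= self.w_med:      # likely entering a curve or turn
                self.start_flag = True
                self.t_start = t
                self.heading = abs(w) * self.dt
                self.max_w = abs(w)
            return None
        if abs(w) <= self.w_small:        # steering-engaged maneuver completed
            self.start_flag = False
            if self.max_w < self.w_large:
                return 0                  # too mild or too slow: discard
            if 75.0 <= self.heading <= 105.0 and t - self.t_start < self.t_th:
                return 2                  # left/right turn
            return 1                      # curve-handling maneuver
        self.heading += abs(w) * self.dt  # still turning: update heading
        self.max_w = max(self.max_w, abs(w))
        return None
```

Feeding a 10°/s yaw rate for 9 seconds (a roughly 90° heading change) followed by straight driving yields M<sub>id</sub>=2, while a mild 3°/s curve is discarded with M<sub>id</sub>=0.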
The skill classification consists of two processing steps, namely, feature processing, which derives discriminant features based on the collected data, and classification, which determines the driving skill based on the discriminants. The first step, feature processing, reduces the dimension of the data so as to keep the classifier efficient and the computation economical. Feature processing is also critical because the effectiveness of the classification depends heavily on the selection of the right discriminants. These discriminants are then used as the input to the classifier. Various classification techniques, such as fuzzy logic, neural networks, self-organizing maps, and simple threshold-based logic can be used for the skill classification. The discriminants are chosen based on engineering insights and decision-tree-based classifiers are designed for the classification.
In this embodiment for classifying a left/right-turn maneuver, the skill characterization processor 52 receives the maneuver value M<sub>id </sub>of two from the maneuver identification processor 46 and selects the corresponding classification process to process this information. As above, the skill characterization processor 52 includes two processing steps. The left/right-turn maneuver involves both lateral motion and longitudinal motion. The lateral motion is generally represented by the steering angle, the yaw rate and the lateral acceleration. Typically, the higher the driver's skill, the larger these three signals will be. The longitudinal motion is usually associated with the throttle and braking inputs and the longitudinal acceleration. Similarly, the higher the driver's skill, the larger these three signals can be. Therefore, all six signals can be used for skill classification. Accordingly, the following original features/discriminants can be chosen for classifying a left/right-turn maneuver:
1. The maximum lateral acceleration α<sub>y max</sub>=max(α<sub>y</sub>(t<sub>start</sub>:t<sub>end</sub>));
2. The maximum yaw rate ω<sub>max</sub>=max(ω(t<sub>start</sub>:t<sub>end</sub>));
3. The maximum longitudinal acceleration α<sub>x max</sub>=max(α<sub>x</sub>(t<sub>start</sub>:t<sub>end</sub>));
4. The maximum throttle opening Throttle<sub>max</sub>=max(Throttle(t<sub>start</sub>:t<sub>end</sub>)); and
5. The speed at the end of the turn v<sub>x</sub>(t<sub>end</sub>).
If the vehicle 10 starts turning without stopping fully (min(v<sub>x</sub>(t<sub>start</sub>:t<sub>end</sub>))≧2 m/s), the maximum braking force/position Braking<sub>max</sub>=max(Braking(t<sub>start</sub>:t<sub>end</sub>)) and the minimum speed min(v<sub>x</sub>(t<sub>start</sub>:t<sub>end</sub>)) during the turn are included as the original features/discriminants.
For simplicity, the feature extraction and feature selection processes can be removed and the original features can be used directly as the final features/discriminants. These discriminants can be input to a decision tree for skill classification by the processor 52. Decision trees are classifiers that partition the feature data on one feature at a time. A decision tree comprises many nodes connected by branches, where nodes at the end of branches are called leaf nodes. Each node with branches contains a partition rule based on one discriminant and each leaf represents the subregion corresponding to one class. The feature data representing the left/right turns used for classification is labeled according to the leaf it reaches through the decision tree. Therefore, decision trees can be seen as a hierarchical way to partition the feature data.
FIG. 39 shows a classification decision tree 210 including nodes 212. A root node 214 of the tree has two branches, one for turns from a stop and the other for turns without a stop. For turns from a stop, the subsequent nodes employ the following partition rules α<sub>ymax</sub><α<sub>ysmall1</sub>, α<sub>ymax</sub>≧α<sub>ylarge1</sub>, Throttle<sub>max</sub>≧Throttle<sub>large1 </sub>and α<sub>ymax</sub>≧α<sub>ylarge2</sub>, and for turns without a full stop, the partition rules are α<sub>ymax</sub><α<sub>ysmall2</sub>, α<sub>ymax</sub>≧α<sub>ylarge2</sub>, Throttle<sub>max</sub>≧Throttle<sub>large2 </sub>and Braking<sub>max</sub>≧Braking<sub>large</sub>. The leaf nodes 216 at the end of the branches 218 represent five driving classes labeled from 1 to 5 in the order of increasing driving skill. Note that all of the discriminants mentioned in the feature extraction are used in the exemplary decision tree 210. Further, the decision tree can be expanded to include more discriminants.
The thresholds in the partition rules are predetermined based on vehicle test data with a number of drivers driving under various traffic and road conditions. The design and tuning of decision-tree-based classifiers are well known to those skilled in the art and further details need not be provided for a proper understanding. It is noted that although the decision tree is used as the classification technique for classifying a left/right-turn maneuver, the present invention can easily employ other techniques, such as fuzzy logic, clustering and threshold-based logic to provide the classification.
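As one concrete illustration, the sketch below arranges the named partition rules for turns from a stop into threshold tests. Both the exact tree shape and the threshold values are assumptions, since, as noted above, the real values come from vehicle test data.

```python
# Hypothetical arrangement of the FIG. 39 partition rules for turns from a
# stop; threshold names follow the text, but the values and the exact tree
# shape are assumptions (real values come from vehicle test data).
AY_SMALL1 = 2.0         # m/s^2, assumed
AY_LARGE1 = 4.0         # m/s^2, assumed
AY_LARGE2 = 5.5         # m/s^2, assumed
THROTTLE_LARGE1 = 60.0  # percent, assumed

def classify_turn_from_stop(ay_max, throttle_max):
    """Return a skill class 1..5, increasing with driving skill."""
    if ay_max < AY_SMALL1:
        return 1                      # gentle turn: lowest-skill leaf
    if ay_max < AY_LARGE1:
        return 2
    if throttle_max < THROTTLE_LARGE1:
        return 3
    return 5 if ay_max >= AY_LARGE2 else 4
```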
As discussed above, the maneuver identification processor 46 recognizes certain maneuvers carried out by the vehicle driver. In one embodiment, the skill classification performed in the skill characterization processor 52 is based on a vehicle lane-change maneuver identified by the processor 46. Lane-change maneuvers can be directly detected or identified if a vehicle's in-lane position is available. The in-lane position can be derived by processing information from the forward-looking camera 20, or a DGPS with sub-meter level accuracy together with the EDMAP 28 that has lane information. Detection of lane changes based on vehicle in-lane position is well known to those skilled in the art, and therefore need not be discussed in significant detail herein. Because forward-looking cameras are usually available in luxury vehicles and mid-range to high-range DGPS are currently rare in production vehicles, the present invention includes a technique to detect lane changes based on common in-vehicle sensors and GPS. Though the error in a GPS position measurement is relatively large, such as 5-8 meters, its heading angle measurement is much more accurate, and can be used for the detection of lane changes.
In a typical lane-change maneuver, a driver turns the steering wheel to one direction, then turns towards the other direction, and then turns back to neutral as he/she completes the lane change. Since the vehicle yaw rate has an approximately linear relationship with the steering angle in the linear region, it exhibits a similar pattern during a lane change. Mathematically, the vehicle heading direction is the integration of the vehicle yaw rate, so its pattern is a little different. During the first half of the lane change, when the steering wheel is turned to one direction, the heading angle increases in the same direction. During the second half of the lane-change maneuver, the steering wheel is turned to the other direction and the heading angle decreases back to approximately its initial position.
Theoretically, lane-change maneuvers can be detected based on vehicle yaw rate or steering angle because the heading angle can be computed from the vehicle yaw rate or steering angle. However, common in-vehicle steering angle sensors or yaw rate sensors usually have a sensor bias and noise that limit the accuracy of the lane-change detection. Therefore, the vehicle heading angle is desirably used together with the steering angle or yaw rate. It can be recognized that a lane change is a special type of steering-engaged maneuver. To keep the integrity of the data associated with an identified maneuver, the system keeps recording and refreshing a certain period of data, such as T=2 s.
FIG. 40 is a flow chart diagram 90 showing an operation of the maneuver identification processor 46 for detecting lane-change maneuvers, according to an embodiment of the present invention. At a start block 92, the maneuver identifying algorithm begins by reading the filtered vehicle speed signal v, the filtered vehicle yaw rate signal ω and the filtered vehicle heading angle Φ from the signal processor 44. The algorithm then proceeds according to its operation states denoted by two Boolean variables Start_flag and End_flag, where Start_flag is initialized to zero and End_flag is initialized to one. The algorithm then determines whether Start_flag is zero at block 94, and if so, the vehicle 10 is not in a steering-engaged maneuver. The algorithm then determines if any steering activities have been initiated based on certain conditions at block 96, particularly:
<FORM>max(ω(t−T:t))≧ω<sub>small </sub>or Φ(t−T)≧Φ<sub>small </sub> (36)</FORM>
If the conditions of the block 96 are met, the algorithm sets Start_flag to one and End_flag to zero at box 98. The algorithm then sets a starting time t<sub>start </sub>of the maneuver, and defines the initial heading angle Φ<sub>ini </sub>and an initial lateral position y at box 100 as:
<FORM>Φ<sub>ini</sub>=Φ(t−T) (37)</FORM>
<FORM>y=∫<sub>t−T</sub><sup>t</sup>v<sub>x</sub>(τ)*Sin(Φ(τ))dτ (38)</FORM>
If the conditions of the block 96 are not met, then the vehicle 10 is not involved in a steeringengaged maneuver and Start_flag remains zero, where the process ends at block 102.
The algorithm then returns to the start block 92. If Start_flag is one at the block 94, as set at the block 98, the vehicle 10 is now in a steering-engaged maneuver. If the vehicle 10 is in a steering-engaged maneuver, i.e., Start_flag=1, the algorithm then determines whether the maneuver has been determined to be a curve-handling maneuver. To do this, the algorithm determines whether the maneuver identifier value M<sub>id </sub>is one at block 104. If the value M<sub>id </sub>is not one at the block 104, then the maneuver has not yet been determined to be a curve-handling maneuver. The algorithm then determines if the maneuver is a curve-handling maneuver at block 106 by examining whether:
<FORM>ω(t)≧ω<sub>med</sub>, y>y<sub>large</sub>, Φ(t)−Φ<sub>ini</sub>≧Φ<sub>large </sub> (39)</FORM>
In one non-limiting embodiment, ω<sub>med </sub>is 15° per second, Φ<sub>large </sub>is 45° and y<sub>large </sub>is 10 m.
If all of the conditions at block 106 are met, then the maneuver is a curve-handling maneuver and not a lane-change maneuver. The algorithm then sets the maneuver identifier value M<sub>id </sub>equal to one at block 108 to indicate a curve-handling maneuver.
If not all of the conditions are met at the block 106, then the algorithm updates the vehicle lateral position y at block 110 as:
<FORM>y=y+v<sub>x</sub>(t)*sin(Φ(t))*Δt (40)</FORM>
Where Δt is the sampling time.
The algorithm then determines whether the maneuver is complete at block 112 by:
<FORM>|Φ(t−T<sub>2</sub>:t)−Φ<sub>ini</sub>|<Φ<sub>small </sub> (41)</FORM>
Where if T<sub>2</sub>≦T the maneuver is regarded as being complete.
If the condition of block 112 is satisfied, then the algorithm determines whether the following condition is met at block 114:
<FORM>||y|−4|<y<sub>small </sub> (42)</FORM>
Where y<sub>small </sub>is 4 m in one non-limiting embodiment to allow for estimation error, and t−t<sub>start</sub>>t<sub>th </sub>must also hold. If the condition of the block 114 is met, the maneuver is identified as a lane-change maneuver, where the value M<sub>id </sub>is set to two and the time is set to t<sub>end </sub>at box 116. Otherwise, the maneuver is discarded as a non-characteristic maneuver, and the value M<sub>id </sub>is set to zero at box 118. Start_flag is then set to zero and End_flag is set to one at box 120.
If the maneuver identifier value M<sub>id </sub>is one at the block 104, the maneuver has been identified as a curve-handling maneuver and not a lane-change maneuver. The algorithm then determines at box 122 whether:
<FORM>max(ω(t−T:t))≦ω<sub>small </sub> (43)</FORM>
If this condition has been met, then the curve-handling maneuver has been completed, and the time is set to t<sub>end </sub>at box 124, Start_flag is set to zero and End_flag is set to one at the box 120. The process then returns to the start block 92.
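Two of the computations in this flow, the lateral position integral of equations (38) and (40) and the displacement test in the spirit of equation (42), can be sketched as follows. The 4 m nominal lane width and the function names are assumptions.

```python
import math

def lateral_displacement(vx, heading_deg, dt):
    """Equations (38)/(40): y = integral of v_x(t) * sin(heading) dt,
    accumulated sample by sample at sampling time dt."""
    return sum(v * math.sin(math.radians(p)) * dt
               for v, p in zip(vx, heading_deg))

def is_lane_change(y, y_small=4.0, lane_width=4.0):
    """In the spirit of equation (42): the net lateral displacement should be
    about one lane width, within the tolerance y_small (4 m in the text).
    The 4 m nominal lane width is an assumption."""
    return abs(abs(y) - lane_width) < y_small
```

For example, 5 seconds at 20 m/s with a constant 2° heading offset accumulates about 3.5 m of lateral displacement, which passes the lane-width test.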
It is noted that the maneuver identifier processor 46 may not detect some lane changes if the magnitude of the corresponding steering angle/yaw rate or heading angle is small, such as for some lane changes on highways. The missed detection of these types of lane changes will not degrade the lane-change based skill characterization since they resemble straight-line driving.
As discussed herein, the present invention provides a technique utilizing sensor measurements to characterize a driver's driving skill. Lane-change maneuvers involve both vehicle lateral motion and longitudinal motion. From the lateral motion point of view, the steering angle, yaw rate, lateral acceleration and lateral jerk can all reflect a driver's driving skill. The values of those signals are likely to be larger for a highly skilled driver than for a less skilled driver. Similarly, from the perspective of longitudinal motion, the distance it takes to complete a lane change, the speed variation, the deceleration and acceleration, the distance from the vehicle to its preceding vehicle, and the distance from the vehicle to its following vehicle after a lane change also reflect the driver's driving skill. These distances are likely to be smaller for a high-skill driver than for a low-skill driver. Consequently, these sensor measurements can be used to classify driving skill. However, those signals are not suitable to be used directly for classification, for the following reasons. First, a typical lane change usually lasts more than five seconds, so the collected data samples usually amount to a considerable size. Data reduction is necessary in order to keep the classification efficient and economical. Second, the complete time trace of the signals is usually not effective for classification: a large part of it does not represent the underlying patterns and is simply noise, which degrades the classification performance. In fact, a critical design issue in classification problems is to derive/extract/select discriminant features, referred to as discriminants, which best represent individual classes. As a result, the skill characterization processor 52 includes two major parts, namely a feature processor and a skill classifier, as discussed above.
The feature processor derives original features based on the collected data, extracts features from the original features, and then selects the final features from the extracted features. The main objective of deriving original features is to reduce the dimension of the data input to the classifier and to derive a concise representation of the pattern for classification. With these original features, various feature extraction and feature selection techniques can be used so that the resulting features can best separate patterns of different classes. Various techniques can be used for feature extraction/selection and are well known to those skilled in the art. However, the derivation of original features typically relies on domain knowledge. The present invention derives the original features based on engineering insights. However, the discussion below of deriving the original features, or original discriminants, should not limit the invention as described herein.
The following original features/discriminants for classifying a lane-change maneuver are chosen based on engineering insights and can be, for example:
 1. The maximum value of the yaw rate max(ω(t<sub>start</sub>:t<sub>end</sub>));
 2. The maximum value of the lateral acceleration max(α<sub>y</sub>(t<sub>start</sub>:t<sub>end</sub>));
 3. The maximum value of the lateral jerk max({dot over (a)}<sub>y</sub>(t<sub>start</sub>:t<sub>end</sub>));
 4. The distance for the lane change to be completed ∫<sub>t</sub><sub><sub2>start</sub2></sub><sup>t</sup><sup><sub2>end</sub2></sup>v<sub>x</sub>(t)dt;
 5. The average speed mean(v<sub>x</sub>(t<sub>start</sub>:t<sub>end</sub>));
 6. The maximum speed variation max(v<sub>x</sub>(t<sub>start</sub>:t<sub>end</sub>))−min(v<sub>x</sub>(t<sub>start</sub>:t<sub>end</sub>));
 7. The maximum braking pedal force/position (or the maximum deceleration);
 8. The maximum throttle percentage (or the maximum acceleration);
 9. The minimum distance (or headway time) to its preceding vehicle (e.g., from a forward-looking radar/lidar or camera, or from GPS with V2V communications);
 10. The maximum range rate to its preceding vehicle if available (e.g., from a forward-looking radar/lidar or camera, or from GPS together with V2V communications); and
 11. The minimum distance (or distance over speed) to the following vehicle in the lane the vehicle changes to, if it is available (e.g., from a forward-looking radar/lidar or camera, or from GPS with V2V communications).
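Several of the discriminants above can be computed directly from the recorded time series. The following minimal sketch covers items 2 through 6; the function name and the dictionary layout are assumptions for illustration.

```python
import numpy as np

def lane_change_features(vx, ay, dt):
    """Compute discriminants 2-6 above from time series sampled at dt;
    a sketch only -- the function name and return layout are assumptions."""
    vx = np.asarray(vx, dtype=float)
    ay = np.asarray(ay, dtype=float)
    jerk = np.gradient(ay, dt)                  # lateral jerk from lateral accel
    return {
        "ay_max": float(np.max(np.abs(ay))),            # item 2
        "jerk_max": float(np.max(np.abs(jerk))),        # item 3
        "distance": float(np.sum(vx) * dt),             # item 4, Riemann sum
        "v_mean": float(np.mean(vx)),                   # item 5
        "v_variation": float(np.max(vx) - np.min(vx)),  # item 6
    }
```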
Variations of the discriminant features listed above may be known to those skilled in the art. Because the system 40 only has access to information related to the discriminants 1-10 identified above, the corresponding classifier uses only discriminants 1-10. Other embodiments, such as the systems 60 and 80, can use all of the discriminants.
Feature extraction and feature selection techniques can then be applied to the original features/discriminants to derive the final features/discriminants, which will be discussed in further detail below. One vector X<sub>i</sub>=[x<sub>i1 </sub>x<sub>i2 </sub>. . . x<sub>iN</sub>] of the final discriminants can be formed corresponding to each lane-change maneuver, where i represents the ith lane-change maneuver and N is the dimension of the final discriminants. This discriminant vector will be the input to the classifier. As mentioned before, various techniques can be used to design the classifier, for example, fuzzy C-means (FCM) clustering. In FCM-based classification, each class consists of a cluster. The basic idea of the FCM-based classification is to determine the class of a pattern, which is represented by a discriminant vector, based on its distance to each predetermined cluster center. Therefore, the classifier first calculates the distances:
<FORM>D<sub>ik</sub>=∥X<sub>i</sub>−V<sub>k</sub>∥<sup>2</sup><sub>A</sub>=(X<sub>i</sub>−V<sub>k</sub>)A(X<sub>i</sub>−V<sub>k</sub>)<sup>T</sup>, 1≦k≦C (44)</FORM>
Where V<sub>k </sub>is the center vector of cluster k, A is an N×N matrix that accounts for the shape of the predetermined clusters, and C is the total number of predetermined clusters, such as C=3˜5, representing the different levels of driving skill. The cluster centers V<sub>k </sub>and the matrix A are determined during the design phase.
Based on the distances, the algorithm further determines the membership degree of the current discriminant vector as:
<maths id="MATHUS00013" num="00013"><math overflow="scroll"><mtable><mtr><mtd><mrow><mrow><msub><mi>μ</mi><mi>ik</mi></msub><mo>=</mo><mfrac><mn>1</mn><mrow><munderover><mo>∑</mo><mrow><mi>j</mi><mo>=</mo><mn>1</mn></mrow><mi>C</mi></munderover><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle><mo></mo><msup><mrow><mo>(</mo><mrow><msub><mi>D</mi><mi>ik</mi></msub><mo>/</mo><msub><mi>D</mi><mi>ij</mi></msub></mrow><mo>)</mo></mrow><mrow><mn>2</mn><mo>/</mo><mrow><mo>(</mo><mrow><mi>m</mi><mo>−</mo><mn>1</mn></mrow><mo>)</mo></mrow></mrow></msup></mrow></mfrac></mrow><mo>,</mo><mrow><mn>1</mn><mo>≤</mo><mi>k</mi><mo>≤</mo><mi>C</mi></mrow></mrow></mtd><mtd><mrow><mo>(</mo><mn>45</mn><mo>)</mo></mrow></mtd></mtr></mtable></math></maths>
Where m is a weighting index that is two in one nonlimiting embodiment.
The corresponding lanechange maneuvers are classified as class j if:
<FORM>μ<sub>ij</sub>=max(μ<sub>ik</sub>)(1≦k≦C) (46)</FORM>
Alternatively, the classifier can simply use a hard partition and classify the corresponding lanechange maneuver as the class that yields the smallest distance, such as:
<maths id="MATHUS00014" num="00014"><math overflow="scroll"><mtable><mtr><mtd><mrow><mo>{</mo><mtable><mtr><mtd><mrow><mrow><msub><mi>μ</mi><mi>ij</mi></msub><mo>=</mo><mn>1</mn></mrow><mo>,</mo></mrow></mtd><mtd><mi>if</mi></mtd><mtd><mrow><msub><mi>D</mi><mi>ij</mi></msub><mo>=</mo><mrow><mi>min</mi><mo></mo><mrow><mo>(</mo><mrow><mrow><msub><mi>D</mi><mrow><mi>ik</mi><mo>,</mo></mrow></msub><mo></mo><mn>1</mn></mrow><mo>≤</mo><mi>k</mi><mo>≤</mo><mi>C</mi></mrow><mo>)</mo></mrow></mrow></mrow></mtd></mtr><mtr><mtd><mrow><mrow><msub><mi>μ</mi><mi>ij</mi></msub><mo>=</mo><mn>0</mn></mrow><mo>,</mo></mrow></mtd><mtd><mi>if</mi></mtd><mtd><mrow><msub><mi>D</mi><mi>ij</mi></msub><mo>></mo><mrow><mi>min</mi><mo></mo><mrow><mo>(</mo><mrow><mrow><msub><mi>D</mi><mrow><mi>ik</mi><mo>,</mo></mrow></msub><mo></mo><mn>1</mn></mrow><mo>≤</mo><mi>k</mi><mo>≤</mo><mi>C</mi></mrow><mo>)</mo></mrow></mrow></mrow></mtd></mtr></mtable></mrow></mtd><mtd><mrow><mo>(</mo><mn>47</mn><mo>)</mo></mrow></mtd></mtr></mtable></math></maths>
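Equations (44) through (46) can be combined into a short classification routine. The sketch below is illustrative only (the function name and argument layout are assumptions), with a hard fallback when the vector coincides exactly with a cluster center.

```python
import numpy as np

def fcm_classify(x, centers, A, m=2.0):
    """Distances per equation (44), membership degrees per equation (45),
    and the maximum-membership class of equation (46). Returns (class, mu)."""
    diffs = centers - x                            # one row per cluster center
    # D_ik = (X_i - V_k) A (X_i - V_k)^T for every cluster k
    d = np.einsum('ij,jk,ik->i', diffs, A, diffs)
    if np.any(d == 0.0):                           # exactly on a center
        return int(np.argmin(d)), None
    mu = 1.0 / np.sum((d[:, None] / d[None, :]) ** (2.0 / (m - 1.0)), axis=1)
    return int(np.argmax(mu)), mu                  # equations (45)-(46)
```

Note that the membership degrees across clusters sum to one, so the routine behaves as a soft classifier whose argmax reproduces the hard partition of equation (47) in the limit.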
For the skill characterization processor 52 to operate properly, the cluster centers V<sub>k </sub>and the matrix A need to be predetermined. This can be achieved during the design phase based on vehicle test data with a number of drivers driving under various traffic and road conditions. The lane changes of each participating driver can be recognized as described for the maneuver identifier processor 46 and the corresponding data can be recorded by the data selection processor 48. For each lane change, the discriminant vector X<sub>i</sub>=[x<sub>i1 </sub>x<sub>i2 </sub>. . . x<sub>iN</sub>] can be derived.
Combining all of the discriminant vectors into a discriminant matrix X gives:
<maths id="MATHUS00015" num="00015"><math overflow="scroll"><mtable><mtr><mtd><mrow><mi>X</mi><mo>=</mo><mrow><mo>[</mo><mtable><mtr><mtd><msub><mi>x</mi><mn>11</mn></msub></mtd><mtd><msub><mi>x</mi><mn>12</mn></msub></mtd><mtd><mi>…</mi></mtd><mtd><msub><mi>x</mi><mrow><mn>1</mn><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle><mo></mo><mi>N</mi></mrow></msub></mtd></mtr><mtr><mtd><msub><mi>x</mi><mn>21</mn></msub></mtd><mtd><msub><mi>x</mi><mn>22</mn></msub></mtd><mtd><mi>…</mi></mtd><mtd><msub><mi>x</mi><mrow><mn>2</mn><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle><mo></mo><mi>N</mi></mrow></msub></mtd></mtr><mtr><mtd><mi>⋮</mi></mtd><mtd><mi>⋮</mi></mtd><mtd><mi>⋰</mi></mtd><mtd><mi>⋮</mi></mtd></mtr><mtr><mtd><msub><mi>x</mi><mrow><mi>M</mi><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle><mo></mo><mn>1</mn></mrow></msub></mtd><mtd><msub><mi>x</mi><mrow><mi>M</mi><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle><mo></mo><mn>2</mn></mrow></msub></mtd><mtd><mi>…</mi></mtd><mtd><msub><mi>x</mi><mi>MN</mi></msub></mtd></mtr></mtable><mo>]</mo></mrow></mrow></mtd><mtd><mrow><mo>(</mo><mn>48</mn><mo>)</mo></mrow></mtd></mtr></mtable></math></maths>
The matrix A can be an N×N matrix that accounts for different variances in the directions of the coordinate axes of X as:
<maths id="MATHUS00016" num="00016"><math overflow="scroll"><mtable><mtr><mtd><mrow><mi>A</mi><mo>=</mo><mrow><mo>[</mo><mtable><mtr><mtd><msup><mrow><mo>(</mo><mrow><mn>1</mn><mo>/</mo><msub><mi>σ</mi><mn>1</mn></msub></mrow><mo>)</mo></mrow><mn>2</mn></msup></mtd><mtd><mn>0</mn></mtd><mtd><mi>…</mi></mtd><mtd><mn>0</mn></mtd></mtr><mtr><mtd><mn>0</mn></mtd><mtd><msup><mrow><mo>(</mo><mrow><mn>1</mn><mo>/</mo><msub><mi>σ</mi><mn>2</mn></msub></mrow><mo>)</mo></mrow><mn>2</mn></msup></mtd><mtd><mi>…</mi></mtd><mtd><mn>0</mn></mtd></mtr><mtr><mtd><mi>⋮</mi></mtd><mtd><mi>⋮</mi></mtd><mtd><mi>⋰</mi></mtd><mtd><mi>⋮</mi></mtd></mtr><mtr><mtd><mn>0</mn></mtd><mtd><mn>0</mn></mtd><mtd><mi>…</mi></mtd><mtd><msup><mrow><mo>(</mo><mrow><mn>1</mn><mo>/</mo><msub><mi>σ</mi><mi>N</mi></msub></mrow><mo>)</mo></mrow><mn>2</mn></msup></mtd></mtr></mtable><mo>]</mo></mrow></mrow></mtd><mtd><mrow><mo>(</mo><mn>49</mn><mo>)</mo></mrow></mtd></mtr></mtable></math></maths>
The cluster centers can be determined by minimizing an objective function referred to as the c-means functional:
<FORM>J(X;U,V)=Σ<sub>k=1</sub><sup>C</sup>Σ<sub>i=1</sub><sup>M</sup>(μ<sub>ik</sub>)<sup>m</sup>∥X<sub>i</sub>−V<sub>k</sub>∥<sup>2</sup><sub>A </sub> (50)</FORM>
The minimization of such a function is well known, and need not be described in further detail herein. It is noted that although fuzzy clustering is used as the classification technique in this embodiment for classifying the lane-change maneuver, the present invention can easily employ other techniques, such as fuzzy logic, neural networks, SOM, or threshold-based logic.
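By way of non-limiting illustration, the alternating minimization of the c-means functional of Equation (50) can be sketched in Python (hypothetical names; the matrix A is taken as the identity here, i.e., the discriminants are assumed pre-scaled per Equation (49)):

```python
import numpy as np

def fuzzy_c_means(X, C, m=2.0, iters=100, seed=0):
    """Alternating updates that minimize the c-means functional of Eq. (50).

    X: M-by-N matrix of discriminant vectors (one row per lane change).
    Returns the C-by-N cluster centers V and the M-by-C memberships U.
    """
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], C))
    U /= U.sum(axis=1, keepdims=True)           # memberships sum to one
    for _ in range(iters):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted-mean center update
        D = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2)
        D = np.maximum(D, 1e-12)                # avoid division by zero
        inv = D ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)  # membership update, Eq. (45)
    return V, U
```

On two well-separated one-dimensional clusters the recovered centers land near the cluster means.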
According to another embodiment, when the vehicle is in a U-turn maneuver, the driving skill can be characterized from several aspects, such as vehicle lane position information, vehicle side-slip angle information and the driver's speed control over the U-turn maneuver.
A U-turn maneuver refers to performing a 180° rotation in order to reverse the direction of travel. According to the traffic or geometric design, U-turn maneuvers can be roughly divided into three types, namely, a U-turn from a near-zero speed, a continuous U-turn at the end of straight-line driving and an interrupted U-turn at the end of straight-line driving. The first type usually happens at intersections where U-turns are allowed. The vehicle first stops at the intersection and then conducts a continuous U-turn to reverse direction. Because the vehicle starts from a near-zero speed and the U-turn is a rather tight maneuver, such a U-turn may not be effective in revealing a driver's driving skill.
The second type usually occurs when there is no traffic sign and the opposite lane is available. This type of U-turn can reveal a driver's driving skill through the driver's braking control and the vehicle deceleration right before the U-turn, and the vehicle yaw and lateral acceleration during the U-turn. To perform a U-turn of the third type, the vehicle would turn about 90° and then wait until the opposite lanes become available to continue the U-turn.
The third type of U-turn may or may not be useful in revealing the driver's driving skill depending on the associated traffic scenarios. For example, if the opposite traffic is busy, the vehicle may need to wait in line and move slowly during a large portion of the U-turn. In such situations, even a high-skill driver will be constrained to drive conservatively.
The present invention focuses mainly on the second type of U-turn, i.e., a continuous U-turn at the end of straight-line driving. However, similar methodologies can be easily applied to the other types of U-turns for the skill characterization. A U-turn maneuver can be identified based on the driver's steering activity and the corresponding change in the vehicle heading direction.
An example of the recognition of vehicle U-turn maneuvers, together with the recognition of curve-handling maneuvers, can also be provided by the flow chart diagram 180. In this example, the U-turn maneuver is regarded as a special type of left/right-turn maneuver where the U-turn is accompanied by a relatively large maximum yaw rate or steering angle and an approximately 180° change in the vehicle heading direction. To keep the integrity of the data associated with an identified maneuver, the system keeps recording and refreshing a certain period, for example, T=2 s, of data.
As with the left/right-turn maneuver discussed above, the maneuver value M<sub>id</sub>=0 represents a non-characteristic maneuver that will not be used for skill characterization, M<sub>id</sub>=1 is for a curve-handling maneuver and M<sub>id</sub>=2 is for a U-turn maneuver. Instead of the range of 75°-105° for the heading angle Φ for the left/right-turn maneuver at decision diamond 200, it is determined whether the heading angle Φ is between 165° and 195° for the U-turn maneuver.
As discussed above, the skill characterization processor 52 receives the maneuver identifier value M<sub>id </sub>from the processor 46. A U-turn maneuver involves both lateral and longitudinal motion. The lateral motion is generally represented by the steering angle, the yaw rate and the lateral acceleration. Typically, the more skillful the driver is, the larger these three signals can be. The longitudinal motion is usually associated with throttle and braking inputs and the longitudinal acceleration. Similarly, the more skillful the driver, the larger these signals typically are. Therefore, all six signals can be used for skill characterization in the processor 52.
The collected data is typically not suitable to be used directly for skill characterization because it consists of the time traces of those signals, which usually results in a fair amount of data. For example, a typical U-turn maneuver lasts more than five seconds. Therefore, with a 10 Hz sampling rate, more than 50 samples of each signal would be recorded. Thus, data reduction is necessary in order to keep the classification efficient. Also, the complete time trace of those signals is usually not effective for the characterization. In fact, a critical design issue in classification problems is to derive/extract/select discriminative features that best represent individual classes.
Thus, the skill characterization processor 52 includes a feature processor and a skill classifier. As mentioned above, the feature processor derives original features based on the collected data, extracts features from the original features and then selects the final features from the extracted features. Feature extraction tries to create new features based on transformations or combinations of the original features, and the feature selection selects the best subset of the new features derived through feature extraction. The original features are usually derived using various techniques, such as time-series analysis and frequency-domain analysis. These techniques are well-known to those skilled in the art. The present invention describes a straightforward way to derive the original discriminant features based on engineering insights.
For the six signals referred to above, the original discriminants for classifying a U-turn maneuver can be chosen as:
 1. The maximum lateral acceleration α<sub>y max</sub>=max (α<sub>y</sub>(t<sub>start</sub>:t<sub>end</sub>));
 2. The maximum yaw rate ω<sub>max</sub>=max(ω(t<sub>start</sub>:t<sub>end</sub>));
 3. The speed at the beginning of the U-turn v<sub>x</sub>(t<sub>start</sub>);
 4. The minimum speed during the U-turn v<sub>x min</sub>=min(v<sub>x</sub>(t<sub>start</sub>:t<sub>end</sub>));
 5. The speed at the end of the U-turn v<sub>x</sub>(t<sub>end</sub>);
 6. The maximum braking force/position Braking<sub>max</sub>=max(Braking(t<sub>start</sub>:t<sub>end</sub>));
 7. An array of braking indexes BI<sub>braking</sub>=[BI<sub>1 </sub>. . . BI<sub>i </sub>. . . BI<sub>N</sub>] based on the distribution of the brake pedal position/force;
 8. The maximum longitudinal acceleration α<sub>x max</sub>=max(α<sub>x</sub>(t<sub>start</sub>:t<sub>end</sub>));
 9. The maximum throttle opening Throttle<sub>max</sub>=max(Throttle(t<sub>start</sub>:t<sub>end</sub>)); and
 10. An array of throttle indexes TI<sub>throttle</sub>=[TI<sub>1 </sub>. . . TI<sub>i </sub>. . . TI<sub>N</sub>] based on the distribution of the throttle opening.
Each braking index BI<sub>i </sub>is defined as the percentage of the time when the braking pedal position/force is greater than a threshold B<sub>thi</sub>. That is, if the U-turn maneuver takes T<sub>total </sub>seconds and during that period of time the braking pedal position/force is greater than B<sub>thi </sub>for T<sub>i </sub>seconds, then the braking index BI<sub>i</sub>=T<sub>i</sub>/T<sub>total</sub>. Alternatively, the time T<sub>total </sub>can be defined as the time when the braking is greater than the braking threshold (Braking>B<sub>th</sub>), where the threshold B<sub>th </sub>is smaller than the threshold B<sub>thi</sub>. Similarly, each throttle index TI<sub>i </sub>is defined as the percentage of the time when the throttle opening α is greater than a threshold α<sub>thi</sub>. Suitable examples of the threshold α<sub>thi </sub>can be 20%, 30%, 40%, 50% and 60%, or from 10% to 90% with a 10% interval in between. In summary, the total number of discriminants for a U-turn maneuver can be n=8+2N, or more if additional discriminants, such as traffic and road indexes, are included.
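By way of non-limiting illustration, the braking-index computation just defined can be sketched in Python (hypothetical names; a uniformly sampled pedal trace and the example 20%-60% thresholds are assumed):

```python
import numpy as np

def braking_indexes(brake, dt, thresholds=(0.2, 0.3, 0.4, 0.5, 0.6)):
    """BI_i = fraction of the maneuver time the pedal exceeds B_thi.

    brake: sampled brake pedal position/force (0..1 scale),
    dt: sample period in seconds, thresholds: the B_thi values.
    """
    brake = np.asarray(brake, dtype=float)
    t_total = len(brake) * dt                       # maneuver duration T_total
    return np.array([(brake > th).sum() * dt / t_total for th in thresholds])
```

The same routine computes the throttle indexes TI<sub>i </sub>when fed the throttle-opening trace and the α<sub>thi </sub>thresholds.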
For each recognized vehicle U-turn maneuver, one set of the original features is derived. This set of original features can be represented as an original feature vector x, an n-dimension vector with each dimension representing one specific feature. This original feature vector serves as the input for further feature extraction and feature selection processing. Feature extraction tries to create new features based on transformations or combinations of the original features (discriminants), while feature selection selects the best subset of the new features derived through feature extraction.
Various feature extraction methods can be used for classifying a U-turn maneuver, such as principal component analysis (PCA), linear discriminant analysis (LDA), kernel PCA, generalized discriminant analysis (GDA), etc. In one non-limiting embodiment, LDA is used, which is a linear transformation where y=U<sup>T</sup>x, and where U is an n-by-n matrix and y is an n-by-1 vector with each row representing the value of a new feature. The matrix U is determined offline during the design phase. Note that the LDA transformation does not reduce the dimension of the features.
To further reduce the feature dimension for improved classification efficiency and effectiveness, various feature selection techniques, such as exhaustive search, branch-and-bound search, sequential forward/backward selection and sequential forward/backward floating search, can be used. The subset that yields the best performance is chosen as the final features to be used for classification. For example, the resulting subset may consist of m features corresponding to the {i<sub>1 </sub>i<sub>2 </sub>. . . i<sub>m</sub>}(1≦i<sub>1</sub>≦i<sub>2</sub>≦ . . . ≦i<sub>m</sub>≦n) rows of the feature vector y. Writing the matrix U as U=[u<sub>1 </sub>u<sub>2 </sub>. . . u<sub>n</sub>] with each vector being an n-by-1 vector, and then selecting only the vectors corresponding to the best subset, yields W=[u<sub>i1 </sub>u<sub>i2 </sub>. . . u<sub>im</sub>], an n-by-m matrix. Combining the feature extraction and feature selection, the final features corresponding to the original feature vector x can be derived as z=W<sup>T</sup>x.
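By way of non-limiting illustration, composing the feature extraction and feature selection into W and z=W<sup>T</sup>x can be sketched in Python (hypothetical names; U and the selected subset are assumed to come from the offline design, with 0-based indices):

```python
import numpy as np

def compose_W(U, subset):
    """W = [u_i1 ... u_im]: the columns of U named by the selected subset,
    an n-by-m matrix."""
    return np.asarray(U, dtype=float)[:, list(subset)]

def final_features(W, x):
    """z = W^T x: the m final discriminant features for one maneuver."""
    return W.T @ np.asarray(x, dtype=float)
```

With U equal to the identity, selecting features 1 and 3 of x simply picks out those two components.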
The skill characterization processor 52 then classifies the driver's driving skill for the U-turn maneuver based on the discriminant feature vector z. Classification techniques, such as fuzzy logic, clustering, neural networks (NN), support vector machines (SVM) and simple threshold-based logic, can be used for skill classification. In one embodiment, an SVM-based classifier is used. The standard SVM is a two-class classifier, which tries to find an optimal hyperplane, i.e., the so-called decision function, that correctly classifies training patterns as much as possible and maximizes the width of the margin between the classes. Because the skill classification involves more than two classes, a multi-class SVM can be employed to design the classifier. A K-class SVM consists of K hyperplanes: f<sub>k</sub>(z)=w<sub>k</sub>z+b<sub>k</sub>, k=1, 2, . . . , K, where w<sub>k </sub>and b<sub>k </sub>are determined during the design phase based on the test data. The class label c for any testing data is the class whose decision function yields the largest output as:
<maths id="MATHUS00017" num="00017"><math overflow="scroll"><mtable><mtr><mtd><mrow><mrow><mi>c</mi><mo>=</mo><mrow><mrow><mi>arg</mi><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle><mo></mo><mrow><munder><mi>max</mi><mi>k</mi></munder><mo></mo><mrow><msub><mi>f</mi><mi>k</mi></msub><mo></mo><mrow><mo>(</mo><mi>z</mi><mo>)</mo></mrow></mrow></mrow></mrow><mo>=</mo><mrow><mi>arg</mi><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle><mo></mo><mrow><munder><mi>max</mi><mi>k</mi></munder><mo></mo><mrow><mo>(</mo><mrow><mrow><msub><mi>w</mi><mi>k</mi></msub><mo></mo><mi>z</mi></mrow><mo>+</mo><msub><mi>b</mi><mi>k</mi></msub></mrow><mo>)</mo></mrow></mrow></mrow></mrow></mrow><mo>,</mo><mrow><mi>k</mi><mo>=</mo><mn>1</mn></mrow><mo>,</mo><mn>2</mn><mo>,</mo><mi>…</mi><mo></mo><mstyle><mspace width="0.8em" height="0.8ex"/></mstyle><mo>,</mo><mi>K</mi></mrow></mtd><mtd><mrow><mo>(</mo><mn>51</mn><mo>)</mo></mrow></mtd></mtr></mtable></math></maths>
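By way of non-limiting illustration, the decision rule of Equation (51) can be sketched in Python (hypothetical names; the w<sub>k </sub>and b<sub>k </sub>are assumed to come from the offline SVM design):

```python
import numpy as np

def svm_class(z, Wk, bk):
    """Eq. (51): label = arg max_k f_k(z) with f_k(z) = w_k . z + b_k.

    Wk: K-by-m matrix whose rows are the hyperplane normals w_k,
    bk: length-K offsets b_k. Returns the 1-based class label.
    """
    scores = np.asarray(Wk, dtype=float) @ np.asarray(z, dtype=float) \
        + np.asarray(bk, dtype=float)
    return int(np.argmax(scores)) + 1
```

For example, with two axis-aligned hyperplanes and zero offsets, the feature vector [0.2, 0.9] scores highest on the second decision function.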
The feature extraction, feature selection and the K-class SVM are designed offline based on vehicle test data. A number of drivers were asked to drive several instrumented vehicles under various traffic conditions and the sensor measurements were collected for the classification design. For every vehicle U-turn maneuver, an original feature vector x can be constructed. All of the feature vectors corresponding to vehicle U-turn maneuvers are put together to form a training matrix X=[x<sub>1 </sub>x<sub>2 </sub>. . . x<sub>L</sub>], where L is the total number of vehicle U-turn maneuvers. Each row of the matrix X represents the values of one feature variable while each column represents the feature vector of a training pattern. The training matrix X is then used for the design of the skill classification based on vehicle U-turn maneuvers.
The feature extraction is based on LDA, a supervised feature extraction technique. Its goal is to train the linear data projection Y=U<sup>T</sup>X such that the ratio of the between-class variance to the within-class variance is maximized, where X is an n-by-L matrix and U is an n-by-n matrix. Accordingly, Y=[y<sub>1 </sub>y<sub>2 </sub>. . . y<sub>L</sub>] is an n-by-L matrix, where each new feature vector y<sub>i </sub>still consists of n features. Commercial or open-source algorithms that compute the matrix U are available and well-known to those skilled in the art. The inputs to those algorithms include the training matrix X and the corresponding class labels. In one embodiment, the class labels can be 1-5, with 1 indicating a low-skill driver, 3 indicating a typical driver and 5 indicating a high-skill driver. In addition, a class label 0 can be added to represent those hard-to-decide patterns. The class labels are determined based on expert opinions by observing the test data. The outputs of the LDA algorithms include the matrix U and the new feature matrix Y.
The feature selection is conducted on the feature matrix Y. In this particular application, because the dimension of the extracted features is relatively small, an exhaustive search can be used to evaluate the classification performance of each possible combination of the extracted features. The new features still consist of n features, and there are Σ<sub>i=1</sub><sup>n</sup>C<sub>n</sub><sup>i </sup>possible combinations of the n features. The exhaustive search evaluates the classification performance of each possible combination by designing an SVM based on the combination and deriving the corresponding classification error. The combination that yields the smallest classification error is regarded as the best combination, where the corresponding features {i<sub>1 </sub>i<sub>2 </sub>. . . i<sub>m</sub>} determine the matrix [u<sub>i1 </sub>u<sub>i2 </sub>. . . u<sub>im</sub>]. Conveniently, the SVM corresponding to the best feature combination is the SVM classifier. Since commercial or open-source algorithms for SVM designs are well-known to those skilled in the art, a detailed discussion is not necessary herein.
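By way of non-limiting illustration, the exhaustive search over feature combinations can be sketched in Python (hypothetical names; `error_of` stands in for designing an SVM on a row subset of Y and returning its classification error):

```python
import numpy as np
from itertools import combinations

def exhaustive_search(Y, labels, error_of):
    """Try every non-empty subset of the n extracted features and keep the
    one whose classifier yields the smallest classification error.

    Y: n-by-L feature matrix, one row per feature, one column per pattern.
    error_of(Y_sub, labels): error of a classifier designed on the rows
    of Y selected by the subset (an SVM in this embodiment).
    """
    Y = np.asarray(Y, dtype=float)
    best_subset, best_err = None, float("inf")
    for r in range(1, Y.shape[0] + 1):
        for subset in combinations(range(Y.shape[0]), r):
            err = error_of(Y[list(subset), :], labels)
            if err < best_err:                   # strict: keep first best
                best_subset, best_err = subset, err
    return best_subset, best_err
```

The chosen subset then determines W=[u<sub>i1 </sub>. . . u<sub>im</sub>] as described above.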
It is noted that although SVM is used as the classification technique in this embodiment, the present invention can easily employ other techniques, such as fuzzy logic, clustering or simple threshold-based logic, for classifying U-turn maneuvers. Similarly, other feature extraction and feature selection techniques can be easily employed instead of the LDA and exhaustive search.
According to another embodiment, the skill characterization is based on vehicle highway on/off-ramp-handling maneuvers, which refer to the maneuvers where a vehicle is on a highway on/off ramp. In this embodiment, a method is proposed for effectively differentiating driving skill from one level to another utilizing measured vehicle data and an analysis of the time factor and steering gain factor of the driver while the vehicle is on a highway on/off ramp. Highway on/off-ramp-handling maneuvers can be identified based on steering activity, vehicle yaw motion, the change in vehicle heading direction, lateral and longitudinal accelerations, speed control coordination, and lane position characteristics.
Reliable indicators of highway on/off-ramp-handling maneuvers include a relatively large yaw rate (or steering angle), which can also be associated with other maneuvers, such as some lane changes. Additional algorithms to distinguish curve-handling maneuvers are not necessary since the characterization algorithm is also effective with those other maneuvers.
In this embodiment, the yaw rate is used to describe the operation of the data selector, and a steering-angle-based data selector would work in a similar way. To maintain the integrity of the data associated with an identified maneuver, the system keeps recording and refreshing a certain period, for example T=2 s, of data.
Typical highway on-ramps start with a short straight entry, continue to a relatively tight curve, and then end with a lane merging. Typical highway off-ramps start with a lane split as the entry portion, continue to a relatively tight curve, then a short straight road portion, and end at a traffic light or a stop sign. Although highway on/off ramps without a curve portion do exist, most maneuvers at highway on/off ramps involve both curve-handling and a relatively long period of acceleration or deceleration. Consequently, maneuvers at highway on/off ramps can be identified based on steering activities, or vehicle yaw motion, and the corresponding change in the vehicle speed.
An example of a process for identifying highway on/off-ramp maneuvers is shown by a flow chart diagram 230 in FIGS. 41A and 41B, according to one embodiment of the present invention. In this example, the entry portion of the on/off ramp is ignored. That is, on/off-ramp maneuvers start with curve handling, and vehicle yaw motion, or other steering activities, determines the start of the maneuver. The on-ramps are determined based on the speed variation after the curve portion and the off-ramps are determined based on the speed variation during and after the curve portion. To keep the integrity of the data associated with an identified maneuver, the process keeps recording and refreshing a certain period, for example T=2 s, of data. Alternatively, if the vehicle is equipped with a forward-looking camera or a DGPS with an enhanced digital map, that information can be incorporated, or used independently, to determine when the vehicle is at a highway on/off ramp. Usage of that information for the determination of highway on/off ramps is straightforward and well-known to those skilled in the art.
Returning to FIGS. 41A and 41B, the maneuver identifier processor 46 begins by reading the filtered vehicle speed signal v and the filtered vehicle yaw rate signal ω from the signal processor 44 at box 232. The maneuver identifier algorithm then proceeds using the Boolean variables Start_flag, End_flag and End_curve_flag, where Start_flag is initialized to zero, End_flag is initialized to one and End_curve_flag is initialized to one. The algorithm determines whether Start_flag is zero at decision diamond 234 to determine whether the vehicle 10 is in a highway on/off-ramp maneuver. If Start_flag is zero at the decision diamond 234, then the algorithm determines whether the condition ω(t)≧ω<sub>med </sub>has been met at decision diamond 236, where ω<sub>med </sub>can be 2° per second in one non-limiting embodiment, to determine whether the vehicle 10 is likely entering a curve or starting to turn. If the condition of the decision diamond 236 is not met, then the algorithm returns at block 238 to collecting the data. If the condition of the decision diamond 236 is met, meaning that the vehicle is entering a curve or starting a turn, the algorithm sets Start_flag to one, End_flag to zero, End_curve_flag to zero, timer t<sub>start</sub>=t−T, and the maneuver identifier value M<sub>id </sub>to zero at block 240. The algorithm then returns at the block 238 to collecting data.
If Start_flag is not zero at the decision diamond 234, meaning that the vehicle 10 is in a potential highway on/off-ramp maneuver, then the algorithm determines whether End_curve_flag is zero at decision diamond 242. If End_curve_flag is zero at the decision diamond 242, meaning that the vehicle 10 is in the curve portion of the potential on/off-ramp maneuver, the algorithm then determines whether the curve portion of the maneuver has been completed. Particularly, the algorithm determines whether the condition max(ω(t−T:t))≦ω<sub>small </sub>has been met at decision diamond 244, and if so, meaning that the curve portion has been completed, sets End_curve_flag to one and time t<sub>end_curve</sub>=t−T at block 246. In one non-limiting embodiment, ω<sub>small </sub>is 1° per second.
The algorithm also determines vehicle speed information, particularly whether the condition v<sub>x</sub>(t)−v<sub>x</sub>(t<sub>start</sub>)≦−v<sub>max </sub>is met at decision diamond 248, and if so, meaning that the curve portion is possibly part of an off-ramp maneuver, sets the maneuver identifier value M<sub>id </sub>to 2 at box 250. If the conditions of the decision diamonds 244 and 248 are not met, then the algorithm returns to collecting data at block 238 because the vehicle 10 is still in the middle of a relatively large yaw motion, and thus the processor 46 waits for the next data reading. If the condition of the decision diamond 248 is not met, the curve-handling maneuver might be part of an on-ramp maneuver, where the maneuver identifier value M<sub>id </sub>stays at zero. In one non-limiting example, the speed v<sub>max </sub>can be 25 mph.
If End_curve_flag is one at the decision diamond 242, meaning that the curve portion has been completed, the algorithm determines whether time t−t<sub>end_curve</sub>≧T<sub>large </sub>at block 252, for example, T<sub>large</sub>=30 s. If this condition is met, the potential on/off-ramp maneuver has not ended after a relatively long time, so the maneuver is discarded by setting the maneuver identifier value M<sub>id </sub>to zero at box 254 and setting Start_flag to zero and End_flag to one at box 256.
If the condition of the block 252 is not met, the algorithm determines whether the maneuver has been identified as an off-ramp maneuver by determining whether the maneuver identifier value M<sub>id </sub>is two at decision diamond 258. If the maneuver identifier value M<sub>id </sub>is one or zero, the on-ramp maneuver ends when the increase in the vehicle speed becomes smaller. Therefore, if the maneuver identifier value M<sub>id </sub>is not two at the decision diamond 258, the algorithm determines whether the speed condition v<sub>x</sub>(t)−v<sub>x</sub>(t−αT)≦v<sub>med </sub>is met at decision diamond 260, where αT is 10 s and v<sub>med </sub>is 5 mph in one non-limiting example. If this condition is not met, meaning the on-ramp maneuver has not ended, then the algorithm returns at the block 238.
If the condition of the decision diamond 260 has been met, the algorithm determines whether the speed conditions v<sub>x</sub>(t−T)≧V<sub>large </sub>and v<sub>x</sub>(t−T)−v<sub>x</sub>(t<sub>start</sub>)≧v<sub>th </sub>have been met at decision diamond 262. In one non-limiting embodiment, V<sub>large </sub>is 55 mph and v<sub>th </sub>is 20 mph. If both of the conditions of the decision diamond 262 have been met, then the maneuver is truly an on-ramp maneuver. The algorithm sets the maneuver identifier value M<sub>id </sub>to one, identifying an on-ramp maneuver, sets time t<sub>end</sub>=t−T at box 264, sets Start_flag to zero and End_flag to one at the box 256, and returns at the block 238. If the conditions of the decision diamond 262 have not been met, the maneuver is not an on-ramp maneuver, so the maneuver is discarded by setting the maneuver identifier value M<sub>id </sub>to zero at the box 254, and Start_flag to zero and End_flag to one at the box 256, and returning at the block 238.
If the maneuver identifier value M<sub>id </sub>is two at the decision diamond 258, the off-ramp maneuver ends when the vehicle speed becomes very small. Therefore, the algorithm determines whether the speed condition v<sub>x</sub>(t−T:t)≦v<sub>small </sub>is met at decision diamond 266, where v<sub>small </sub>is 3 mph in one non-limiting example. If the condition of the decision diamond 266 has been met, meaning that the off-ramp maneuver has ended, then the algorithm sets time t<sub>end</sub>=t−T at box 268, Start_flag to zero and End_flag to one at box 256, and returns at the block 238.
If the condition of the decision diamond 266 has not been met, the algorithm determines whether the speed has not gone down enough to indicate that the maneuver is not an off-ramp maneuver by determining whether the speed condition v<sub>x</sub>(t)>v<sub>x</sub>(t<sub>end_curve</sub>)+10 mph has been met at decision diamond 270. If this condition is met, meaning that the speed is too high for the maneuver to be an off-ramp maneuver, the maneuver identifier value M<sub>id </sub>is set to zero at box 272, Start_flag is set to zero and End_flag is set to one at the box 256, and the algorithm returns at the block 238. If the condition of the decision diamond 270 has not been met, meaning that the potential off-ramp maneuver has not been completed, then the algorithm returns at the block 238.
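By way of non-limiting illustration, the decision logic of FIGS. 41A and 41B can be condensed into a small state machine in Python. This is a hypothetical sketch, not the claimed implementation: it omits the T-second data buffering, approximates the speed look-backs with the nearest stored sample, and uses the non-limiting threshold values quoted above:

```python
# Thresholds mirror the non-limiting values in the text.
W_MED, W_SMALL = 2.0, 1.0              # yaw rate, deg/s
V_MAX, V_MED, V_TH = 25.0, 5.0, 20.0   # speed deltas, mph
V_LARGE, V_SMALL = 55.0, 3.0           # absolute speeds, mph
T_LARGE, ALPHA_T = 30.0, 10.0          # times, s

class RampIdentifier:
    """Feed one (t, v, w) sample per call; step() returns M_id when a
    maneuver is classified: 1 = on-ramp, 2 = off-ramp, 0 = discarded."""

    def __init__(self):
        self.start = None        # t_start; None plays the role of Start_flag == 0
        self.in_curve = False
        self.end_curve_t = None
        self.m_id = 0
        self.v_hist = []         # (t, v) samples for the speed look-backs

    def _v_at(self, t):
        return min(self.v_hist, key=lambda s: abs(s[0] - t))[1]

    def step(self, t, v, w):
        self.v_hist.append((t, v))
        if self.start is None:
            if w >= W_MED:                          # entering a curve or turn
                self.start, self.in_curve, self.m_id = t, True, 0
            return None
        if self.in_curve:
            if w <= W_SMALL:                        # curve portion completed
                self.in_curve, self.end_curve_t = False, t
            if v - self._v_at(self.start) <= -V_MAX:
                self.m_id = 2                       # large drop: likely off-ramp
            return None
        if t - self.end_curve_t >= T_LARGE:         # took too long: discard
            return self._finish(0)
        if self.m_id != 2:                          # candidate on-ramp
            if v - self._v_at(t - ALPHA_T) <= V_MED:   # speed gain leveled off
                ok = v >= V_LARGE and v - self._v_at(self.start) >= V_TH
                return self._finish(1 if ok else 0)
            return None
        if v <= V_SMALL:                            # off-ramp ended at a stop
            return self._finish(2)
        if v > self._v_at(self.end_curve_t) + 10.0:
            return self._finish(0)                  # speed too high: discard
        return None

    def _finish(self, m_id):
        self.start, self.m_id = None, m_id
        return m_id
```

Driving the machine with a decelerating curve that ends near a stop yields M<sub>id</sub>=2, while a curve followed by a sustained speed gain to highway speed yields M<sub>id</sub>=1.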
As the maneuver identifier processor 46 determines the beginning and the end of a maneuver, the data selection processor 48 stores the corresponding data segment based on the variables Start_flag, End_flag, t<sub>start </sub>and t<sub>end</sub>.
Highway on/off-ramp maneuvers involve both curve-handling and a relatively large speed increase/decrease. In general, the more skillful a driver is, the larger the lateral acceleration and the yaw rate are on the curves. Similarly, the more skillful a driver is, the faster the speed increases at an on-ramp. However, at an off-ramp, a less-skilled driver may decelerate fast at the beginning to have a lower speed, while a more-skilled driver may postpone the deceleration to enjoy a higher speed at the off-ramp and then decelerate fast at the end of the off-ramp. In addition, a more-skilled driver may even engage the throttle at an off-ramp to maintain the desired vehicle speed. Thus, the steering angle, yaw rate and lateral acceleration can be used to assess the skillfulness of the curve-handling behavior at an on/off-ramp, and the vehicle speed, longitudinal acceleration, throttle opening and brake pedal force/position can be used to assess the driver's longitudinal control.
However, the data collected consists of the time traces of the signals, which usually results in a fair amount of data. For example, a typical on/off-ramp maneuver lasts more than 20 seconds. Therefore, with a 10 Hz sampling rate, more than 200 samples of each signal would be recorded. Thus, data reduction is necessary in order to keep the classification efficient. Further, the complete time trace of the signals is usually not effective for the classification. In fact, a critical design issue in classification problems is to extract discriminative features that best represent individual classes. As a result, the skill characterization processor 52 may include a feature processor and a skill classifier, as discussed above.
As discussed above, the feature processor involves three processing steps, namely, original feature derivation, feature extraction and feature selection. The original features are usually derived using various techniques, such as time-series analysis and frequency-domain analysis, which are well understood by those skilled in the art. The present invention proposes a non-limiting technique to derive the original features based on engineering insights.
For on-ramp maneuvers, the original features include the maximum lateral acceleration, the maximum yaw rate, the average acceleration, the maximum throttle opening and an array of throttle indexes TI<sub>throttle</sub>=[TI<sub>1 </sub>. . . TI<sub>i </sub>. . . TI<sub>N</sub>] based on the distribution of the throttle opening. Each throttle index TI<sub>i </sub>is defined as the percentage of the time when the throttle opening α is greater than a threshold α<sub>thi</sub>. That is, if the on-ramp maneuver takes T<sub>total </sub>seconds and during that time period the throttle opening is greater than α<sub>thi </sub>(0<α<sub>thi</sub><100%) for T<sub>i </sub>seconds, then the throttle index TI<sub>i</sub>=T<sub>i</sub>/T<sub>total</sub>. Examples of the thresholds [α<sub>th1 </sub>. . . α<sub>thi </sub>. . . α<sub>thN</sub>] can include [20% 30% 40% 50% 60%] or from 10% to 90% with a 10% interval in between. Alternatively, T<sub>total </sub>can be defined as the time when α>α<sub>th</sub>, where α<sub>th </sub>should be smaller than α<sub>thi </sub>for i=1, 2, . . . , N.
For off-ramp maneuvers, the original features include the maximum lateral acceleration, the maximum yaw rate, the average deceleration, the maximum brake pedal position/force and an array of braking indexes BI<sub>braking</sub>=[BI<sub>1 </sub>. . . BI<sub>i </sub>. . . BI<sub>N</sub>] based on the distribution of the brake pedal position/force. Similar to the throttle index TI<sub>i</sub>, the braking index BI<sub>i </sub>is defined as the percentage of the time when the brake pedal position/force b is greater than a threshold b<sub>thi</sub>.
For each recognized on/off-ramp maneuver, one set of the original features is derived. This set of original features can be represented as an original feature vector x, an n-dimensional vector with each dimension representing one specific feature. This original feature vector serves as the input for further feature extraction and feature selection processing. Feature extraction tries to create new features based on transformations or combinations of the original features (discriminants), while feature selection selects the best subset of the new features derived through feature extraction.
Various feature extraction methods can be used, such as principal component analysis (PCA), linear discriminant analysis (LDA), kernel PCA, generalized discriminant analysis (GDA), etc. In one non-limiting embodiment, LDA is used, which is a linear transformation where y=U<sup>T</sup>x and where U is an n-by-n matrix and y is an n-by-1 vector with each row representing the value of a new feature. The matrix U is determined offline during the design phase. Because the original features for highway on-ramp and off-ramp maneuvers are different, the feature extraction would also be different. That is, the matrix U for on-ramp maneuvers would be different from the matrix U for off-ramp maneuvers.
To further reduce the feature dimension for improved classification efficiency and effectiveness, feature selection techniques, such as exhaustive search, can be used. The subset that yields the best performance is chosen as the final features to be used for classification. For example, the resulting subset may consist of m features corresponding to the {i<sub>1 </sub>i<sub>2 </sub>. . . i<sub>m</sub>} (1≦i<sub>1</sub><i<sub>2</sub>< . . . <i<sub>m</sub>≦n) rows of the feature vector y. Writing the matrix U as U=[u<sub>1 </sub>u<sub>2 </sub>. . . u<sub>n</sub>] with each vector being an n-by-1 vector, and then selecting only the vectors corresponding to the best subset, yields W=[u<sub>i1 </sub>u<sub>i2 </sub>. . . u<sub>im</sub>], an n-by-m matrix. Combining the feature extraction and feature selection, the final features corresponding to the original feature vector x can be derived as z=W<sup>T</sup>x. Once again, the matrix W for on-ramp maneuvers would be different from that for off-ramp maneuvers.
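A minimal sketch of the combined extraction/selection step z=W<sup>T</sup>x, assuming a placeholder transform matrix U (in practice U would come from offline LDA training) and an arbitrary best subset {i<sub>1</sub>, i<sub>2</sub>}:

```python
import numpy as np

# Sketch of z = W^T x: W keeps only the columns of U corresponding to the
# feature subset chosen by feature selection. U here is a stand-in identity
# transform, not a trained LDA matrix.
n = 4
U = np.eye(n)                        # placeholder n-by-n transform (assumed)
best_rows = [0, 2]                   # indices {i1, i2} picked by selection
W = U[:, best_rows]                  # W = [u_i1 u_i2], an n-by-m matrix
x = np.array([1.0, 2.0, 3.0, 4.0])   # original feature vector
z = W.T @ x                          # final m-dimensional feature vector
print(z)                             # -> [1. 3.]
```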
The skill characterization processor 52 then classifies the driver's driving skill based on the discriminant feature vector z. Classification techniques, such as fuzzy logic, clustering, neural networks (NN), support vector machines (SVM), and simple threshold-based logic can be used for skill classification. In one embodiment, an SVM-based classifier is used. A K-class SVM consists of K hyperplanes: f<sub>k</sub>(z)=w<sub>k</sub>z+b<sub>k</sub>, k=1, 2, . . . , K, where w<sub>k </sub>and b<sub>k </sub>are determined during the design phase based on the test data. The class label c for any testing data is the class whose decision function yields the largest output as:
c=arg max<sub>k</sub> f<sub>k</sub>(z)=arg max<sub>k</sub>(w<sub>k</sub>z+b<sub>k</sub>), k=1, 2, . . . , K&emsp;&emsp;(52)
The SVM parameters for on-ramp maneuvers are different from those for off-ramp maneuvers.
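The decision rule of equation (52) can be sketched as follows; the weight vectors w<sub>k</sub> and biases b<sub>k</sub> are illustrative stand-ins, not trained SVM parameters.

```python
import numpy as np

# Sketch of the K-class decision rule c = argmax_k (w_k . z + b_k) of Eq. (52).
def classify(z, w, b):
    scores = w @ z + b                   # f_k(z) = w_k . z + b_k for each k
    return int(np.argmax(scores)) + 1    # class labels 1..K

# K = 3 illustrative hyperplanes (assumed values, not design-phase output).
w = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([0.0, 0.1, 0.0])
print(classify(np.array([0.5, 2.0]), w, b))   # -> 2 (largest f_k)
```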
The feature extraction, feature selection and the K-class SVM are designed offline based on vehicle test data. A number of drivers were asked to drive several instrumented vehicles under various traffic conditions, and the sensor measurements were collected for the classification design. Highway on/off-ramp maneuvers are recognized using the maneuver identification algorithm discussed above. For every on/off-ramp maneuver, an original feature vector x can be constructed. The feature vectors corresponding to all the on-ramp maneuvers are put together to form a training matrix X<sub>on</sub>=[x<sub>1on </sub>x<sub>2on </sub>. . . x<sub>Lon</sub>], where L<sub>on </sub>is the total number of on-ramp maneuvers. Each row of the matrix X<sub>on </sub>represents the values of one feature variable while each column represents the feature vector of a training pattern. Similarly, the feature vectors corresponding to all of the off-ramp maneuvers form the training matrix X<sub>off</sub>=[x<sub>1off </sub>x<sub>2off </sub>. . . x<sub>Loff</sub>]. The training matrix X<sub>on </sub>is used for the design of the skill classification based on on-ramp maneuvers while the training matrix X<sub>off </sub>is for the design based on off-ramp maneuvers. Because the design process is the same for both maneuvers, X=[x<sub>1 </sub>x<sub>2 </sub>. . . x<sub>L</sub>] is used to represent the training matrix.
For the design of the LDA-based feature extraction, the goal is to train the linear data projection Y=U<sup>T</sup>X such that the ratio of the between-class variance to the within-class variance is maximized, where X is an n-by-L training matrix, i.e., X<sub>on </sub>for the on-ramp maneuvers and X<sub>off </sub>for the off-ramp maneuvers, and the transform matrix U is the result of the training. Commercial or open-source algorithms that compute the matrix U are available and well-known to those skilled in the art. The inputs to those algorithms include the training matrix X and the corresponding class labels. In one embodiment, the class labels can be 1-5, with 1 indicating a low-skill driver, 3 indicating a typical driver and 5 indicating a high-skill driver. In addition, a class label 0 can be added to represent those hard-to-decide patterns. The class labels are determined based on expert opinions by observing the test data. The outputs of the LDA algorithms include the matrix U and the new feature matrix Y.
The feature selection is conducted on the feature matrix Y. In one embodiment, an exhaustive search is used to evaluate the classification performance of each possible combination of the extracted features. The new features still consist of n features, and there are Σ<sub>i=1</sub><sup>n</sup>C<sub>n</sub><sup>i</sup>=2<sup>n</sup>−1 possible combinations of the n features. The exhaustive search evaluates the classification performance of each possible combination by designing an SVM based on the combination and deriving the corresponding classification error. The combination that yields the smallest classification error is regarded as the best combination, where the corresponding features {i<sub>1 </sub>i<sub>2 </sub>. . . i<sub>m</sub>} determine the matrix W=[u<sub>i1 </sub>u<sub>i2 </sub>. . . u<sub>im</sub>]. Conveniently, the SVM corresponding to the best feature combination is the SVM classifier. Since commercial or open-source algorithms for SVM design are well-known to those skilled in the art, a detailed discussion is not necessary herein.
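The exhaustive subset search can be sketched as follows; a toy error function stands in for the real step of designing an SVM on each combination and measuring its classification error.

```python
from itertools import combinations

# Sketch of exhaustive feature selection: enumerate all 2^n - 1 non-empty
# subsets and keep the one with the smallest classification error.
def best_subset(n_features, error_of):
    best, best_err = None, float("inf")
    for m in range(1, n_features + 1):
        for subset in combinations(range(n_features), m):
            err = error_of(subset)        # stands in for the SVM error rate
            if err < best_err:
                best, best_err = subset, err
    return best, best_err

# Assumed toy criterion: features {0, 2} happen to give the lowest error.
toy_error = lambda s: 0.05 if set(s) == {0, 2} else 0.2 + 0.01 * len(s)
print(best_subset(3, toy_error))          # -> ((0, 2), 0.05)
```

Exhaustive search is tractable here only because n is small after feature extraction; for large n the branch-and-bound or sequential searches mentioned later would be preferred.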
It is noted that although SVM is used as the classification technique, the present invention can easily employ other techniques, such as fuzzy logic, clustering or simple threshold-based logic. Similarly, other feature extraction and feature selection techniques can be easily employed in lieu of the LDA and exhaustive search.
According to another embodiment, the skill characterization is based on driver backup maneuvers, where the differentiation of driving skill from one level to another employs measured vehicle data and analyzes the time factor and steering gain factor of the driver while he is backing up the vehicle. Backup maneuvers can be identified based on transmission gear position, steering activity, vehicle yaw motion, the change in vehicle heading direction, lateral and longitudinal accelerations, and speed control coordination.
FIG. 42 is a flow chart diagram 510 showing a process for identifying a vehicle backup maneuver, according to an embodiment of the present invention. To keep the integrity of the data associated with an identified maneuver, the system keeps recording and refreshing a buffer of a certain period, such as T=2 s, of data.
The maneuver identifying algorithm begins by reading the filtered vehicle speed signal v<sub>x </sub>and the vehicle longitudinal acceleration signal, obtained from a longitudinal accelerometer or by differentiating vehicle speed measurements, at box 512. The maneuver identifying algorithm then proceeds according to its operational states denoted by the Boolean variables Start_flag and End_flag, where Start_flag is initialized to zero and End_flag is initialized to one. The algorithm then determines whether Start_flag is zero at block 514 to determine whether the vehicle is in a backup maneuver. If Start_flag is zero, then the vehicle 10 is not in a vehicle backup maneuver.
The algorithm then determines if the vehicle has started a vehicle backup maneuver by determining whether the conditions of decision diamond 516 have been met, namely, whether the transmission gear is in reverse and the vehicle speed v<sub>x </sub>is greater than a threshold v<sub>th</sub>. In one non-limiting embodiment, t<sub>1 </sub>is a time window of about 1 s, Δt is the sampling time of the speed measurements, and v<sub>th </sub>is a predetermined threshold, such as v<sub>th</sub>=2 m/s. If all of the conditions of the decision diamond 516 have been met, then the vehicle 10 has started backing up, so the algorithm sets Start_flag to one and End_flag to zero at box 518. The algorithm then determines a starting time t<sub>start </sub>at box 520, and proceeds to the box 528 to collect further data.
If Start_flag is not zero at the block 514, the vehicle 10 has been identified to be in a vehicle backup maneuver, and the algorithm determines whether the vehicle backup maneuver has been completed by determining whether the vehicle speed v<sub>x </sub>is less than the threshold v<sub>th </sub>over a sample period at the decision diamond 522. If this condition is met at the decision diamond 522, then the vehicle backup maneuver has been completed, and the algorithm sets Start_flag equal to zero and End_flag equal to one at box 524, and sets the time t<sub>end</sub>=t−t<sub>1 </sub>at box 526. If the condition of the decision diamond 522 has not been met, the vehicle 10 is still in the vehicle backup maneuver, so the algorithm proceeds to the block 528 to collect more data. As the maneuver algorithm determines the beginning and the end of the vehicle backup maneuver, the data selection processor 48 stores a corresponding data segment based on Start_flag, End_flag, t<sub>start </sub>and t<sub>end</sub>.
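The Start_flag/End_flag logic of FIG. 42 can be sketched as a simple state machine. The sample format and the v<sub>th</sub>=2 m/s threshold follow the description above; checking a single sample rather than a t<sub>1</sub>-second window is a simplification.

```python
# Sketch of the backup-maneuver detector of FIG. 42. Each sample is
# (time, gear-in-reverse flag, speed in m/s); returns (t_start, t_end) pairs.
def detect_backup(samples, v_th=2.0):
    events, start_flag, t_start = [], False, None
    for t, in_reverse, v_x in samples:
        if not start_flag:
            if in_reverse and v_x > v_th:    # maneuver begins (diamond 516)
                start_flag, t_start = True, t
        elif v_x < v_th:                     # maneuver ends (diamond 522)
            start_flag = False
            events.append((t_start, t))
    return events

data = [(0, True, 0.5), (1, True, 2.5), (2, True, 3.0), (3, True, 1.0)]
print(detect_backup(data))                   # -> [(1, 3)]
```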
FIG. 43 is a flow chart diagram 530 showing a process used by the data selection processor 48 for storing the data corresponding to a particular vehicle backup maneuver. The flow chart diagram 530 is similar to the flow chart diagram 130 discussed above, where like steps are identified by the same reference numeral. In this embodiment for the vehicle backup maneuver, if the End_flag is one at the block 142 because the vehicle backup maneuver has been completed, and the variable old_Start_flag is set to zero at the box 144, the algorithm determines whether the backup maneuver was a straight-line backup maneuver or a backup maneuver accompanied by a relatively sharp turn at decision diamond 532. In one embodiment, the algorithm determines if the backup maneuver is also a left or right turn based on the yaw rate signal ω and its integral φ=∫<sub>tstart</sub><sup>tend</sup>ω(t)dt. If max(ω(t<sub>start</sub>:t<sub>end</sub>))<ω<sub>th </sub>or φ<φ<sub>th</sub>, where φ<sub>th </sub>is a predetermined threshold, such as 60°, the maneuver is regarded as a straight-line backup maneuver, and the maneuver identifier value M<sub>id </sub>is set to one at box 534. If these conditions have not been met at the decision diamond 532, the vehicle 10 is traveling around a relatively sharp turn during the backup maneuver, where the maneuver identifier value M<sub>id </sub>is set to two at box 536. The algorithm then outputs the recorded data at box 538 including the maneuver identifier value M<sub>id</sub>, M<sub>seq</sub>=M<sub>seq</sub>+1 and data_ready=1. The algorithm ends at box 540.
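The straight-line versus sharp-turn decision of FIG. 43 can be sketched as follows; the yaw-rate threshold ω<sub>th</sub>=15 deg/s is an assumed value (the text gives only φ<sub>th</sub>=60°), and the rectangular-rule integration of the yaw rate is a simplification.

```python
# Sketch of the M_id decision at diamond 532: integrate the yaw rate over the
# maneuver to get the heading change phi, then compare against thresholds.
def backup_maneuver_id(yaw_rate_deg_s, dt, omega_th=15.0, phi_th=60.0):
    phi = sum(w * dt for w in yaw_rate_deg_s)      # heading change (deg)
    if max(yaw_rate_deg_s) < omega_th or abs(phi) < phi_th:
        return 1          # M_id = 1: straight-line backup maneuver
    return 2              # M_id = 2: backup with a relatively sharp turn

straight = [1.0, 2.0, 1.5, 0.5]                    # deg/s at 0.1 s samples
print(backup_maneuver_id(straight, 0.1))           # -> 1
print(backup_maneuver_id([30.0] * 30, 0.1))        # -> 2 (phi = 90 deg)
```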
A skillful driver usually exhibits a larger speed variation and deceleration/acceleration, as well as smoother vehicle control. The smoothness of the steering control can be reflected in the damping characteristics (e.g., overshoots and oscillations), the high-frequency components, and the number and magnitude of corrections in the driver's steering input. Many time-domain and frequency-domain analysis techniques can be used to assess the smoothness of the steering control. The invention gives an example of assessing the steering smoothness by constructing a steering command and comparing the driver's steering input with the steering command. As mentioned before, the road geometry can be derived using a backward-looking camera or DGPS with EDMap. Given the derived road geometry and the speed of the vehicle, a steering command can be generated by a driver model or a steering control algorithm. Various driver models or steering control algorithms, such as those for vehicle lane-keeping control, are available and well-known to those skilled in the art. With both the driver's steering input and the generated steering command, the error between them can be calculated. Since this error is likely to be larger for a larger steering command, the error is further divided by the maximum value of the steering command for normalization. Various indexes can be calculated based on the normalized error to assess the steering smoothness. These indexes may include the mean of the absolute value of the normalized error, the maximum absolute value of the normalized error, the number of zero crossings, and the magnitude of the higher-frequency components of the normalized error. Moreover, the local peaks (local maxima) of the normalized error can be detected and the mean of the absolute value of those peaks can be computed. Similar indexes can also be calculated based on the steering rate and/or the error between the steering rate and the rate of the steering command. All these indexes can then be included as part of the original features.
Various indexes can also be calculated based on the non-normalized steering characteristics to assess the steering smoothness. These indexes may include the number of zero crossings, and the magnitudes of the low- and high-frequency components of the steering measurement. Similar indexes can also be calculated based on the steering rate. All these indexes can then be included as part of the original features.
Some feature examples include:
 1. the maximum value of the yaw rate max (ω(t<sub>1start</sub>:t<sub>1end</sub>));
 2. the maximum value of the lateral acceleration max (α<sub>y</sub>(t<sub>1start</sub>:t<sub>1end</sub>));
 3. the maximum speed max (v<sub>x</sub>(t<sub>1start</sub>:t<sub>1end</sub>));
 4. the average speed mean (v<sub>x</sub>(t<sub>1start</sub>:t<sub>1end</sub>));
 5. the maximum speed variation max (v<sub>x</sub>(t<sub>1start</sub>:t<sub>1end</sub>))−min (v<sub>x</sub>(t<sub>1start</sub>:t<sub>1end</sub>));
 6. the maximum braking pedal force/position (or the maximum deceleration);
 7. the maximum throttle percentage (or the maximum acceleration);
 8. the magnitude of variance (for steering angle, yaw rate, lateral acceleration, etc.);
 9. the number of zero crossings above a threshold;
 10. the minimum distance (or headway time) to the object in the back (e.g., from a rearward-looking radar/lidar or camera, or from GPS together with V2V communications); and
 11. the maximum range rate to the object in the back if available (e.g., from a rearward-looking radar/lidar or camera, or from GPS together with V2V communications).
A neural-network-based classifier 550 suitable for this purpose is shown in FIG. 44. The neural network classifier 550 includes an input layer 552 having seven input neurons 554 corresponding to the seven discriminants, namely, the vehicle final speed, the average acceleration and a five-dimension throttle index array. The neural network classifier 550 also includes a hidden layer 556 including neurons 558, and an output layer 562 including three neurons 564, one for a low-skill driver, one for a typical driver and one for a high-skill driver, where branches 560 connect the neurons 554 and 558. Alternatively, the output layer 562 of the neural network classifier 550 may have five neurons, each corresponding to one of the five levels ranging from low-skill to high-skill. The design and training of the neural network classifier 550 is based on vehicle test data with a number of drivers driving under various traffic and road conditions.
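A forward-pass sketch of the seven-input, three-output classifier of FIG. 44; the hidden-layer size and the random weights are placeholders, since the real weights come from training on the vehicle test data.

```python
import numpy as np

# Sketch of the classifier 550: 7 input neurons (final speed, average
# acceleration, five throttle indexes), one hidden layer, 3 output neurons
# (low-skill / typical / high-skill). Weights are illustrative stand-ins.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((5, 7)), np.zeros(5)   # hidden layer, size assumed
W2, b2 = rng.standard_normal((3, 5)), np.zeros(3)   # output layer

def classify(x):
    h = np.tanh(W1 @ x + b1)          # hidden-layer activations
    scores = W2 @ h + b2
    return int(np.argmax(scores))     # 0 = low, 1 = typical, 2 = high skill

# Assumed feature order: [final speed, avg accel, TI_1..TI_5]
features = np.array([25.0, 1.2, 0.8, 0.6, 0.4, 0.2, 0.1])
print(classify(features))
```

Training such a network (backpropagation on labeled maneuvers) is what the design phase described above would supply.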
In another embodiment, the skill characterization is based specifically on vehicle curve-handling maneuvers, which refer to maneuvers where a vehicle is on a curve, using the various processes discussed herein. Curve-handling maneuvers can be identified based on the driver's steering activity, vehicle yaw motion, and the change in vehicle heading direction.
Reliable indicators of curve-handling maneuvers include a relatively large vehicle yaw rate and/or a relatively large steering angle. Although a relatively large yaw rate (or steering angle) can also be associated with other maneuvers, such as some lane changes, additional algorithms to distinguish curve-handling maneuvers are not necessary since the characterization algorithm is also effective with those other maneuvers. In this embodiment, the yaw rate is used to describe the operation of the data selector, and a steering-angle-based data selector would work in a similar way.
During a curve-handling maneuver, the lateral deviation away from the center of the curve, the smoothness of the steering control and the smoothness of the speed control can be used to determine the driving skill. A high-skilled driver typically maintains a small lateral deviation or deviates toward the inner side of the curve (so that a higher speed can be achieved given the same amount of lateral acceleration on the same curve). As a result, the farther the vehicle deviates toward the outer side of the curve, the lower the driver's driving skill. The lateral deviation, as well as the road geometry, can be derived based on images from a forward-looking camera or DGPS with EDMap. The relevant signal processing is well-known to those skilled in the art; therefore, it is not included herein. If the lateral deviation is toward the outer side of the curve, its magnitude (e.g., the maximum lateral deviation), together with the corresponding curvature, can be used as a discriminative feature for the skill classification. In addition, the maximum lateral acceleration, the maximum yaw rate, and the speed corresponding to the maximum acceleration can also be included as original features.
The smoothness of the steering control can be reflected in the damping characteristics (e.g., overshoots and oscillations), the high-frequency components, and the number and magnitude of corrections in the driver's steering input. Many time-domain and frequency-domain analysis techniques can be used to assess the smoothness of the steering control. This invention gives an example of assessing the steering smoothness by constructing a steering command and comparing the driver's steering input with the steering command. As mentioned before, the road geometry can be derived using a forward-looking camera or DGPS with EDMap. Given the derived road geometry and the speed of the vehicle, a steering command can be generated by a driver model or a steering control algorithm. Various driver models or steering control algorithms, such as those for vehicle lane-keeping control, are available and well-known to those skilled in the art. With both the driver's steering input and the generated steering command, the error between them can be calculated. Since this error is likely to be larger for a larger steering command, the error is further divided by the maximum value of the steering command for normalization. Various indexes can be calculated based on the normalized error to assess the steering smoothness. These indexes may include the mean of the absolute value of the normalized error, the maximum absolute value of the normalized error, the number of zero crossings, and the magnitude of the higher-frequency components of the normalized error. Moreover, the local peaks (local maxima) of the normalized error can be detected and the mean of the absolute value of those peaks can be computed. Similar indexes can also be calculated based on the steering rate and/or the error between the steering rate and the rate of the steering command. All these indexes can then be included as part of the original features.
In addition, the vehicle yaw rate and the lateral jerk calculated from the lateral acceleration can also be incorporated. For example, the original features may further include the maximum lateral jerk and the correlation between the steering input and the yaw rate. In summary, an exemplary set of the original features may include, but is not necessarily limited to, the following features:
1. the maximum lateral deviation toward the outer side of the curve;
2. the maximum lateral acceleration;
3. the maximum yaw rate;
4. the speed corresponding to the maximum acceleration;
5. the mean of the absolute value of the normalized error;
6. the maximum absolute value of the normalized error;
7. the number of zero crossings;
8. the magnitude of the higherfrequency components of the normalized error;
9. the mean of the absolute value of the local peaks of the normalized error;
10. the maximum lateral jerk; and
11. the correlation between the steering input and the yaw rate.
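Several of the normalized-error indexes in the list above can be sketched as follows; the steering command and driver input traces are illustrative, and a real implementation would take the command from a driver model or lane-keeping controller as described above.

```python
import numpy as np

# Sketch of steering-smoothness indexes: compare the driver's steering input
# with a reference steering command, normalize the error by the command's
# peak magnitude, and compute simple statistics of the normalized error.
def smoothness_indexes(steer_input, steer_command):
    err = np.asarray(steer_input) - np.asarray(steer_command)
    e = err / np.max(np.abs(steer_command))          # normalized error
    zero_crossings = int(np.sum(np.signbit(e[:-1]) != np.signbit(e[1:])))
    return {
        "mean_abs": float(np.mean(np.abs(e))),       # feature 5 above
        "max_abs": float(np.max(np.abs(e))),         # feature 6 above
        "zero_crossings": zero_crossings,            # feature 7 above
    }

cmd = [0.0, 1.0, 2.0, 1.0, 0.0]                      # reference command (deg)
drv = [0.1, 0.9, 2.2, 0.9, 0.1]                      # driver's input (deg)
print(smoothness_indexes(drv, cmd))
```

A frequency-domain index (feature 8) would follow the same pattern, applying an FFT to the normalized error instead of time-domain statistics.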
Alternatively, the original features can be broken down into two sets (e.g., one set including features 1 to 4 and the other including features 5 to 11), and two classifiers can be designed separately, one for each of the two feature sets. The classification results are then combined to determine the skill level revealed by the corresponding curve-handling maneuver.
To evaluate these original features and to derive more effective features, feature extraction and feature selection techniques are employed. Various feature extraction methods can be used, such as principal component analysis (PCA), linear discriminant analysis (LDA), kernel PCA, generalized discriminant analysis (GDA) and so on.
This invention uses PCA as an example. The PCA is an unsupervised linear transformation y=U<sup>T</sup>x, where U is an n-by-n matrix, x is an n-by-1 vector consisting of the values of the original features, and y is an n-by-1 vector with each row representing the value of a new feature (i.e., a transformed feature). The matrix U is determined offline during the design phase, which is described later.
To further reduce the feature dimension for improved classification efficiency and effectiveness, various feature selection techniques, such as exhaustive search, branch-and-bound search, sequential forward/backward selection, and sequential forward/backward floating search, can be used. Alternatively, a simple feature selection can be performed by selecting the first m features in the y vector, since the PCA automatically arranges the features in order of their effectiveness in distinguishing one class from another. Writing the matrix U as U=[u<sub>1 </sub>u<sub>2 </sub>. . . u<sub>n</sub>], with each vector being an n-by-1 vector, and then selecting only the vectors corresponding to the first m features, yields W=[u<sub>1 </sub>u<sub>2 </sub>. . . u<sub>m</sub>], an n-by-m matrix. Combining the feature extraction and feature selection, the final features corresponding to the original feature vector x can be derived as z=W<sup>T</sup>x.
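A sketch of PCA-based extraction followed by the simple first-m selection; the toy training matrix is illustrative, and the data are mean-centered here (a standard PCA step not spelled out in the text).

```python
import numpy as np

# Sketch of PCA feature extraction: U's columns are eigenvectors of the
# training covariance, ordered by decreasing eigenvalue, so keeping the
# first m components is the simple selection described above.
X = np.array([[2.0, 0.1], [4.0, -0.1], [6.0, 0.2], [8.0, -0.2]]).T  # n-by-L
mu = X.mean(axis=1, keepdims=True)
Xc = X - mu                                     # mean-centered training data
eigvals, U = np.linalg.eigh(Xc @ Xc.T / X.shape[1])
order = np.argsort(eigvals)[::-1]               # largest variance first
U = U[:, order]

m = 1
W = U[:, :m]                                    # keep the first m components
x = np.array([5.0, 0.0])                        # a new original feature vector
z = W.T @ (x - mu.ravel())                      # final feature(s) z = W^T x
print(z.shape)                                  # -> (1,)
```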
The skill classifier then classifies a driver's driving skill based on the discriminant feature vector z. Classification techniques, such as fuzzy logic, clustering, neural networks (NN), support vector machines (SVM), and even simple threshold-based logic, are well-known, and any of them can be used for skill classification. This invention chooses to design an NN-based classifier as an example. The net has an input layer with m input neurons (corresponding to the m discriminative features in the vector z=W<sup>T</sup>x), a hidden layer, and an output layer with K neurons corresponding to the number of skill levels. For example, the driving skill may be divided into five levels ranging from 1 to 5, with 1 indicating low skill, 3 normal skill, and 5 excellent skill. In addition, an extra neuron can be added to the output layer to represent “hard-to-decide” patterns. The output of each of the output neurons represents the likelihood that the driving skill belongs to the corresponding skill level.
The design and training of the neural network is based on vehicle test data with a number of drivers driving under various traffic and road conditions. Curve-handling maneuvers are recognized using the maneuver identification algorithm described earlier. For every curve-handling maneuver, an original feature vector x can be constructed. The feature vectors corresponding to all the curve-handling maneuvers are put together to form a matrix X=[x<sub>1 </sub>x<sub>2 </sub>. . . x<sub>L</sub>], where L is the total number of the curve-handling maneuvers. Each row of the matrix X represents the values of one feature variable while each column represents the feature vector of a pattern (i.e., a curve-handling maneuver). Correspondingly, a skill-level label is generated for each pattern based on expert opinions by observing the test data. The matrix X is further separated into two matrices, one for the design/training of the classifiers (including the feature extraction and selection) and the other for the performance evaluation. Since commercial or open-source algorithms for PCA-based feature extraction/selection and NN design are well-known to those skilled in the art, this invention does not go into the computational details involved in the design.
During a curve-handling maneuver, the lateral deviation away from the center of the curve, the smoothness of the steering control and the smoothness of the speed control can be used to determine the driving skill. A high-skilled driver typically maintains a small lateral deviation or deviates toward the inner side of the curve (so that a higher speed can be achieved given the same amount of lateral acceleration on the same curve). Similarly, a high-skilled driver typically has a smoother steering control, which can be reflected in the damping characteristics (e.g., overshoots and oscillations), the high-frequency components, and the number and magnitude of corrections in the driver's steering input. If the different levels of driving skill are treated as different classes, pattern recognition techniques can be employed to determine the driving skill level based on discriminative features, such as the maximum lateral deviation toward the outer side of the curve, the error between the driver's steering input and that generated by a steering control algorithm, and the maximum lateral jerk.
According to another embodiment of the present invention, the driving skill is based on multiple types of maneuvers. In this embodiment, a method for effective differentiation of driver skill from one level to another is provided through the introduction of a steering gain factor of the driver.
FIG. 45 is a block diagram of a skill level determination system 1020 applicable to all types of vehicle maneuvers. In-vehicle measurements are first processed to generate original features. For example, during curve-handling maneuvers, signals such as the driver's steering input, vehicle speed, yaw rate, lateral acceleration, throttle opening and longitudinal acceleration are recorded. The corresponding measurements are processed to derive the original features at box 1022, such as the maximum lateral deviation toward the outer side of the curve, the error between the driver's steering input and that generated by a steering control algorithm, the maximum lateral jerk, etc. These original features are further processed at box 1024 through feature extraction to generate transformed features, which have a better capability in differentiating different patterns, i.e., different driving skill levels in this invention. To further reduce the dimension of the features, feature selection is used at box 1026 to select the optimal subset of features out of the transformed features. The selected features are the final features input to a classifier 1028 for classification. The classifier can output the skill level, or assign a rank to each skill level indicating the belief or probability that the given input pattern (represented by the final features) belongs to that skill level.
FIG. 46 is a block diagram of a skill characterization system 1030 that uses the same signals/measurements, but employs different classifiers and/or feature processing. As discussed above, the skill system 1020 involves four components, namely, original feature generation, feature extraction, feature selection and classification. Multiple modules 1032 of skill classification are employed in the system 1030. The modules 1032 may only differ in the classifiers they employ, or they may also generate their own individual original features, transformed features and final features. The classification results from these modules 1032 are combined through a classifier combination module 1034. For example, the classifier combination module 1034 may generate a number for each skill level based on the output of the skill classification modules 1032. For example, if n out of the N skill classification modules 1032 output the skill level i (or assign the highest rank to the skill level i or output the highest numerical value for the skill level i), the classifier combination module 1034 generates V(i)=n/N. For skill levels from 1 to K, the classifier combination module 1034 calculates s=arg max<sub>i=1, . . . , K</sub>V(i). If V(s)≧V<sub>th</sub>, where 0<V<sub>th</sub>≦1 is a predetermined threshold, the classifier combination module 1034 outputs s as the skill level. Otherwise, the classifier combination module 1034 can simply output 0 to indicate that the skill classification modules 1032 cannot reach a definite conclusion. Alternatively, the classifier combination module 1034 may output a vector [V(1) V(2) . . . V(K)], regardless of the value of V(s). That output vector can be used to approximate the confidence or probability that the input pattern belongs to each skill level.
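The voting combination of FIG. 46 can be sketched as follows, assuming each module outputs a single skill level and a threshold of V<sub>th</sub>=0.5:

```python
from collections import Counter

# Sketch of the combination module 1034: V(i) = n/N is the fraction of the
# N modules voting for level i; the winning level s is reported only if
# V(s) clears V_th, otherwise 0 signals "no definite conclusion".
def combine(module_outputs, v_th=0.5):
    counts = Counter(module_outputs)
    s, n = counts.most_common(1)[0]
    return s if n / len(module_outputs) >= v_th else 0

print(combine([3, 3, 4, 3, 2]))   # -> 3 (V(3) = 3/5 >= 0.5)
print(combine([1, 2, 3, 4, 5]))   # -> 0 (no level reaches V_th)
```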
FIG. 47 shows a classification system 1040 using an alternative classifier combination scheme, employing only two skill classification modules 1042 and 1044 as a non-limiting example. To improve efficiency and reduce computation, the classifier combination is conducted only if the first skill classification module 1042 cannot determine the skill level with sufficient confidence. In this implementation, the skill classification modules 1042 and 1044 output a confidence C(i) (or probability) for each skill level i to a decision diamond 1046. If the highest confidence C(s), where s=arg max<sub>i=1, . . . , K</sub>C(i), is larger than a given threshold C<sub>th</sub>, the system directly outputs s as the skill level, and the second skill classification module 1044 is not invoked to classify the skill level. If C(s)<C<sub>th</sub>, then the second skill classification module 1044 is employed to classify the skill level, and the results of those two skill classification modules 1042 and 1044 are combined by the classifier 1048 to determine the skill level. The extension of this sequential combination scheme to the case with N skill classification modules should be obvious to those skilled in the art.
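The sequential scheme of FIG. 47 can be sketched as follows; the confidence vectors and the additive combination rule used when both modules run are illustrative assumptions.

```python
# Sketch of FIG. 47: the second classifier is invoked only when the first
# module's best confidence C(s) falls below C_th. Confidences are indexed
# by skill level 1..K; summing the two confidence vectors is an assumed
# stand-in for the combining classifier 1048.
def sequential_combine(conf1, classifier2, c_th=0.6):
    s = max(range(len(conf1)), key=lambda i: conf1[i])
    if conf1[s] >= c_th:
        return s + 1                     # confident enough: skip module 2
    conf2 = classifier2()                # invoke the second module lazily
    combined = [a + b for a, b in zip(conf1, conf2)]
    return max(range(len(combined)), key=lambda i: combined[i]) + 1

print(sequential_combine([0.1, 0.8, 0.1], lambda: [0.9, 0.0, 0.1]))  # -> 2
print(sequential_combine([0.3, 0.4, 0.3], lambda: [0.5, 0.1, 0.4]))  # -> 1
```

Passing the second classifier as a callable keeps it from running in the confident case, which is the efficiency gain the scheme is designed for.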
FIG. 46 and FIG. 47 illustrate the combination of multiple skill classification modules that use the same signals/measurements, such as the signals recorded during the same curve-handling maneuvers. FIG. 48 illustrates an integrated skill characterization system 1050 showing the combination of multiple skill characterization modules 1052 based on different signals/measurements. A maneuver type and signal measurements are selected at box 1054. Each skill characterization module 1052 may consist of a single skill classification module as shown in the system 1020, or of multiple skill classification modules together with a classifier combination module as in the systems 1030 and 1040. For example, one skill characterization module may use signals, such as vehicle speed, yaw rate, and longitudinal and lateral acceleration, recorded during curve-handling maneuvers. Each skill characterization module is updated when it receives a new set of signals. For example, after the vehicle exits a curve, a new set of signals becomes available to the skill characterization module corresponding to curve-handling maneuvers. That set of signals is then used by that specific skill characterization module to generate a new classification of skill level; as a result, the output of that specific skill characterization module is updated while all other skill characterization modules maintain their existing results. A decision fusion module 1056 then combines the new results with the existing results and updates its final decision, i.e., the skill level, in a similar fashion to the classifier combination modules in FIGS. 46 and 47.
According to another embodiment, the skill classification or characterization is based on integrated driving skill recognition. More specifically, the driving skill characterization is regarded as a pattern recognition problem. The in-vehicle measurements are first processed to generate original features. These original features provide a mathematical representation of the patterns that need to be classified according to their associated driving skill level. Moreover, by processing the continuous measurements of various signals to derive these original features, the dimension of the data is greatly reduced. These original features are further processed through feature extraction to generate transformed features, which have a better capability in differentiating patterns according to their associated driving skill levels. To further reduce the dimension of the features, feature selection techniques are then used to select the optimal subset of features from the transformed features. The selected features are the final features that are input to the classifier for classification. The classifier then outputs the skill level, assigns a rank to each skill level with the highest rank being the first choice, or outputs a numerical value for each skill level indicating the belief or probability that the given input pattern (represented by the final features) belongs to that skill level. A detailed description of skill classification using in-vehicle measurements collected during curve-handling maneuvers, together with the details in recognizing curve-handling maneuvers and collecting the in-vehicle measurements accordingly, is discussed above.
According to another embodiment of the invention, the decision fusion in the decision fusion processor 56 can be divided into three levels, namely a level-1 combination, a level-2 combination and a level-3 combination. The level-1 combination combines the classification results from different classifiers based on a single maneuver, and is not necessary for maneuvers that have only one corresponding classifier. The level-2 combination combines the classification results based on multiple maneuvers of the same type, for example, combining the classification results of the most recent curve-handling maneuver with those of previous curve-handling maneuvers. The level-3 combination combines the classification results based on different types of maneuvers, in particular, combining the results from the individual level-2 combiners. The level-2 combination and the level-3 combination can be integrated into a single step, or can be separate steps. The level-1 combination resides in the skill characterization processor 52, and the level-2 combination and the level-3 combination are provided in the decision fusion processor 56.
FIG. 49 is a block diagram of a skill characterization processor 430 that can be used as the skill characterization processor 52, and includes the level-1 combination. The information from the maneuver identification processor 46, the data selection processor 48 and the traffic/road condition recognition processor 50 is provided to a plurality of channels 432 in the processor 430, where each channel 432 is an independent classification for the same specific maneuver. In each channel 432, original features of the maneuver are identified in an original features processor 434, features are extracted in a features extraction processor 436, the features are selected in a feature selection processor 438 and the selected features are classified in a classifier 440. A level-1 combination processor 442 combines the classification results from the channels 432 and outputs a single skill classification. For example, assume two classification channels are designed for curve-handling maneuvers. Once a new curve-handling maneuver is identified and the data associated with this specific maneuver is collected, the data is input to both channels at the same time and each channel outputs a skill classification result. The level-1 combination then combines the two results and outputs a single skill classification.
The level-1 combination is a standard classifier combination problem that can be solved by various classifier combination techniques, such as voting, sum, mean, median, product, max/min, fuzzy integral, Dempster-Shafer, mixture of local experts (MLE), neural networks, etc. One criterion for selecting combination techniques is based on the output type of the classifiers 440. Typically, there are three types of classifier outputs, namely, confidence, rank and abstract. At the confidence level, the classifier outputs a numerical value for each class indicating its belief or probability that the given input pattern belongs to that class. At the rank level, the classifier assigns a rank to each class with the highest rank being the first choice. At the abstract level, the classifier only outputs the class label as a result. Combination techniques such as fuzzy integral, MLEs and neural networks require outputs at the confidence level, while voting and associative switch require only abstract-level outputs. In one embodiment, the level-1 combination of the invention is based on majority voting and Dempster-Shafer techniques.
Majority voting is one of the most popular decision fusion methods. It assumes all votes, i.e., classification results from different classifiers, are equally accurate. The majority-voting-based combiner calculates and compares the number of votes for each class, and the class that has the largest number of votes becomes the combined decision. For example, assume the classes of the driving skill are labeled as i=1, 2, . . . ,K, with a larger number representing a more aggressive driving skill. In addition, a class “0” is added to represent the hard-to-decide patterns. The number of votes V<sub>i </sub>for each class i=0,1, . . . ,K is:
<maths id="MATHUS00019" num="00019"><math overflow="scroll"><mtable><mtr><mtd><mrow><mrow><msub><mi>V</mi><mi>i</mi></msub><mo>=</mo><mrow><munderover><mo>∑</mo><mrow><mi>j</mi><mo>=</mo><mn>1</mn></mrow><mi>N</mi></munderover><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle><mo></mo><msub><mi>v</mi><mi>ij</mi></msub></mrow></mrow><mo>,</mo><mrow><mrow><mi>with</mi><mo></mo><mstyle><mspace width="0.8em" height="0.8ex"/></mstyle><mo></mo><msub><mi>v</mi><mi>ij</mi></msub></mrow><mo>=</mo><mrow><mo>{</mo><mtable><mtr><mtd><mrow><mn>1</mn><mo>,</mo><mrow><mrow><mi>if</mi><mo></mo><mstyle><mspace width="0.8em" height="0.8ex"/></mstyle><mo></mo><msub><mi>c</mi><mi>j</mi></msub></mrow><mo>=</mo><mi>i</mi></mrow></mrow></mtd></mtr><mtr><mtd><mrow><mn>0</mn><mo>,</mo><mrow><mrow><mi>if</mi><mo></mo><mstyle><mspace width="0.8em" height="0.8ex"/></mstyle><mo></mo><msub><mi>c</mi><mi>j</mi></msub></mrow><mo>≠</mo><mi>i</mi></mrow></mrow></mtd></mtr></mtable></mrow></mrow></mrow></mtd><mtd><mrow><mo>(</mo><mn>53</mn><mo>)</mo></mrow></mtd></mtr></mtable></math></maths>
Where c<sub>j </sub>is the output from classifier j and N is the total number of classifiers.
The combined decision is c=arg max<sub>i=0,1, . . . ,K</sub>V<sub>i</sub>. In addition, the combiner may also generate a confidence level based on the normalized votes,
<maths id="MATHUS00020" num="00020"><math overflow="scroll"><mrow><mrow><mrow><mi>conf</mi><mo></mo><mrow><mo>(</mo><mi>i</mi><mo>)</mo></mrow></mrow><mo>=</mo><mfrac><msub><mi>V</mi><mi>i</mi></msub><mrow><munderover><mo>∑</mo><mrow><mi>i</mi><mo>=</mo><mn>0</mn></mrow><mi>K</mi></munderover><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle><mo></mo><msub><mi>V</mi><mi>i</mi></msub></mrow></mfrac></mrow><mo>,</mo></mrow></math></maths>
and provides a confidence vector [conf(0) conf(1) . . . conf(K)]<sup>T</sup>.
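Equation (53) and the normalized-vote confidence can be sketched as follows (a non-limiting illustration; the function and variable names are assumptions):

```python
# Majority voting per equation (53): count the votes for each class
# (0 = hard to decide, 1..K = skill levels) and normalize for confidence.
def majority_vote(labels, k):
    votes = [0] * (k + 1)                   # V_i for i = 0, 1, ..., K
    for c in labels:                        # labels[j] = output of classifier j
        votes[c] += 1
    decision = votes.index(max(votes))      # c = argmax_i V_i
    conf = [v / sum(votes) for v in votes]  # conf(i) = V_i / sum_i V_i
    return decision, conf
```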
Alternatively, weighted voting can be used to combine abstract-level outputs as:
<maths id="MATHUS00021" num="00021"><math overflow="scroll"><mtable><mtr><mtd><mrow><msub><mi>V</mi><mi>i</mi></msub><mo>=</mo><mrow><munderover><mo>∑</mo><mrow><mi>j</mi><mo>=</mo><mn>1</mn></mrow><mi>N</mi></munderover><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle><mo></mo><mrow><msub><mo>∝</mo><mi>ij</mi></msub><mo></mo><msub><mi>v</mi><mi>ij</mi></msub></mrow></mrow></mrow></mtd><mtd><mrow><mo>(</mo><mn>54</mn><mo>)</mo></mrow></mtd></mtr></mtable></math></maths>
Where the weightings α<sub>ij </sub>represent the correct rate of classifier j in classifying patterns belonging to class i. These weights can be predetermined based on the test performance (generalization performance) of the corresponding classifiers. Deriving the correct rate from the test performance is well-known to those skilled in the art.
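The weighted voting of equation (54) differs from majority voting only in that each vote is scaled by the correct rate α<sub>ij</sub>; a sketch (assumed names) is:

```python
# Weighted voting per equation (54): V_i = sum_j alpha_ij * v_ij, where
# weights[i][j] is the correct rate of classifier j on class i.
def weighted_vote(labels, weights):
    v = [0.0] * len(weights)
    for j, c in enumerate(labels):        # classifier j voted for class c
        v[c] += weights[c][j]             # add alpha_cj
    return v.index(max(v))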
If the classifiers provide outputs at the confidence level, the Dempster-Shafer method can be used to design the combiner. The details of the Dempster-Shafer theory and algorithms are well-known to those skilled in the art. Given the class labels as i=0,1, . . . ,K, each classifier outputs a (K+1)-by-1 vector [b<sub>j</sub>(0) b<sub>j</sub>(1) . . . b<sub>j</sub>(K)]<sup>T</sup>, where b<sub>j</sub>(i) is the confidence (i.e., the belief) classifier j has that the input pattern belongs to class i. The confidence values should satisfy 0≦b<sub>j</sub>(i)≦1 and
<maths id="MATHUS00022" num="00022"><math overflow="scroll"><mrow><mrow><munderover><mo>∑</mo><mrow><mi>i</mi><mo>=</mo><mn>0</mn></mrow><mi>K</mi></munderover><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle><mo></mo><mrow><msub><mi>b</mi><mi>j</mi></msub><mo></mo><mrow><mo>(</mo><mi>i</mi><mo>)</mo></mrow></mrow></mrow><mo>=</mo><mn>1.</mn></mrow></math></maths>
Applying the Dempster-Shafer theory to the level-1 combiner results in the following combination rule:
<maths id="MATHUS00023" num="00023"><math overflow="scroll"><mtable><mtr><mtd><mrow><mrow><mrow><mrow><mrow><mi>conf</mi><mo></mo><mrow><mo>(</mo><mi>i</mi><mo>)</mo></mrow></mrow><mo>=</mo><mfrac><mrow><mi>bel</mi><mo></mo><mrow><mo>(</mo><mi>i</mi><mo>)</mo></mrow></mrow><mrow><munderover><mo>∑</mo><mrow><mi>i</mi><mo>=</mo><mn>0</mn></mrow><mi>K</mi></munderover><mo></mo><mrow><mi>bel</mi><mo></mo><mrow><mo>(</mo><mi>i</mi><mo>)</mo></mrow></mrow></mrow></mfrac></mrow><mo>,</mo><mi>with</mi></mrow><mo></mo><mstyle><mtext></mtext></mstyle><mo></mo><mrow><mrow><mi>bel</mi><mo></mo><mrow><mo>(</mo><mi>i</mi><mo>)</mo></mrow></mrow><mo>=</mo><mrow><munderover><mo>∑</mo><mrow><mi>j</mi><mo>=</mo><mn>1</mn></mrow><mi>N</mi></munderover><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle><mo></mo><mrow><mrow><msub><mi>b</mi><mi>j</mi></msub><mo></mo><mrow><mo>(</mo><mi>i</mi><mo>)</mo></mrow></mrow><mo></mo><mrow><mo>(</mo><munder><mo>∏</mo><mrow><mrow><mi>m</mi><mo>=</mo><mn>1</mn></mrow><mo>,</mo><mrow><mi>…</mi><mo></mo><mstyle><mspace width="0.8em" height="0.8ex"/></mstyle><mo></mo><mi>N</mi></mrow><mo>,</mo><mrow><mi>m</mi><mo>≠</mo><mrow><mi>j</mi><mo></mo><mstyle><mspace width="0.8em" height="0.8ex"/></mstyle><mo></mo><mrow><msub><mi>b</mi><mi>m</mi></msub><mo></mo><mrow><mo>(</mo><mn>0</mn><mo>)</mo></mrow></mrow></mrow></mrow></mrow></munder><mo>)</mo></mrow></mrow></mrow></mrow></mrow><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle></mrow></mtd><mtd><mrow><mo>(</mo><mn>55</mn><mo>)</mo></mrow></mtd></mtr></mtable></math></maths>
As a result, the combiner also outputs a (K+1)-by-1 vector [conf(0) conf(1) . . . conf(K)]<sup>T</sup>, where conf(i) is the confidence that the pattern belongs to class i. Similarly, conf(i) satisfies 0≦conf(i)≦1 and
<maths id="MATHUS00024" num="00024"><math overflow="scroll"><mrow><mrow><munderover><mo>∑</mo><mrow><mi>i</mi><mo>=</mo><mn>0</mn></mrow><mi>K</mi></munderover><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle><mo></mo><mrow><mi>conf</mi><mo></mo><mrow><mo>(</mo><mi>i</mi><mo>)</mo></mrow></mrow></mrow><mo>=</mo><mn>1.</mn></mrow></math></maths>
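The combination rule of equation (55) can be sketched as follows (a non-limiting illustration that applies the rule as written, with b<sub>j</sub>(0) as the "undecided" belief; the function and variable names are assumptions):

```python
import math

# Dempster-Shafer style combination per equation (55):
# bel(i) = sum_j b_j(i) * prod_{m != j} b_m(0), then normalize to conf(i).
def dempster_shafer_combine(beliefs):
    n = len(beliefs)                      # beliefs[j][i] = b_j(i)
    bel = []
    for i in range(len(beliefs[0])):      # classes i = 0..K
        bel.append(sum(
            beliefs[j][i] * math.prod(beliefs[m][0] for m in range(n) if m != j)
            for j in range(n)))
    total = sum(bel)
    return [b / total for b in bel]       # conf(i) = bel(i) / sum_i bel(i)
```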
The output of the combiner is treated as the classification result based on a single maneuver, which is to be combined with results based on previous maneuvers of the same type in the level-2 combination.
The results stored in the trip-logger 54 can be used to enhance the accuracy and robustness of the characterization. To fulfill this task, the decision fusion processor 56 is incorporated. Whenever a new classification result is available, the decision fusion processor 56 integrates the new result with previous results in the trip-logger 54 by the level-2 and level-3 combinations.
Different from the level-1 combination, where different classifiers classify the same pattern, i.e., a single maneuver, the level-2 and the level-3 combinations deal with the issue of combining classification results corresponding to different patterns, i.e., multiple maneuvers of the same or different types. Strictly speaking, the level-1 combination is a standard classifier combination problem while the level-2 and the level-3 combinations are not. However, if a driver's driving skill is regarded as one pattern, the classification based on different maneuvers can be regarded as the classification of the same pattern with different classifiers using different features. Consequently, classifier combination techniques can still be applied. On the other hand, the different maneuvers can be treated as different observations at different time instances, and the combination problem can be treated with data fusion techniques. To demonstrate how this works, the present invention shows one example for each of the two approaches, namely, a simple weighted-average-based decision fusion that ignores the maneuver type and time differences, and Bayes-based level-2 and level-3 combinations that take those differences into consideration.
FIG. 50 is a block diagram of a decision fusion processor 450 that can be the decision fusion processor 56 and that receives the skill profile from the trip-logger 54. The skill classification result for the most recent maneuver with M<sub>id</sub>=i is stored in the trip-logger 54. Based on the maneuver identifier value M<sub>id</sub>, the trip-logger 54 outputs all of the results of the maneuvers identified as M<sub>id</sub>=i for the level-2 combination, and the previous fused skill results from maneuvers of other types, where M<sub>id</sub>≠i. A switch 452 selects a particular level-2 combination processor 454 depending on the type of the particular maneuver. An output processor 456 selects the level-2 combination from the particular channel and outputs it to a level-3 combination processor 458.
Since the level-2 combination combines the classification results based on maneuvers of the same type, each type of maneuver that is used for skill characterization should have its corresponding level-2 combiner. From the perspective of data fusion, a level-2 combination can be regarded as single-sensor tracking, also known as filtering, which involves combining successive measurements, or fusing data from a single sensor over time as opposed to a sensor set. The level-2 combination problem is to find the driving skill x<sub>n</sub><sup>m</sup> based on the classification results Y<sub>n</sub><sup>m</sup>={y<sub>1</sub><sup>m</sup> y<sub>2</sub><sup>m</sup> . . . y<sub>n</sub><sup>m</sup>} of a series of maneuvers that are of the same type, where m represents the maneuver type and y<sub>i</sub><sup>m</sup> is the class label observed by the classifier (or the level-1 combiner if multiple classifiers are used) based on the ith maneuver of the maneuver type m.
Based on Bayes' theorem:
<FORM>P(x<sub>n</sub><sup>m</sup>|Y<sub>n</sub><sup>m</sup>)=P(x<sub>n</sub><sup>m</sup>|y<sub>n</sub><sup>m</sup>,Y<sub>n-1</sub><sup>m</sup>)=P(y<sub>n</sub><sup>m</sup>|x<sub>n</sub><sup>m</sup>,Y<sub>n-1</sub><sup>m</sup>)P(x<sub>n</sub><sup>m</sup>|Y<sub>n-1</sub><sup>m</sup>)/P(y<sub>n</sub><sup>m</sup>|Y<sub>n-1</sub><sup>m</sup>) (56)</FORM>
Where P(·) represents the probability of the corresponding event.
Further assuming that:
 1. The classification results are independent of each other, i.e., P(y<sub>n</sub><sup>m</sup>|x<sub>n</sub><sup>m</sup>,Y<sub>n-1</sub><sup>m</sup>)=P(y<sub>n</sub><sup>m</sup>|x<sub>n</sub><sup>m</sup>), and
 2. The driving skill obeys a Markov evolution, i.e., P(x<sub>n</sub><sup>m</sup>|x<sub>n-1</sub><sup>m</sup>, . . . ,x<sub>1</sub><sup>m</sup>)=P(x<sub>n</sub><sup>m</sup>|x<sub>n-1</sub><sup>m</sup>). Accordingly, P(x<sub>n</sub><sup>m</sup>|Y<sub>n</sub><sup>m</sup>) can be simplified as:
<maths id="MATHUS00025" num="00025"><math overflow="scroll"><mtable><mtr><mtd><mtable><mtr><mtd><mrow><mrow><mi>P</mi><mo></mo><mrow><mo>(</mo><mrow><msubsup><mi>x</mi><mi>n</mi><mi>m</mi></msubsup><mo></mo><msubsup><mi>Y</mi><mi>n</mi><mi>m</mi></msubsup></mrow><mo>)</mo></mrow></mrow><mo>=</mo><mrow><mi>P</mi><mo></mo><mrow><mo>(</mo><mrow><mrow><msubsup><mi>x</mi><mi>n</mi><mi>m</mi></msubsup><mo></mo><msubsup><mi>y</mi><mi>n</mi><mi>m</mi></msubsup></mrow><mo>,</mo><msubsup><mi>Y</mi><mrow><mi>n</mi><mo></mo><mn>1</mn></mrow><mi>m</mi></msubsup></mrow><mo>)</mo></mrow></mrow></mrow></mtd></mtr><mtr><mtd><mrow><mo>=</mo><mfrac><mrow><mrow><mi>P</mi><mo></mo><mrow><mo>(</mo><mrow><msubsup><mi>y</mi><mi>n</mi><mi>m</mi></msubsup><mo></mo><msubsup><mi>x</mi><mi>n</mi><mi>m</mi></msubsup></mrow><mo>)</mo></mrow></mrow><mo></mo><mrow><mo>(</mo><mrow><munderover><mo>∑</mo><mrow><msubsup><mi>x</mi><mrow><mi>n</mi><mo></mo><mn>1</mn></mrow><mi>m</mi></msubsup><mo>=</mo><mn>0</mn></mrow><mi>K</mi></munderover><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle><mo></mo><mrow><mrow><mi>P</mi><mo></mo><mrow><mo>(</mo><mrow><msubsup><mi>x</mi><mi>n</mi><mi>m</mi></msubsup><mo></mo><msubsup><mi>x</mi><mrow><mi>n</mi><mo></mo><mn>1</mn></mrow><mi>m</mi></msubsup></mrow><mo>)</mo></mrow></mrow><mo></mo><mrow><mi>P</mi><mo></mo><mrow><mo>(</mo><mrow><msubsup><mi>x</mi><mrow><mi>n</mi><mo></mo><mn>1</mn></mrow><mi>m</mi></msubsup><mo></mo><msubsup><mi>Y</mi><mrow><mi>n</mi><mo></mo><mn>1</mn></mrow><mi>m</mi></msubsup></mrow><mo>)</mo></mrow></mrow></mrow></mrow><mo>)</mo></mrow></mrow><mrow><mi>P</mi><mo></mo><mrow><mo>(</mo><mrow><msubsup><mi>y</mi><mi>n</mi><mi>m</mi></msubsup><mo></mo><msubsup><mi>Y</mi><mrow><mi>n</mi><mo></mo><mn>1</mn></mrow><mi>m</mi></msubsup></mrow><mo>)</mo></mrow></mrow></mfrac></mrow></mtd></mtr></mtable></mtd><mtd><mrow><mo>(</mo><mn>57</mn><mo>)</mo></mrow></mtd></mtr></mtable></math></maths>
In equation (57), P(y<sub>n</sub><sup>m</sup>|x<sub>n</sub><sup>m</sup>) represents the probability of observing a class y<sub>n</sub><sup>m </sup>given the hypothesis that the maneuver is actually a class x<sub>n</sub><sup>m </sup>maneuver. Since P(x<sub>n</sub><sup>m</sup>=i) (with i=0,1, . . . ,K) is usually unknown, equal probability is usually assumed: P(x<sub>n</sub><sup>m</sup>=i)=1/(K+1). Consequently, P(y<sub>n</sub><sup>m</sup>|x<sub>n</sub><sup>m</sup>)∝P(x<sub>n</sub><sup>m</sup>|y<sub>n</sub><sup>m</sup>)=conf(x<sub>n</sub><sup>m</sup>), where conf(x<sub>n</sub><sup>m</sup>) is the confidence level provided by the classifier (or the level-1 combiner).
P(x<sub>n</sub><sup>m</sup>|x<sub>n-1</sub><sup>m</sup>) in equation (57) represents the probability of a class x<sub>n</sub><sup>m </sup>maneuver following a class x<sub>n-1</sub><sup>m </sup>maneuver.
In an ideal driving environment, a driver's driving skill would be rather consistent as:
<maths id="MATHUS00026" num="00026"><math overflow="scroll"><mtable><mtr><mtd><mrow><mrow><mi>P</mi><mo></mo><mrow><mo>(</mo><mrow><msubsup><mi>x</mi><mi>n</mi><mi>m</mi></msubsup><mo></mo><msubsup><mi>x</mi><mrow><mi>n</mi><mo></mo><mn>1</mn></mrow><mi>m</mi></msubsup></mrow><mo>)</mo></mrow></mrow><mo>=</mo><mrow><mo>{</mo><mtable><mtr><mtd><mrow><mn>1</mn><mo>,</mo></mrow></mtd><mtd><mi>if</mi></mtd><mtd><mrow><msubsup><mi>x</mi><mi>n</mi><mi>m</mi></msubsup><mo>=</mo><msubsup><mi>x</mi><mrow><mi>n</mi><mo></mo><mn>1</mn></mrow><mi>m</mi></msubsup></mrow></mtd></mtr><mtr><mtd><mrow><mn>0</mn><mo>,</mo></mrow></mtd><mtd><mi>if</mi></mtd><mtd><mrow><msubsup><mi>x</mi><mi>n</mi><mi>m</mi></msubsup><mo>≠</mo><msubsup><mi>x</mi><mrow><mi>n</mi><mo></mo><mn>1</mn></mrow><mi>m</mi></msubsup></mrow></mtd></mtr></mtable></mrow></mrow></mtd><mtd><mrow><mo>(</mo><mn>58</mn><mo>)</mo></mrow></mtd></mtr></mtable></math></maths>
However, factors such as traffic/road conditions, fatigue, and inattention may cause a driver to deviate from his/her “normal” driving skill. Such factors can be incorporated into P(x<sub>n</sub><sup>m</sup>|x<sub>n-1</sub><sup>m</sup>) as:
<FORM>P(x<sub>n</sub><sup>m</sup>|x<sub>n-1</sub><sup>m</sup>)=f(x<sub>n</sub><sup>m</sup>, x<sub>n-1</sub><sup>m</sup>, Traffic<sub>index</sub>(n), Road<sub>index</sub>(n), driver<sub>state</sub>(n)) (59)</FORM>
If traffic/road conditions have already been considered in the classification, P(x<sub>n</sub><sup>m</sup>|x<sub>n-1</sub><sup>m</sup>) can be simplified as:
<maths id="MATHUS00027" num="00027"><math overflow="scroll"><mtable><mtr><mtd><mrow><mrow><mi>P</mi><mo></mo><mrow><mo>(</mo><mrow><msubsup><mi>x</mi><mi>n</mi><mi>m</mi></msubsup><mo></mo><msubsup><mi>x</mi><mrow><mi>n</mi><mo></mo><mn>1</mn></mrow><mi>m</mi></msubsup></mrow><mo>)</mo></mrow></mrow><mo>=</mo><mrow><mo>{</mo><mtable><mtr><mtd><mrow><mrow><mn>1</mn><mo></mo><mi>ɛ</mi></mrow><mo>,</mo></mrow></mtd><mtd><mi>if</mi></mtd><mtd><mrow><msubsup><mi>x</mi><mi>n</mi><mi>m</mi></msubsup><mo>∈</mo><mrow><mo>[</mo><mrow><mrow><mi>max</mi><mo></mo><mrow><mo>(</mo><mrow><mn>0</mn><mo>,</mo><mrow><msubsup><mi>x</mi><mrow><mi>n</mi><mo></mo><mn>1</mn></mrow><mi>m</mi></msubsup><mo></mo><mi>β</mi></mrow></mrow><mo>)</mo></mrow></mrow><mo>,</mo><mrow><mi>min</mi><mo></mo><mrow><mo>(</mo><mrow><mrow><msubsup><mi>x</mi><mrow><mi>n</mi><mo></mo><mn>1</mn></mrow><mi>m</mi></msubsup><mo>+</mo><mi>β</mi></mrow><mo>,</mo><mi>K</mi></mrow><mo>)</mo></mrow></mrow></mrow><mo>]</mo></mrow></mrow></mtd></mtr><mtr><mtd><mrow><mi>ɛ</mi><mo>,</mo></mrow></mtd><mtd><mi>if</mi></mtd><mtd><mrow><msubsup><mi>x</mi><mi>n</mi><mi>m</mi></msubsup><mo>∉</mo><mrow><mo>[</mo><mrow><mrow><mi>max</mi><mo></mo><mrow><mo>(</mo><mrow><mn>0</mn><mo>,</mo><mrow><msubsup><mi>x</mi><mrow><mi>n</mi><mo></mo><mn>1</mn></mrow><mi>m</mi></msubsup><mo></mo><mi>β</mi></mrow></mrow><mo>)</mo></mrow></mrow><mo>,</mo><mrow><mi>min</mi><mo></mo><mrow><mo>(</mo><mrow><mrow><msubsup><mi>x</mi><mrow><mi>n</mi><mo></mo><mn>1</mn></mrow><mi>m</mi></msubsup><mo>+</mo><mi>β</mi></mrow><mo>,</mo><mi>K</mi></mrow><mo>)</mo></mrow></mrow></mrow><mo>]</mo></mrow></mrow></mtd></mtr></mtable></mrow></mrow></mtd><mtd><mrow><mo>(</mo><mn>60</mn><mo>)</mo></mrow></mtd></mtr></mtable></math></maths>
Where P(x<sub>n-1</sub><sup>m</sup>|Y<sub>n-1</sub><sup>m</sup>) in equation (57) is the previous combination result. The initial condition P(x<sub>0</sub><sup>m</sup>|Y<sub>0</sub><sup>m</sup>) can be set to 1/(K+1), i.e., equal for any of the classes ({0, 1, 2, . . . , K}). P(y<sub>n</sub><sup>m</sup>|Y<sub>n-1</sub><sup>m</sup>) in the denominator is for normalization such that
<maths id="MATHUS00028" num="00028"><math overflow="scroll"><mrow><mrow><munderover><mo>∑</mo><mrow><msubsup><mi>x</mi><mi>n</mi><mi>m</mi></msubsup><mo>=</mo><mn>0</mn></mrow><mi>K</mi></munderover><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle><mo></mo><mrow><mi>P</mi><mo></mo><mrow><mo>(</mo><mrow><msubsup><mi>x</mi><mi>n</mi><mi>m</mi></msubsup><mo></mo><msubsup><mi>Y</mi><mi>n</mi><mi>m</mi></msubsup></mrow><mo>)</mo></mrow></mrow></mrow><mo>=</mo><mn>1.</mn></mrow></math></maths>
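The transition probability of equation (60) assigns probability 1-ε to skill levels within β of the previous level and ε otherwise; a sketch (assumed names; note that equation (60) as written does not renormalize over classes) is:

```python
# Transition probability P(x_n | x_{n-1}) per equation (60).
def transition_prob(x_now, x_prev, k, beta, eps):
    lo = max(0, x_prev - beta)            # window [max(0, x_prev - beta), ...
    hi = min(x_prev + beta, k)            #  ..., min(x_prev + beta, K)]
    return 1.0 - eps if lo <= x_now <= hi else eps
```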
In summary, the Bayes-based level-2 combination is executed as follows:
 1. Initialization:
<maths id="MATHUS00029" num="00029"><math overflow="scroll"><mrow><mrow><mi>P</mi><mo></mo><mrow><mo>(</mo><mrow><msubsup><mi>x</mi><mn>0</mn><mi>m</mi></msubsup><mo></mo><msubsup><mi>Y</mi><mn>0</mn><mi>m</mi></msubsup></mrow><mo>)</mo></mrow></mrow><mo>=</mo><mfrac><mn>1</mn><mrow><mi>K</mi><mo>+</mo><mn>1</mn></mrow></mfrac></mrow></math></maths>
for x<sub>0</sub><sup>m</sup>=0,1,2, . . . ,K;
 2. Upon the classification of the nth maneuver of the maneuver type m, calculate P(x<sub>n</sub><sup>m</sup>|Y<sub>n-1</sub><sup>m</sup>) for x<sub>n</sub><sup>m</sup>=0,1,2, . . . ,K based on equations (58)-(60);
 3. Calculate the numerator in equation (57): P(y<sub>n</sub><sup>m</sup>|x<sub>n</sub><sup>m</sup>)P(x<sub>n</sub><sup>m</sup>|Y<sub>n-1</sub><sup>m</sup>) for x<sub>n</sub><sup>m</sup>=0,1,2, . . . ,K;
 4. Calculate P(y<sub>n</sub><sup>m</sup>|Y<sub>n-1</sub><sup>m</sup>): P(y<sub>n</sub><sup>m</sup>|Y<sub>n-1</sub><sup>m</sup>)=Σ<sub>x<sub>n</sub><sup>m</sup>=0</sub><sup>K</sup>(P(y<sub>n</sub><sup>m</sup>|x<sub>n</sub><sup>m</sup>)P(x<sub>n</sub><sup>m</sup>|Y<sub>n-1</sub><sup>m</sup>)); and
 5. Calculate the posterior probability
<maths id="MATHUS00030" num="00030"><math overflow="scroll"><mrow><mrow><mi>P</mi><mo></mo><mrow><mo>(</mo><mrow><msubsup><mi>x</mi><mi>n</mi><mi>m</mi></msubsup><mo></mo><msubsup><mi>Y</mi><mi>n</mi><mi>m</mi></msubsup></mrow><mo>)</mo></mrow></mrow><mo>=</mo><mfrac><mrow><mrow><mi>P</mi><mo></mo><mrow><mo>(</mo><mrow><msubsup><mi>y</mi><mi>n</mi><mi>m</mi></msubsup><mo></mo><msubsup><mi>x</mi><mi>n</mi><mi>m</mi></msubsup></mrow><mo>)</mo></mrow></mrow><mo></mo><mrow><mi>P</mi><mo></mo><mrow><mo>(</mo><mrow><msubsup><mi>x</mi><mi>n</mi><mi>m</mi></msubsup><mo></mo><msubsup><mi>Y</mi><mrow><mi>n</mi><mo></mo><mn>1</mn></mrow><mi>m</mi></msubsup></mrow><mo>)</mo></mrow></mrow></mrow><mrow><mi>P</mi><mo></mo><mrow><mo>(</mo><mrow><msubsup><mi>y</mi><mi>n</mi><mi>m</mi></msubsup><mo></mo><msubsup><mi>Y</mi><mrow><mi>n</mi><mo></mo><mn>1</mn></mrow><mi>m</mi></msubsup></mrow><mo>)</mo></mrow></mrow></mfrac></mrow></math></maths>
for x<sub>n</sub><sup>m</sup>=0,1,2, . . . ,K.
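The five steps above amount to a discrete Bayes filter; one recursion step can be sketched as follows (illustrative names; the transition function would come from equations (58)-(60) and the likelihood from the classifier confidence):

```python
# One Bayes-based level-2 recursion (steps 2-5): given the previous posterior
# P(x_{n-1} | Y_{n-1}), the likelihood P(y_n | x_n) and a transition model
# trans(i, j) = P(x_n = i | x_{n-1} = j), return the posterior P(x_n | Y_n).
def level2_update(prior, likelihood, trans):
    k1 = len(prior)                       # K + 1 classes, 0..K
    predicted = [sum(trans(i, j) * prior[j] for j in range(k1))
                 for i in range(k1)]      # step 2: P(x_n | Y_{n-1})
    joint = [likelihood[i] * predicted[i] for i in range(k1)]   # step 3
    evidence = sum(joint)                 # step 4: P(y_n | Y_{n-1})
    return [x / evidence for x in joint]  # step 5: posterior, sums to 1

# Initialization (step 1): uniform prior 1/(K+1) over all classes.
```

The returned posterior is fed back as the prior when the next maneuver of the same type is classified.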
The output of the level-2 combiner is a vector [P(0|Y<sub>n</sub><sup>m</sup>) P(1|Y<sub>n</sub><sup>m</sup>) P(2|Y<sub>n</sub><sup>m</sup>) . . . P(K|Y<sub>n</sub><sup>m</sup>)]. The class corresponding to the largest P(x<sub>n</sub><sup>m</sup>|Y<sub>n</sub><sup>m</sup>) is regarded as the current driving skill:
<maths id="MATHUS00031" num="00031"><math overflow="scroll"><mtable><mtr><mtd><mrow><msubsup><mi>c</mi><mi>n</mi><mi>m</mi></msubsup><mo></mo><munder><mrow><mi>arg</mi><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle><mo></mo><mi>max</mi></mrow><mrow><mrow><msubsup><mi>x</mi><mi>n</mi><mi>m</mi></msubsup><mo>=</mo><mn>0</mn></mrow><mo>,</mo><mn>1</mn><mo>,</mo><mrow><mi>…</mi><mo></mo><mstyle><mspace width="0.8em" height="0.8ex"/></mstyle><mo></mo><mi>K</mi></mrow></mrow></munder><mo></mo><mrow><mi>P</mi><mo></mo><mrow><mo>(</mo><mrow><msubsup><mi>x</mi><mi>n</mi><mi>m</mi></msubsup><mo></mo><msubsup><mi>Y</mi><mi>n</mi><mi>m</mi></msubsup></mrow><mo>)</mo></mrow></mrow></mrow></mtd><mtd><mrow><mo>(</mo><mn>61</mn><mo>)</mo></mrow></mtd></mtr></mtable></math></maths>
Similarly, Bayes' theorem can be applied to develop the level-3 combiner. Upon the onset of a new maneuver, the level-2 combiner outputs [P(0|Y<sub>n</sub><sup>m</sup>) P(1|Y<sub>n</sub><sup>m</sup>) P(2|Y<sub>n</sub><sup>m</sup>) . . . P(K|Y<sub>n</sub><sup>m</sup>)]. The level-3 combiner then calculates P(x<sub>n</sub>|<o ostyle="single">Y</o><sub>n</sub>), where <o ostyle="single">Y</o><sub>n</sub>={Y<sub>n</sub><sup>1 </sup>Y<sub>n</sub><sup>2 </sup>. . . Y<sub>n</sub><sup>j </sup>. . . Y<sub>n</sub><sup>M</sup>} with Y<sub>n</sub><sup>m</sup>={y<sub>n</sub><sup>m </sup>Y<sub>n-1</sub><sup>m</sup>}, Y<sub>n</sub><sup>j</sup>={Y<sub>n-1</sub><sup>j</sup>} for j≠m, and M is the number of maneuver types used for the classification. Correspondingly, the rule to calculate P(x<sub>n</sub>|<o ostyle="single">Y</o><sub>n</sub>) is:
<maths id="MATHUS00032" num="00032"><math overflow="scroll"><mtable><mtr><mtd><mrow><mrow><mi>P</mi><mo></mo><mrow><mo>(</mo><mrow><msub><mi>x</mi><mi>n</mi></msub><mo></mo><msub><mover><mi>Y</mi><mi>_</mi></mover><mi>n</mi></msub></mrow><mo>)</mo></mrow></mrow><mo>=</mo><mrow><mfrac><mrow><mrow><mo>(</mo><mrow><munderover><mo>∏</mo><mrow><mi>j</mi><mo>=</mo><mn>1</mn></mrow><mi>M</mi></munderover><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle><mo></mo><mrow><mi>P</mi><mo></mo><mrow><mo>(</mo><mrow><msubsup><mi>x</mi><mi>n</mi><mi>j</mi></msubsup><mo></mo><msubsup><mi>Y</mi><mi>n</mi><mi>j</mi></msubsup></mrow><mo>)</mo></mrow></mrow></mrow><mo>)</mo></mrow><mo></mo><mrow><mi>P</mi><mo></mo><mrow><mo>(</mo><mrow><msub><mi>x</mi><mrow><mi>n</mi><mo></mo><mn>1</mn></mrow></msub><mo></mo><msub><mover><mi>Y</mi><mi>_</mi></mover><mrow><mi>n</mi><mo></mo><mn>1</mn></mrow></msub></mrow><mo>)</mo></mrow></mrow></mrow><mrow><munderover><mo>∏</mo><mrow><mi>j</mi><mo>=</mo><mn>1</mn></mrow><mi>M</mi></munderover><mo></mo><mstyle><mspace width="0.3em" height="0.3ex"/></mstyle><mo></mo><mrow><mi>P</mi><mo></mo><mrow><mo>(</mo><mrow><msubsup><mi>x</mi><mrow><mi>n</mi><mo></mo><mn>1</mn></mrow><mi>j</mi></msubsup><mo></mo><msubsup><mi>Y</mi><mrow><mi>n</mi><mo></mo><mn>1</mn></mrow><mi>j</mi></msubsup></mrow><mo>)</mo></mrow></mrow></mrow></mfrac><mo>×</mo><mi>normalization_scaler</mi></mrow></mrow></mtd><mtd><mrow><mo>(</mo><mn>62</mn><mo>)</mo></mrow></mtd></mtr></mtable></math></maths>
Where P(x<sub>n-1</sub>|<o ostyle="single">Y</o><sub>n-1</sub>) is the previous result of the level-3 combiner.
For j≠m, Y<sub>n</sub><sup>j</sup>=Y<sub>n-1</sub><sup>j</sup>:

$$P(x_n^j \mid Y_n^j) = \sum_{x_{n-1}^j=0}^{K} P(x_n^j \mid x_{n-1}^j, Y_n^j)\, P(x_{n-1}^j \mid Y_n^j) = \sum_{x_{n-1}^j=0}^{K} P(x_n^j \mid x_{n-1}^j)\, P(x_{n-1}^j \mid Y_{n-1}^j), \tag{63}$$

where P(x<sub>n-1</sub><sup>j</sup>|Y<sub>n-1</sub><sup>j</sup>) is given by the previous result from each individual level-2 combiner and P(x<sub>n</sub><sup>j</sup>|x<sub>n-1</sub><sup>j</sup>) is given by equation (59).
In summary, the level-3 combination can be executed as follows:
1. Update P(x<sub>n</sub><sup>j</sup>|Y<sub>n</sub><sup>j</sup>) based on equation (63) for j≠m, that is, for all maneuver types other than the type corresponding to the latest maneuver; P(x<sub>n</sub><sup>m</sup>|Y<sub>n</sub><sup>m</sup>) is provided by the level-2 combiner corresponding to maneuver type m.
2. Calculate

$$B(x_n \mid \overline{Y}_n) = \frac{\left(\displaystyle\prod_{j=1}^{M} P(x_n^j \mid Y_n^j)\right) P(x_{n-1} \mid \overline{Y}_{n-1})}{\displaystyle\prod_{j=1}^{M} P(x_n^j \mid Y_{n-1}^j)}$$

based on the previous results from the individual level-2 combiners, P(x<sub>n-1</sub><sup>j</sup>|Y<sub>n-1</sub><sup>j</sup>), and the previous result from the level-3 combiner, P(x<sub>n-1</sub>|<o ostyle="single">Y</o><sub>n-1</sub>);
3. Calculate the normalization scaler:
$$\text{normalization\_scaler} = \frac{1}{\displaystyle\sum_{x_n=0}^{K} B(x_n \mid \overline{Y}_n)}. \tag{64}$$
4. Calculate the posterior probability:
$$P(x_n \mid \overline{Y}_n) = B(x_n \mid \overline{Y}_n) \times \text{normalization\_scaler} \tag{65}$$
The output of the level-3 combiner is also a vector [P(0|<o ostyle="single">Y</o><sub>n</sub>) P(1|<o ostyle="single">Y</o><sub>n</sub>) P(2|<o ostyle="single">Y</o><sub>n</sub>) . . . P(K|<o ostyle="single">Y</o><sub>n</sub>)]. The class corresponding to the largest P(x<sub>n</sub>|<o ostyle="single">Y</o><sub>n</sub>) is regarded as the current driving skill:
$$c_n = \operatorname*{arg\,max}_{x_n = 0, 1, \ldots, K} P(x_n \mid \overline{Y}_n) \tag{66}$$
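Assuming the skill transition matrix of equation (59) and the level-2 posteriors are available as probability vectors, the four steps above can be sketched as follows; the function name, array shapes, and the numeric values are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def level3_combine(post_prev_l2, post_curr_m, m, trans, post_prev_l3):
    """Sketch of the level-3 combination steps (illustrative, not the patented code).

    post_prev_l2 : (M, K+1) rows of P(x_{n-1}^j | Y_{n-1}^j), one per maneuver type
    post_curr_m  : (K+1,)   P(x_n^m | Y_n^m) from the level-2 combiner for type m
    m            : index of the maneuver type of the latest maneuver
    trans        : (K+1, K+1) skill transition matrix, trans[i, k] = P(x_n=k | x_{n-1}=i)
    post_prev_l3 : (K+1,)   previous level-3 posterior P(x_{n-1} | Y-bar_{n-1})
    """
    # One-step predictions P(x_n^j | Y_{n-1}^j) through the transition model,
    # i.e. the sum over x_{n-1}^j in equation (63)
    pred = post_prev_l2 @ trans
    # Step 1 (eq. 63): for j != m the measurement set is unchanged, so the
    # updated posterior equals the prediction; for j == m take the fresh
    # level-2 result.
    post_curr = pred.copy()
    post_curr[m] = post_curr_m
    # Step 2: unnormalized combined belief B(x_n | Y-bar_n)
    B = post_curr.prod(axis=0) * post_prev_l3 / pred.prod(axis=0)
    # Steps 3-4 (eqs. 64-65): normalization scaler and posterior
    post_l3 = B / B.sum()
    # Eq. (66): the declared skill class is the maximum a posteriori class
    return post_l3, int(np.argmax(post_l3))

# Illustrative numbers: K+1 = 3 skill classes, M = 2 maneuver types, m = 1
trans = np.array([[0.80, 0.15, 0.05],
                  [0.10, 0.80, 0.10],
                  [0.05, 0.15, 0.80]])
post_prev_l2 = np.array([[0.20, 0.50, 0.30],
                         [0.30, 0.40, 0.30]])
post_curr_m = np.array([0.10, 0.30, 0.60])
post_prev_l3 = np.array([0.25, 0.45, 0.30])
post, c = level3_combine(post_prev_l2, post_curr_m, 1, trans, post_prev_l3)
```

Note that because Y<sub>n</sub><sup>j</sup>=Y<sub>n-1</sub><sup>j</sup> for j≠m, every factor except the one for type m cancels between the numerator and denominator of step 2, so only the latest maneuver type actually changes the level-3 posterior at each step.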
Bayes' theorem can also be used to design an integrated level-2 and level-3 combination by following steps similar to those described above; therefore, those design and implementation details are not repeated here.
It is worth noting that, although the combination disclosed in one embodiment of the invention is based on Bayes' theorem, other classifier combination and data fusion techniques, including voting, sum, mean, median, product, and max/min rules, fuzzy integrals, Dempster-Shafer theory, mixtures of local experts (MLEs), and neural networks, can also be employed in lieu of Bayes' theorem.
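As a concrete illustration of two of these alternatives, the sum and product rules fuse per-classifier posteriors as shown below; the posterior values are invented for the example:

```python
import numpy as np

# Hypothetical posteriors over K+1 = 3 skill classes from three classifiers.
posteriors = np.array([[0.2, 0.5, 0.3],
                       [0.1, 0.6, 0.3],
                       [0.3, 0.4, 0.3]])

def sum_rule(p):
    """Sum (mean) rule: average the class posteriors, then normalize."""
    fused = p.mean(axis=0)
    return fused / fused.sum()

def product_rule(p):
    """Product rule: multiply posteriors elementwise, then normalize."""
    fused = p.prod(axis=0)
    return fused / fused.sum()

# Both rules pick class 1 for these inputs; they can disagree when one
# classifier assigns a near-zero probability, which the product rule punishes
# much more heavily than the sum rule.
```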
The foregoing discussion discloses and describes merely exemplary embodiments of the present invention. One skilled in the art will readily recognize from such discussion and from the accompanying drawings and claims that various changes, modifications and variations can be made therein without departing from the spirit and scope of the invention as defined in the following claims.