METHOD TO ANALYZE ATTENTION MARGIN AND TO PREVENT INATTENTIVE AND UNSAFE DRIVING
Abstract
A method includes a computer device receiving extracted features from a driver-facing camera and from a road as viewed by a road-facing camera; the computer device further receiving extracted features reflecting the driver's behavior including head and eye movement, speech and gestures; the computer device further receiving extracted telemetry features from a vehicle; the computer device still further receiving extracted features reflecting the driver's biometrics; and a decision engine receiving information from the computer device representing each of the extracted features of the driver, wherein a driver's attention and emotional state is determined to evaluate risks associated with moving vehicles and the driver's ability to deal with any projected risks.
133 Citations
20 Claims
1. A method comprising:
a computer device receiving extracted features from a driver-facing camera and from a road as viewed by a road-facing camera;
the computer device further receiving extracted features reflecting the driver's behavior including head and eye movement, speech and gestures;
the computer device further receiving extracted telemetry features from a vehicle;
the computer device still further receiving extracted features reflecting the driver's biometrics; and
a decision engine receiving information from the computer device representing each of the extracted features of the driver, wherein a driver's attention and emotional state is determined to evaluate risks associated with moving vehicles and the driver's ability to deal with any projected risks.
(Dependent claims: 2, 3, 4, 5, 6, 7)
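The decision engine of claim 1 fuses several feature streams (camera, behavior, telemetry, biometrics) into a single attention/risk assessment. The following is a minimal sketch of that fusion step, not the patented implementation; every feature name, threshold, and weight below is a hypothetical illustration.

```python
from dataclasses import dataclass

@dataclass
class DriverFeatures:
    # Hypothetical extracted-feature summary; names are illustrative only.
    gaze_on_road: float       # fraction of recent frames with eyes on the road (0..1)
    head_pose_dev: float      # head yaw deviation from road center, in degrees
    speech_agitation: float   # agitation score from speech features (0..1)
    heart_rate_bpm: float     # biometric input
    closing_speed_mps: float  # telemetry: closing speed toward a lead vehicle

def decision_engine(f: DriverFeatures) -> dict:
    """Combine driver and road features into attention, stress and risk scores."""
    # Attention drops as the gaze leaves the road or the head turns away.
    attention = f.gaze_on_road * max(0.0, 1.0 - f.head_pose_dev / 45.0)
    # Emotional state proxied by speech agitation and elevated heart rate.
    stress = min(1.0, 0.5 * f.speech_agitation
                 + 0.5 * max(0.0, (f.heart_rate_bpm - 70.0) / 50.0))
    # Projected hazard from telemetry: faster closing speed, higher hazard.
    hazard = min(1.0, f.closing_speed_mps / 15.0)
    # Risk rises when hazards outpace the driver's ability to respond.
    risk = hazard * (1.0 - attention) * (0.5 + 0.5 * stress)
    return {"attention": attention, "stress": stress, "risk": risk}
```

An attentive driver approaching slowly scores low risk; a distracted, agitated driver closing quickly scores high, which is the contrast the claim's "attention margin" evaluation is meant to capture.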
8. A driver assistant system comprising:
one or more processors, one or more computer-readable memories and one or more computer-readable, tangible storage devices;
a driver's attention state module operatively coupled to at least one of the one or more storage devices for execution by at least one of the one or more processors via at least one of the one or more memories, configured to receive extracted features for a driver from a camera facing the driver and from a road as viewed by a camera facing the road;
the driver's attention module operatively coupled to at least one of the one or more storage devices for execution by at least one of the one or more processors via at least one of the one or more memories, is further configured to receive extracted features reflecting the driver's facial and hand gestures, and speech;
the driver's attention module operatively coupled to at least one of the one or more storage devices for execution by at least one of the one or more processors via at least one of the one or more memories, is further configured to receive extracted features reflecting the driver's biometrics; and
a decision engine module operatively coupled to at least one of the one or more storage devices for execution by at least one of the one or more processors via at least one of the one or more memories, configured to receive information from the driver's attention module representing each of the extracted features of the driver, wherein a driver's attention, wellness and emotional state is determined.
(Dependent claims: 9, 10, 11, 12, 13, 14)
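Claim 8 describes a modular architecture: an attention state module that gathers extracted features from several sources, feeding a decision engine module. A minimal sketch of that wiring follows; the class and source names are hypothetical, and the "fusion" is a placeholder rather than the claimed determination logic.

```python
class AttentionStateModule:
    """Gathers per-source extracted features (hypothetical sketch)."""
    def __init__(self):
        self.features = {}

    def receive(self, source: str, values: dict):
        # Sources per the claim: driver-facing camera, road-facing camera,
        # facial/hand gestures and speech, and biometrics.
        self.features[source] = values

class DecisionEngineModule:
    """Consumes everything the attention module has gathered."""
    def evaluate(self, features: dict) -> dict:
        # Placeholder fusion: flag low attention if any source reports it.
        inattentive = any(v.get("inattentive", False) for v in features.values())
        return {"attention": "low" if inattentive else "ok",
                "sources_seen": sorted(features)}

class DriverAssistantSystem:
    """Wires the two modules together, mirroring the claim's architecture."""
    def __init__(self):
        self.attention = AttentionStateModule()
        self.engine = DecisionEngineModule()

    def assess(self) -> dict:
        return self.engine.evaluate(self.attention.features)
```

Keeping feature collection and decision making in separate modules matches the claim's structure, where each module is independently coupled to storage and executed on the processors.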
15. A virtual co-pilot method comprising:
an image processor receiving images from a camera facing a driver;
the image processor receiving scans from an infrared scanner facing the driver;
the image processor receiving images from a road-facing camera;
a speech engine receiving speech from the driver using a microphone; and
biosensors providing biometric data from the driver to a processing unit, wherein the processing unit uses machine learning to dynamically evaluate risks indicated by the received images and scans from the image processor, the received speech from the speech engine and the provided biometric data from the biosensors to determine a driver's attention, emotional state and fatigue.
(Dependent claims: 16, 17, 18, 19, 20)
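Claim 15 leaves the machine learning model unspecified. As one hedged illustration, the co-pilot's risk evaluation could be a learned logistic model over features drawn from the image processor, speech engine and biosensors; the feature names and weights below are invented for the sketch, whereas in the claimed method they would be learned from labeled driving data.

```python
import math

# Hypothetical learned weights for a logistic risk model.
WEIGHTS = {
    "eye_closure": 2.5,      # from driver-facing camera / infrared scans
    "gaze_off_road": 1.8,    # from road- and driver-facing cameras
    "speech_slur": 1.2,      # from the speech engine
    "heart_rate_var": -1.5,  # from biosensors (higher HRV -> lower fatigue)
}
BIAS = -2.0

def copilot_risk(features: dict) -> float:
    """Logistic score in [0, 1]: combined attention/fatigue risk."""
    z = BIAS + sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))
```

With these weights, an alert driver (eyes open, good heart-rate variability) scores well below 0.5, while a fatigued driver with drooping eyes and slurred speech scores well above it, triggering the co-pilot's intervention threshold.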
Specification