Video capturing device for predicting special driving situations
First Claim
1. A video device for predicting driving situations while a person drives a car, the video device comprising:
multi-modal sensors and knowledge data for extracting feature maps;
a deep convolutional neural network trained with training data to recognize real-time traffic scenes (TSs) from a viewpoint of the car; and
a user interface (UI) for displaying the real-time TSs and to warn of possible danger,
wherein the real-time TSs are compared to predetermined TSs to predict the driving situations,
wherein the training data is labeled semi-automatically by defining a set of constraints on sensory variables for each label, encoding each label into a set of rules, and employing the multi-modal sensors for which all rules are verified and assigned to a corresponding label.
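The semi-automatic labeling limitation above (constraints on sensory variables per label, each label encoded as a set of rules, a label assigned only when all of its rules are verified) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; the rule sets, variable names, and thresholds (`RULES`, `decel_mps2`, `label_sample`, etc.) are all assumptions for demonstration.

```python
# Hypothetical sketch of rule-based semi-automatic labeling:
# each label maps to a set of rules (constraints on sensory variables),
# and a sample receives a label only when every rule for it is verified.

# Each rule is a predicate over a dict of sensory variables.
RULES = {
    "hard_braking": [
        lambda s: s["decel_mps2"] > 4.0,   # strong deceleration
        lambda s: s["brake_pedal"] > 0.8,  # pedal nearly fully pressed
    ],
    "lane_change": [
        lambda s: abs(s["lateral_vel_mps"]) > 0.5,      # lateral motion
        lambda s: s["turn_signal"] in ("left", "right"),  # signal active
    ],
}

def label_sample(sensors):
    """Return every label whose rules are all verified for this sample."""
    return [label for label, rules in RULES.items()
            if all(rule(sensors) for rule in rules)]

sample = {"decel_mps2": 5.2, "brake_pedal": 0.9,
          "lateral_vel_mps": 0.1, "turn_signal": "none"}
print(label_sample(sample))  # -> ['hard_braking']
```

Only the "hard_braking" rules all hold for this sample, so only that label is assigned; samples satisfying no rule set would remain unlabeled for manual review, which is what makes the scheme semi-automatic.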
Abstract
A video device for predicting driving situations while a person drives a car is presented. The video device includes multi-modal sensors and knowledge data for extracting feature maps, a deep neural network trained with training data to recognize real-time traffic scenes (TSs) from a viewpoint of the car, and a user interface (UI) for displaying the real-time TSs. The real-time TSs are compared to predetermined TSs to predict the driving situations. The video device can be a video camera. The video camera can be mounted to a windshield of the car. Alternatively, the video camera can be incorporated into the dashboard or console area of the car. The video camera can calculate speed, velocity, type, and/or position information related to other cars within the real-time TS. The video camera can also include warning indicators, such as light emitting diodes (LEDs) that emit different colors for the different driving situations.
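The comparison of recognized real-time TSs against predetermined TSs, and the mapping of the result to colored warning indicators, can be sketched as below. The feature vectors, similarity measure, scene names, and LED color table are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: match a recognized traffic-scene (TS) feature vector
# against predetermined TSs and map the best match to an LED warning color.
import math

# Predetermined TSs as (scene name, feature vector, LED color) entries.
PREDETERMINED_TS = [
    ("clear_road",         (0.9, 0.1, 0.0),  "green"),
    ("slowing_traffic",    (0.4, 0.7, 0.2),  "yellow"),
    ("imminent_collision", (0.1, 0.2, 0.95), "red"),
]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def classify_scene(realtime_ts):
    """Return (scene name, LED color) of the closest predetermined TS."""
    name, _, color = max(PREDETERMINED_TS,
                         key=lambda entry: cosine(realtime_ts, entry[1]))
    return name, color

print(classify_scene((0.15, 0.25, 0.9)))  # -> ('imminent_collision', 'red')
```

In practice the vectors would be feature maps produced by the trained network rather than hand-written triples, but the nearest-match-then-indicate flow is the same.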
17 Claims
1. A video device for predicting driving situations while a person drives a car, the video device comprising:
multi-modal sensors and knowledge data for extracting feature maps;
a deep convolutional neural network trained with training data to recognize real-time traffic scenes (TSs) from a viewpoint of the car; and
a user interface (UI) for displaying the real-time TSs and to warn of possible danger,
wherein the real-time TSs are compared to predetermined TSs to predict the driving situations,
wherein the training data is labeled semi-automatically by defining a set of constraints on sensory variables for each label, encoding each label into a set of rules, and employing the multi-modal sensors for which all rules are verified and assigned to a corresponding label.
(Dependent claims: 2-8.)
9. A method for predicting driving situations while a person drives a car, the method comprising:
extracting feature maps from multi-modal sensors and knowledge data;
training a deep convolutional neural network, with training data, to recognize real-time traffic scenes (TSs) from a viewpoint of the car;
displaying the real-time TSs on a user interface (UI) to warn of possible dangers; and
comparing the real-time TSs to predetermined TSs to predict the driving situations,
wherein the training data is labeled semi-automatically by defining a set of constraints on sensory variables for each label, encoding each label into a set of rules, and using the multi-modal sensors for which all rules are verified and assigned to a corresponding label.
(Dependent claims: 10-16.)
17. A non-transitory computer-readable storage medium comprising a computer-readable program for predicting, by employing a video device, driving situations while a person drives a car, wherein the computer-readable program when executed on a computer causes the computer to perform the steps of:
extracting feature maps from multi-modal sensors and knowledge data;
training a deep convolutional neural network, with training data, to recognize real-time traffic scenes (TSs) from a viewpoint of the car;
displaying the real-time TSs on a user interface (UI) to warn of possible dangers; and
comparing the real-time TSs to predetermined TSs to predict the driving situations,
wherein the training data is labeled semi-automatically by defining a set of constraints on sensory variables for each label, encoding each label into a set of rules, and employing the multi-modal sensors for which all rules are verified and assigned to a corresponding label.
Specification