Systems and methods for mitigating vigilance decrement while maintaining readiness using augmented reality in a vehicle
Abstract
Systems, methods, and other embodiments described herein relate to mitigating vigilance decrement of a vehicle operator. In one embodiment, a method includes monitoring the operator by collecting operator state information using at least one sensor of the vehicle. The method includes computing an engagement level of the operator according to a vigilance model and the operator state information to characterize an extent of vigilance decrement presently experienced by the operator. The method includes rendering, on an augmented reality (AR) display, at least one graphical element as a function of the engagement level to induce the operator to maintain vigilance with respect to operation of the vehicle and a present operating environment around the vehicle.
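The monitor → compute → render pipeline the abstract describes can be sketched as three small functions. This is an illustrative sketch only: the names (`OperatorState`, `compute_engagement`, `render_overlay`), the sensor fields, and the weighted heuristic standing in for the vigilance model are assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class OperatorState:
    gaze_on_road: float   # fraction of recent time gaze was on the road, 0..1
    blink_rate: float     # blinks per minute

def monitor_operator(sensor_frame: dict) -> OperatorState:
    """Collect operator state information from an in-cabin sensor frame."""
    return OperatorState(gaze_on_road=sensor_frame["gaze_on_road"],
                         blink_rate=sensor_frame["blink_rate"])

def compute_engagement(state: OperatorState) -> float:
    """Stand-in vigilance model: map operator state to an engagement
    level in [0, 1] via a simple weighted heuristic."""
    penalty = max(0.0, (state.blink_rate - 15.0) / 30.0)  # drowsiness proxy
    return max(0.0, min(1.0, 0.8 * state.gaze_on_road + 0.2 * (1.0 - penalty)))

def render_overlay(engagement: float) -> dict:
    """Render a graphical element whose intensity rises as engagement falls."""
    return {"element": "hazard_overlay", "intensity": 1.0 - engagement}

sensor_frame = {"gaze_on_road": 0.4, "blink_rate": 24.0}
state = monitor_operator(sensor_frame)
overlay = render_overlay(compute_engagement(state))
```

In this toy model, a distracted operator (low gaze fraction, elevated blink rate) yields a low engagement level and therefore a stronger overlay.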
17 Claims
1. A vigilance system for mitigating vigilance decrement of an operator in a vehicle, comprising:
one or more processors;
a memory communicably coupled to the one or more processors and storing:
a monitoring module including instructions that when executed by the one or more processors cause the one or more processors to monitor the operator by collecting operator state information using at least one sensor of the vehicle;
a vigilance module including instructions that when executed by the one or more processors cause the one or more processors to compute an engagement level of the operator according to a vigilance model and the operator state information to characterize an extent of vigilance decrement presently experienced by the operator; and
a rendering module including instructions that when executed by the one or more processors cause the one or more processors to render, on an augmented reality (AR) display in the vehicle, at least one graphical element as a function of the engagement level by dynamically adjusting characteristics of how the at least one graphical element is rendered according to at least changes in the engagement level to induce vigilance within the operator with respect to autonomous operation of the vehicle and a present operating environment around the vehicle, and to facilitate preventing the operator from becoming disengaged from the autonomous operation and the present operating environment,
wherein the rendering module further includes instructions to render the at least one graphical element by generating the at least one graphical element from sensor data about the present operating environment that is external to the vehicle,
wherein the monitoring module, the vigilance module, and the rendering module execute on the one or more processors in parallel using a feedback loop to dynamically update the operator state information, to iteratively compute the engagement level, and to adjust the rendering of the at least one graphical element according to changes observed in the engagement level,
wherein the rendering module further includes instructions to render the at least one graphical element with visual characteristics that are varied based, at least in part, on the changes in the engagement level over time, wherein the visual characteristics that are varied include at least an intensity of a graphical overlay of the present operating environment,
wherein the vehicle is operating according to at least a supervised autonomy standard while the monitoring module, the vigilance module, and the rendering module are executing in parallel, and
wherein the rendering module further includes instructions to render the at least one graphical element to maintain readiness of the operator for when the vehicle performs a handover of manual control to the operator by providing visual cues to the operator through changing the at least one graphical element.
- View Dependent Claims (2, 3, 4, 5, 6, 7)
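Claim 1's recitation of visual characteristics "varied based, at least in part, on the changes in the engagement level over time" might be sketched as a small controller that smooths engagement samples and maps them to overlay intensity. The class name, smoothing factor, and mapping below are illustrative assumptions, not taken from the patent.

```python
class OverlayController:
    """Toy controller: lower smoothed engagement -> stronger overlay cue."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha      # smoothing factor for the feedback loop
        self.smoothed = 1.0     # start by assuming a fully engaged operator

    def update(self, engagement: float) -> float:
        """Return the new overlay intensity for the latest engagement sample."""
        # Exponential moving average, so intensity tracks *changes* over time
        # rather than jumping on a single noisy sample.
        self.smoothed += self.alpha * (engagement - self.smoothed)
        return round(1.0 - self.smoothed, 3)

ctrl = OverlayController()
# As measured engagement drops across iterations, the overlay ramps up.
intensities = [ctrl.update(e) for e in (0.9, 0.6, 0.3)]
```

The smoothing keeps the graphical element's intensity changing gradually with the trend in engagement, rather than flickering with each frame of sensor data.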
8. A non-transitory computer-readable medium for mitigating vigilance decrement of an operator in a vehicle and storing instructions that when executed by one or more processors cause the one or more processors to:
monitor the operator by collecting operator state information using at least one sensor of the vehicle;
compute an engagement level of the operator according to a vigilance model and the operator state information to characterize an extent of vigilance decrement presently experienced by the operator; and
render, on an augmented reality (AR) display in the vehicle, at least one graphical element as a function of the engagement level by dynamically adjusting characteristics of how the at least one graphical element is rendered according to at least changes in the engagement level to induce vigilance within the operator with respect to autonomous operation of the vehicle and a present operating environment around the vehicle, and to facilitate preventing the operator from becoming disengaged from the autonomous operation and the present operating environment,
wherein the instructions to render the at least one graphical element include instructions to generate the at least one graphical element from sensor data about the present operating environment that is external to the vehicle,
wherein the instructions to render the at least one graphical element include instructions to render the at least one graphical element with visual characteristics that are varied based, at least in part, on the changes in the engagement level over time, and as a graphical overlay of the present operating environment that is varied in intensity according to the changes in the engagement level,
wherein the instructions to monitor, compute, and render execute on the one or more processors in parallel using a feedback loop to dynamically update the operator state information, to iteratively compute the engagement level, and to adjust the rendering of the at least one graphical element according to the changes observed in the engagement level,
wherein the vehicle is operating according to at least a supervised autonomy standard while the instructions to monitor, compute, and render execute on the one or more processors in parallel, and
wherein the instructions to render the at least one graphical element maintain readiness of the operator for when the vehicle performs a handover of manual control to the operator by providing visual cues to the operator through changing the at least one graphical element.
- View Dependent Claims (9, 10, 11)
12. A method of mitigating vigilance decrement of an operator in a vehicle with an augmented reality (AR) display, comprising:
monitoring the operator by collecting operator state information using at least one sensor of the vehicle;
computing an engagement level of the operator according to a vigilance model and the operator state information to characterize an extent of vigilance decrement presently experienced by the operator; and
rendering, on the AR display, at least one graphical element as a function of the engagement level by dynamically adjusting characteristics of how the at least one graphical element is rendered according to at least changes in the engagement level to induce vigilance within the operator with respect to autonomous operation of the vehicle and a present operating environment around the vehicle, and to facilitate preventing the operator from becoming disengaged from the autonomous operation and the present operating environment,
wherein rendering the at least one graphical element includes generating the at least one graphical element from sensor data about the present operating environment that is external to the vehicle,
wherein monitoring the operator, computing the engagement level, and rendering the at least one graphical element execute in parallel using a feedback loop of updated operator state information from the at least one sensor to iteratively compute the engagement level and adjust the rendering of the at least one graphical element according to the vigilance induced in the operator,
wherein rendering the at least one graphical element includes rendering the at least one graphical element with visual characteristics that are varied based, at least in part, on the changes in the engagement level over time, and as a graphical overlay of the present operating environment that is varied in intensity according to the changes in the engagement level,
wherein the vehicle is operating according to at least a supervised autonomy standard while the monitoring, computing, and rendering are executing in parallel, and
wherein rendering the at least one graphical element maintains readiness of the operator for when the vehicle performs a handover of manual control to the operator by providing visual cues to the operator through changing the at least one graphical element.
- View Dependent Claims (13, 14, 15, 16, 17)
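The parallel monitoring/computing/rendering feedback loop recited in claim 12 might be sketched with two threads sharing state under a lock: one thread consumes sensor frames and recomputes engagement, while the other continuously adjusts the overlay intensity. The thread layout, timing, and the stand-in vigilance model are assumptions for illustration only.

```python
import threading
import time

class VigilanceLoop:
    """Toy parallel feedback loop: one thread monitors the operator and
    computes engagement; another renders the overlay as engagement changes."""

    def __init__(self, sensor_frames):
        self.frames = list(sensor_frames)
        self.lock = threading.Lock()
        self.engagement = 1.0   # start fully engaged
        self.intensity = 0.0    # overlay off while engagement is high
        self.running = True

    def monitor_and_compute(self):
        # Feedback loop: update operator state, recompute engagement.
        for frame in self.frames:
            with self.lock:
                # Stand-in vigilance model: engagement tracks gaze fraction.
                self.engagement = frame["gaze_on_road"]
            time.sleep(0.01)
        self.running = False

    def render(self):
        # Adjust the AR overlay concurrently as the engagement level changes.
        while self.running:
            with self.lock:
                self.intensity = 1.0 - self.engagement
            time.sleep(0.005)
        with self.lock:  # one final render after monitoring ends
            self.intensity = 1.0 - self.engagement

loop = VigilanceLoop([{"gaze_on_road": g} for g in (0.9, 0.5, 0.2)])
threads = [threading.Thread(target=loop.monitor_and_compute),
           threading.Thread(target=loop.render)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The lock keeps each read/write of the shared engagement level and intensity consistent between the two concurrently executing stages, mirroring the claim's iterative compute-then-adjust cycle.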
Specification