System and method for providing augmented reality based directions based on verbal and gestural cues
First Claim
1. A method for providing augmented reality based directions, comprising:
receiving a voice input based on verbal cues provided by a passenger in a vehicle;
receiving a gesture input and a gaze input based on gestural cues and gaze cues provided by the passenger in the vehicle, wherein the gaze input of the passenger includes an object in a surrounding environment and an associated focal plane for the object, wherein the gaze input of the passenger is indicative of where the passenger is looking;
determining directives based on the voice input, the gesture input and the gaze input;
associating the directives with the surrounding environment of the vehicle;
generating augmented reality graphical elements based on the directives and the association of the directives with the surrounding environment of the vehicle; and
displaying, for a driver of the vehicle, the augmented reality graphical elements on a heads-up display system of the vehicle using an actuator to move a projector based on a location of the object in the surrounding environment to project the augmented reality graphical elements at the focal plane associated with the object, wherein the augmented reality graphical elements are displayed at the focal plane associated with the object from the gaze input of the passenger where the passenger is looking.
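The claimed method steps (receive voice, gesture, and gaze inputs; determine a directive; anchor it to an object in the surrounding environment at that object's focal plane) can be sketched as a minimal Python pipeline. All class names, the keyword matching, and the fusion logic below are invented for illustration; the patent does not specify how the cues are combined.

```python
from dataclasses import dataclass

@dataclass
class GazeInput:
    object_id: str        # object in the surrounding environment the passenger is looking at
    focal_plane_m: float  # focal plane associated with that object, in metres

@dataclass
class Directive:
    action: str
    target_object: str
    focal_plane_m: float

def determine_directive(voice: str, gesture: str, gaze: GazeInput) -> Directive:
    """Combine verbal, gestural and gaze cues into a single directive.

    Hypothetical keyword matching: the voice input selects the action,
    the gesture input decides whether the gazed object is the target,
    and the gaze input supplies the object and its focal plane.
    """
    action = "turn" if "turn" in voice.lower() else "highlight"
    target = gaze.object_id if gesture == "point" else "road_ahead"
    return Directive(action, target, gaze.focal_plane_m)
```

A downstream heads-up display would then render the resulting graphic at `focal_plane_m`, so the driver sees it at the same apparent depth as the object the passenger indicated.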
Abstract
A method and system for providing augmented reality based directions. The method and system include receiving a voice input based on verbal cues provided by one or more vehicle occupants in a vehicle. The method and system also include receiving a gesture input and a gaze input based on gestural cues and gaze cues provided by the one or more vehicle occupants in the vehicle. The method and system additionally include determining directives based on the voice input, the gesture input and the gaze input and associating the directives with the surrounding environment of the vehicle. Additionally, the method and system include generating augmented reality graphical elements based on the directives and the association of the directives with the surrounding environment of the vehicle. The method and system further include displaying the augmented reality graphical elements on a heads-up display system of the vehicle.
20 Claims
1. A method for providing augmented reality based directions, comprising:
receiving a voice input based on verbal cues provided by a passenger in a vehicle;
receiving a gesture input and a gaze input based on gestural cues and gaze cues provided by the passenger in the vehicle, wherein the gaze input of the passenger includes an object in a surrounding environment and an associated focal plane for the object, wherein the gaze input of the passenger is indicative of where the passenger is looking;
determining directives based on the voice input, the gesture input and the gaze input;
associating the directives with the surrounding environment of the vehicle;
generating augmented reality graphical elements based on the directives and the association of the directives with the surrounding environment of the vehicle; and
displaying, for a driver of the vehicle, the augmented reality graphical elements on a heads-up display system of the vehicle using an actuator to move a projector based on a location of the object in the surrounding environment to project the augmented reality graphical elements at the focal plane associated with the object, wherein the augmented reality graphical elements are displayed at the focal plane associated with the object from the gaze input of the passenger where the passenger is looking.
Dependent claims: 2, 3, 4, 5, 6, 7.
8. A system for providing augmented reality based directions, comprising:
a voice recognition subsystem that provides a voice input based on verbal cues provided by a passenger in a vehicle;
a gesture recognition subsystem that provides a gesture input and a gaze input based on gestural cues and gaze cues provided by the passenger in the vehicle, wherein the gaze input of the passenger includes an object in a surrounding environment and an associated focal plane for the object, wherein the gaze input of the passenger is indicative of where the passenger is looking;
a command interpreter, implemented via a processor, that determines directives based on the voice input provided by the voice recognition subsystem, and the gesture input and the gaze input provided by the gesture recognition subsystem and associates the directives with the surrounding environment of the vehicle, wherein the command interpreter generates augmented reality graphical elements based on the directives and the association of the directives with the surrounding environment of the vehicle; and
a heads-up display system that displays the augmented reality graphical elements generated by the command interpreter for a driver of the vehicle using an actuator to move a projector based on a location of the object in the surrounding environment to project the augmented reality graphical elements at the focal plane associated with the object, wherein the augmented reality graphical elements are displayed, for the driver of the vehicle, at the focal plane associated with the object from the gaze input of the passenger where the passenger is looking.
Dependent claims: 9, 10, 11, 12, 13, 14.
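Claim 8 recites the same steps as claim 1, but as cooperating components: two recognition subsystems feeding a processor-implemented command interpreter, whose output a heads-up display renders. A hypothetical sketch of that composition, with stubbed subsystems standing in for real speech, gesture, and gaze recognizers (all names and return values are invented, not from the patent):

```python
class VoiceRecognitionSubsystem:
    def voice_input(self) -> str:
        return "turn left at that intersection"  # stubbed verbal cue

class GestureRecognitionSubsystem:
    def gesture_input(self) -> str:
        return "point"  # stubbed gestural cue
    def gaze_input(self) -> dict:
        # object the passenger is looking at, plus its associated focal plane
        return {"object": "intersection_3", "focal_plane_m": 40.0}

class CommandInterpreter:
    def interpret(self, voice: str, gesture: str, gaze: dict) -> dict:
        # associate the directive with the surrounding environment and
        # generate an AR graphical element anchored at the gazed object
        return {"graphic": "arrow_left",
                "anchor": gaze["object"],
                "focal_plane_m": gaze["focal_plane_m"]}

class HeadsUpDisplay:
    def display(self, element: dict) -> str:
        # an actuator would move the projector so the graphic lands on the
        # focal plane of the object the passenger is looking at
        return (f"project {element['graphic']} at {element['anchor']} "
                f"({element['focal_plane_m']} m)")
```

Keeping the interpreter separate from the recognizers mirrors the claim structure: the subsystems only report cues, and the command interpreter is the single place where cues become directives.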
15. A non-transitory computer-readable storage medium storing instructions that when executed by a processor perform actions, comprising:
receiving a voice input from a passenger in a vehicle;
receiving a gesture input and a gaze input from the passenger in the vehicle, wherein the gaze input of the passenger includes an object in a surrounding environment and an associated focal plane for the object, wherein the gaze input of the passenger is indicative of where the passenger is looking;
determining directives based on the voice input, the gesture input and the gaze input;
associating the directives with the surrounding environment of the vehicle;
generating augmented reality graphical elements based on the directives and the association of the directives with the surrounding environment of the vehicle; and
displaying, for a driver of the vehicle, the augmented reality graphical elements on a heads-up display system of the vehicle using an actuator to move a projector based on a location of the object in the surrounding environment to project the augmented reality graphical elements at the focal plane associated with the object, wherein the augmented reality graphical elements are displayed at the focal plane associated with the object from the gaze input of the passenger where the passenger is looking.
Dependent claims: 16, 17, 18, 19, 20.
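Each independent claim recites an actuator that moves the projector based on the location of the object in the surrounding environment. The patent claims do not specify how that location translates into actuator motion; one hypothetical approach is simple trigonometry over the object's lateral offset and forward distance (function name and coordinate convention are invented for illustration):

```python
import math

def projector_pan_angle(object_x_m: float, object_z_m: float) -> float:
    """Pan angle in degrees needed to aim the projector at an object
    located object_x_m metres laterally (positive = right) and
    object_z_m metres ahead of the vehicle.

    atan2 handles the object_z_m == 0 edge case and preserves sign,
    so objects to the left yield negative angles.
    """
    return math.degrees(math.atan2(object_x_m, object_z_m))
```

An actuator controller could feed this angle to the projector mount each frame, so the AR graphical element stays registered to the object's focal plane as the vehicle moves.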
Specification