System and method for continuous multimodal speech and gesture interaction
Abstract
Disclosed herein are systems, methods, and non-transitory computer-readable storage media for processing multimodal input. A system configured to practice the method continuously monitors an audio stream associated with a gesture input stream, and detects a speech event in the audio stream. Then the system identifies a temporal window associated with a time of the speech event, and analyzes data from the gesture input stream within the temporal window to identify a gesture event. The system processes the speech event and the gesture event to produce a multimodal command. The gesture in the gesture input stream can be directed to a display, but is remote from the display. The system can analyze the data from the gesture input stream by calculating an average of gesture coordinates within the temporal window.
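The coordinate-averaging step mentioned at the end of the abstract can be sketched as follows. This is a minimal illustration only, not the patent's implementation: the `GestureSample` type, its field names, and the half-second window size are assumptions introduced for the example.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class GestureSample:
    t: float  # timestamp in seconds
    x: float  # pointing coordinate on the display
    y: float

def average_gesture_point(samples: List[GestureSample],
                          speech_time: float,
                          half_window: float = 0.5) -> Optional[Tuple[float, float]]:
    """Average gesture coordinates inside a temporal window that
    extends forward and backward from the speech-event time."""
    window = [s for s in samples
              if speech_time - half_window <= s.t <= speech_time + half_window]
    if not window:
        return None  # no gesture data fell inside the window
    n = len(window)
    return (sum(s.x for s in window) / n, sum(s.y for s in window) / n)
```

For example, with samples at t = 0.8, 1.0, and 1.2 s around a speech event at t = 1.0 s, only those three points fall inside the window and their coordinates are averaged; a sample at t = 3.0 s is ignored.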
20 Claims
1. A method comprising:
monitoring an audio stream associated with a non-tactile gesture input stream;
identifying a speech event, in the audio stream, from a first user;
determining a temporal window associated with a time of the speech event, wherein the temporal window extends forward and backward from the time of the speech event;
analyzing, via a processor, data from the non-tactile gesture input stream within the temporal window to identify, based on the speech event, a non-tactile gesture event;
identifying clarifying information, in the audio stream, about the speech event from a second user;
applying the clarifying information to the speech event to yield a clarification; and
processing, based on the clarification, the speech event and the non-tactile gesture event to produce a multimodal command.
(Dependent claims 2-10 not shown.)
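The claimed method, end to end, can be sketched as a short pipeline: find a gesture event inside the temporal window around the speech event, fold in a second user's clarifying utterance, and fuse the result into one command. This is a hypothetical sketch; `SpeechEvent`, `GestureEvent`, the one-second window, and the string-based command format are all assumptions made for illustration, not the claimed implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SpeechEvent:
    time: float   # seconds
    text: str     # e.g. "move that"
    speaker: str

@dataclass
class GestureEvent:
    time: float
    target: str   # object the non-tactile gesture points at

def build_multimodal_command(speech: SpeechEvent,
                             gestures: List[GestureEvent],
                             clarification: Optional[SpeechEvent] = None,
                             half_window: float = 1.0) -> Optional[str]:
    """Pair a speech event with a gesture event found inside a temporal
    window extending forward and backward from the speech time, then
    apply any clarifying utterance from a second user."""
    # 1. Keep only gesture events inside the window; pick the closest in time.
    in_window = [g for g in gestures
                 if abs(g.time - speech.time) <= half_window]
    if not in_window:
        return None
    gesture = min(in_window, key=lambda g: abs(g.time - speech.time))

    # 2. Apply clarifying information, but only from a different (second) user.
    text = speech.text
    if clarification and clarification.speaker != speech.speaker:
        text = f"{text} ({clarification.text})"

    # 3. Fuse the two modalities into a single multimodal command.
    return f"{text} -> {gesture.target}"
```

For instance, "move that" at t = 2.0 s paired with a gesture at t = 1.6 s toward `photo_3`, clarified by a second speaker's "the one on the left", yields one fused command; a gesture outside the window produces no command at all.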
11. A system comprising:
a processor; and
a computer-readable storage medium having instructions stored which, when executed by the processor, cause the processor to perform operations comprising:
monitoring an audio stream associated with a non-tactile gesture input stream;
identifying a speech event, in the audio stream, from a first user;
determining a temporal window associated with a time of the speech event, wherein the temporal window extends forward and backward from the time of the speech event;
analyzing data from the non-tactile gesture input stream within the temporal window to identify, based on the speech event, a non-tactile gesture event;
identifying clarifying information, in the audio stream, about the speech event from a second user;
applying the clarifying information to the speech event to yield a clarification; and
processing, based on the clarification, the speech event and the non-tactile gesture event to produce a multimodal command.
(Dependent claims 12-19 not shown.)
20. A computer-readable storage device having instructions stored which, when executed by a computing device, cause the computing device to perform operations comprising:
monitoring an audio stream associated with a non-tactile gesture input stream;
identifying a speech event, in the audio stream, from a first user;
determining a temporal window associated with a time of the speech event, wherein the temporal window extends forward and backward from the time of the speech event;
analyzing data from the non-tactile gesture input stream within the temporal window to identify, based on the speech event, a non-tactile gesture event;
identifying clarifying information, in the audio stream, about the speech event from a second user;
applying the clarifying information to the speech event to yield a clarification; and
processing, based on the clarification, the speech event and the non-tactile gesture event to produce a multimodal command.
Specification