System and method for continuous multimodal speech and gesture interaction
First Claim
1. A method comprising:
continuously monitoring an audio stream associated with a non-tactile gesture input stream;
identifying a first speech event in the audio stream, the first speech event being from a first user;
identifying a second speech event in the audio stream, the second speech event being from a second user;
identifying a temporal window associated with times of the first speech event and the second speech event, wherein the temporal window extends forward and backward from the times of the first speech event and the second speech event;
analyzing, via a processor, data from the non-tactile gesture input stream within the temporal window to identify a non-tactile gesture event; and
processing the first speech event, the second speech event, and the non-tactile gesture event to produce a single multimodal command.
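The claimed temporal window spans the times of both speech events and extends forward and backward from them. A minimal sketch of that step, assuming a fixed extension margin and timestamped gesture samples (both are illustrative choices, not specified in the claim):

```python
from dataclasses import dataclass

@dataclass
class SpeechEvent:
    user_id: int
    time: float  # seconds since the start of the audio stream

def temporal_window(first: SpeechEvent, second: SpeechEvent,
                    margin: float = 1.0) -> tuple[float, float]:
    """Window covering both speech event times, extended forward and
    backward by a hypothetical fixed margin on each side."""
    start = min(first.time, second.time) - margin
    end = max(first.time, second.time) + margin
    return start, end

def gestures_in_window(gesture_stream, window):
    """Select non-tactile gesture samples (t, x, y) whose timestamps
    fall inside the temporal window."""
    start, end = window
    return [g for g in gesture_stream if start <= g[0] <= end]
```

The selected samples would then feed the claimed analysis step that identifies a non-tactile gesture event within the window.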
Abstract
Disclosed herein are systems, methods, and non-transitory computer-readable storage media for processing multimodal input. A system configured to practice the method continuously monitors an audio stream associated with a gesture input stream, and detects a speech event in the audio stream. Then the system identifies a temporal window associated with a time of the speech event, and analyzes data from the gesture input stream within the temporal window to identify a gesture event. The system processes the speech event and the gesture event to produce a multimodal command. The gesture in the gesture input stream can be directed to a display, but is remote from the display. The system can analyze the data from the gesture input stream by calculating an average of gesture coordinates within the temporal window.
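The abstract states that the gesture data can be analyzed by calculating an average of gesture coordinates within the temporal window. A minimal sketch of that calculation, assuming (t, x, y) gesture samples that have already been filtered to the window:

```python
def average_gesture_coordinates(samples):
    """Average the (x, y) coordinates of gesture samples within the
    temporal window; samples are (t, x, y) tuples. Returns None when
    no samples fall inside the window."""
    if not samples:
        return None
    xs = [x for _, x, _ in samples]
    ys = [y for _, _, y in samples]
    return sum(xs) / len(xs), sum(ys) / len(ys)
```

The averaged point could serve as the single gesture location that is fused with the speech events into the multimodal command, e.g. resolving a deictic reference such as "put that there" against a remote display.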
20 Claims
1. A method comprising:
continuously monitoring an audio stream associated with a non-tactile gesture input stream;
identifying a first speech event in the audio stream, the first speech event being from a first user;
identifying a second speech event in the audio stream, the second speech event being from a second user;
identifying a temporal window associated with times of the first speech event and the second speech event, wherein the temporal window extends forward and backward from the times of the first speech event and the second speech event;
analyzing, via a processor, data from the non-tactile gesture input stream within the temporal window to identify a non-tactile gesture event; and
processing the first speech event, the second speech event, and the non-tactile gesture event to produce a single multimodal command.
(Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 16)
12. A system comprising:
a processor; and
a computer-readable storage medium having instructions stored which, when executed by the processor, cause the processor to perform operations comprising:
continuously monitoring an audio stream associated with a non-tactile gesture input stream;
identifying a first speech event in the audio stream, the first speech event being from a first user;
identifying a second speech event in the audio stream, the second speech event being from a second user;
identifying a temporal window associated with times of the first speech event and the second speech event, wherein the temporal window extends forward and backward from the times of the first speech event and the second speech event;
analyzing, via a processor, data from the non-tactile gesture input stream within the temporal window to identify a non-tactile gesture event; and
processing the first speech event, the second speech event, and the non-tactile gesture event to produce a single multimodal command.
(Dependent claims: 13, 14, 15)
17. A non-transitory computer-readable storage device having instructions stored which, when executed by a computing device, cause the computing device to perform operations comprising:
continuously monitoring an audio stream associated with a non-tactile gesture input stream;
identifying a first speech event in the audio stream, the first speech event being from a first user;
identifying a second speech event in the audio stream, the second speech event being from a second user;
identifying a temporal window associated with times of the first speech event and the second speech event, wherein the temporal window extends forward and backward from the times of the first speech event and the second speech event;
analyzing, via a processor, data from the non-tactile gesture input stream within the temporal window to identify a non-tactile gesture event; and
processing the first speech event, the second speech event, and the non-tactile gesture event to produce a single multimodal command.
(Dependent claims: 18, 19, 20)
Specification