SYSTEM AND METHOD FOR EXPRESSIVE LANGUAGE, DEVELOPMENTAL DISORDER, AND EMOTION ASSESSMENT
Abstract
In one embodiment, a system and method for expressive language development provides a method for detecting autism in a natural language environment using a microphone, sound recorder, and a computer programmed with software for the specialized purpose of processing recordings captured by the microphone and sound recorder combination. The computer is programmed to execute a method that includes segmenting an audio signal, captured by the microphone and sound recorder combination, into a plurality of recording segments. The method further includes determining which of the plurality of recording segments correspond to a key child. The method also includes extracting acoustic parameters of the key child recordings and comparing the acoustic parameters of the key child recordings to known acoustic parameters for children. The method returns a determination of a likelihood of autism.
42 Claims
1. A method for detecting autism in a natural language environment using a microphone, sound recorder, and a computer programmed with software for the specialized purpose of processing recordings captured by the microphone and sound recorder combination, the computer programmed to execute a method comprising:
(a) segmenting an audio signal captured by the microphone and sound recorder combination using the computer programmed for the specialized purpose into a plurality of recording segments;
(b) determining which of the plurality of recording segments correspond to a key child;
(c) determining which of the plurality of recording segments that correspond to the key child are classified as key child recordings;
(d) extracting acoustic parameters of the key child recordings;
(e) comparing the acoustic parameters of the key child recordings to known acoustic parameters for children; and
(f) determining a likelihood of autism.
Dependent claims: 2–14.
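Steps (a) through (f) of claim 1 describe a conventional audio-classification pipeline. The Python sketch below illustrates the shape of that pipeline under heavy simplification: fixed-length segmentation, a caller-supplied speaker test standing in for the key-child classifier, mean absolute amplitude standing in for the extracted acoustic parameters, and a z-score against a reference mean standing in for the claimed comparison. All function names, the 0..1 scoring heuristic, and the default segment length are invented for illustration; none of this reproduces the patented models.

```python
# Hypothetical sketch of the claim 1 pipeline, steps (a)-(f).
# Everything here is an illustrative simplification, not the claimed method.

def segment_signal(signal, segment_len):
    """(a) Split the audio signal into fixed-length recording segments."""
    return [signal[i:i + segment_len]
            for i in range(0, len(signal), segment_len)]

def extract_parameters(seg):
    """(d) Reduce a segment to one acoustic parameter: mean |amplitude|."""
    return sum(abs(x) for x in seg) / len(seg)

def likelihood_score(param, reference_mean, reference_sd):
    """(e)-(f) Map distance from the reference distribution to a 0..1 score."""
    z = abs(param - reference_mean) / reference_sd
    return max(0.0, 1.0 - z / 3.0)  # crude heuristic, not a clinical measure

def assess_recording(signal, is_key_child, reference_mean, reference_sd,
                     segment_len=4):
    segments = segment_signal(signal, segment_len)            # (a)
    key_child = [s for s in segments if is_key_child(s)]      # (b)-(c)
    params = [extract_parameters(s) for s in key_child]       # (d)
    mean_param = sum(params) / len(params)
    return likelihood_score(mean_param, reference_mean, reference_sd)  # (e)-(f)
```

In practice the speaker test of step (b)-(c) would itself be a trained classifier and the comparison of step (e) a statistical model fit to many recordings; the sketch only fixes the data flow among the six steps.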
15. A method for detecting autism, comprising:
transforming an audio recording to output an indication of autism on an output mechanism selected from the group consisting of a display, a printing device, an electronic storage device, and an audio output device; and
the transforming of the audio recording performed by comparing it to a model developed by analyzing the transparent parameters of a plurality of sound recordings captured in a natural language environment.
Dependent claims: 16–22.
23. A method for detecting a disorder in a natural language environment using a microphone, sound recorder, and a computer programmed with software for the specialized purpose of processing recordings captured by the microphone and sound recorder combination, the computer programmed to execute a method comprising:
(a) segmenting an audio signal captured by the microphone and sound recorder combination using the computer programmed for the specialized purpose into a plurality of recording segments;
(b) determining which of the plurality of recording segments correspond to a key subject;
(c) determining which of the plurality of recording segments that correspond to the key subject are classified as key subject recordings;
(d) extracting acoustic parameters of the key subject recordings;
(e) comparing the acoustic parameters of the key subject recordings to known acoustic parameters for subjects; and
(f) determining a likelihood of the disorder.
Dependent claim: 24.
25. A method for detecting a disorder, comprising:
transforming an audio recording to output an indication of autism on an output mechanism selected from the group consisting of a display, a printing device, an electronic storage device, and an audio output device, the transforming of the audio recording performed by comparing it to a model developed by analyzing the transparent parameters of a plurality of sound recordings captured in a natural language environment, wherein in the case of each of the plurality of sound recordings, the analyzing includes:
(a) segmenting the sound recording into a plurality of recording segments, wherein the sound recording is captured by a microphone and sound recorder combination;
(b) determining which of the plurality of recording segments correspond to a key subject;
(c) determining which of the plurality of recording segments that correspond to the key subject are classified as key subject recordings; and
(d) extracting acoustic parameters of the key subject recordings.
26. A method of creating an automatic language characteristic recognition system, the method comprising:
(a) receiving a plurality of audio recordings;
(b) segmenting each of the plurality of audio recordings to create a plurality of audio segments for each audio recording; and
(c) clustering each audio segment of the plurality of audio recordings according to audio characteristics of each audio segment to form a plurality of audio segment clusters.
Dependent claims: 27–28.
29. A method of decoding speech using an automatic language characteristic recognition system, the method comprising:
(a) receiving a plurality of audio recordings;
(b) segmenting each of the plurality of audio recordings to create a first plurality of audio segments for each audio recording;
(c) clustering each audio segment of the plurality of audio recordings according to audio characteristics of each audio segment to form a plurality of audio segment clusters;
(d) receiving a new audio recording;
(e) segmenting the new audio recording to create a second plurality of audio segments for the new audio recording; and
(f) determining to which cluster of the plurality of audio segment clusters each segment of the second plurality of audio segments corresponds.
Dependent claims: 30–31.
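Claims 26 and 29 describe building segment clusters from training recordings and then assigning each segment of a new recording to its nearest cluster. The sketch below shows one minimal way those steps can fit together, assuming a single acoustic feature per segment (mean absolute amplitude) and a one-dimensional k-means-style clustering. The feature choice, the seeding scheme, and the fixed iteration count are all assumptions; a real system would use richer features and a more robust clustering method.

```python
# Minimal illustrative sketch of claims 26 and 29: cluster training
# segments by one feature, then map new segments to the nearest cluster.
# The feature and clustering scheme are assumptions for illustration.

def feature(seg):
    """Reduce a segment to a single audio characteristic: mean |amplitude|."""
    return sum(abs(x) for x in seg) / len(seg)

def cluster_centroids(segments, k, iters=10):
    """(c) One-dimensional k-means over the segments' features."""
    feats = sorted(feature(s) for s in segments)
    # Seed centroids evenly across the sorted feature range.
    centroids = [feats[i * (len(feats) - 1) // max(k - 1, 1)]
                 for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for f in feats:
            nearest = min(range(k), key=lambda i: abs(f - centroids[i]))
            groups[nearest].append(f)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return centroids

def assign(seg, centroids):
    """(f) Index of the cluster whose centroid is nearest the segment."""
    f = feature(seg)
    return min(range(len(centroids)), key=lambda i: abs(f - centroids[i]))
```

Claim 26 ends at `cluster_centroids`; claim 29 additionally segments a new recording and calls `assign` on each of its segments.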
32. A method of assessing a key child's expressive language development, comprising:
(a) processing an audio recording taken in the key child's language environment to identify segments of the recording that correspond to the key child's vocalizations;
(b) applying an adult automatic speech recognition phone decoder to the segments to identify each occurrence of each of a plurality of bi-phone categories, wherein each of the bi-phone categories corresponds to a pre-defined speech sound sequence;
(c) determining a distribution for the bi-phone categories; and
(d) using the distribution in an age-based model to assess the key child's expressive language development.
Dependent claims: 33–35.
36. A system for assessing a key child's language development, comprising:
(a) a processor-based device comprising an application having an audio engine for processing an audio recording taken in the key child's language environment to identify segments of the recording that correspond to the key child's vocalizations;
(b) an adult automatic speech recognition phone decoder for processing the segments that correspond to the key child's vocalizations to identify each occurrence of each of a plurality of bi-phone categories, wherein each of the bi-phone categories corresponds to a pre-defined speech sound sequence; and
(c) an expressive language assessment component for determining a distribution for the bi-phone categories and using the distribution in an age-based model to assess the key child's expressive language development, wherein the age-based model is selected based on the key child's chronological age, and the age-based model includes a weight associated with each of the bi-phone categories.
Dependent claims: 37–39.
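Claims 32 and 36 both turn phone-decoder output into a bi-phone distribution and score it with an age-based weighted model. The sketch below shows that arithmetic in isolation: count adjacent phone pairs, normalize to proportions, and take a weighted sum under an age-specific weight table. The phone labels and weights are invented for illustration; the patent's actual bi-phone inventory, age models, and fitted weights are not reproduced here.

```python
# Hedged sketch of the scoring arithmetic in claims 32 and 36.
# Phone labels and weights below are illustrative inventions.
from collections import Counter

def biphone_distribution(phones):
    """(b)-(c) Count adjacent phone pairs and normalize to proportions."""
    pairs = Counter(zip(phones, phones[1:]))
    total = sum(pairs.values())
    return {bp: n / total for bp, n in pairs.items()}

def expressive_language_score(distribution, age_weights):
    """(d) Weighted sum of bi-phone proportions under an age-based model.

    `age_weights` stands in for the model selected by the child's
    chronological age; unknown bi-phones contribute zero.
    """
    return sum(p * age_weights.get(bp, 0.0)
               for bp, p in distribution.items())
```

For example, a decoded sequence `["b", "a", "b", "a"]` yields proportions 2/3 for the pair `("b", "a")` and 1/3 for `("a", "b")`, which the weight table then maps to a single score.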
40. A method of determining the emotion of an utterance, the method comprising:
(a) receiving the utterance at a processor-based device comprising an application having an audio engine;
(b) extracting emotion-related acoustic features from the utterance;
(c) comparing the emotion-related acoustic features to a plurality of models representative of emotions;
(d) selecting a model from the plurality of models based on the comparing of (c); and
(e) outputting the emotion corresponding to the selected model.
Dependent claims: 41–42.
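Steps (b) through (e) of claim 40 amount to extracting a feature vector and selecting the best-matching emotion model. A minimal sketch, assuming two toy features (mean and peak absolute amplitude) and nearest-mean matching by Euclidean distance; real emotion models would use prosodic and spectral features and likelihood-based comparison, none of which is specified here.

```python
# Illustrative sketch of claim 40, steps (b)-(e). The features and the
# Euclidean nearest-mean matching are assumptions, not the claimed models.
import math

def extract_features(utterance):
    """(b) Two toy emotion-related features: mean and peak |amplitude|."""
    return (sum(abs(x) for x in utterance) / len(utterance),
            max(abs(x) for x in utterance))

def detect_emotion(utterance, models):
    """(c)-(e) Output the emotion whose model vector is nearest the features.

    `models` maps an emotion label to a feature vector standing in for a
    trained per-emotion model.
    """
    feats = extract_features(utterance)
    return min(models, key=lambda emotion: math.dist(feats, models[emotion]))
```

Here `min` over the model dictionary performs the comparison of step (c) and the selection of step (d) in one pass, returning the label of step (e).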
Specification