Identifying and generating audio cohorts based on audio data input
First Claim
1. A computer implemented method of generating audio cohorts, the computer implemented method comprising:
receiving audio data from a set of audio sensors, by an audio analysis engine, wherein the audio data is associated with a plurality of objects, and wherein the audio data comprises a set of audio patterns;
processing the audio data to identify audio attributes associated with the plurality of objects to form digital audio data, wherein the digital audio data comprises metadata describing the audio attributes of the set of objects, wherein the audio attributes of the audio data comprise an identification of a sound, wherein the sound is identified as human speech, and wherein the audio attributes of the audio data identify one sound identification from a group consisting of a language spoken, a regional dialect associated with the human speech, an accent associated with the human speech, an identification of whether the speaker is male or female, an identification of words spoken in the human speech, and an identification of a vocalized breathing sound; and
generating a set of audio cohorts using the audio attributes associated with the digital audio data and cohort criteria, wherein each audio cohort in the set of audio cohorts comprises a set of objects from the plurality of objects that share at least one audio attribute in common.
1 Assignment
0 Petitions
Abstract
A computer implemented method, apparatus, and computer program product for generating audio cohorts. An audio analysis engine receives audio data from a set of audio input devices. The audio data is associated with a plurality of objects. The audio data comprises a set of audio patterns. The audio data is processed to identify attributes of the audio data to form digital audio data. The digital audio data comprises metadata describing the attributes of the audio data. A set of audio cohorts is generated using the digital audio data and cohort criteria. Each audio cohort in the set of audio cohorts comprises a set of objects from the plurality of objects that share at least one audio attribute in common.
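The cohort-generation step described in the abstract reduces to a grouping operation: objects whose metadata share an attribute value of interest fall into the same cohort. A minimal Python sketch, assuming the digital audio data has already been reduced to per-object attribute metadata (the function and field names here are illustrative, not from the patent):

```python
from collections import defaultdict

def generate_audio_cohorts(digital_audio_data, cohort_criteria):
    """Group objects into cohorts that share at least one audio attribute.

    digital_audio_data: list of (object_id, metadata) pairs, where metadata
        maps attribute names to values, e.g. {"language": "French"}.
    cohort_criteria: the attribute names on which cohorts may be formed.
    Returns a dict mapping (attribute, value) -> set of object ids.
    """
    cohorts = defaultdict(set)
    for object_id, metadata in digital_audio_data:
        for attribute in cohort_criteria:
            if attribute in metadata:
                # Objects with the same value for a criterion attribute
                # land in the same cohort.
                cohorts[(attribute, metadata[attribute])].add(object_id)
    return dict(cohorts)

data = [
    ("person_1", {"language": "French", "gender": "female"}),
    ("person_2", {"language": "French", "gender": "male"}),
    ("person_3", {"language": "Spanish", "gender": "male"}),
]
cohorts = generate_audio_cohorts(data, cohort_criteria=["language"])
# cohorts[("language", "French")] == {"person_1", "person_2"}
```

The cohort criteria simply select which attributes are eligible for grouping; attributes outside the criteria (here, gender) produce no cohorts.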
84 Citations
17 Claims
1. A computer implemented method of generating audio cohorts, the computer implemented method comprising:
receiving audio data from a set of audio sensors, by an audio analysis engine, wherein the audio data is associated with a plurality of objects, and wherein the audio data comprises a set of audio patterns;

processing the audio data to identify audio attributes associated with the plurality of objects to form digital audio data, wherein the digital audio data comprises metadata describing the audio attributes of the set of objects, wherein the audio attributes of the audio data comprise an identification of a sound, wherein the sound is identified as human speech, and wherein the audio attributes of the audio data identify one sound identification from a group consisting of a language spoken, a regional dialect associated with the human speech, an accent associated with the human speech, an identification of whether the speaker is male or female, an identification of words spoken in the human speech, and an identification of a vocalized breathing sound; and

generating a set of audio cohorts using the audio attributes associated with the digital audio data and cohort criteria, wherein each audio cohort in the set of audio cohorts comprises a set of objects from the plurality of objects that share at least one audio attribute in common.

- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10)
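The processing step of claim 1 turns raw audio into "digital audio data", i.e. metadata records describing each object's audio attributes. A minimal sketch of that record shape in Python, with a stub classifier standing in for the real speech-analysis models (all names here are hypothetical, not from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class AudioMetadata:
    """Metadata describing the audio attributes of one observed object."""
    object_id: str
    sound_type: str  # e.g. "human speech"
    attributes: dict = field(default_factory=dict)  # language, dialect, accent, ...

def process_audio(object_id, raw_sample, classify):
    # `classify` stands in for the audio analysis engine's models; it maps
    # a raw sample to a (sound type, attribute dict) pair.
    sound_type, attrs = classify(raw_sample)
    return AudioMetadata(object_id=object_id, sound_type=sound_type,
                         attributes=attrs)

# Stub classifier illustrating only the expected output shape, not any
# actual speech analysis:
def stub_classifier(sample):
    return "human speech", {"language": "English", "gender": "male"}

record = process_audio("person_1", b"\x00\x01", stub_classifier)
# record.sound_type == "human speech"
```

Whatever model does the classification, the output contract matters for the next step: each record must expose named attributes so that cohort criteria can select and compare them.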
11. A computer program product for generating audio cohort data, the computer program product comprising:
a computer usable storage device having computer usable program code embodied therewith, the computer usable program code comprising:

computer usable program code configured to receive audio data from a set of audio sensors, by an audio analysis engine, wherein the audio data is associated with a plurality of objects, and wherein the audio data comprises a set of audio patterns;

computer usable program code configured to process the audio data to identify audio attributes associated with the plurality of objects to form digital audio data, wherein the digital audio data comprises metadata describing the audio attributes of the set of objects, wherein the audio attributes of the audio data comprise an identification of a sound, wherein the sound is identified as human speech, and wherein the audio attributes of the audio data identify one sound identification from a group consisting of a language spoken, a regional dialect associated with the human speech, an accent associated with the human speech, an identification of whether the speaker is male or female, an identification of words spoken in the human speech, and an identification of a vocalized breathing sound;

computer usable program code configured to generate a set of audio cohorts using the audio attributes associated with the digital audio data and cohort criteria, wherein each audio cohort in the set of audio cohorts comprises a set of objects from the plurality of objects that share at least one audio attribute in common;

computer usable program code configured to receive the set of audio cohorts by an inference engine; and

computer usable program code configured to process the set of audio cohorts by the inference engine, wherein the inference engine uses the set of audio cohorts to generate a set of inferences, and wherein the set of inferences predicts a future event.

- View Dependent Claims (12, 13, 14)
15. An apparatus comprising:
a bus system;

a communications system coupled to the bus system;

a memory connected to the bus system, wherein the memory includes computer usable program code; and

a processing unit coupled to the bus system, wherein the processing unit executes the computer usable program code to receive audio data from a set of audio sensors, by an audio analysis engine, wherein the audio data is associated with a plurality of objects, and wherein the audio data comprises a set of audio patterns;
process the audio data to identify audio attributes associated with the plurality of objects to form digital audio data, wherein the digital audio data comprises metadata describing the audio attributes of the set of objects, wherein the audio attributes of the audio data comprise an identification of a sound, wherein the sound is identified as human speech, and wherein the audio attributes of the audio data identify one sound identification from a group consisting of a language spoken, a regional dialect associated with the human speech, an accent associated with the human speech, an identification of whether the speaker is male or female, an identification of words spoken in the human speech, and an identification of a vocalized breathing sound;
generate a set of audio cohorts using the audio attributes associated with the digital audio data and cohort criteria, wherein each audio cohort in the set of audio cohorts comprises a set of objects from the plurality of objects that share at least one audio attribute in common;
receive the set of audio cohorts by an inference engine; and
process the set of audio cohorts by the inference engine, wherein the inference engine uses the set of audio cohorts to generate a set of inferences, and wherein the set of inferences predicts a future event. - View Dependent Claims (16)
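The claims above leave the inference engine's internals open; any mechanism that maps cohorts to predicted future events would satisfy the recited function. One minimal sketch, assuming a rule-based engine where the rules table is an invented stand-in for whatever model the real engine applies:

```python
def infer_future_events(audio_cohorts, rules):
    """Toy inference engine: map audio cohorts to predicted events.

    audio_cohorts: dict mapping (attribute, value) -> set of object ids.
    rules: dict mapping (attribute, value) -> predicted future event.
    Returns a list of inference records, one per matching non-empty cohort.
    """
    inferences = []
    for key, members in audio_cohorts.items():
        if key in rules and members:
            inferences.append({
                "cohort": key,
                "size": len(members),
                "prediction": rules[key],
            })
    return inferences

example_cohorts = {("sound", "vocalized breathing"): {"patient_7"}}
example_rules = {("sound", "vocalized breathing"): "possible respiratory distress"}
results = infer_future_events(example_cohorts, example_rules)
# results[0]["prediction"] == "possible respiratory distress"
```

The point of the sketch is the data flow, not the rules: cohorts arrive as grouped objects, and each inference is tied back to the cohort (and cohort size) that produced it.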
17. An audio cohort generation system comprising:
a set of audio sensors, wherein the set of audio sensors comprises a microphone;

a data processing system, wherein the data processing system comprises:

an audio analysis engine, wherein the audio analysis engine receives audio data from a set of audio sensors, wherein the audio data is in an analog format, and wherein the audio data identifies a set of audio patterns associated with a plurality of objects;
processes the audio data and identifies audio attributes of the audio data to form digital audio data, wherein the digital audio data comprises metadata describing the audio attributes associated with the plurality of objects, wherein the audio attributes of the audio data comprise an identification of a sound, wherein the sound is identified as human speech, and wherein the audio attributes of the audio data identify one sound identification from a group consisting of a language spoken, a regional dialect associated with the human speech, an accent associated with the human speech, an identification of whether the speaker is male or female, an identification of words spoken in the human speech, and an identification of a vocalized breathing sound;

a cohort generation engine, wherein the cohort generation engine generates a set of audio cohorts using the digital audio data and cohort criteria, wherein each audio cohort in the set of audio cohorts comprises a set of objects from the plurality of objects that share at least one audio attribute in common; and

an inference engine, wherein the inference engine uses the set of audio cohorts to generate a set of inferences, and wherein the set of inferences predicts a future event.
Specification