Conversational speech analysis method, and conversational speech analyzer
Abstract
The invention provides a conversational speech analyzer that determines whether utterances in a meeting attract the interest or concern of the audience. Sound signals obtained from a microphone are divided into frames, the accompanying sensor signals are cut out frame by frame, and the correlation between the sensor signals of each frame yields an interest level representing the audience's concern with the utterances, from which the meeting is analyzed.
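The frame classification underlying the abstract can be sketched as a simple short-time-energy detector. This is a minimal illustration only: the frame length, energy threshold, and function name are assumptions for the sketch, not details taken from the patent.

```python
import numpy as np

def classify_frames(speech, frame_len=400, energy_thresh=1e-3):
    """Split a speech signal into fixed-length frames and label each frame
    as speech (True) or nonspeech (False) by its short-time energy.
    frame_len and energy_thresh are illustrative values, not from the patent."""
    n_frames = len(speech) // frame_len
    labels = []
    for i in range(n_frames):
        frame = speech[i * frame_len:(i + 1) * frame_len]
        labels.append(np.mean(frame ** 2) > energy_thresh)
    return np.array(labels)

# Example: 800 samples of silence followed by 800 samples of a louder segment
signal = np.concatenate([np.zeros(800), 0.5 * np.ones(800)])
print(classify_frames(signal))  # → [False False  True  True]
```

Any voice-activity detector could stand in for the energy threshold; the claims only require that each frame be labeled speech or nonspeech.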
19 Citations
16 Claims
1. A conversational speech analyzing system comprising:
a first microphone and a second microphone, each configured to capture speech data in an area where a meeting is being held;

a first sensor and a second sensor, each configured to capture sensor information in the area where the meeting is being held; and

a computer connected to the first and second microphones and the first and second sensors;

wherein the first microphone and the first sensor are connected to, or in proximity to, a first person, and the second microphone and the second sensor are connected to, or in proximity to, a second person;

wherein the computer is configured to store first speech data captured by the first microphone, second speech data captured by the second microphone, first sensor information captured by the first sensor, and second sensor information captured by the second sensor;

wherein the computer is configured to classify the first speech data captured from the first microphone as first speech frames when speech is detected, and as first nonspeech frames when speech is not detected;

wherein the computer is configured to divide the second sensor information based on the first speech frames and the first nonspeech frames; and

wherein the computer is configured to evaluate an interest level of the second person in the meeting by comparing characteristics of the second sensor information divided based on the first speech frames to characteristics of the second sensor information divided based on the first nonspeech frames.

Dependent claims: 2, 3, 4, 5, 6, 7
8. A conversational speech analysis method in a conversational speech analyzing system having a first microphone, a second microphone, a first sensor, a second sensor, and a computer connected to the first microphone, the second microphone, the first sensor, and the second sensor, the method comprising:
a first step, including using the first microphone and the second microphone to capture speech data in a vicinity of a meeting, and storing the speech data in a memory of the computer;

a second step, including using the first sensor to capture first sensor information in the vicinity of the meeting, using the second sensor to capture second sensor information in the vicinity of the meeting, and storing the first and second sensor information in the memory of the computer;

a third step, including using the computer to classify the speech data captured from the first microphone as first speech frames when speech is detected, and as first nonspeech frames when speech is not detected;

a fourth step, including using the computer to divide the first sensor information based on the first speech frames and the first nonspeech frames, and to divide the second sensor information likewise based on the first speech frames and the first nonspeech frames; and

a fifth step, including using the computer to evaluate an interest level of a person in the meeting by comparing characteristics of the second sensor information divided based on the first speech frames to characteristics of the second sensor information divided based on the first nonspeech frames.

Dependent claims: 9, 10, 11, 12, 13, 14
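The fourth and fifth steps of the method claim (dividing the listener's sensor signal by the speaker's frame labels and comparing the two groups) might be realized as in the sketch below. The choice of mean absolute value as the compared characteristic and the ratio-based interest score are assumptions made for illustration; the claim leaves the "characteristics" unspecified.

```python
import numpy as np

def interest_level(sensor, labels, frame_len=400):
    """Divide the listener's sensor signal into the frames labeled speech
    vs. nonspeech for the speaker, then compare activity between the groups.
    The statistic (mean absolute value) and the ratio comparison are
    illustrative assumptions, not the patented formulation."""
    speech_parts, nonspeech_parts = [], []
    for i, is_speech in enumerate(labels):
        chunk = sensor[i * frame_len:(i + 1) * frame_len]
        (speech_parts if is_speech else nonspeech_parts).append(chunk)
    speech_act = np.mean(np.abs(np.concatenate(speech_parts)))
    nonspeech_act = np.mean(np.abs(np.concatenate(nonspeech_parts)))
    return speech_act / (nonspeech_act + 1e-12)

# Synthetic example: the listener's sensor is more active (e.g., nodding)
# while the speaker is talking (frames 2 and 3).
rng = np.random.default_rng(0)
sensor = np.concatenate([0.1 * rng.standard_normal(800),   # speaker silent
                         1.0 * rng.standard_normal(800)])  # speaker talking
labels = np.array([False, False, True, True])
print(interest_level(sensor, labels) > 1.0)  # → True
```

A score above 1.0 would indicate more listener motion during the speaker's utterances than during silence, which the claims interpret as higher interest.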
15. A conversational speech analyzing system comprising:
a first microphone and a second microphone, each configured to capture speech data in an area where a meeting is being held, the first microphone connected to, or in proximity to, a first person, and the second microphone connected to, or in proximity to, a second person;

a first sensor and a second sensor, each configured to capture sensor information in the area where the meeting is being held, the first sensor connected to, or in proximity to, the first person, and the second sensor connected to, or in proximity to, the second person; and

a computer configured to:

connect to the first and second microphones and the first and second sensors;

store first speech data captured by the first microphone, second speech data captured by the second microphone, first sensor information captured by the first sensor, and second sensor information captured by the second sensor;

classify the first speech data captured from the first microphone as first speech frames when speech is detected, and as first nonspeech frames when speech is not detected;

divide the second sensor information based on the first speech frames and the first nonspeech frames; and

evaluate an interest level of the second person in the meeting by comparing characteristics of the second sensor information divided based on the first speech frames to characteristics of the second sensor information divided based on the first nonspeech frames.

Dependent claims: 16
Specification