
Conversation control apparatus, conversation control method, and programs therefor

  • US 7,676,369 B2
  • Filed: 11/19/2004
  • Issued: 03/09/2010
  • Est. Priority Date: 11/20/2003
  • Status: Active Grant
First Claim

1. A conversation control apparatus, comprising:

  • (a) a conversation database having stored therein:

    a plurality of topic specifying information items;

    a plurality of topic titles including sub-pluralities respectively correlated to correspond to respective ones of said topic specifying information items;

    a plurality of reply sentences including sub-pluralities each respectively correlated to correspond to a respective one of said topic titles; and

    a plurality of event information flags each corresponding to an emotion and including sub-pluralities each correlated to correspond to a respective one of said reply sentences;

    (b) a voice input unit configured to receive speech input of a user;

    (c) a sensor unit configured to acquire facial image data of the user;

    (d) an emotion estimation module configured to estimate a current emotion of the user, based upon a characteristic quantity of an expression computed from the facial image data of the user acquired by the sensor unit, and to generate event information indicative of a result of the estimate;

    (e) a past conversation information storage unit storing a plurality of past conversation information items determined based upon a past speech by the user and a past reply sentence in response to the past speech, the past reply sentence having been output by the conversation control apparatus;

    (f) an output unit configured to output sentences; and

    (g) a conversation control unit, the conversation control unit being configured to execute the following operations:

    (i) accept the speech input received by the voice input unit from the user as current conversation information and store the current conversation information for future use as the past conversation information of the user in the past conversation information storage unit;

    (ii) acquire the facial image data of the user, who uttered the speech input, and generate by the emotion estimation module, the event information used for estimating the current emotion of the user, based upon the acquired facial image data of the user;

    (iii) extract a relevant conversation information item, from among the plurality of the past conversation information items stored in the past conversation information storage unit, based upon the current conversation information of the user accepted in operation (i);

    (iv) extract a relevant topic specifying information item, from among the plurality of the topic specifying information items stored in the conversation database unit, based upon the relevant conversation information item extracted in the operation (iii);

    (v) extract a relevant topic title, from among the plurality of the topic titles determined as relevant based on corresponding to the relevant topic specifying information item extracted in the operation (iv) which was extracted based on the current conversation information of the user input in the operation (i), and also to select one of the sub-plurality of reply sentences by determining correlation thereof to the relevant topic title;

    (vi) extract a relevant event information flag, from among the sub-plurality of the event information flags correlated to the selected one of the sub-plurality of reply sentences correlated to the relevant topic title extracted in the operation (v), based upon the event information indicative of the current emotion of the user and generated in the operation (ii) by the emotion estimation module;

    (vii) extract a relevant reply sentence from the sub-plurality of reply sentences correlated to the relevant topic title extracted in the operation (v), by determining the relevant reply sentence corresponds to the relevant event information flag extracted in the operation (vi), such that said relevant reply sentence is extracted based upon all of the following:

    the current conversation information of the user accepted in operation (i) being used to extract the relevant conversation information item which in turn is used to extract the relevant topic specifying information item which is then used to extract the relevant topic title which is then used to select the sub-plurality of reply sentences;

    the past speech by the user and the past reply sentence issued in response to the past speech being used to provide the past conversation information from which the relevant conversation information item is extracted; and

    outside information in the form of the facial image data of the user based upon which the event information is generated and used to extract the relevant reply sentence from the selected sub-plurality of reply sentences by confirming the event information flag of the reply sentence relates to the event information; and

    (viii) output the relevant reply sentence, extracted in the operation (vii), to the user.
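The claimed structure and operations (i)–(viii) can be sketched as a small data model and control loop. This is an illustrative reading of the claim, not the patentee's implementation: all class names, the word-overlap matching, and the sample database below are assumptions made for clarity.

```python
from dataclasses import dataclass

# Hypothetical model of the claimed conversation database (element (a)):
# topic-specifying items group topic titles; each title groups reply
# sentences; each reply carries emotion-keyed event information flags.
@dataclass
class ReplySentence:
    text: str
    event_flags: set          # emotions this reply suits, e.g. {"happy"}

@dataclass
class TopicTitle:
    title: str
    replies: list             # sub-plurality of ReplySentence

@dataclass
class TopicItem:
    keyword: str              # topic specifying information item
    titles: list              # sub-plurality of TopicTitle

def control_conversation(db, history, speech, emotion):
    """Sketch of operations (i)-(viii): choose a reply from the current
    speech, past conversation information, and the estimated emotion."""
    history.append(speech)    # (i) store current speech as past conversation info
    # (iii) relevant past conversation item: most recent past utterance
    # sharing at least one word with the current input (illustrative heuristic)
    relevant_past = next((h for h in reversed(history[:-1])
                          if set(h.split()) & set(speech.split())), speech)
    # (iv) topic item whose keyword appears in the relevant conversation info
    item = next((t for t in db if t.keyword in relevant_past), None)
    if item is None:
        return None
    # (v) correlated topic title and its sub-plurality of reply sentences
    title = item.titles[0]
    # (vi)/(vii) reply whose event information flag matches the emotion
    reply = next((r for r in title.replies if emotion in r.event_flags),
                 title.replies[0])
    history.append(reply.text)  # keep the reply as future past conversation info
    return reply.text           # (viii) output the relevant reply sentence
```

Under these assumptions, a past remark about the weather steers topic selection, while the emotion estimated from the facial image selects between replies correlated to the same topic title:

```python
db = [TopicItem("weather", [TopicTitle("weather is nice", [
        ReplySentence("Glad you like it!", {"happy"}),
        ReplySentence("Sorry the weather bothers you.", {"sad"})])])]
history = ["the weather yesterday was rainy"]
control_conversation(db, history, "the weather today is sunny", "happy")
```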
