System and method for visually identifying speaking participants in a multi-participant networked event
Abstract
A method of visually identifying speaking participants in a multi-participant event such as an audio conference or an on-line game includes the step of receiving packets of digitized sound from a network connection. The identity of the participant associated with each packet is used to route the packet to a channel buffer or an overflow buffer. Each channel buffer may be assigned to a single participant in the multi-participant event. A visual identifier module updates the visual identifier associated with participants that have been assigned a channel buffer. In some embodiments, the appearance of the visual identifier associated with the participant is dependent upon the differential of an acoustic parameter derived from content in the associated channel buffer and a reference value stored in a participant record.
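As a concrete illustration of the identification step described in the abstract, the sketch below maps each participant to a "speaking" or "silent" visual-identifier state according to whether that participant currently holds a channel buffer. All names (`update_identifiers`, the state strings) are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch, assuming a set of participant ids that currently
# own channel buffers; the patent does not prescribe these names.
SPEAKING, SILENT = "speaking", "silent"

def update_identifiers(participants: list, channel_owners: set) -> dict:
    """Map each participant id to a visual-identifier state: participants
    assigned a channel buffer are shown as speaking, all others as silent."""
    return {p: (SPEAKING if p in channel_owners else SILENT) for p in participants}
```

A display layer would then render each participant's name or avatar according to the returned state.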
25 Claims
1. A computer product for use in conjunction with a computer system, the computer program product comprising a computer readable storage medium and a computer program mechanism embedded therein, the computer program mechanism comprising:
(i) a participant data structure comprising a plurality of participant records, each participant record associated with a different participant in a multi-participant event;
(ii) an application module for providing a user interface to said multi-participant event;
(iii) a sound control module for receiving a plurality of packets from a network connection, each said packet associated with a participant in said multi-participant event and including digitized speech from said participant, said sound control module comprising:
a plurality of buffers, each buffer including instructions for managing a subset of said packets;
a packet controller that includes instructions for determining said participant associated with said packet and instructions for routing said packet to a buffer, wherein when said packet is routed to said buffer, said packet is managed by said buffer; and
a visual identification module that includes instructions for visually identifying said participant in said multi-participant event and a characteristic associated with said participant; and
(iv) a sound mixer that includes instructions for mixing digitized speech from at least one of said buffers to produce a signal that is presented to an output device.
when said participant is speaking in said multi-participant event, said visual identifier is set to a first state; and
when said participant is not speaking in said multi-participant event, said visual identifier is set to a second state.
4. The computer product of claim 3 wherein said application module includes instructions for displaying said visual identifier on an output device based upon said characteristic associated with said participant.
5. The computer product of claim 2 wherein said application module includes instructions for displaying said visual identifier on an output device, wherein:
when said participant associated with said visual identifier is speaking in said multi-participant event, said unique visual identifier is displayed in a first state; and
when said participant associated with said visual identifier is not speaking in said multi-participant event, said visual identifier is displayed in a second state.
6. The computer product of claim 2 wherein:
(i) said participant record further includes a reference speech amplitude associated with said participant; and
(ii) said visual identification module further includes:
instructions for determining a buffered speech amplitude based upon a characteristic of digitized speech in at least one packet, associated with said participant, that is managed by a buffer;
instructions for computing a speech amplitude differential based on said buffered speech amplitude and said reference speech amplitude;
instructions for updating said visual identifier associated with said participant based on said speech amplitude differential; and
instructions for storing said buffered speech amplitude as said reference speech amplitude in said participant record.
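The amplitude-differential update of claim 6 can be sketched as follows. The root-mean-square amplitude measure, the threshold, and the state names are assumptions; the claim requires only some buffered amplitude, a differential against the stored reference, an identifier update based on that differential, and storage of the new amplitude as the reference.

```python
# Illustrative sketch, assuming an RMS amplitude and a fixed threshold;
# the claim does not specify either choice.
import math

def buffered_amplitude(samples: list) -> float:
    """Root-mean-square amplitude of the digitized speech in a buffer."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def update_record(record: dict, samples: list, threshold: float = 0.1) -> dict:
    amp = buffered_amplitude(samples)
    differential = amp - record["reference_amplitude"]
    # update the visual identifier based on the speech amplitude differential
    record["visual_state"] = "louder" if differential > threshold else "steady"
    # store the buffered amplitude as the new reference amplitude
    record["reference_amplitude"] = amp
    return record
```

Because the reference is overwritten on each pass, the identifier responds to changes in loudness rather than to absolute level.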
7. The computer product of claim 6 wherein:
said application module includes instructions for displaying said visual identifier on an output device based on a function of said visual identifier state in said participant record; and
said instructions for updating said visual identifier associated with said participant based on said speech amplitude differential include instructions for updating said visual identifier state based upon a value of said speech amplitude differential.
8. The computer product of claim 1 wherein said multi-participant event is selected from the group consisting of an audio conference and an on-line game.
9. The computer product of claim 1 wherein:
said plurality of buffers includes an overflow buffer and a plurality of channel buffers, wherein:
(i) when a packet is present in a channel buffer, said channel buffer is characterized by an identity of said packet; and
(ii) when no packet is present in a channel buffer, said channel buffer is available; and
said instructions for routing said packet to said buffer include instructions for comparing an identity of the participant associated with said packet with said identity that characterizes said channel buffer, wherein:
(a) when said identity of the participant associated with said packet matches said identity characterizing said channel buffer, said packet is routed to said channel buffer; and
(b) when said identity of the participant associated with said packet does not match said identity that characterizes a channel buffer, said packet is routed to an available channel buffer, and when no channel buffer in said plurality of channel buffers is available, said packet is routed to said overflow buffer.
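The routing rules of claim 9 can be sketched as below. `ChannelBuffer`, `route`, and the deque-based FIFO storage are assumptions for illustration; the claim specifies only the matching order (matching buffer, then any available buffer, then the overflow buffer), not this implementation.

```python
# Minimal sketch of claim 9's routing, under assumed names and types.
from collections import deque
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ChannelBuffer:
    participant_id: Optional[int] = None        # identity characterizing the buffer
    packets: deque = field(default_factory=deque)

    @property
    def available(self) -> bool:
        # (ii) a channel buffer with no packet present is available
        return not self.packets

def route(packet_id: int, channels: list, overflow: deque) -> None:
    # (a) route to the channel buffer whose identity matches the packet
    for ch in channels:
        if ch.packets and ch.participant_id == packet_id:
            ch.packets.append(packet_id)
            return
    # (b) otherwise claim any available channel buffer...
    for ch in channels:
        if ch.available:
            ch.participant_id = packet_id
            ch.packets.append(packet_id)
            return
    # ...and fall back to the overflow buffer when none is available
    overflow.append(packet_id)
```

With two channel buffers, packets from a third simultaneous speaker land in the overflow buffer, while later packets from an already-assigned speaker rejoin that speaker's buffer.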
10. The computer product of claim 1 wherein said instructions for managing said subset of said packets manage said packets on a first-in, first-out basis.
11. The computer product of claim 2 wherein said participant source identifier is a temporary unique number assigned to said participant for the duration of said multi-participant event.
12. The computer product of claim 2 wherein said packet comprises a packet header and a formatted payload and said formatted payload includes said participant source identifier, a packet data size, and said digitized speech from said participant.
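One plausible encoding of the formatted payload of claim 12 is sketched below: a participant source identifier, a packet data size, then the digitized speech. The 32-bit field widths and network byte order are assumptions; the claim does not fix a wire format.

```python
# Illustrative payload packing, assuming uint32 fields in network byte
# order; the patent specifies the fields, not their widths or order.
import struct

def pack_payload(source_id: int, speech: bytes) -> bytes:
    # participant source identifier, packet data size, digitized speech
    return struct.pack("!II", source_id, len(speech)) + speech

def unpack_payload(payload: bytes) -> tuple:
    source_id, size = struct.unpack("!II", payload[:8])
    return source_id, payload[8:8 + size]
```

Carrying an explicit data size lets a receiver validate the payload before handing the speech bytes to a buffer.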
13. The computer product of claim 9 wherein said sound mixer further includes:
instructions for retrieving a portion of said digitized speech from a first packet in each said channel buffer; and
instructions for combining each said portion of said digitized speech into said mixed digitized signal.
14. The computer product of claim 13 wherein said portion of said digitized speech is ten milliseconds.
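The mixer of claims 13 and 14 can be sketched as follows: take a fixed portion (ten milliseconds in claim 14) of digitized speech from the first packet in each channel buffer and combine the portions into one mixed signal. The 8 kHz sample rate and plain sample-wise summation are assumptions; the claims do not specify either.

```python
# Minimal mixing sketch, assuming 8 kHz audio and additive mixing.
SAMPLE_RATE = 8000
PORTION = SAMPLE_RATE * 10 // 1000            # samples in a 10 ms portion

def mix(channel_buffers: list) -> list:
    """Sum a 10 ms portion from the head of each non-empty channel buffer."""
    portions = [buf[:PORTION] for buf in channel_buffers if buf]
    if not portions:
        return []
    length = max(len(p) for p in portions)
    return [sum(p[i] for p in portions if i < len(p)) for i in range(length)]
```

A production mixer would also clip or normalize the summed samples to the output range; that step is omitted here for brevity.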
15. The computer product of claim 1 wherein said characteristic associated with said participant is selected from the group consisting of (i) whether said participant is associated with a channel buffer, (ii) whether said participant has moderation privileges, (iii) whether said participant has placed said multi-participant event on hold, and (iv) whether said participant has specified that he is away from the keyboard.
16. A method for visually identifying speaking participants in a multi-participant event, said method comprising the steps of:
receiving a packet from a remote source;
determining an identity associated with said packet;
comparing said identity of said packet with an identity associated with a channel buffer selected from a plurality of channel buffers;
wherein said identity associated with said channel buffer is determined by an identity of a packet stored by said channel buffer when said channel buffer is storing a packet, and said channel buffer is available when no packet is stored by said channel buffer;
routing said packet to:
(i) a channel buffer when said identity of said packet matches said identity associated with said channel buffer;
(ii) an available channel buffer when said identity of said packet does not match an identity of a channel buffer; and
(iii) an overflow buffer when said identity of said packet does not match an identity of a channel buffer and there is no available channel buffer; and
associating a different visual identifier with each participant in said multi-participant event;
displaying each said different visual identifier on an output device;
wherein said different visual identifier is determined by a characteristic associated with said participant.
17. The method of claim 16 further comprising the step of updating a visual identifier state in a participant record, said visual identifier state determined by whether an identity of a participant corresponding to said participant record matches an identity associated with a channel buffer.
18. The method of claim 17 wherein said updating step further comprises:
determining a difference between a characteristic of said packet and a reference characteristic stored in said participant record; and
setting said visual identifier state based upon said difference.
19. The method of claim 16 wherein said multi-participant event includes a local participant and at least one remote participant, said method further comprising the steps of:
accepting a frame of sound from an input device;
deriving acoustic parameters from the content of said frame of sound;
performing an acoustic function using said acoustic parameters to determine whether said frame of sound includes speech from said local participant; and
updating a visual identifier state in a participant record associated with said local participant, said visual identifier state determined by whether said frame of sound includes speech from said local participant.
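The local-participant path of claim 19 can be sketched as below. The claim leaves the "acoustic function" unspecified; a mean-energy measure with a fixed threshold stands in for it here, and both the measure and the threshold are assumptions.

```python
# Hedged sketch of claim 19's speech decision, assuming a mean-energy
# acoustic parameter; real voice-activity detection is more involved.
def frame_energy(frame: list) -> float:
    """Mean energy of one frame of sound from the input device."""
    return sum(s * s for s in frame) / len(frame)

def local_speaking(frame: list, threshold: float = 0.01) -> bool:
    """True when the frame's energy suggests the local participant spoke;
    the result drives the local participant's visual identifier state."""
    return frame_energy(frame) > threshold
```

The returned boolean would be written into the local participant's record, mirroring the remote-participant updates of claim 17.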
20. The method of claim 16 wherein said multi-participant event is selected from the group consisting of an audio conference and an on-line game.
21. The method of claim 16 further including the step of assigning a temporary number to a participant for the duration of said multi-participant event, wherein said temporary number provides an identity to said participant.
22. The method of claim 16 further including the steps of:
mixing sound from each channel buffer; and
presenting said mixed sound to an output device.
23. The method of claim 22 wherein said mixing step further includes the steps of:
retrieving a portion of digitized speech from a first packet in each said channel buffer; and
combining each said portion of said digitized speech into a mixed digitized signal.
24. The method of claim 23 wherein said portion of said digitized speech is ten milliseconds.
25. The method of claim 16 wherein said characteristic associated with said participant is selected from the group consisting of (i) whether said participant is associated with a channel buffer, (ii) whether said participant has moderation privileges, (iii) whether said participant has placed said multi-participant event on hold, and (iv) whether said participant has specified that he is away from the keyboard.