System for load balancing between message processors by routing all queued messages to a particular processor selected by a deterministic rule
First Claim
1. A service controller comprising:
a plurality of front end communication processors each connected to a signalling link in each of two linksets;
at least two independent control processors each connected to each of said front end processors, with each said control processor comprising:
a memory for queuing outgoing messages addressed to a signalling link;
counter means for keeping track of the number of messages in queue;
timer means for keeping track of the time messages are in queue; and
message distribution means for using a deterministic rule for routing the messages in said memory to a one of said front end communication processors to control the load on the front end processors as identified by said rule when one of said counter means or said timer means reaches its predetermined threshold value.
Abstract
A service control point having multiple independent control processors and multiple front end communication processors incorporates a deterministic-rule message distribution process for balancing the traffic load among the front end processors from all the independent control processors. The deterministic rule method is applied independently in each control processor and queues all outgoing messages in one of three queues: those that must be transmitted on Linkset-0, those that must be transmitted on Linkset-1, and those that can be transmitted on either linkset. The message distribution process keeps all messages in queue until a message-count or timer threshold is reached. The traffic is then routed to a front end processor such that the resulting distribution is closest to the expected distribution.
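The three-queue discipline described in the abstract can be sketched as follows. This is a minimal illustration only, not the patented implementation; the class, method names, and threshold values are invented for the example:

```python
from collections import deque
import time

# Queue categories, following the abstract: messages bound to Linkset-0,
# to Linkset-1, or free to go over either linkset.
LINKSET_0, LINKSET_1, EITHER = 0, 1, 2

class ControlProcessor:
    """Queues outgoing messages in one of three queues and flushes a
    queue when either its message count or the age of its oldest
    message reaches a predetermined threshold."""

    def __init__(self, count_threshold=8, age_threshold=0.05):
        self.queues = {LINKSET_0: deque(), LINKSET_1: deque(), EITHER: deque()}
        self.first_queued = {}              # category -> time oldest message queued
        self.count_threshold = count_threshold
        self.age_threshold = age_threshold  # seconds

    def enqueue(self, message, category):
        q = self.queues[category]
        if not q:
            self.first_queued[category] = time.monotonic()
        q.append(message)
        # Counter-means and timer-means checks: flush the whole queue
        # as one block when either threshold is reached.
        if (len(q) >= self.count_threshold or
                time.monotonic() - self.first_queued[category] >= self.age_threshold):
            return self.flush(category)
        return None

    def flush(self, category):
        batch = list(self.queues[category])
        self.queues[category].clear()
        return batch
```

Routing the flushed batch to a particular front end processor (the "closest to the expected distribution" step) is a separate selection rule, addressed by the method claims below.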
Citations
13 Claims
1. A service controller comprising:
a plurality of front end communication processors each connected to a signalling link in each of two linksets;
at least two independent control processors each connected to each of said front end processors, with each said control processor comprising:
a memory for queuing outgoing messages addressed to a signalling link;
counter means for keeping track of the number of messages in queue;
timer means for keeping track of the time messages are in queue; and
message distribution means for using a deterministic rule for routing the messages in said memory to a one of said front end communication processors to control the load on the front end processors as identified by said rule when one of said counter means or said timer means reaches its predetermined threshold value.

(Dependent claims: 2, 3, 4, 5, 6)
7. A method for deterministically routing traffic in a service controller from each of a plurality of independent control processors to each of a plurality of front end processors, said method comprising the steps of:
queuing outgoing messages from said control processor;
counting the number of messages in queue;
timing how long the queue contains messages;
tracking the total number of messages sent to each of said front end communication processors;
storing the expected number of messages to be sent to each of said front end communication processors;
determining the difference between the total of the actual traffic sent to each of said front end communication processors and said messages in queue compared to the expected traffic for each of said front end processors; and
sending all the messages in queue to said front end processor with the lowest difference in said determining step when the value determined in said counting step or when the value determined in said timing step equals its predetermined threshold.

(Dependent claims: 8, 9, 10, 11, 12)
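The selection rule of claim 7, routing the whole queue to the front end processor whose actual traffic (messages already sent plus the messages about to be routed) falls furthest below its expected traffic, can be sketched as follows. The function and variable names are illustrative, not taken from the patent:

```python
def pick_front_end(sent, queued, expected):
    """Return the index of the front end processor with the lowest
    difference between actual and expected traffic.

    sent[i]     -- messages already sent to front end processor i
    queued      -- number of messages in queue, all of which will be
                   routed to the selected processor as one block
    expected[i] -- expected (target) message total for processor i
    """
    diffs = [(sent[i] + queued) - expected[i] for i in range(len(sent))]
    return diffs.index(min(diffs))
```

For example, with three front end processors that have received 10, 5, and 8 messages against an equal expected share of 8 each, a queue of 3 messages would be routed to the second processor, since it is furthest below its expected traffic.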
13. A method for minimizing traffic congestion from multiple independent control processors routing incoming message traffic arriving randomly to multiple front end communication processors for distribution as outgoing messages to alternate linksets, said method comprising:
queuing said outgoing messages in one of three queues depending on whether such outgoing messages are to be distributed over one or the other of said linksets or may be distributed over either of said linksets;
counting the number of messages in each queue;
timing how long said queues contain messages; and
upon either said counting step or said timing step reaching its predetermined threshold value, transmitting said blocks of messages to the one front end communication processor which has the lowest difference of messages sent and in queue to the number of messages expected to be received at said one front end communication processor.
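Claim 13 combines threshold-triggered flushing with lowest-difference routing, applied independently in each control processor. A small sketch (illustrative only; equal expected shares across the front end processors are an assumption of the example, not a requirement of the claim) shows how routing successive flushed batches by this rule keeps the per-processor totals close to their expected distribution:

```python
def route_batches(n_fes, batch_sizes):
    """Route each flushed batch of queued messages to the front end
    processor whose total (sent so far plus this batch) falls furthest
    below its expected share of the traffic seen so far.

    n_fes       -- number of front end communication processors
    batch_sizes -- sizes of successive flushed queues
    Returns the per-processor message totals after all batches.
    """
    sent = [0] * n_fes
    total = 0
    for batch in batch_sizes:
        total += batch
        expected = total / n_fes  # equal expected distribution (assumed)
        diffs = [sent[i] + batch - expected for i in range(n_fes)]
        target = diffs.index(min(diffs))
        sent[target] += batch     # the whole batch goes to one processor
    return sent
```

Because every control processor applies the same deterministic rule to the same kind of running totals, the aggregate load converges toward the expected distribution without any coordination between the control processors.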
Specification