Engine near cache for reducing latency in a telecommunications environment
First Claim
1. A computer implemented method for providing a near engine cache in a network environment, comprising:
maintaining an engine tier distributed over a cluster network, said engine tier including one or more engine nodes that process one or more messages;
maintaining a state tier distributed over the cluster network, said state tier including one or more replicas that store state data associated with the messages;
continuously receiving the messages by the engine nodes and processing the messages by reading the state data from the replicas to the engine nodes and modifying the state data in the state tier;
storing a local copy of a portion of the state data onto a near cache residing on the one or more engine nodes after processing the messages in the engine tier;
receiving a message by an engine node after storing the local copy in the near cache;
determining, by said engine node upon receiving the message, whether the local copy of the portion of the state data associated with the message in the near cache on said engine node is current with respect to the state tier; and
wherein if the local copy in the near cache is determined to be current, then the engine node locks the state data in the state tier and uses the local copy of the state data from the near cache to process the message, and wherein if the local copy in the near cache is determined to be stale, the engine node locks the state data in the state tier and retrieves the state data from the state tier, and wherein if the state data is retrieved from the state tier, then said state data is deserialized prior to using the state data to process the message at the engine node;
wherein retrieving the state data from the state tier further comprises:
serializing the state data and transporting it to the engine tier; and
deserializing the state data in the engine tier; and
wherein using the local copy from the near cache is performed without serializing and deserializing the local copy of the portion of the state data; and
adjusting the size of the near cache in order to achieve a balance between latency introduced by garbage collection and latency reduced by elimination of serializing and deserializing the state data.
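The check-then-lock flow recited above can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the `StateTier` and `EngineNode` classes, the version numbers used as the currency check, and the use of `pickle` as the serialization format are all assumptions made for the example.

```python
import pickle

class StateTier:
    """Illustrative stand-in for the replicated state tier (assumed API)."""
    def __init__(self):
        self.store = {}    # key -> (version, serialized state bytes)
        self.locks = set()

    def lock(self, key):
        self.locks.add(key)

    def unlock(self, key):
        self.locks.discard(key)

    def version_of(self, key):
        return self.store[key][0]

    def fetch(self, key):
        return self.store[key][1]  # serialized bytes; caller deserializes

    def put(self, key, version, state):
        self.store[key] = (version, pickle.dumps(state))

class EngineNode:
    """Engine node holding a near cache of (version, live object) entries."""
    def __init__(self, state_tier):
        self.tier = state_tier
        self.near_cache = {}  # key -> (version, deserialized state object)

    def process(self, key):
        # In either branch, the state is first locked in the state tier.
        self.tier.lock(key)
        try:
            current_version = self.tier.version_of(key)
            cached = self.near_cache.get(key)
            if cached is not None and cached[0] == current_version:
                # Current near-cache copy: used directly, with no
                # serialization, transport, or deserialization.
                return cached[1]
            # Stale or missing: retrieve serialized state and deserialize
            # it before processing the message.
            state = pickle.loads(self.tier.fetch(key))
            self.near_cache[key] = (current_version, state)
            return state
        finally:
            self.tier.unlock(key)
```

A second request for the same key returns the cached live object directly; once a replica writes a newer version, the version comparison marks the local copy stale and the engine node falls back to the fetch-and-deserialize path.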
Abstract
The SIP server can comprise an engine tier and a state tier distributed on a cluster network environment. The engine tier can send, receive and process various messages. The state tier can maintain in-memory state data associated with various SIP sessions. A near cache can reside on the engine tier in order to maintain a local copy of a portion of the state data contained in the state tier. Various engines in the engine tier can determine whether the near cache contains a current version of the state needed to process a message before retrieving the state data from the state tier. Accessing the state from the near cache can avoid various latency costs such as serialization, transport and deserialization of state to and from the state tier. Furthermore, the near cache and JVM can be tuned to further improve performance of the SIP server.
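The latency saving described in the abstract comes from skipping the serialize, transport, and deserialize round trip. A minimal sketch of that round trip, using Python's `pickle` as a stand-in for whatever wire format a real state tier would use (the variable names and example session data are assumptions for illustration):

```python
import pickle

# Session state as a replica in the state tier might hold it.
session_state = {"call_id": "abc123", "route": ["proxy1", "proxy2"]}

# Path 1: retrieval from the state tier --
# serialize, "transport", then deserialize.
wire_bytes = pickle.dumps(session_state)      # serialization cost
received = pickle.loads(wire_bytes)           # deserialization cost

# Path 2: near-cache hit -- the engine node reuses its live local object.
near_cache_copy = session_state               # no round trip at all

# The round trip yields an equal but distinct object; the near cache
# hands back the very same object, with no copying or decoding work.
print(received == session_state)         # True  (equal content)
print(received is session_state)         # False (a reconstructed copy)
print(near_cache_copy is session_state)  # True  (same live object)
```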
14 Claims
1. A computer implemented method for providing a near engine cache in a network environment, comprising:
maintaining an engine tier distributed over a cluster network, said engine tier including one or more engine nodes that process one or more messages;
maintaining a state tier distributed over the cluster network, said state tier including one or more replicas that store state data associated with the messages;
continuously receiving the messages by the engine nodes and processing the messages by reading the state data from the replicas to the engine nodes and modifying the state data in the state tier;
storing a local copy of a portion of the state data onto a near cache residing on the one or more engine nodes after processing the messages in the engine tier;
receiving a message by an engine node after storing the local copy in the near cache;
determining, by said engine node upon receiving the message, whether the local copy of the portion of the state data associated with the message in the near cache on said engine node is current with respect to the state tier; and
wherein if the local copy in the near cache is determined to be current, then the engine node locks the state data in the state tier and uses the local copy of the state data from the near cache to process the message, and wherein if the local copy in the near cache is determined to be stale, the engine node locks the state data in the state tier and retrieves the state data from the state tier, and wherein if the state data is retrieved from the state tier, then said state data is deserialized prior to using the state data to process the message at the engine node;
wherein retrieving the state data from the state tier further comprises:
serializing the state data and transporting it to the engine tier; and
deserializing the state data in the engine tier; and
wherein using the local copy from the near cache is performed without serializing and deserializing the local copy of the portion of the state data; and
adjusting the size of the near cache in order to achieve a balance between latency introduced by garbage collection and latency reduced by elimination of serializing and deserializing the state data.
Dependent claims: 2, 3, 4, 5, 6, 7.
8. A system for providing a near cache for a network environment, said system comprising:
one or more processors and a set of instructions, said one or more processors executing the set of instructions to implement:
an engine tier distributed on a cluster network, said engine tier including one or more engine nodes that receive and process messages;
a state tier distributed on the cluster network, said state tier including one or more replicas that maintain state data associated with the messages such that the one or more engine nodes of the engine tier retrieve the state data maintained in the state tier in order to process the messages; and
a near cache residing on the one or more engine nodes in the engine tier, said near cache storing a local copy of a portion of the state data maintained in the state tier such that the state data from the near cache is accessible to the one or more engine nodes;
wherein the engine nodes continuously receive the messages and process said messages by reading the state data from the replicas to the engine nodes and modifying the state data in the state tier;
wherein upon receiving the messages, the one or more engine nodes determine whether the local copy of the portion of the state data stored in the near cache on said engine node is current with respect to the state tier; and
wherein if the local copy in the near cache is determined to be current, then the one or more engine nodes lock the state data in the state tier and employ the local copy of the state data from the near cache to process the message, and wherein if the local copy in the near cache is determined to be stale, the engine node locks the state data in the state tier and retrieves the state data from the state tier, and wherein if the state data is retrieved from the state tier, then said state data is deserialized prior to using the state data to process the message at the engine node;
wherein retrieving the state data from the state tier further comprises:
serializing the state data and transporting the state data to the engine tier; and
deserializing the state data in the engine tier; and
wherein employing the local copy from the near cache is performed without serializing and deserializing the local copy of the portion of the state data; and
wherein the size of the near cache is adjusted in order to achieve a balance between latency introduced by garbage collection and latency reduced by elimination of serializing and deserializing the state data.
Dependent claims: 9, 10, 11, 12, 13.
14. A non-transitory computer readable storage medium having instructions stored thereon which when executed by one or more processors cause the one or more processors to:
maintain an engine tier distributed over a cluster network, said engine tier including one or more engine nodes that process one or more messages;
maintain a state tier distributed over the cluster network, said state tier including one or more replicas that store state data associated with the messages;
continuously receive the messages by the engine nodes and process the messages by reading the state data from the replicas to the engine nodes and modifying the state data in the state tier;
store a local copy of a portion of the state data onto a near cache residing on the one or more engine nodes after processing the messages in the engine tier;
receive a message by an engine node after storing the local copy in the near cache;
determine, by said engine node upon receiving the message, whether the local copy of the portion of the state data associated with the message in the near cache on said engine node is current with respect to the state tier; and
wherein if the local copy in the near cache is determined to be current, then the engine node locks the state data in the state tier and uses the local copy of the state data from the near cache to process the message, and wherein if the local copy in the near cache is determined to be stale, the engine node locks the state data in the state tier and retrieves the state data from the state tier, and wherein if the state data is retrieved from the state tier, then said state data is deserialized prior to using the state data to process the message at the engine node;
wherein retrieving the state data from the state tier further comprises:
serializing the state data and transporting the state data to the engine tier; and
deserializing the state data in the engine tier; and
wherein employing the local copy from the near cache is performed without serializing and deserializing the local copy of the portion of the state data; and
wherein the size of the near cache is adjusted in order to achieve a balance between latency introduced by garbage collection and latency reduced by elimination of serializing and deserializing the state data.
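The final limitation of the claims, sizing the near cache to balance garbage-collection pressure against serialization savings, can be sketched with a bounded LRU cache. This is an assumed illustration, not the patented tuning mechanism: the `BoundedNearCache` class and its `max_entries` knob are inventions for the example, standing in for whatever sizing control a real engine tier would expose.

```python
from collections import OrderedDict

class BoundedNearCache:
    """LRU-bounded near cache. A larger bound avoids more
    serialization round trips but keeps more objects live on the
    heap (more garbage-collection pressure); a smaller bound does
    the reverse. max_entries is the tuning knob for that balance."""
    def __init__(self, max_entries):
        self.max_entries = max_entries
        self._entries = OrderedDict()  # key -> (version, state object)

    def get(self, key, current_version):
        entry = self._entries.get(key)
        if entry is None or entry[0] != current_version:
            return None  # miss or stale: caller must fetch and deserialize
        self._entries.move_to_end(key)  # mark as recently used
        return entry[1]

    def put(self, key, version, state):
        self._entries[key] = (version, state)
        self._entries.move_to_end(key)
        # Evict least-recently-used entries beyond the size bound.
        while len(self._entries) > self.max_entries:
            self._entries.popitem(last=False)
```

With `max_entries=2`, inserting a third session evicts the least recently used one, and a version mismatch on lookup is treated as a miss, forcing the caller back to the state tier.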
Specification