Load distribution in data networks
First Claim
1. A method for service load distribution in a data network, the method comprising:
generating a service policy for distributing network service requests among a plurality of load balancing devices in the data network, wherein the plurality of load balancing devices includes a plurality of routers, a plurality of traffic classification engines, and a plurality of service nodes;
providing the service policy to the plurality of load balancing devices associated with the data network;
receiving, by the plurality of routers, one or more service requests;
distributing, by the plurality of routers, the one or more service requests evenly to one or more of the plurality of traffic classification engines;
distributing, by the one or more of the plurality of traffic classification engines, the one or more service requests asymmetrically to one or more of the plurality of service nodes according to the service policy; and
distributing, by the one or more of the plurality of service nodes, the one or more service requests to one or more backend servers according to the service policy, wherein the service policy is generated based on at least a responsiveness of each of the plurality of service nodes and reachability of the one or more backend servers to the one or more of the plurality of service nodes.
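The two distribution tiers in the claim — even at the router tier, asymmetric at the classification tier — can be sketched as round-robin followed by a weighted pick. This is only an illustrative reading: the engine names, the weight-map shape of the policy, and the choice of round-robin and weighted-random as the concrete algorithms are assumptions, not details from the patent.

```python
import itertools
import random

# Illustrative identifiers; the patent does not name concrete devices.
TCES = ["tce-1", "tce-2", "tce-3"]  # traffic classification engines

# Assumed policy shape: per-service-node weights derived from responsiveness.
SERVICE_POLICY = {"node-a": 5, "node-b": 3, "node-c": 1}

_rr = itertools.cycle(TCES)

def router_pick_tce():
    """Even distribution: round-robin across the classification engines."""
    return next(_rr)

def tce_pick_service_node(policy):
    """Asymmetric distribution: weighted random pick per the service policy."""
    nodes = list(policy)
    return random.choices(nodes, weights=[policy[n] for n in nodes], k=1)[0]
```

Over many requests the router tier hands each engine the same share, while the classification tier sends roughly five times as much traffic to `node-a` as to `node-c`.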
Abstract
Provided are methods and systems for load distribution in a data network. A method for load distribution in the data network may comprise retrieving network data associated with the data network and service node data associated with one or more service nodes. The method may further comprise analyzing the retrieved network data and service node data. Based on the analysis, a service policy may be generated. Upon receiving one or more service requests, the one or more service requests may be distributed among the service nodes according to the service policy.
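A minimal sketch of the policy-generation step the abstract describes, assuming responsiveness is a numeric score per service node and reachability is a per-node boolean; both the metric and the weight-map output are assumptions for illustration.

```python
def generate_service_policy(responsiveness, backend_reachable):
    """Derive per-node traffic weights from analyzed service node data.

    responsiveness: {node: score}, higher = more responsive (assumed metric).
    backend_reachable: {node: bool}, whether the node can reach its backends.
    Nodes that cannot reach any backend are excluded from the policy.
    """
    return {node: score
            for node, score in responsiveness.items()
            if backend_reachable.get(node, False)}

policy = generate_service_policy(
    {"node-a": 5, "node-b": 3, "node-c": 1},
    {"node-a": True, "node-b": True, "node-c": False},
)
```

Here `node-c` is dropped because its backends are unreachable, and the remaining nodes keep weights proportional to their responsiveness.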
36 Claims
1. A method for service load distribution in a data network, as set out in full under "First Claim" above. Dependent claims: 2–18.
19. A system for service load distribution in a data network, the system comprising:

a cluster master that: retrieves network data associated with the data network; retrieves service node data associated with one or more service nodes; analyzes the network data and the service node data; based on the analysis, generates a service policy for distributing network service requests among a plurality of load balancing devices in the data network, wherein the plurality of load balancing devices includes a plurality of routers, a plurality of traffic classification engines, and a plurality of service nodes; and provides the service policy to the plurality of load balancing devices associated with the data network;

the plurality of routers that: receive one or more service requests; and distribute the one or more service requests evenly to one or more of the plurality of traffic classification engines;

the plurality of traffic classification engines, wherein at least the one or more of the plurality of traffic classification engines are configured to: receive the service policy; receive the one or more service requests from the plurality of routers; and distribute the one or more service requests asymmetrically to one or more of the plurality of service nodes according to the service policy; and

the plurality of service nodes, wherein at least the one or more of the plurality of service nodes are configured to distribute the one or more service requests to one or more backend servers according to the service policy;

wherein the service policy is generated based on at least a responsiveness of each of the plurality of service nodes and reachability of the one or more backend servers to the one or more of the plurality of service nodes. Dependent claims: 20–35.
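The cluster master of claim 19 can be read as a controller that gathers data, derives a policy, and pushes it to every load balancing device. The class and method names below are illustrative, and the "analysis" is collapsed to a single rule (weight = responsiveness, kept only while backends are reachable) purely for the sketch.

```python
class ClusterMaster:
    """Sketch of claim 19's cluster master (illustrative names throughout)."""

    def __init__(self, devices):
        # Routers, traffic classification engines, and service nodes.
        self.devices = devices

    def generate_policy(self, network_data, service_node_data):
        """Analyze retrieved data and derive per-node traffic weights."""
        return {node: data["responsiveness"]
                for node, data in service_node_data.items()
                if data["backends_reachable"]}

    def provide_policy(self, policy):
        """Push the generated policy to every load balancing device."""
        for device in self.devices:
            device.receive_policy(policy)

class Device:
    """Stand-in for a router / classification engine / service node."""
    def __init__(self):
        self.policy = None

    def receive_policy(self, policy):
        self.policy = policy
```

A usage pass would construct the master over its devices, call `generate_policy` with the retrieved metrics, then `provide_policy` so every tier applies the same weights.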
36. A non-transitory processor-readable medium having instructions stored thereon, which when executed by one or more processors, cause the one or more processors to perform the following operations:

retrieving network data associated with a data network;
retrieving service node data associated with a plurality of service nodes;
analyzing the network data and the service node data;
based on the analyzed network data and service node data, generating a service policy for distributing network service requests among a plurality of load balancing devices in the data network, wherein the plurality of load balancing devices includes a plurality of routers, a plurality of traffic classification engines, and the plurality of service nodes;
providing the service policy to the plurality of load balancing devices associated with the data network;
receiving, by the plurality of routers, one or more service requests;
distributing, by the plurality of routers, the one or more service requests evenly to one or more of the plurality of traffic classification engines;
distributing, by the one or more of the plurality of traffic classification engines, the one or more service requests asymmetrically to one or more of the plurality of service nodes according to the service policy;
distributing, by the one or more of the plurality of service nodes, the one or more service requests to one or more backend servers according to the service policy, wherein the service policy is generated based on at least a responsiveness of each of the plurality of service nodes and reachability of the one or more backend servers to the one or more of the plurality of service nodes;
developing a first further service policy based on the analysis, wherein the first further service policy is associated with scaling up, scaling down, remedying, or removing services associated with the plurality of service nodes, and introducing a new service associated with the plurality of service nodes;
facilitating providing an application programmable interface to a network administrator;
developing a second further service policy based on the analysis by the network administrator via the application programmable interface;
performing a health check of the one or more backend servers by the plurality of load balancing devices associated with the data network;
scaling up or scaling down at least one of the plurality of service nodes, the one or more backend servers, the plurality of traffic classification engines, and cluster masters while reducing disruption to traffic flow;
scaling up or scaling down services while reducing disruption to the traffic flow;
facilitating reverse traffic from the one or more backend servers to the one or more of the plurality of service nodes; and
redirecting the one or more service requests to the one or more of the plurality of service nodes to continue processing data associated with the one or more service requests when at least one service node of the plurality of service nodes has been scaled up or down.
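Claim 36's "scaling up or down ... while reducing disruption" and the final redirect step are commonly achieved with consistent hashing, under which adding or removing a service node remaps only a fraction of flows. The patent does not name this technique; the ring below is one plausible realization, with all identifiers assumed.

```python
import bisect
import hashlib

class HashRing:
    """Consistent-hash ring mapping flows to service nodes (sketch)."""

    def __init__(self, nodes=(), replicas=64):
        self.replicas = replicas
        self._points = []   # sorted hash points on the ring
        self._owner = {}    # hash point -> service node
        for node in nodes:
            self.add(node)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node):
        """Scale up: insert the node's virtual points on the ring."""
        for i in range(self.replicas):
            p = self._hash(f"{node}#{i}")
            bisect.insort(self._points, p)
            self._owner[p] = node

    def remove(self, node):
        """Scale down: only flows owned by this node's points move."""
        for i in range(self.replicas):
            p = self._hash(f"{node}#{i}")
            self._points.remove(p)
            del self._owner[p]

    def node_for(self, flow_id):
        """First ring point clockwise from the flow's hash owns the flow."""
        p = self._hash(flow_id)
        i = bisect.bisect(self._points, p) % len(self._points)
        return self._owner[self._points[i]]
```

With three nodes on the ring, adding a fourth redirects roughly a quarter of the flows; the rest keep their original service node, which is one way to let in-flight requests "continue processing" as the claim requires.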
Specification