Multi-autonomous system anycast content delivery network
Abstract
A content delivery network includes first and second sets of cache servers, a domain name server, and an anycast island controller. The first set of cache servers is hosted by a first autonomous system and the second set of cache servers is hosted by a second autonomous system. The cache servers are configured to respond to an anycast address for the content delivery network, to receive a request for content from a client system, and to provide the content to the client system. The first and second autonomous systems are configured to balance the load across the first and second sets of cache servers, respectively. The domain name server is configured to receive a request from a requestor for a cache server address, and to provide the anycast address to the requestor in response to the request. The anycast island controller is configured to receive load information from each of the cache servers, to determine an amount of requests to transfer from the first autonomous system to the second autonomous system, and to send an instruction to the first autonomous system to transfer the amount of requests to the second autonomous system.
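The controller's core decision in the abstract can be sketched as a small function. This is an illustrative reading, not the patented implementation; the names `AsLoad` and `requests_to_transfer` are assumptions, and real load units (requests per second, connections, bandwidth) are left abstract.

```python
# Sketch of the anycast island controller's transfer decision, assuming
# per-AS aggregate load and capacity figures built from cache-server reports.
from dataclasses import dataclass

@dataclass
class AsLoad:
    aggregate_load: int      # current demand served by the AS's cache servers
    aggregate_capacity: int  # sum of the cache servers' rated capacities

def requests_to_transfer(src: AsLoad, dst: AsLoad) -> int:
    """Amount of requests to move from src to dst, per the claimed test:
    src must exceed its capacity AND dst must be below its capacity;
    otherwise nothing is transferred."""
    if src.aggregate_load <= src.aggregate_capacity:
        return 0  # source AS is not overloaded
    if dst.aggregate_load >= dst.aggregate_capacity:
        return 0  # destination AS has no spare capacity
    excess = src.aggregate_load - src.aggregate_capacity
    headroom = dst.aggregate_capacity - dst.aggregate_load
    return min(excess, headroom)  # shift only what the destination can absorb
```

Capping the transfer at the destination's headroom is one plausible policy consistent with the claims' "determining an amount of requests to transfer"; the claims do not fix a formula.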
20 Claims
1. A content delivery network comprising:
a first set of cache servers hosted by a first autonomous system and a second set of cache servers hosted by a second autonomous system, wherein the first autonomous system balances a first load among the first set of cache servers by controlling routing within the first autonomous system and the second autonomous system balances a second load among the second set of cache servers by controlling routing within the second autonomous system, wherein each of the cache servers performs operations comprising:
responding to an anycast address for the content delivery network; and
receiving a request for content from a client system and providing the content to the client system;
a domain name server that performs operations comprising:
receiving a request for a cache server address; and
providing the anycast address in response to the request for the cache server address; and
an anycast island controller separate from the first autonomous system, which performs operations comprising:
receiving load information from the first set and the second set of cache servers;
generating an island topology for an anycast island serviced by the anycast island controller, wherein the island topology includes weights for each node in the first autonomous system, wherein the weights are based on a difference between a present demand for the first autonomous system and a first aggregate capacity of the first set of cache servers;
identifying, based on the island topology, when a first aggregate load for the first autonomous system exceeds the first aggregate capacity of the first set of cache servers and a second aggregate load for the second autonomous system is below a second aggregate capacity of the second set of cache servers;
determining an amount of requests for content to transfer from the first autonomous system to the second autonomous system in response to the identifying;
preventing a transient loop from forming in the first autonomous system and the second autonomous system prior to sending an instruction to the first autonomous system to control the routing of the anycast address to transfer the amount of requests for content to the second autonomous system, wherein the transient loop is prevented, at least in part, by waiting for devices shifting traffic associated with the amount of requests to stop shifting the traffic; and
sending the instruction to the first autonomous system to control the routing of the anycast address to transfer the amount of requests for content to the second autonomous system after the transient loop is prevented.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 20)
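The claim's "generating an island topology" step ties per-node weights to the difference between present demand and aggregate capacity. One way to read that, sketched below, apportions the AS-level imbalance across nodes in proportion to each node's capacity; the proportional split and the name `node_weights` are assumptions not found in the claims.

```python
# Sketch of per-node weight generation for the island topology, assuming
# weights encode how far each node's share of demand sits from capacity.
def node_weights(present_demand: float, node_capacities: list[float]) -> list[float]:
    """Positive weights indicate overload to shed; negative weights indicate
    headroom. Based on (present demand - aggregate capacity) per the claim."""
    aggregate_capacity = sum(node_capacities)
    imbalance = present_demand - aggregate_capacity  # > 0 means AS is overloaded
    # Apportion the AS-level imbalance to nodes in proportion to capacity.
    return [imbalance * c / aggregate_capacity for c in node_capacities]
```

With these weights, the identifying step reduces to checking whether the first AS's weights sum to a positive value while the second AS's sum is negative.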
10. An anycast island controller comprising:
a memory that stores instructions; and
a processor that executes the instructions to perform operations comprising:
receiving load information from a first set of cache servers hosted by a first autonomous system and a second set of cache servers hosted by a second autonomous system, the first autonomous system configured to balance a first load among the first set of cache servers and the second autonomous system configured to balance a second load among the second set of cache servers;
generating an island topology for an anycast island serviced by the anycast island controller, wherein the island topology includes weights for each node in the first autonomous system, wherein the weights are based on a difference between a present demand for the first autonomous system and a first aggregate capacity of the first set of cache servers;
identifying, based on the island topology, when a first aggregate load for the first autonomous system exceeds the first aggregate capacity of the first set of cache servers and a second aggregate load for the second autonomous system is below a second aggregate capacity of the second set of cache servers;
determining an amount of requests to transfer from the first autonomous system to the second autonomous system in response to the identifying;
preventing a transient loop from forming in the first autonomous system and the second autonomous system prior to sending an instruction to the first autonomous system to control the routing of the anycast address to transfer the amount of requests for content to the second autonomous system, wherein the transient loop is prevented, at least in part, by waiting for devices shifting traffic associated with the amount of requests to stop shifting the traffic; and
sending the instruction to the first autonomous system to control the routing of the anycast address to transfer the amount of requests to the second autonomous system after the transient loop is prevented.
- View Dependent Claims (11, 12, 13, 14)
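The transient-loop prevention recited in the claims hinges on waiting for the devices shifting traffic to stop before issuing the routing instruction. A minimal sketch of that wait follows; the callback `is_shifting` is hypothetical, standing in for however a real controller would observe router convergence (the claims do not specify the mechanism).

```python
# Sketch of the claimed loop-avoidance step: block until no device is still
# shifting traffic, then it is safe to send the transfer instruction.
import time

def wait_until_quiescent(devices, is_shifting, poll_interval=0.5, timeout=30.0):
    """Wait for every device to report it has stopped shifting traffic.
    Raises TimeoutError if the devices never settle within `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while any(is_shifting(d) for d in devices):
        if time.monotonic() >= deadline:
            raise TimeoutError("devices still shifting traffic")
        time.sleep(poll_interval)
    # Quiescent point: routes are stable, so instructing the first AS to
    # re-route the anycast address cannot create a transient forwarding loop
    # between the two autonomous systems.
```

The timeout is an added safeguard, not part of the claim language; the claims only require that the instruction be sent after the shifting has stopped.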
15. A computer readable device comprising a plurality of instructions to manipulate a processor to cause the processor to perform operations comprising:
receiving load information from a first set of cache servers hosted by a first autonomous system and a second set of cache servers hosted by a second autonomous system, the first autonomous system configured to balance a first load among the first set of cache servers and the second autonomous system configured to balance a second load among the second set of cache servers;
generating an island topology for an anycast island associated with the first and second autonomous systems, wherein the island topology includes weights for each node in the first autonomous system, wherein the weights are based on a difference between a present demand for the first autonomous system and a first aggregate capacity of the first set of cache servers;
identifying, based on the island topology, when a first aggregate load for the first autonomous system exceeds the first aggregate capacity of the first set of cache servers and a second aggregate load for the second autonomous system is below a second aggregate capacity of the second set of cache servers;
determining an amount of requests to transfer from the first autonomous system to the second autonomous system when the first aggregate load exceeds the first aggregate capacity and the second aggregate load is below the second aggregate capacity;
preventing a transient loop from forming in the first autonomous system and the second autonomous system prior to sending an instruction to the first autonomous system to control the routing of the anycast address to transfer the amount of requests for content to the second autonomous system, wherein the transient loop is prevented, at least in part, by waiting for devices shifting traffic associated with the amount of requests to stop shifting the traffic; and
sending the instruction to the first autonomous system to control the routing of the anycast address to transfer the amount of requests to the second autonomous system after the transient loop is prevented.
- View Dependent Claims (16, 17, 18, 19)
Specification