Resource affinity via dynamic reconfiguration for multi-queue network adapters
First Claim
1. A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on a computing device, causes the computing device to:
allocate an initial queue pair within a memory, wherein each queue pair is a transmit/receive queue pair;
determine whether workload of the computing device has risen above a predetermined high threshold;
responsive to the workload rising above the predetermined high threshold:
allocate and initialize an additional queue pair in the memory;
program a receive side scaling (RSS) mechanism in a network adapter to allow for dynamic insertion of an additional processing engine associated with the additional queue pair; and
enable transmit tuple hashing to the additional queue pair;
wherein the computer readable program further causes the computing device to:
determine whether the workload has fallen below a predetermined low threshold;
responsive to the workload falling below the predetermined low threshold, determine whether there is only one queue pair remaining allocated in the memory; and
responsive to more than one queue pair remaining allocated in the memory:
reprogram the RSS mechanism in the network adapter to allow for deletion of an allocated queue pair;
disable transmit tuple hashing to an identified queue pair;
determine whether the workload to the identified queue pair has quiesced; and
responsive to the workload to the identified queue pair quiescing, remove the identified queue pair from memory, thereby freeing up memory used by the identified queue pair.
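The scale-up/scale-down behaviour recited in the claim can be viewed as a simple threshold-driven state machine. The sketch below is illustrative only: the `QueuePairDriver` class, the `MAX_QUEUE_PAIRS` cap, and the numeric thresholds are assumptions not taken from the patent, and the RSS programming, tuple-hashing, and quiesce steps are reduced to comments.

```python
MAX_QUEUE_PAIRS = 8  # illustrative cap; not specified in the claim


class QueuePairDriver:
    """Hypothetical model of the claimed device-driver behaviour."""

    def __init__(self, high_threshold: float, low_threshold: float):
        self.high = high_threshold
        self.low = low_threshold
        self.queue_pairs = 1  # the initially allocated transmit/receive pair

    def observe(self, workload: float) -> None:
        if workload > self.high and self.queue_pairs < MAX_QUEUE_PAIRS:
            # Claimed scale-up path: allocate and initialize an additional
            # queue pair, program the RSS mechanism for dynamic insertion
            # of its processing engine, and enable transmit tuple hashing
            # to the new pair.
            self.queue_pairs += 1
        elif workload < self.low and self.queue_pairs > 1:
            # Claimed scale-down path: reprogram RSS for deletion, disable
            # transmit tuple hashing to the identified pair, wait for its
            # workload to quiesce, then remove it and free its memory.
            # Note the guard: the last remaining queue pair is never removed.
            self.queue_pairs -= 1
```

A driver following this pattern never drops below one queue pair, matching the claim's check that "only one queue pair remaining" blocks the removal path.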
Abstract
A mechanism is provided for providing resource affinity for multi-queue network adapters via dynamic reconfiguration. A device driver allocates an initial queue pair within a memory. The device driver determines whether workload of the data processing system has risen above a predetermined high threshold. Responsive to the workload rising above the predetermined high threshold, the device driver allocates and initializes an additional queue pair in the memory. The device driver programs a receive side scaling (RSS) mechanism in a network adapter to allow for dynamic insertion of an additional processing engine associated with the additional queue pair. The device driver enables transmit tuple hashing to the additional queue pair.
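The "transmit tuple hashing" the abstract refers to maps each flow's connection tuple to one of the currently active queue pairs, so all packets of a flow stay on one pair while added pairs share new load. The sketch below is an assumption-laden stand-in: real RSS hardware uses a Toeplitz hash and an indirection table, whereas here CRC32 is used purely to show the flow-to-queue mapping property; `select_queue` and its parameters are hypothetical names, not from the patent.

```python
import zlib


def select_queue(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                 num_queue_pairs: int) -> int:
    """Map a flow's 4-tuple to one of the active queue pairs.

    CRC32 stands in for the adapter's RSS hash: the mapping is
    deterministic per flow, and redistributes across queues when
    queue pairs are dynamically added or removed (num_queue_pairs
    changes), which is the reconfiguration the abstract describes.
    """
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % num_queue_pairs
```

With a single queue pair every flow maps to queue 0; after the driver scales up, the same hash spreads flows across the enlarged set.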
12 Claims
1. A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on a computing device, causes the computing device to:
allocate an initial queue pair within a memory, wherein each queue pair is a transmit/receive queue pair;
determine whether workload of the computing device has risen above a predetermined high threshold;
responsive to the workload rising above the predetermined high threshold:
allocate and initialize an additional queue pair in the memory;
program a receive side scaling (RSS) mechanism in a network adapter to allow for dynamic insertion of an additional processing engine associated with the additional queue pair; and
enable transmit tuple hashing to the additional queue pair;
wherein the computer readable program further causes the computing device to:
determine whether the workload has fallen below a predetermined low threshold;
responsive to the workload falling below the predetermined low threshold, determine whether there is only one queue pair remaining allocated in the memory; and
responsive to more than one queue pair remaining allocated in the memory:
reprogram the RSS mechanism in the network adapter to allow for deletion of an allocated queue pair;
disable transmit tuple hashing to an identified queue pair;
determine whether the workload to the identified queue pair has quiesced; and
responsive to the workload to the identified queue pair quiescing, remove the identified queue pair from memory, thereby freeing up memory used by the identified queue pair.
View Dependent Claims (2, 3, 4, 5, 6)
7. An apparatus, comprising:
a processor; and
a memory coupled to the processor, wherein the memory comprises instructions which, when executed by the processor, cause the processor to:
allocate an initial queue pair within a memory, wherein each queue pair is a transmit/receive queue pair;
determine whether workload of the apparatus has risen above a predetermined high threshold;
responsive to the workload rising above the predetermined high threshold:
allocate and initialize an additional queue pair in the memory;
program a receive side scaling (RSS) mechanism in a network adapter to allow for dynamic insertion of an additional processing engine associated with the additional queue pair; and
enable transmit tuple hashing to the additional queue pair;
wherein the instructions further cause the processor to:
determine whether the workload has fallen below a predetermined low threshold;
responsive to the workload falling below the predetermined low threshold, determine whether there is only one queue pair remaining allocated in the memory; and
responsive to more than one queue pair remaining allocated in the memory:
reprogram the RSS mechanism in the network adapter to allow for deletion of an allocated queue pair;
disable transmit tuple hashing to an identified queue pair;
determine whether the workload to the identified queue pair has quiesced; and
responsive to the workload to the identified queue pair quiescing, remove the identified queue pair from memory, thereby freeing up memory used by the identified queue pair.
View Dependent Claims (8, 9, 10, 11, 12)
Specification