Servicing interrupts and scheduling code thread execution in a multi-CPU network file server
First Claim
1. A network file server comprising:
a data processor;
a disk storage array storing data;
network adapters for linking the data processor to a data network for exchange of data packets between the data processor and clients in the data network; and
storage adapters linking the data processor to the disk storage array for exchange of data blocks between the data processor and the disk storage array;
wherein the data processor includes at least eight core central processing units (CPUs), and shared memory shared among the core CPUs and containing programs executable by the core CPUs;
wherein the programs executable by the core CPUs include a real-time scheduler for scheduling execution of real-time and general purpose threads, and a thread manager for managing execution of hard affinity threads and soft affinity threads of the general purpose threads, each of the hard affinity threads being executed exclusively by a respective one of the core CPUs, and the thread manager distributing execution of the soft affinity threads among the core CPUs for load balancing;
wherein the programs executable by the core CPUs further include:
a network adapter interrupt routine for responding to interrupts from the network adapters when the network adapters receive data packets from the data network;
a network stack for transmission of data through the network adapters between the data processor and the data network in accordance with a network data transmission protocol;
a file system stack for providing clients in the data network with access to the data storage array in accordance with a file system access protocol and for maintaining an in-core file system cache in the shared memory;
a storage access driver for accessing the data storage array in accordance with a storage access protocol; and
a disk adapter interrupt routine for responding to interrupts from the disk adapters when the disk adapters receive data blocks from the disk storage array;
wherein threads of the network stack are incorporated into the real-time threads that are scheduled by the real-time scheduler and executed exclusively by a plurality of the core CPUs that are not interrupted by the disk adapter interrupts so that the disk adapter interrupts do not interrupt execution of the network stack; and
wherein instances of the storage access driver are hard affinity threads; and
wherein the soft affinity threads include a multitude of instances of a thread of the file system stack for file access request processing so that file access request processing for a multitude of concurrent file access requests is load balanced over the core CPUs.
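The hard/soft affinity scheme recited above can be illustrated with a small scheduling sketch. This is a hypothetical simulation, not the patented implementation; the names (`ThreadManager`, `spawn_hard`, `spawn_soft`) and the least-loaded placement rule are assumptions made for illustration.

```python
# Hypothetical sketch of the claimed thread manager: hard affinity
# threads are pinned to one core CPU; soft affinity threads are
# placed on the least-loaded core CPU for load balancing.
NUM_CPUS = 8  # the claim recites at least eight core CPUs

class ThreadManager:
    def __init__(self, num_cpus=NUM_CPUS):
        # per-CPU run queues kept in the shared memory
        self.queues = {cpu: [] for cpu in range(num_cpus)}

    def spawn_hard(self, name, cpu):
        """A hard affinity thread runs exclusively on one core CPU."""
        self.queues[cpu].append(name)
        return cpu

    def spawn_soft(self, name):
        """A soft affinity thread goes to the least-loaded core CPU."""
        cpu = min(self.queues, key=lambda c: len(self.queues[c]))
        self.queues[cpu].append(name)
        return cpu

mgr = ThreadManager()
# one storage access driver instance pinned per core CPU, as claimed
for cpu in range(NUM_CPUS):
    mgr.spawn_hard(f"storage_driver_{cpu}", cpu)
# many file-access-processing instances, load balanced across the CPUs
placements = [mgr.spawn_soft(f"file_access_{i}") for i in range(32)]
```

With eight pinned driver instances and 32 soft threads, the least-loaded rule spreads the soft threads four per CPU, matching the claim's load-balancing behavior.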
Abstract
Interrupts and code threads are assigned in a particular way to the core CPUs of a network file server in order to reduce latency for processing client requests for file access. Threads of the network stack are incorporated into real-time threads that are scheduled by a real-time scheduler and executed exclusively by a plurality of the core CPUs that are not interrupted by disk adapter interrupts, so that the disk adapter interrupts do not interrupt execution of the network stack. Instances of a storage access driver are hard affinity threads, and soft affinity threads include a multitude of instances of a thread of the file system stack for file access request processing, so that file access request processing for a multitude of concurrent file access requests is load balanced over the core CPUs.
20 Claims
1. A network file server comprising the elements recited above as the First Claim.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10)
11. A network file server comprising:
a data processor;
a disk storage array storing data;
network adapters linking the data processor to a data network for exchange of data packets between the data processor and clients in the data network; and
storage adapters linking the data processor to the disk storage array for exchange of data blocks between the data processor and the disk storage array;
wherein the data processor includes at least eight core central processing units (CPUs), and shared memory shared among the core CPUs and containing programs executed by the core CPUs;
wherein the programs executable by the core CPUs include a real-time scheduler scheduling execution of real-time and general purpose threads, and a thread manager managing execution of hard affinity threads and soft affinity threads of the general purpose threads, each of the hard affinity threads being executed exclusively by a respective one of the core CPUs, and the thread manager distributing execution of the soft affinity threads among the core CPUs for load balancing;
wherein the programs executed by the core CPUs further include:
a network adapter interrupt routine responding to interrupts from the network adapters when the network adapters receive data packets from the data network;
a network stack transmitting data through the network adapters between the data processor and the data network in accordance with a network data transmission protocol;
a file system stack providing clients in the data network with access to the data storage array in accordance with a file system access protocol and maintaining an in-core file system cache in the shared memory;
a storage access driver accessing the data storage array in accordance with a storage access protocol; and
a disk adapter interrupt routine responding to interrupts from the disk adapters when the disk adapters receive data blocks from the disk storage array;
wherein threads of the network stack are incorporated into the real-time threads that are scheduled by the real-time scheduler and executed exclusively by a plurality of the core CPUs that are not interrupted by the disk adapter interrupts so that the disk adapter interrupts do not interrupt execution of the network stack;
wherein instances of the storage access driver are hard affinity threads;
wherein the soft affinity threads include a multitude of instances of a thread of the file system stack for file access request processing so that file access request processing for a multitude of concurrent file access requests is load balanced over the core CPUs;
wherein all of the network adapter interrupts are mapped to a single one of the core CPUs so that the single one of the core CPUs is interrupted by each of the network adapter interrupts to execute the network adapter interrupt routine, and the single one of the core CPUs is not interrupted by any of the disk adapter interrupts;
wherein four of the core CPUs execute respective real-time threads, and at least one thread of the network stack is incorporated into each of the respective real-time threads of said four of the core CPUs, and the single one of the core CPUs executes one of the real-time threads into which is incorporated at least one thread of the network stack; and
wherein each of the core CPUs executes one hard affinity thread instance of the disk adapter driver.
- View Dependent Claims (12, 13, 14, 15)
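The interrupt mapping in claim 11 can be sketched as CPU affinity bitmasks of the kind an operating system uses for IRQ steering. The specific CPU numbers, mask values, and helper name (`mask`) are illustrative assumptions, not drawn from the patent.

```python
# Hypothetical sketch of claim 11's interrupt-to-CPU mapping as
# affinity bitmasks (bit N set => core CPU N may take the interrupt).
NUM_CPUS = 8
NET_IRQ_CPU = 0               # all network adapter interrupts on one CPU
REALTIME_CPUS = {0, 1, 2, 3}  # four CPUs running network-stack real-time threads

def mask(cpus):
    """Build an affinity bitmask from a set of CPU numbers."""
    m = 0
    for cpu in cpus:
        m |= 1 << cpu
    return m

# Network adapter interrupts: mapped to the single designated core CPU.
net_irq_mask = mask({NET_IRQ_CPU})

# Disk adapter interrupts: allowed only on core CPUs that do NOT run
# the network stack, so they never interrupt network-stack execution.
disk_irq_mask = mask(set(range(NUM_CPUS)) - REALTIME_CPUS)
```

Because the two masks are disjoint, the CPU taking network adapter interrupts is never interrupted by a disk adapter interrupt, which is the isolation the claim recites.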
16. A network file server comprising:
a data processor;
a disk storage array storing data;
network adapters linking the data processor to a data network for exchange of data packets between the data processor and clients in the data network; and
storage adapters linking the data processor to the disk storage array for exchange of data blocks between the data processor and the disk storage array;
wherein the data processor includes at least eight core central processing units (CPUs), and shared memory shared among the core CPUs and containing programs executed by the core CPUs;
wherein the programs executable by the core CPUs include a real-time scheduler scheduling execution of real-time and general purpose threads, and a thread manager managing execution of hard affinity threads and soft affinity threads of the general purpose threads, each of the hard affinity threads being executed exclusively by a respective one of the core CPUs, and the thread manager distributing execution of the soft affinity threads among the core CPUs for load balancing;
wherein the programs executed by the core CPUs further include:
a network adapter interrupt routine responding to interrupts from the network adapters when the network adapters receive data packets from the data network;
a network stack transmitting data through the network adapters between the data processor and the data network in accordance with a network data transmission protocol;
a file system stack providing clients in the data network with access to the data storage array in accordance with a file system access protocol and maintaining an in-core file system cache in the shared memory;
a storage access driver accessing the data storage array in accordance with a storage access protocol; and
a disk adapter interrupt routine responding to interrupts from the disk adapters when the disk adapters receive data blocks from the disk storage array;
wherein threads of the network stack are incorporated into the real-time threads that are scheduled by the real-time scheduler and executed exclusively by a plurality of the core CPUs that are not interrupted by the disk adapter interrupts so that the disk adapter interrupts do not interrupt execution of the network stack;
wherein instances of the storage access driver are hard affinity threads;
wherein the soft affinity threads include a multitude of instances of a thread of the file system stack for file access request processing so that file access request processing for a multitude of concurrent file access requests is load balanced over the core CPUs;
wherein four of the core CPUs execute respective real-time threads, and at least one thread of the network stack is incorporated into each of the respective real-time threads of said four of the core CPUs;
wherein pairs of the core CPUs share respective level-two (L2) cache memories, and each core CPU that executes one of the real-time threads into which is incorporated at least one thread of the network stack shares a respective one of the level-two (L2) cache memories with another one of the core CPUs that executes one of the real-time threads into which is incorporated at least one thread of the network stack; and
wherein each of the core CPUs executes one hard affinity thread instance of the disk adapter driver.
- View Dependent Claims (17, 18, 19, 20)
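Claim 16's cache constraint, that the four CPUs running network-stack real-time threads must form complete pairs of CPUs sharing a level-two (L2) cache, can be sketched as a placement check. The pair topology below (adjacent CPUs sharing an L2) and the function names are assumptions for illustration.

```python
# Hypothetical sketch of claim 16's placement rule: every core CPU
# running a network-stack real-time thread must share its level-two
# (L2) cache with another CPU that also runs the network stack.
L2_PAIRS = [(0, 1), (2, 3), (4, 5), (6, 7)]  # assumed cache topology

def l2_partner(cpu):
    """Return the core CPU sharing an L2 cache with the given CPU."""
    for a, b in L2_PAIRS:
        if cpu == a:
            return b
        if cpu == b:
            return a
    raise ValueError(f"unknown CPU {cpu}")

def satisfies_claim_16(network_cpus):
    """Check that every network-stack CPU's L2 partner also runs the network stack."""
    return all(l2_partner(cpu) in network_cpus for cpu in network_cpus)

# Two complete L2-sharing pairs: satisfies the constraint.
ok = satisfies_claim_16({0, 1, 2, 3})
# CPU 2's partner (CPU 3) does not run the network stack: violates it.
bad = satisfies_claim_16({0, 1, 2, 4})
```

Co-locating the network-stack threads on full L2 pairs lets those threads share cached packet and protocol state without crossing cache domains, which is consistent with the claim's latency-reduction goal.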