Network device architecture for consolidating input/output and reducing latency
First Claim

1. A method for processing more than one type of network traffic in a single network device, the method comprising:

partitioning buffers of the network device into first buffer spaces for storing frames received on a first virtual lane of a physical link of the network device and second buffer spaces for storing frames received on a second virtual lane of the physical link of the network device;

receiving a plurality of frames on the physical link of the network device, wherein each frame indicates either a first virtual lane or a second virtual lane based on whether or not, respectively, the frame indicates a protocol that recovers from dropping of frames; and

for each received frame, applying either a first set of rules or a second set of rules with respect to the received frame based on whether the received frame specifies the first virtual lane or the second virtual lane, respectively, wherein the first set of rules causes the received frame to be dropped or stored in the first buffer spaces based on whether or not, respectively, the first buffer spaces have been filled a predetermined amount, and wherein the second set of rules inhibits dropping of the received frame and causes the received frame to be stored in the second buffer spaces in response to latency.
Abstract
The present invention provides methods and devices for implementing a Low Latency Ethernet ("LLE") solution, also referred to herein as a Data Center Ethernet ("DCE") solution, which simplifies the connectivity of data centers and provides a high-bandwidth, low-latency network for carrying Ethernet and storage traffic. Some aspects of the invention involve transforming FC frames into a format suitable for transport on an Ethernet. Some preferred implementations of the invention implement multiple virtual lanes ("VLs") in a single physical connection of a data center or similar network. Some VLs are "drop" VLs, with Ethernet-like behavior, and others are "no-drop" VLs, with FC-like behavior. Some preferred implementations of the invention provide guaranteed bandwidth based on credits and VL. Active buffer management allows for both high reliability and low latency while using small frame buffers. Preferably, the rules for active buffer management are different for drop and no-drop VLs.
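The abstract's "guaranteed bandwidth based on credits and VL" can be illustrated with a small sketch of per-lane credit accounting: a sender on a no-drop lane may transmit only while it holds a buffer credit from the receiver, which is what makes frame dropping unnecessary on that lane. The class and method names here (`VirtualLane`, `can_send`, `grant`) are illustrative assumptions, not names from the patent.

```python
class VirtualLane:
    """Toy model of one virtual lane's credit state (assumed design)."""

    def __init__(self, name, no_drop, credits=0):
        self.name = name
        self.no_drop = no_drop    # True = FC-like "no drop" behavior
        self.credits = credits    # buffer credits granted by the receiver

    def can_send(self):
        # A drop lane may always transmit (frames can be discarded on
        # overflow); a no-drop lane must hold a credit first.
        return (not self.no_drop) or self.credits > 0

    def send(self):
        # Sending on a no-drop lane consumes one credit.
        if self.no_drop:
            assert self.credits > 0, "must not send without a credit"
            self.credits -= 1

    def grant(self, n=1):
        # The receiver returns credits as its buffer space frees up.
        self.credits += n


lan = VirtualLane("lan", no_drop=False)            # Ethernet-like drop VL
storage = VirtualLane("storage", no_drop=True, credits=2)  # FC-like VL

assert lan.can_send()              # drop lane needs no credit
storage.send()
storage.send()
assert not storage.can_send()      # out of credits: sender must wait
storage.grant()                    # receiver freed a buffer
assert storage.can_send()
```

The key design point is that back-pressure on the no-drop lane is exerted before transmission (via credits) rather than after (via discards), so small frame buffers suffice without loss.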
23 Claims
1. A method for processing more than one type of network traffic in a single network device, the method comprising:

partitioning buffers of the network device into first buffer spaces for storing frames received on a first virtual lane of a physical link of the network device and second buffer spaces for storing frames received on a second virtual lane of the physical link of the network device;

receiving a plurality of frames on the physical link of the network device, wherein each frame indicates either a first virtual lane or a second virtual lane based on whether or not, respectively, the frame indicates a protocol that recovers from dropping of frames; and

for each received frame, applying either a first set of rules or a second set of rules with respect to the received frame based on whether the received frame specifies the first virtual lane or the second virtual lane, respectively, wherein the first set of rules causes the received frame to be dropped or stored in the first buffer spaces based on whether or not, respectively, the first buffer spaces have been filled a predetermined amount, and wherein the second set of rules inhibits dropping of the received frame and causes the received frame to be stored in the second buffer spaces in response to latency.

Dependent claims: 2–18.
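The two rule sets of claim 1 can be sketched as a simple frame-admission function: frames on the drop lane are discarded once the first buffer reaches a predetermined fill level, while frames on the no-drop lane are always stored. The function name and the threshold value are illustrative assumptions, not from the patent.

```python
DROP_THRESHOLD = 4  # the claim's "predetermined amount"; value is illustrative


def apply_rules(frame_lane, drop_buf, no_drop_buf):
    """Admit one frame under the claim's first or second rule set,
    selected by which virtual lane the frame specifies."""
    if frame_lane == "drop":
        # First set of rules: drop once the first buffer spaces
        # have been filled the predetermined amount.
        if len(drop_buf) >= DROP_THRESHOLD:
            return "dropped"
        drop_buf.append(frame_lane)
        return "stored"
    # Second set of rules: dropping is inhibited; always store.
    no_drop_buf.append(frame_lane)
    return "stored"


drop_buf, no_drop_buf = [], []

# Six frames on the drop lane: the first four fit, the rest are discarded.
results = [apply_rules("drop", drop_buf, no_drop_buf) for _ in range(6)]
assert results == ["stored"] * 4 + ["dropped"] * 2

# Frames on the no-drop lane are never discarded, whatever the load.
assert all(apply_rules("no-drop", drop_buf, no_drop_buf) == "stored"
           for _ in range(10))
```

In a real device the no-drop lane would rely on upstream flow control (e.g. credits) to keep its buffer bounded; this sketch only shows the admission decision itself.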
19. A network device, comprising:

means for partitioning buffers of the network device into first buffer spaces for storing frames received on a first virtual lane of a physical link of the network device and second buffer spaces for storing frames received on a second virtual lane of the physical link of the network device;

means for receiving a plurality of frames on the physical link of the network device, wherein each frame indicates either a first virtual lane or a second virtual lane based on whether or not, respectively, the frame indicates a protocol that recovers from dropping of frames; and

means for applying, for each received frame, either a first set of rules or a second set of rules with respect to the received frame based on whether the received frame specifies the first virtual lane or the second virtual lane, respectively, wherein the first set of rules causes the received frame to be dropped or stored in the first buffer spaces based on whether or not, respectively, the first buffer spaces have been filled a predetermined amount, and wherein the second set of rules inhibits dropping of the received frame and causes the received frame to be stored in the second buffer spaces in response to latency.
20. A network device, comprising:

a plurality of ports configured for receiving frames on a plurality of physical links; and

a plurality of line cards, each line card in communication with one of the plurality of ports and configured to do the following:

receive frames from a first one of the physical links of the plurality of ports, wherein each frame indicates either a first virtual lane or a second virtual lane based on whether or not, respectively, the frame indicates a protocol that recovers from dropping of frames;

identify first frames received on the first virtual lane of the first physical link and second frames received on the second virtual lane of the first physical link;

partition buffers into first buffer spaces for storing the identified first frames and second buffer spaces for storing the identified second frames; and

for each received frame, apply either a first set of rules or a second set of rules with respect to the received frame based on whether the received frame specifies the first virtual lane or the second virtual lane, respectively, wherein the first set of rules causes the received frame to be dropped or stored in the first buffer spaces based on whether or not, respectively, the first buffer spaces have been filled a predetermined amount, and wherein the second set of rules inhibits dropping of the received frame and causes the received frame to be stored in the second buffer spaces in response to latency.
21. A method for carrying more than one type of traffic in a single network device, the method comprising:

identifying received frames as first frames received on first virtual lanes and second frames received on second virtual lanes based on whether or not, respectively, the received frames indicate a protocol that recovers from dropping of frames;

dynamically partitioning buffers of the network device into first buffer spaces having first virtual output queues (VOQs) for storing the first frames and second buffer spaces having second virtual output queues (VOQs) for storing the second frames, wherein the buffers are dynamically partitioned according to one or more factors including overall buffer occupancy, buffer occupancy per virtual lane, time of day, traffic loads, congestion, guaranteed minimum bandwidth allocation, known tasks requiring greater bandwidth, or maximum bandwidth allocation; and

for each received frame, applying either a first set of rules or a second set of rules with respect to the received frame based on whether the received frame has been identified on the first virtual lane or on the second virtual lane, respectively, wherein the first set of rules causes the received frame to be dropped or stored in the first VOQs of the first buffer spaces based on whether or not, respectively, the first buffer spaces have been filled a predetermined amount, and wherein the second set of rules inhibits dropping of the received frame and causes the received frame to be stored in the second VOQs of the second buffer spaces in response to latency.
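Claim 21's dynamic partitioning can be sketched by repartitioning buffer space between the two classes of VOQs according to one of the claimed factors, buffer occupancy per virtual lane. The proportional-split policy, the per-class minimum reserve, and all names below are assumptions for illustration; the patent does not prescribe this particular formula.

```python
def repartition(total_space, drop_occupancy, no_drop_occupancy):
    """Split total buffer space between drop-lane and no-drop-lane VOQs
    in proportion to current occupancy, reserving a minimum for each
    class so neither can be starved. Returns (drop_share, no_drop_share)."""
    minimum = total_space // 10          # assumed 10% floor per class
    used = drop_occupancy + no_drop_occupancy
    if used == 0:
        # No traffic yet: split evenly.
        half = total_space // 2
        return half, total_space - half
    drop_share = max(minimum, round(total_space * drop_occupancy / used))
    drop_share = min(drop_share, total_space - minimum)
    return drop_share, total_space - drop_share


# Drop lanes hold 30 of 40 occupied units, so they get 3/4 of the space.
d, n = repartition(100, drop_occupancy=30, no_drop_occupancy=10)
assert (d, n) == (75, 25)

# Idle device: even split.
assert repartition(100, 0, 0) == (50, 50)

# The floor keeps the lightly loaded class from being squeezed to zero.
d, n = repartition(64, drop_occupancy=1, no_drop_occupancy=99)
assert d == 6 and d + n == 64
```

In practice a device would re-run such a computation periodically or on congestion events, and could weight in the claim's other factors (time of day, guaranteed minimum bandwidth, and so on) as additional terms.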
22. A network device, comprising:

means for identifying received frames as first frames received on first virtual lanes and second frames received on second virtual lanes based on whether or not, respectively, the received frames indicate a protocol that recovers from dropping of frames;

means for dynamically partitioning buffers of the network device into first buffer spaces having first virtual output queues (VOQs) for storing the first frames and second buffer spaces having second virtual output queues (VOQs) for storing the second frames, wherein the buffers are dynamically partitioned according to one or more factors including overall buffer occupancy, buffer occupancy per virtual lane, time of day, traffic loads, congestion, guaranteed minimum bandwidth allocation, known tasks requiring greater bandwidth, or maximum bandwidth allocation; and

means for applying, for each received frame, either a first set of rules or a second set of rules with respect to the received frame based on whether the received frame has been identified on the first virtual lane or on the second virtual lane, respectively, wherein the first set of rules causes the received frame to be dropped or stored in the first VOQs of the first buffer spaces based on whether or not, respectively, the first buffer spaces have been filled a predetermined amount, and wherein the second set of rules inhibits dropping of the received frame and causes the received frame to be stored in the second VOQs of the second buffer spaces in response to latency.
23. A network device, comprising:

a plurality of ports configured for receiving frames on a plurality of physical links; and

a plurality of line cards, each line card in communication with one of the plurality of ports and configured to do the following:

identify received frames as first frames received on first virtual lanes and second frames received on second virtual lanes based on whether or not, respectively, the received frames indicate a protocol that recovers from dropping of frames;

dynamically partition buffers of the network device into first buffer spaces having first virtual output queues (VOQs) for storing the first frames and second buffer spaces having second virtual output queues (VOQs) for storing the second frames, wherein the buffers are dynamically partitioned according to one or more factors including overall buffer occupancy, buffer occupancy per virtual lane, time of day, traffic loads, congestion, guaranteed minimum bandwidth allocation, known tasks requiring greater bandwidth, or maximum bandwidth allocation; and

for each received frame, apply either a first set of rules or a second set of rules with respect to the received frame based on whether the received frame has been identified on the first virtual lane or on the second virtual lane, respectively, wherein the first set of rules causes the received frame to be dropped or stored in the first VOQs of the first buffer spaces based on whether or not, respectively, the first buffer spaces have been filled a predetermined amount, and wherein the second set of rules inhibits dropping of the received frame and causes the received frame to be stored in the second VOQs of the second buffer spaces in response to latency.
Specification