Ingress Based Headroom Buffering For Switch Architectures
Abstract
A network device performs ingress-based headroom buffering. The network device may be configured as an output-queued switch and include a main packet buffer that stores packet data according to a destination egress port. The network device may implement one or more ingress buffers associated with ingress data ports in the network device. The ingress buffers may be separate from the main packet buffer. The network device may identify a flow control condition triggered by an ingress data port, such as when an amount of data stored in the main packet buffer received through the ingress data port exceeds a fill threshold. In response, the network device may send a flow control message to a link partner to cease sending network traffic through the ingress data port. The network device may store in-flight data from the link partner in an ingress buffer instead of the main packet buffer.
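The decision flow in the abstract can be modeled in a few lines. The sketch below is illustrative only: the class, method, and threshold names are assumptions, not terms from the patent, and the actual flow control message would typically be a pause frame (e.g. PFC) sent to the link partner rather than a flag.

```python
# Hypothetical model of ingress-based headroom buffering.
# All names and sizes are illustrative, not from the patent.

class IngressPort:
    def __init__(self, fill_threshold: int):
        self.fill_threshold = fill_threshold  # bytes this port may hold in the main buffer
        self.main_buffer_bytes = 0            # this port's share of the main packet buffer
        self.ingress_buffer = []              # separate headroom buffer for in-flight data
        self.flow_controlled = False          # True once a pause has been sent upstream

    def receive(self, packet: bytes) -> str:
        if self.flow_controlled:
            # In-flight data arriving after the pause is stored in the
            # ingress buffer instead of the main packet buffer.
            self.ingress_buffer.append(packet)
            return "ingress_buffer"
        self.main_buffer_bytes += len(packet)
        if self.main_buffer_bytes > self.fill_threshold:
            # Fill threshold exceeded: signal flow control to the link
            # partner (modeled here as setting a flag).
            self.flow_controlled = True
        return "main_buffer"
```

The separate ingress buffer absorbs the packets already in flight on the wire during the time between sending the pause and the link partner actually stopping, which is the "headroom" the title refers to.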
20 Claims
1. A method comprising:
in a network device:
maintaining a packet buffer configured to store network traffic received by a first network interface;
determining that the network traffic stored in the packet buffer exceeds a fill threshold; and
in response, sending a headroom buffering indication to the first network interface to cause the first network interface to store in-flight packet data received by the first network interface into an input buffer that is separate from the packet buffer.
(Dependent claims: 2-11)
12. A device comprising:
a packet memory configured to store network traffic according to destination data ports for the network traffic;
data port logic comprising:
a data port; and
an input buffer associated with the data port;
where the data port logic is operable to forward network traffic received through the data port for processing and storing in the packet memory;
buffering logic operable to:
determine that an amount of the network traffic received through the data port and stored in the packet memory exceeds a threshold amount; and
in response, send a buffering indication to the data port logic; and
where the data port logic is further operable to, in response to receiving the buffering indication, store subsequent network traffic received through the data port into the input buffer instead of forwarding the subsequent network traffic for processing and storing in the packet memory.
(Dependent claims: 13-15)
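Claim 12 splits the mechanism across two cooperating components: buffering logic that watches per-port usage of the shared packet memory, and data port logic that diverts traffic once it receives the buffering indication. A minimal sketch of that split, with all class and method names assumed for illustration:

```python
# Illustrative split between data port logic and buffering logic.
# Names are assumptions; the packet memory is modeled as a plain list.

class DataPortLogic:
    def __init__(self):
        self.input_buffer = []   # per-port input buffer, separate from packet memory
        self.buffering = False   # set when a buffering indication is received

    def on_buffering_indication(self):
        self.buffering = True

    def handle(self, packet: bytes, packet_memory: list):
        if self.buffering:
            self.input_buffer.append(packet)  # hold locally instead of forwarding
        else:
            packet_memory.append(packet)      # forward for processing and storage

class BufferingLogic:
    def __init__(self, threshold_bytes: int):
        self.threshold_bytes = threshold_bytes

    def check(self, port: DataPortLogic, packet_memory: list):
        used = sum(len(p) for p in packet_memory)
        if used > self.threshold_bytes:
            port.on_buffering_indication()    # tell the port to divert traffic
```

The point of the indication is that the packet memory accounting lives with the buffering logic, while the redirect decision is made locally at the port, per packet.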
16. A device comprising:
a distributed packet memory architecture comprising:
a primary packet memory comprising:
a first memory block configured to store packet data received through a first ingress port and destined for a first egress port; and
a second memory block configured to store packet data received through the first ingress port and destined for a second egress port different from the first egress port; and
an ingress buffer separate from the packet memory; and
system logic in communication with the packet memory and the ingress buffer, the system logic configured to:
selectively allocate a portion of the ingress buffer as additional memory space for packet data received at the first ingress port, in addition to the first memory block and the second memory block.
(Dependent claims: 17-20)
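Claim 16 describes per-egress memory blocks for one ingress port plus a separate ingress buffer from which extra headroom can be carved on demand. A hedged sketch of that allocation, with all sizes, names, and the two-block layout chosen purely for illustration:

```python
# Illustrative model of selective headroom allocation from a shared
# ingress buffer. Names and sizes are assumptions, not patent terms.

class DistributedPacketMemory:
    def __init__(self, block_size: int, ingress_buffer_size: int):
        self.block_size = block_size
        self.blocks = {"egress0": 0, "egress1": 0}  # bytes used per memory block
        self.ingress_buffer_free = ingress_buffer_size
        self.headroom = 0                           # bytes granted to this ingress port

    def allocate_headroom(self, requested: int) -> int:
        # Selectively carve a portion of the ingress buffer as additional
        # space for the ingress port, bounded by what remains free.
        granted = min(requested, self.ingress_buffer_free)
        self.ingress_buffer_free -= granted
        self.headroom += granted
        return granted

    def store(self, egress: str, nbytes: int) -> bool:
        if self.blocks[egress] + nbytes <= self.block_size:
            self.blocks[egress] += nbytes           # fits in the per-egress block
            return True
        if nbytes <= self.headroom:
            self.headroom -= nbytes                 # spill into allocated headroom
            return True
        return False                                # no room anywhere; drop/backpressure
```

The design choice the claim captures is that headroom is not statically reserved per port; it is allocated from the shared ingress buffer only when a port needs space beyond its memory blocks.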
Specification