PIPELINED PACKET SWITCHING AND QUEUING ARCHITECTURE
Abstract
An architecture for a line card in a network routing device is provided. The line card architecture provides a bi-directional interface between the routing device and a network, both receiving packets from the network and transmitting packets to the network through one or more connecting ports. In both the receive and transmit paths, packets are processed and routed in a multi-stage, parallel pipeline that can operate on several packets at the same time to determine each packet's routing destination. Once a routing destination determination is made, the line card architecture provides for each received packet to be modified to contain new routing information and additional header data to facilitate packet transmission through the switching fabric. The line card architecture further provides for the use of bandwidth management techniques in order to buffer and enqueue each packet for transmission through the switching fabric to a corresponding destination port. The transmit path of the line card architecture further incorporates additional features for treatment and replication of multicast packets.
19 Claims
1. A method for switching packets comprising:
receiving a packet comprising a header portion and a corresponding tail portion;
processing the header portion using a header processing pipeline, wherein
the header processing pipeline comprises a plurality of pipeline stage circuits connected in a sequence, wherein the plurality of pipeline stage circuits comprises at least a fetch stage circuit and a gather stage circuit,
each stage circuit of the plurality of pipeline stage circuits is configured to pass data to a next circuit,
said processing comprises
receiving the header portion and storing the header portion in a packet header buffer, wherein said receiving the header portion and storing the header portion are performed by the fetch stage circuit,
receiving packet type information related to a packet type associated with the header portion from a preceding stage circuit of the plurality of pipeline stage circuits,
selecting a processing profile based on the packet type information,
processing the header portion in accord with the processing profile to generate a modified header portion, and
outputting the modified header portion, wherein
said receiving the packet type information, selecting the processing profile, processing the header portion in accord with the processing profile, and outputting are performed by the gather stage circuit.
(Dependent claims: 2, 3, 4, 5, 6, 7)
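The fetch/gather flow of claim 1 can be sketched in Python. All class names, the `PacketContext` record, and the profile table are illustrative assumptions; the patent defines the stages functionally, not this API. Each stage circuit receives a context and passes it to the next circuit in the sequence; the fetch stage stores the header in a header buffer, and the gather stage selects a processing profile from the packet type information and emits a modified header.

```python
from dataclasses import dataclass

@dataclass
class PacketContext:
    """Data handed from one pipeline stage circuit to the next (hypothetical)."""
    header: bytes
    packet_type: str = "unknown"  # set by an earlier (e.g. parse) stage circuit
    modified_header: bytes = b""

class FetchStage:
    """Receives the header portion and stores it in a packet header buffer."""
    def __init__(self):
        self.header_buffer = []  # stands in for the packet header buffer

    def process(self, ctx: PacketContext) -> PacketContext:
        self.header_buffer.append(ctx.header)
        return ctx  # pass data to the next stage circuit

class GatherStage:
    """Selects a processing profile from packet type info and rewrites the header."""
    # Illustrative profiles: prepend a per-type tag byte (not from the patent).
    PROFILES = {"ipv4": b"\x45", "mpls": b"\x88", "unknown": b"\x00"}

    def process(self, ctx: PacketContext) -> PacketContext:
        profile = self.PROFILES.get(ctx.packet_type, self.PROFILES["unknown"])
        ctx.modified_header = profile + ctx.header  # output the modified header
        return ctx

def run_pipeline(stages, ctx: PacketContext) -> PacketContext:
    """Run the stage circuits connected in a sequence."""
    for stage in stages:
        ctx = stage.process(ctx)
    return ctx
```

In hardware the stages operate concurrently on different packets; the sequential loop here only models the data hand-off order between circuits.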
8. A method for switching packets comprising:
receiving an ingress packet from a corresponding network interface of a plurality of network interfaces;
storing the ingress packet in a buffer memory queue corresponding to the network interface, wherein the buffer memory queue is one of a plurality of buffer memory queues each having a corresponding network interface of the plurality of network interfaces, and each buffer memory queue is coupled to an ingress data path of a line card;
receiving a loopback packet from an egress data path of the line card;
storing the loopback packet in a loopback buffer memory;
selecting a selected packet from one of the plurality of buffer memory queues or the loopback buffer memory using fair bandwidth allocation; and
providing the selected packet to a packet processor.
(Dependent claims: 9, 10, 11, 12, 13)
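The ingress selection of claim 8 can be sketched as an arbiter over per-interface queues plus a loopback buffer. The claim says only "fair bandwidth allocation"; plain round-robin is used here as one simple fair policy, and the `IngressArbiter` name and its methods are assumptions for illustration.

```python
from collections import deque

class IngressArbiter:
    """Round-robin selection over per-interface buffer memory queues
    plus a loopback buffer memory (a sketch, not the patented arbiter)."""
    def __init__(self, num_interfaces: int):
        self.queues = [deque() for _ in range(num_interfaces)]  # per-interface queues
        self.loopback = deque()                                 # loopback buffer memory
        self._next = 0                                          # round-robin pointer

    def enqueue_ingress(self, interface: int, packet):
        """Store an ingress packet in the queue for its network interface."""
        self.queues[interface].append(packet)

    def enqueue_loopback(self, packet):
        """Store a loopback packet from the egress data path."""
        self.loopback.append(packet)

    def select(self):
        """Pick the next packet fairly; returns None if all queues are empty.
        The selected packet would then be provided to the packet processor."""
        candidates = self.queues + [self.loopback]
        n = len(candidates)
        for i in range(n):
            q = candidates[(self._next + i) % n]
            if q:
                self._next = (self._next + i + 1) % n
                return q.popleft()
        return None
```

A weighted or deficit round-robin would likewise satisfy "fair bandwidth allocation" when packet sizes differ; the simple scheme above is fair per packet, not per byte.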
14. A method for switching packets comprising:
storing one or more unicast packet headers in a first queue of a switch fabric interface;
storing one or more multicast packet headers in a second queue of the switch fabric interface;
storing packet tail data in a third queue of the switch fabric interface;
receiving information from an egress packet processor by the switch fabric interface, wherein the egress packet processor is configured to process packets received from the switch fabric for transmission to a network interface; and
selecting data from the first, second, and third queues in response to the received information.
(Dependent claims: 15, 16, 17, 18, 19)
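Claim 14's three-queue switch fabric interface can be sketched as below. The claim does not specify the format of the information received from the egress packet processor; here it is assumed, purely for illustration, to be a "ready" set naming which queues the egress side can currently accept from, and all class and method names are hypothetical.

```python
from collections import deque

class FabricInterface:
    """Three transmit-side queues drained under feedback from the egress
    packet processor (a sketch; the feedback format is an assumption)."""
    def __init__(self):
        self.unicast_headers = deque()    # first queue: unicast packet headers
        self.multicast_headers = deque()  # second queue: multicast packet headers
        self.tails = deque()              # third queue: packet tail data
        self.ready = {"unicast", "multicast", "tail"}  # last egress feedback

    def egress_feedback(self, ready_queues):
        """Record information received from the egress packet processor."""
        self.ready = set(ready_queues)

    def select(self):
        """Select data from the first, second, or third queue in response to
        the received information; fixed priority order is assumed."""
        for name, q in (("unicast", self.unicast_headers),
                        ("multicast", self.multicast_headers),
                        ("tail", self.tails)):
            if name in self.ready and q:
                return name, q.popleft()
        return None
```

Separating header and tail queues lets header processing and fabric scheduling proceed without waiting for full packet bodies; the feedback gate models the egress processor throttling, for example, multicast replication load.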
Specification