Pipelined multiple issue packet switch
3 Assignments
0 Petitions
Abstract
A pipelined multiple issue architecture for a link layer or protocol layer packet switch, which processes packets independently and asynchronously but reorders them into their original order, thus preserving the original incoming packet order. Each stage of the pipeline waits for the immediately previous stage to complete, so the packet switch is self-throttling; this allows differing protocols and features to use the same architecture, even if they require differing processing times. The multiple issue pipeline is scalable to greater parallel issue of packets, and tunable to differing switch engine architectures, differing interface speeds and widths, and differing clock rates and buffer sizes. The packet switch comprises a fetch stage, which fetches the packet header into one of a plurality of fetch caches; a switching stage comprising a plurality of switch engines, each of which independently and asynchronously reads from a corresponding fetch cache, makes switching decisions, and writes to a reorder memory; a reorder engine, which reads from the reorder memory in the packets' original order; and a post-processing stage, comprising a post-process queue and a post-process engine, which performs protocol-specific post-processing on the packets.
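The flow the abstract describes — independent, asynchronous switching followed by restoration of the original packet order — can be sketched in Python. This is an illustrative model, not the patented hardware: the shuffle stands in for out-of-order completion by the switch engines, and the `process` callback for the switching decision plus post-processing.

```python
import heapq
import random

def switch_pipeline(headers, process=lambda h: h.upper()):
    """Model of the pipeline: tag headers with arrival sequence numbers,
    let them complete out of order, then reorder before emitting."""
    # Fetch stage: tag each header with its arrival sequence number.
    tagged = list(enumerate(headers))
    # Switching stage: engines finish asynchronously; a shuffle stands in
    # for out-of-order completion.
    completed = tagged[:]
    random.shuffle(completed)
    # Reorder stage: a min-heap keyed on sequence number releases results
    # only once the next-in-order packet has completed.
    heap, output, next_seq = [], [], 0
    for seq, hdr in completed:
        heapq.heappush(heap, (seq, process(hdr)))
        while heap and heap[0][0] == next_seq:
            output.append(heapq.heappop(heap)[1])
            next_seq += 1
    return output
```

Whatever order the engines finish in, the reorder stage emits results in arrival order, which is why downstream post-processing sees the original packet sequence.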
40 Claims
1. A packet switch comprising
a fetch engine coupled to a source of packet headers;
a plurality of fetch caches coupled to said fetch engine, and disposed to store at least portions of packet headers received therefrom;
a plurality of switch engines, each coupled to a corresponding one of said fetch caches, and disposed to read said portions of packet headers therefrom;
a plurality of reorder/rewrite buffers, each said reorder/rewrite buffer coupled to one of said switch engines, and disposed to store pointers to packet headers received from at least one of said plurality of switch engines;
a reorder/rewrite engine coupled to said plurality of reorder/rewrite buffers, and disposed to read pointers to packet headers therefrom in an order said packet headers were originally received;
a post-process queue coupled to said reorder/rewrite engine, and disposed to store pointers to packet headers received therefrom; and
a post-process engine coupled to said post-process queue, and disposed to process said packet headers.

2. A packet switch comprising:
a switching stage;
a fetch stage coupled to a source of packet headers, said fetch stage being disposed to fetch at least portions of packet headers and present said portions of packet headers in parallel to said switching stage;
said switching stage coupled to said fetch stage, said switching stage being disposed to switch said packet headers asynchronously in parallel and present said packet headers in their original order to a post-process stage; and
said post-process stage coupled to said switching stage, and being disposed to perform protocol-specific processing on said packet headers.
Dependent claims: 3.

4. A packet switch, comprising
a fetch stage coupled to a source of packet headers, said fetch stage being disposed to fetch at least portions of packet headers and present said portions of packet headers in parallel to a switching stage, wherein said fetch stage comprises a fetch engine, said fetch engine being disposed to fetch a first block of M bytes of a packet header in response to a first signal, and being disposed to fetch an additional block of L bytes of said packet header in response to a second signal;
said switching stage coupled to said fetch stage, said switching stage being disposed to switch said packet headers asynchronously in parallel and present said packet headers in their original order to a post-process stage; and
said post-process stage coupled to said switching stage, and being disposed to perform protocol-specific processing on said packet headers.
Dependent claims: 5, 6, 7, 8.

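Claim 4's two-signal fetch can be modeled as follows — a minimal sketch in which the block sizes `m` and `l` are arbitrary illustrative values and the closure stands in for the second fetch signal:

```python
def fetch_header(packet: bytes, m: int = 32, l: int = 16):
    """Fetch the first M bytes of a header immediately; return a
    callback that fetches an additional L-byte block on demand."""
    first = packet[:m]  # first signal: a block of M bytes
    def fetch_more(offset: int) -> bytes:
        # second signal: an additional block of L bytes
        return packet[offset:offset + l]
    return first, fetch_more
```

A switch engine would invoke `fetch_more` only when the protocol in use needs header bytes beyond the initial block, keeping the common case to a single memory access.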
9. A packet switch, comprising
a fetch stage coupled to a source of packet headers, said fetch stage being disposed to fetch at least portions of packet headers and present said portions of packet headers in parallel to a switching stage, wherein said fetch stage comprises a plurality of fetch caches, each one of said fetch caches being coupled to said source of packet headers and each being disposed to store at least a portion of a packet header;
said switching stage coupled to said fetch stage, said switching stage being disposed to switch said packet headers asynchronously in parallel and present said packet headers in their original order to a post-process stage; and
said post-process stage coupled to said switching stage, and being disposed to perform protocol-specific processing on said packet headers.
Dependent claims: 10.

11. A packet switch, comprising
a fetch stage coupled to a source of packet headers, said fetch stage being disposed to fetch at least portions of packet headers and present said portions of packet headers in parallel to a switching stage, wherein said fetch stage comprises a fetch engine coupled to said source of packet headers;
a plurality of fetch caches coupled to said fetch engine, each said fetch cache comprising a plurality of buffers;
wherein said fetch engine is disposed to write at least a portion of each said packet header in sequence to each said fetch cache in a selected buffer thereof;
wherein said switching stage comprises a switch engine for each said fetch cache, wherein each said switch engine is disposed to read at least a portion of said packet header in sequence from each said buffer of said fetch cache;
said switching stage coupled to said fetch stage, said switching stage being disposed to switch said packet headers asynchronously in parallel and present said packet headers in their original order to a post-process stage; and
said post-process stage coupled to said switching stage, and being disposed to perform protocol-specific processing on said packet headers.
Dependent claims: 12.

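The write/read discipline of claim 11 — the fetch engine writing headers in sequence across the fetch caches, each switch engine draining its own cache's buffers in the same sequence — can be sketched as a round-robin distributor. This is a hypothetical Python model: `FetchCaches` is an illustrative name, and deques stand in for the hardware buffers.

```python
from collections import deque

class FetchCaches:
    """Round-robin distribution of headers to per-engine fetch caches."""
    def __init__(self, num_caches: int, buffers_per_cache: int):
        self.caches = [deque(maxlen=buffers_per_cache)
                       for _ in range(num_caches)]
        self.next_cache = 0
    def write(self, header):
        # Fetch engine: write each header in sequence to the next cache,
        # into that cache's currently selected buffer.
        self.caches[self.next_cache].append(header)
        self.next_cache = (self.next_cache + 1) % len(self.caches)
    def read(self, engine_id: int):
        # Switch engine: read from its own cache's buffers in sequence.
        return self.caches[engine_id].popleft()
```

Because distribution and draining both follow the same fixed sequence, each engine sees its share of headers in arrival order, which is what lets the later reorder stage restore the global order cheaply.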
13. A packet switch, comprising
a fetch stage coupled to a source of packet headers, said fetch stage being disposed to fetch at least portions of packet headers and present said portions of packet headers in parallel to a switching stage, wherein said switching stage comprises a plurality of reorder/rewrite memories, each one of said reorder/rewrite memories being disposed to store a pointer to a packet header;
said switching stage coupled to said fetch stage, said switching stage being disposed to switch said packet headers asynchronously in parallel and present said packet headers in their original order to a post-process stage; and
said post-process stage coupled to said switching stage, and being disposed to perform protocol-specific processing on said packet headers.

14. A packet switch, comprising
a fetch stage coupled to a source of packet headers, said fetch stage being disposed to fetch at least portions of packet headers and present said portions of packet headers in parallel to a switching stage;
said switching stage coupled to said fetch stage, said switching stage being disposed to switch said packet headers asynchronously in parallel and present said packet headers in their original order to a post-process stage;
said post-process stage coupled to said switching stage, and being disposed to perform protocol-specific processing on said packet headers; and
wherein said switching stage comprises a plurality of switch engines, each being disposed to receive a packet header and to produce a set of results for switching said packet header;
a plurality of reorder/rewrite memories, each one of said reorder/rewrite memories being disposed to store a packet header; and
a reorder/rewrite processor coupled to said plurality of reorder/rewrite memories and disposed to receive said packet headers from said reorder/rewrite memories in an order in which said packet headers were originally received.

15. A packet switch, comprising
a fetch stage coupled to a source of packet headers, said fetch stage being disposed to fetch at least portions of packet headers and present said portions of packet headers in parallel to a switching stage;
said switching stage coupled to said fetch stage, said switching stage being disposed to switch said packet headers asynchronously in parallel and present said packet headers in their original order to a post-process stage;
said post-process stage coupled to said switching stage, and being disposed to perform protocol-specific processing on said packet headers; and
wherein said switching stage comprises a plurality of switch engines, each being disposed to receive a packet header and to produce a set of results for switching said packet header; and
a plurality of reorder/rewrite memories, each one of said reorder/rewrite memories being disposed to store a packet header;
said reorder/rewrite memories being divided into sets, each said set of reorder/rewrite memories being assigned to and receiving outputs from exactly one said switch engine.

16. A packet switch, comprising
a fetch stage coupled to a source of packet headers, said fetch stage being disposed to fetch at least portions of packet headers and present said portions of packet headers in parallel to a switching stage;
said switching stage coupled to said fetch stage, said switching stage being disposed to switch said packet headers asynchronously in parallel and present said packet headers in their original order to a post-process stage;
said post-process stage coupled to said switching stage, and being disposed to perform protocol-specific processing on said packet headers; and
wherein said switching stage comprises a plurality of switching engines, each said switching engine having a plurality of reorder/rewrite memories coupled thereto, each said switching engine being disposed to write in sequence to one of said plurality of reorder/rewrite memories; and
a reorder/rewrite engine coupled to all said reorder/rewrite memories, said reorder/rewrite engine being disposed to read in sequence from said reorder/rewrite memories.
Dependent claims: 17.

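Claim 16's reorder scheme — each switching engine writing its results in sequence to its own reorder/rewrite memories, a single reorder/rewrite engine reading from all of them in sequence — amounts to a round-robin merge. The sketch below assumes round-robin dispatch and, for brevity, equal-length per-engine result lists; the function name is illustrative.

```python
def reorder(results_per_engine):
    """Restore arrival order without per-packet sequence numbers:
    with round-robin dispatch to N engines, engine i holds packets
    i, i+N, i+2N, ..., so reading one result from each engine in
    turn reproduces the original order."""
    out = []
    for round_of_results in zip(*results_per_engine):
        out.extend(round_of_results)
    return out
```

This is why the sequenced writes and reads in the claim suffice to preserve order: the dispatch pattern itself encodes each packet's position, so no tag needs to travel with the packet.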
18. A packet switch, comprising
a fetch stage coupled to a source of packet headers, said fetch stage being disposed to fetch at least portions of packet headers and present said portions of packet headers in parallel to a switching stage;
said switching stage coupled to said fetch stage, said switching stage being disposed to switch said packet headers asynchronously in parallel and present said packet headers in their original order to a post-process stage; and
said post-process stage coupled to said switching stage, and being disposed to perform protocol-specific processing on said packet headers, wherein said post-process stage comprises a plurality of post-processing memories, each one of said post-processing memories being disposed to store a pointer to a packet header.

19. A packet switch, comprising
a fetch stage coupled to a source of packet headers, said fetch stage being disposed to fetch at least portions of packet headers and present said portions of packet headers in parallel to a switching stage;
said switching stage coupled to said fetch stage, said switching stage being disposed to switch said packet headers asynchronously in parallel and present said packet headers in their original order to a post-process stage; and
said post-process stage coupled to said switching stage, and being disposed to perform protocol-specific processing on said packet headers, wherein said post-process stage comprises a post-processor coupled to said switching stage and disposed to alter at least a portion of a packet header responsive to a switching protocol.

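As one concrete example of the protocol-specific header rewrite in claim 19 (chosen for illustration, not taken from the patent), an IPv4 forwarder typically decrements the TTL and incrementally updates the header checksum. The sketch assumes a standard 20-byte header with no options and uses a simplified incremental update that ignores the 0xFFFF corner case addressed by RFC 1624.

```python
def post_process_ipv4(header: bytes) -> bytearray:
    """Decrement TTL and incrementally patch the IPv4 header checksum."""
    hdr = bytearray(header)
    hdr[8] -= 1  # TTL is byte 8 of the IPv4 header
    # The TTL occupies the high byte of the 16-bit word at offset 8, so
    # decrementing it lowers the one's-complement sum by 0x0100; the
    # stored checksum (its complement) therefore rises by 0x0100,
    # folding any carry back in.
    csum = ((hdr[10] << 8) | hdr[11]) + 0x0100
    csum = (csum & 0xFFFF) + (csum >> 16)
    hdr[10], hdr[11] = csum >> 8, csum & 0xFF
    return hdr
```

A rewrite like this is confined to the post-process stage precisely because it depends on the protocol, while the fetch, switch, and reorder stages stay protocol-agnostic.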
20. A system, comprising
a packet memory;
a plurality of packet switches coupled to said packet memory;
a plurality of reorder memories coupled to said plurality of packet switches; and
a reorder engine coupled to said plurality of reorder memories and disposed to receive packet headers from said reorder memories in an order in which they were originally received;
wherein each packet switch comprises
a fetch stage coupled to said packet memory, said fetch stage being disposed to fetch packet headers from said packet memory and present at least portions of packet headers in parallel to a switching stage; and
said switching stage coupled to said fetch stage, said switching stage being disposed to switch said packet headers asynchronously in parallel and present said packet headers in their original order to a post-process stage.
Dependent claims: 21.

22. A system, comprising
a packet memory;
a plurality of reorder memories;
a reorder engine coupled to said plurality of reorder memories and disposed to receive packet headers from said reorder memories in an order in which they were originally received; and
a plurality of packet switches coupled to said packet memory and said plurality of reorder memories, wherein each one of said plurality of packet switches comprises
a fetch stage coupled to said packet memory, said fetch stage being disposed to fetch packet headers from said packet memory and present at least portions of packet headers in parallel to a switching stage, wherein said fetch stage comprises a fetch engine, said fetch engine being disposed to fetch a first block of M bytes of a packet header in response to a first signal, and being disposed to fetch an additional block of L bytes of said packet header in response to a second signal; and
said switching stage coupled to said fetch stage, said switching stage being disposed to switch said packet headers asynchronously in parallel and present said packet headers in their original order to a post-process stage.
Dependent claims: 23.

24. A system, comprising
a packet memory;
a plurality of reorder memories;
a reorder engine coupled to said plurality of reorder memories and disposed to receive packet headers from said reorder memories in an order in which they were originally received; and
a plurality of packet switches coupled to said packet memory and said plurality of reorder memories, wherein each one of said plurality of packet switches comprises
a fetch stage coupled to said packet memory, said fetch stage being disposed to fetch packet headers from said packet memory and present at least portions of packet headers in parallel to a switching stage, wherein said fetch stage comprises a plurality of fetch caches, each one of said fetch caches being coupled to said source of packet headers and each being disposed to store at least a portion of a packet header; and
said switching stage coupled to said fetch stage, said switching stage being disposed to switch said packet headers asynchronously in parallel and present said packet headers in their original order to a post-process stage.
Dependent claims: 25.

26. A system, comprising
a packet memory;
a plurality of reorder memories;
a reorder engine coupled to said plurality of reorder memories and disposed to receive packet headers from said reorder memories in an order in which they were originally received; and
a plurality of packet switches coupled to said packet memory and said plurality of reorder memories, wherein each one of said plurality of packet switches comprises
a fetch stage coupled to said packet memory, said fetch stage being disposed to fetch packet headers from said packet memory and present at least portions of packet headers in parallel to a switching stage, wherein said fetch stage comprises a fetch engine coupled to said source of packet headers; and
a plurality of fetch caches coupled to said fetch engine, each said fetch cache comprising a plurality of buffers;
wherein said fetch engine is disposed to write at least a portion of each said packet header in sequence to each said fetch cache in a selected buffer thereof;
wherein said switching stage comprises a switch engine for each said fetch cache, wherein each said switch engine is disposed to read at least a portion of said packet header in sequence from each said buffer of said fetch cache;
said switching stage coupled to said fetch stage, said switching stage being disposed to switch said packet headers asynchronously in parallel and present said packet headers in their original order to a post-process stage.

27. A system, comprising
a packet memory;
a plurality of reorder memories;
a reorder engine coupled to said plurality of reorder memories and disposed to receive packet headers from said reorder memories in an order in which they were originally received; and
a plurality of packet switches coupled to said packet memory and said plurality of reorder memories, wherein each one of said plurality of packet switches comprises
a fetch stage coupled to said packet memory, said fetch stage being disposed to fetch packet headers from said packet memory and present at least portions of packet headers in parallel to a switching stage;
wherein said switching stage comprises a plurality of switch engines, each being disposed to receive a packet header and to produce a set of results for switching said packet header;
a plurality of reorder/rewrite memories, each one of said reorder/rewrite memories being disposed to store a packet header; and
a reorder/rewrite processor coupled to said plurality of reorder/rewrite memories and disposed to receive said packet headers from said reorder/rewrite memories in an order in which said packet headers were originally received;
said switching stage coupled to said fetch stage, said switching stage being disposed to switch said packet headers asynchronously in parallel and present said packet headers in their original order to a post-process stage.
Dependent claims: 28.

29. A system, comprising
a packet memory;
a plurality of reorder memories;
a reorder engine coupled to said plurality of reorder memories and disposed to receive packet headers from said reorder memories in an order in which they were originally received; and
a plurality of packet switches coupled to said packet memory and said plurality of reorder memories, wherein each one of said plurality of packet switches comprises
a fetch stage coupled to said packet memory, said fetch stage being disposed to fetch packet headers from said packet memory and present at least portions of packet headers in parallel to a switching stage;
wherein said switching stage comprises a plurality of switch engines, each being disposed to receive a packet header and to produce a set of results for switching said packet header; and
a plurality of reorder/rewrite memories, each one of said reorder/rewrite memories being disposed to store a packet header;
said reorder/rewrite memories being divided into sets, each said set of reorder/rewrite memories being assigned to and receiving outputs from exactly one said switch engine; and
said switching stage coupled to said fetch stage, said switching stage being disposed to switch said packet headers asynchronously in parallel and present said packet headers in their original order to a post-process stage.

30. A system, comprising
a packet memory;
a plurality of reorder memories;
a reorder engine coupled to said plurality of reorder memories and disposed to receive packet headers from said reorder memories in an order in which they were originally received; and
a plurality of packet switches coupled to said packet memory and said plurality of reorder memories, wherein each one of said plurality of packet switches comprises
a fetch stage coupled to said packet memory, said fetch stage being disposed to fetch packet headers from said packet memory and present at least portions of packet headers in parallel to a switching stage;
wherein said switching stage comprises a plurality of switching engines, each said switching engine having a plurality of reorder/rewrite memories coupled thereto, each said switching engine being disposed to write in sequence to one of said plurality of reorder/rewrite memories; and
a reorder/rewrite engine coupled to all said reorder/rewrite memories, said reorder/rewrite engine being disposed to read in sequence from said reorder/rewrite memories;
said switching stage coupled to said fetch stage, said switching stage being disposed to switch said packet headers asynchronously in parallel and present said packet headers in their original order to a post-process stage.

31. A system, comprising
a packet memory;
a plurality of reorder memories;
a reorder engine coupled to said plurality of reorder memories and disposed to receive packet headers from said reorder memories in an order in which they were originally received; and
a plurality of packet switches coupled to said packet memory and said plurality of reorder memories, wherein each one of said plurality of packet switches comprises
a fetch stage coupled to said packet memory, said fetch stage being disposed to fetch packet headers from said packet memory and present at least portions of packet headers in parallel to a switching stage;
said switching stage coupled to said fetch stage, said switching stage being disposed to switch said packet headers asynchronously in parallel and present said packet headers in their original order to a post-process stage, wherein said switching stage comprises a plurality of reorder/rewrite memories, each one of said reorder/rewrite memories being disposed to store a pointer to a packet header.

32. A method of switching packets, said method comprising
fetching a sequence of packet headers corresponding to said packets from a source of said packet headers;
presenting said packet headers in parallel to a plurality of switch engines;
operating said switch engines to switch said packets asynchronously in parallel;
presenting switched packet headers in their original order to a post-processor; and
operating said post-processor to perform protocol-specific processing on said packet headers.

33. A method of switching packets, said method comprising
fetching a sequence of packet headers corresponding to said packets from a source of said packet headers, wherein said step of fetching includes fetching a first block of M bytes of a packet header in response to a first signal and fetching an additional block of L bytes of said packet header in response to a second signal;
presenting said packet headers in parallel to a plurality of switch engines;
operating said switch engines to switch said packets asynchronously in parallel;
presenting switched packet headers in their original order to a post-processor; and
operating said post-processor to perform protocol-specific processing on said packet headers.
Dependent claims: 34.

35. A method of switching packets, said method comprising
fetching a sequence of packet headers corresponding to said packets from a source of said packet headers, wherein said step of fetching includes storing said packet headers in sequence into a plurality of fetch caches;
presenting said packet headers in parallel to a plurality of switch engines;
operating said switch engines to switch said packets asynchronously in parallel;
presenting switched packet headers in their original order to a post-processor; and
operating said post-processor to perform protocol-specific processing on said packet headers.

36. A method of switching packets, said method comprising
fetching a sequence of packet headers corresponding to said packets from a source of said packet headers;
presenting said packet headers in parallel to a plurality of switch engines;
operating said switch engines to switch said packets asynchronously in parallel;
presenting switched packet headers in their original order to a post-processor; and
operating said post-processor to perform protocol-specific processing on said packet headers, wherein said step of operating said post-processor comprises altering at least a portion of a packet header.

37. A method of switching packets, said method comprising
fetching a sequence of packet headers corresponding to said packets from a source of said packet headers;
presenting said packet headers in parallel to a plurality of switch engines;
operating said switch engines to switch said packets asynchronously in parallel, wherein said step of operating said switch engines includes coupling each said packet header to a selected fetch cache, coupling each said fetch cache to a selected switch engine, and coupling a set of results from said selected switch engine to a reorder/rewrite memory;
presenting switched packet headers in their original order to a post-processor; and
operating said post-processor to perform protocol-specific processing on said packet headers.
Dependent claims: 38.

39. A system, including:
a packet memory;
a plurality of packet switches coupled to said packet memory, wherein each packet switch includes a fetch stage and a switching stage;
said fetch stage being coupled to said packet memory and disposed to fetch packet headers from said packet memory and present at least portions of said packet headers in parallel to said switching stage;
said switching stage being coupled to said fetch stage and disposed to switch said packet headers asynchronously in parallel and present said packet headers in their original order;
a plurality of reorder memories coupled to said plurality of packet switches; and
a reorder engine coupled to said plurality of reorder memories and disposed to receive packet headers from said reorder memories in an order in which they were originally received.
Dependent claims: 40.

Specification