Logical separation and accessing of descriptor memories
Abstract
A packet header processing engine includes a memory having a number of distinct portions for respectively storing different types of descriptor information for a header of a packet. A packet header processing unit includes a number of pointers corresponding to the number of distinct memory portions. The packet header processing unit is configured to retrieve the different types of descriptor information from the number of distinct memory portions and to generate header information from the different types of descriptor information.
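The memory organization described in the abstract can be sketched in a few lines: distinct memory portions, one per descriptor type, each tracked by its own pointer, with header information assembled from all portions. This is a minimal illustrative model only, not the patented implementation; the portion names (template, tag, lower layer) are assumptions drawn from the claims.

```python
# Illustrative sketch of a descriptor memory with distinct portions and a
# pointer per portion (names are assumptions, not the patent's structures).
class DescriptorMemory:
    def __init__(self):
        # Distinct portions for different types of descriptor information.
        self.portions = {
            "template": [],
            "tag": [],
            "lower_layer": [],
        }
        # One pointer corresponding to each distinct memory portion.
        self.pointers = {name: 0 for name in self.portions}

    def write(self, portion, descriptor):
        self.portions[portion].append(descriptor)

    def read(self, portion):
        # Retrieve the next descriptor from the named portion and
        # advance only that portion's pointer.
        idx = self.pointers[portion]
        descriptor = self.portions[portion][idx]
        self.pointers[portion] = idx + 1
        return descriptor


def build_header(mem):
    # Generate header information from the different descriptor types.
    return {name: mem.read(name) for name in ("template", "tag", "lower_layer")}


mem = DescriptorMemory()
mem.write("template", "T0")
mem.write("tag", "VLAN:42")
mem.write("lower_layer", "ETH")
header = build_header(mem)
```

Keeping the pointers separate is what lets each descriptor type be accessed independently, which is the logical separation the title refers to.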
50 Citations
11 Claims
1. A network device comprising:

   a plurality of parallel processing engines, each processing engine in the plurality of parallel processing engines including:

   a lower layer execution unit to:
   receive first header information, where the first header information includes template data, tag descriptor data, and lower layer descriptor data, and
   process the received first header information to obtain lower layer header data based on the template data, the tag descriptor data, and the lower layer descriptor data; and

   a higher layer execution unit to:
   receive second header information, and
   process the received second header information to obtain higher layer header data, the higher layer execution unit performing the processing of the received second header information in parallel with the lower layer execution unit performing the processing of the received first header information to form a combined header of the lower layer header data and the higher layer header data.

   View Dependent Claims: 2, 3, 4, 5, 6, 7, 8, 9
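The parallel arrangement recited in claim 1 can be sketched as two execution units run concurrently, with their outputs joined into one combined header. This is a hedged sketch only; the function names, the thread-based concurrency, and the string formats are assumptions for illustration, not the claimed hardware.

```python
# Illustrative sketch of claim 1: a lower layer execution unit and a higher
# layer execution unit process their header information in parallel, and the
# two results form a combined header. All names here are assumptions.
from concurrent.futures import ThreadPoolExecutor


def lower_layer_unit(template_data, tag_descriptor, lower_descriptor):
    # Obtain lower layer header data from the three descriptor inputs.
    return f"{template_data}|{tag_descriptor}|{lower_descriptor}"


def higher_layer_unit(second_header_info):
    # Obtain higher layer header data from the second header information.
    return second_header_info.upper()


def processing_engine(first_info, second_info):
    # Run both execution units concurrently, then form the combined header
    # of the lower layer header data and the higher layer header data.
    with ThreadPoolExecutor(max_workers=2) as pool:
        lower = pool.submit(lower_layer_unit, *first_info)
        higher = pool.submit(higher_layer_unit, second_info)
        return lower.result() + "+" + higher.result()


combined = processing_engine(("eth_tmpl", "vlan42", "mac"), "ipv4 header")
```

A network device per the claim would replicate `processing_engine` across a plurality of such engines, one per packet stream.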
10. A method implemented in a network device comprising a plurality of parallel processing engines, the method comprising:

    receiving, at a lower layer execution unit in one of the plurality of parallel processing engines, first header information, where the first header information includes template data, tag descriptor data, and lower layer descriptor data;
    generating, at the lower layer execution unit, lower layer header data using the template data, the tag descriptor data, and the lower layer descriptor data;
    receiving, at a higher layer execution unit in the one of the plurality of parallel processing engines, second header information;
    generating, at the higher layer execution unit, higher layer header data using the received second header data, the generating the higher layer header data occurring in parallel with generating the lower layer header data; and
    combining the lower layer header data and the higher layer header data to form a combined header.

    View Dependent Claims: 11
Specification