Method of and system for physically distributed, logically shared, and data slice-synchronized shared memory switching
Abstract
An improved data networking technique and apparatus using a novel physically distributed but logically shared and data-sliced synchronized shared memory switching datapath architecture integrated with a novel distributed data control path architecture to provide ideal output-buffered switching of data in networking systems, such as routers and switches, to support the increasing port densities and line rates with maximized network utilization and with per flow bit-rate latency and jitter guarantees, all while maintaining optimal throughput and quality of services under all data traffic scenarios, and with features of scalability in terms of number of data queues, ports and line rates, particularly for requirements ranging from network edge routers to the core of the network, thereby to eliminate both the need for the complication of centralized control for gathering system-wide information and for processing the same for egress traffic management functions and the need for a centralized scheduler, and eliminating also the need for buffering other than in the actual shared memory itself, all with complete non-blocking data switching between ingress and egress ports, under all circumstances and scenarios.
131 Claims
1. A method of non-blocking output-buffered switching of time-successive lines of input data streams along a data path between N input and N output data ports provided with corresponding respective ingress and egress data line cards, and wherein each ingress data port line card receives L bits of data per second of an input data stream to be fed to M memory slices and written to the corresponding memory banks and ultimately read by the corresponding output port egress data line cards, the method comprising,
creating a physically distributed logically shared memory datapath architecture wherein each line card is associated with a corresponding memory bank, a memory controller and a traffic manager;
connecting each ingress line card to its corresponding memory bank and also to the memory bank of every other line card through an N×M mesh, providing each input port ingress line card with data write access to all the M memory banks, and wherein each data link provides L/M bits per second path utilization;
connecting the M memory banks through an N×M mesh to egress line cards of the corresponding output data ports, with each memory bank being connected not only to its corresponding output port but also to every other output port as well, providing each output port egress line card with data read access to all the M memory banks;
segmenting each of the successive lines of each input data stream at each ingress data line card into a row of M data segment slices along the line;
partitioning data queues for the memory banks into M physically distributed separate column slices of memory data storage locations or spaces, one corresponding to each data segment slice;
writing each such data segment slice of a line along the corresponding link of the ingress N×M mesh into its corresponding memory bank column slice at the same predetermined corresponding storage location or space address in its respective corresponding memory bank column slices as the other data segment slices of the data line occupy in their respective memory bank column slice, whereby the writing-in and storage of the data line slices occurs in lockstep as a row across the M memory bank column slices; and
writing the data segment slices of the next successive data line into their corresponding memory bank column slices at the same queue storage location or space address thereof adjacent the storage location or space row address in that memory bank column slice of the corresponding data segment slice already written in from the preceding input data stream line.
Dependent claims: 2-49, 117-120, 127-130.
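The lockstep write recited in claim 1 can be illustrated with a minimal sketch. This is not the patented implementation: the slice count `M`, the line width, and the list-backed "banks" are all assumptions chosen only to show how each line is segmented into M slices that land at the same row address across M separate memory bank column slices, with the next line landing at the adjacent row.

```python
M = 4           # number of memory slices / banks (assumed for illustration)
LINE_BYTES = 8  # bytes per input data line (assumed)

# Each memory bank column slice is modeled as an append-only list of cells.
# Lockstep writing means slice i of every line occupies the same row
# address in bank i that its sibling slices occupy in their banks.
banks = [[] for _ in range(M)]

def write_line(line: bytes) -> int:
    """Segment one line into M equal slices and write them as one row."""
    assert len(line) == LINE_BYTES and LINE_BYTES % M == 0
    width = LINE_BYTES // M
    row = len(banks[0])            # same storage address in every bank
    for i in range(M):
        banks[i].append(line[i * width:(i + 1) * width])
    return row

def read_line(row: int) -> bytes:
    """Reassemble a line by reading the same row from all M banks."""
    return b"".join(banks[i][row] for i in range(M))

r0 = write_line(b"ABCDEFGH")
r1 = write_line(b"IJKLMNOP")       # next line lands at the adjacent row
assert read_line(r0) == b"ABCDEFGH"
assert r1 == r0 + 1
```

Because every bank receives exactly one slice per line, each of the M links carries only L/M bits per second of the port's L bits per second, which is the path-utilization property the claim recites.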
50. A method of non-blocking output-buffered switching of time-successive lines of input data streams along a data path between N input and N output data ports provided with corresponding respective ingress and egress data line cards, and wherein each ingress data port line card receives L bits of data per second of an input data stream to be fed to M memory slices and written to corresponding memory banks and ultimately read by corresponding output port egress data line cards, the method comprising, providing a non-blocking matrix of two-element memory stages for the memory banks to guarantee a non-blocking data write path from the N input ports and a non-blocking data read path from the N output ports, wherein the memory stages comprise a combined SRAM memory element enabling temporary data storage therein that builds blocks of data on a per queue basis, and a relatively low speed DRAM main memory element for providing main data packet buffer memory.
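The two-element memory stage of claim 50 can be sketched as follows. The names (`sram`, `dram`) and the block size are assumptions for illustration, not the patented design: a small fast buffer accumulates data per queue until a full block is built, then commits the block to the larger, slower main packet buffer in one bulk transfer, which is what lets a low-speed DRAM keep up with per-slice write traffic.

```python
from collections import defaultdict

BLOCK = 4  # slices accumulated per queue before one bulk DRAM write (assumed)

sram = defaultdict(list)   # queue id -> slices awaiting a full block (fast stage)
dram = defaultdict(list)   # queue id -> committed blocks (main packet buffer)

def enqueue(queue: int, slice_: bytes) -> None:
    """Stage a slice in SRAM; flush to DRAM once a full block is built."""
    sram[queue].append(slice_)
    if len(sram[queue]) == BLOCK:       # block built: one wide DRAM write
        dram[queue].append(b"".join(sram[queue]))
        sram[queue].clear()

for n in range(8):
    enqueue(0, bytes([n]))

assert len(dram[0]) == 2       # eight slices flushed as two full blocks
assert sram[0] == []           # fast stage drained after each flush
```

Batching per-queue writes into blocks trades a small amount of fast staging memory for far fewer, wider accesses to main memory, which is the usual rationale for SRAM-fronted DRAM packet buffers.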
64. Apparatus for non-blocking output-buffered switching of time-successive lines of input data streams along a data path between N input and N output data ports provided with corresponding respective ingress and egress data line cards, and wherein each ingress data port line card receives L bits of data per second of an input data stream to be fed to M memory slices and written to the corresponding memory banks and ultimately read by the corresponding output port egress data line cards, the apparatus having, in combination,
a physically distributed logically shared memory datapath architecture wherein each line card is associated with a corresponding memory bank, a memory controller and a traffic manager, and wherein each ingress line card is connected to its corresponding memory bank and also to the memory bank of every other line card through an N×M mesh, providing each input port ingress line card with data write access to all the M memory banks, and wherein each data link provides L/M bits per second path utilization;
a further N×M mesh connecting the M memory banks to egress line cards of the corresponding output data ports, with each memory bank being connected not only to its corresponding output port but also to every other output port as well, providing each output port egress line card with data read access to all the M memory banks;
means for segmenting each of the successive lines of each input data stream at each ingress data line card into a row of M data segment slices along the line;
means for partitioning data queues for the memory banks into M physically distributed separate column slices of memory data storage locations or spaces, one corresponding to each data segment slice;
means for writing each such data segment slice of a line along the corresponding link of the ingress N×M mesh into its corresponding memory bank column slice and at the same predetermined corresponding storage location or space address in its respective corresponding memory bank column slice as the other data segment slices of the data line occupy in their respective memory bank column slice, whereby the writing-in and storage of the data line slices occurs in lockstep as a row across the M memory bank column slices; and
means for writing the data segment slices of the next successive data line into their corresponding memory bank column slices at the same queue storage location or space address thereof adjacent the storage location or space row address in that memory bank column slice of the corresponding data segment slice already written in from the preceding input data stream line.
Dependent claims: 65-99, 105-116, 121-124, 126.
125. An apparatus for non-blocking output-buffered switching of time-successive lines of input data streams along a data path between N ingress and N egress data ports provided with corresponding respective ingress and egress data line cards, and wherein each ingress data port line card receives L bits of data per second of an input data stream to be fed to M memory slices and written to the corresponding memory banks and ultimately read by the corresponding output port egress data line cards, the apparatus having, in combination, a non-blocking matrix of two-element memory stages for the memory banks to guarantee a non-blocking data write path from the N ingress ports and a non-blocking data read path from the N egress ports, wherein the memory stages comprise a combined SRAM memory element enabling temporary data storage therein that builds blocks of data on a per queue basis, and a relatively low speed DRAM main memory element for providing primary data packet buffer memory.