Data processing system having memory controller for supplying current request and next request for access to the shared memory pipeline
First Claim
1. A data processing system, comprising:
- a shared memory comprising synchronous dynamic random access memory SDRAM and a shared memory pipeline coupled to the SDRAM, which executes read, write and refresh requests for access in a pipeline fashion;
- a plurality of data paths coupled to the shared memory which generate requests for access to the shared memory, the requests having characteristics including a starting address, a length and a type; and
- a memory controller, coupled to the plurality of data paths, which stores requests from the plurality of data paths, supplies a current request for access to the shared memory pipeline, and selects from the stored requests, a next request for access for the shared memory pipeline.
Abstract
A router includes synchronous dynamic random access memory (SDRAM) based shared memory, with a controller configured to control the order in which SDRAM access is granted to a plurality of interfaced components. In one embodiment, the controller's configuration minimizes the amount of time data from a particular source must wait to be written to and read from the SDRAM, and thus minimizes latency. In a different embodiment, the controller's configuration maximizes the amount of data written to and read from said SDRAM in a given amount of time and thus maximizes bandwidth. In yet another embodiment, characteristics of the latency minimization embodiment and the bandwidth maximization embodiment are combined to create a hybrid configuration.
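The three arbitration policies described in the abstract can be sketched as follows. All names, fields, and tie-breaking rules here are illustrative assumptions for exposition; the patent does not disclose this code, and the hybrid's age threshold is an invented parameter.

```python
# Illustrative sketch of the latency-minimizing, bandwidth-maximizing,
# and hybrid arbitration policies summarized in the abstract.
from collections import namedtuple

# A pending access request: originating source, arrival time (in
# arbitration cycles), and burst length in words. These fields are
# assumptions, not taken from the patent.
Request = namedtuple("Request", ["source", "arrival", "length"])

def select_latency(pending):
    # Latency-minimizing embodiment: serve the oldest request first,
    # so no source waits longer than necessary.
    return min(pending, key=lambda r: r.arrival)

def select_bandwidth(pending):
    # Bandwidth-maximizing embodiment: serve the longest burst first,
    # moving the most data per arbitration decision.
    return max(pending, key=lambda r: r.length)

def select_hybrid(pending, now, age_limit=4):
    # Hybrid embodiment (sketch): prefer long bursts, but fall back to
    # oldest-first once a request has aged past age_limit cycles, so
    # short transfers are not starved.
    oldest = min(pending, key=lambda r: r.arrival)
    if now - oldest.arrival >= age_limit:
        return oldest
    return max(pending, key=lambda r: r.length)
```

Under this sketch, a short, old request wins arbitration in hybrid mode only after its age crosses the threshold; otherwise the longer burst is served first.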
16 Claims
1. A data processing system, comprising:
- a shared memory comprising synchronous dynamic random access memory SDRAM and a shared memory pipeline coupled to the SDRAM, which executes read, write and refresh requests for access in a pipeline fashion;
- a plurality of data paths coupled to the shared memory which generate requests for access to the shared memory, the requests having characteristics including a starting address, a length and a type; and
- a memory controller, coupled to the plurality of data paths, which stores requests from the plurality of data paths, supplies a current request for access to the shared memory pipeline, and selects from the stored requests, a next request for access for the shared memory pipeline.

Dependent claims: 2, 3, 4, 5, 6, 7
8. A network intermediate system, comprising:
- a backplane bus coupled to a plurality of input/output nodes;
- a central processing node coupled to the backplane bus, including a host processor and a shared memory comprising synchronous dynamic random access memory SDRAM and a shared memory pipeline coupled to the SDRAM, which executes read, write and refresh requests for access in a pipeline fashion;
- the host processor and the plurality of input/output nodes generating requests for access to the shared memory, the requests having characteristics including a starting address, a length and a type; and
- a memory controller, coupled to the host processor and the backplane bus, which stores requests from the host processor and the plurality of input/output nodes, supplies a current request for access to the shared memory pipeline, and selects from the stored requests, a next request for access for the shared memory pipeline.

Dependent claims: 9, 10, 11, 12, 13, 14, 15
16. A network intermediate system, comprising:
- a backplane bus coupled to a plurality of input/output nodes;
- a central processing node coupled to the backplane bus, including a host processor and a shared memory comprising synchronous dynamic random access memory SDRAM and a shared memory pipeline coupled to the SDRAM, which executes read, write and refresh requests for access in a pipeline fashion;
- the host processor and the plurality of input/output nodes generating requests for access to the shared memory, the requests having characteristics including a starting address, a length and a type; and
- a memory controller, coupled to the host processor and the backplane bus, which stores requests from the host processor and the plurality of input/output nodes, supplies a current request for access to the shared memory pipeline, and selects from the stored requests, a next request for access for the shared memory pipeline, including logic responsive to the characteristics of a request currently in the shared memory pipeline, and to the characteristics of the pending requests to select a next request for access to improve pipeline fullness.
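Claim 16's selection logic, responsive to the characteristics of the current and pending requests, can be sketched as below. The bank mapping, scoring weights, and field names are illustrative assumptions; the claim recites only that the characteristics (starting address, length, type) drive the choice so as to improve pipeline fullness.

```python
# Hypothetical sketch of the "pipeline fullness" selection logic of
# claim 16. The concrete heuristics here (bank interleaving, scoring
# weights) are assumptions for illustration, not the claimed design.
NUM_BANKS = 4

def bank_of(address):
    # Assume simple bank interleaving on the low-order address bits.
    return address % NUM_BANKS

def fullness_score(current, candidate):
    # Score a pending request against the request currently in the
    # shared memory pipeline. Requests are dicts with "type" (read or
    # write) and "start" (starting address) keys.
    score = 0
    # Keeping the same access type avoids a data-bus turnaround bubble.
    if candidate["type"] == current["type"]:
        score += 2
    # A different bank lets activate/precharge overlap the current burst.
    if bank_of(candidate["start"]) != bank_of(current["start"]):
        score += 1
    return score

def select_next(current, pending):
    # Pick the pending request that best keeps the SDRAM pipeline full.
    return max(pending, key=lambda r: fullness_score(current, r))
```

For example, while a read to bank 0 occupies the pipeline, this sketch would prefer a pending read to a different bank over a pending write to the same bank.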
Specification