Memory stream buffer with variable-size prefetch depending on memory interleaving configuration
2 Assignments
0 Petitions
Abstract
A read buffering system and method employs a bank of FIFOs to hold sequential read data for a number of data streams being fetched by a computer. The FIFOs are located in the memory controller, so the system bus is not used in the memory accesses needed to fill the stream buffer. The buffer system stores addresses used for read requests made by a CPU, and if a next sequential address is then detected in a subsequent read request, this is designated to be a stream (i.e., sequential reads). When a stream is thus detected, data is fetched from DRAM memory for addresses following the sequential address, and this prefetched data is stored in one of the FIFOs. A FIFO is selected using a least-recently-used algorithm. When the CPU subsequently makes a read request for data in a FIFO, this data can be returned without making a memory access, and so the access time seen by the CPU is shorter. By taking advantage of page mode, access to the DRAM memory for the prefetch operations can be transparent to the CPU, resulting in substantial performance improvement if sequential accesses are frequent. One feature is appending page mode read cycles to a normal read, in order to fill the FIFO. The data is stored in the DRAMs with ECC check bits, and error detection and correction (EDC) is performed on the read data downstream of the stream buffer, so the data in the stream buffer is protected by EDC.
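The detection step described in the abstract (remember the address following each read, and flag a later read that hits a remembered address as a stream) can be sketched in C. The queue depth, type, and function names below are illustrative assumptions, not taken from the patent text:

```c
/* Hypothetical sketch of the stream-detect step: the controller
 * remembers address + 1 for each recent read, and a later read that
 * matches a remembered address is declared a stream (sequential
 * access). HISTORY_DEPTH and all names are assumptions. */
#include <stdbool.h>
#include <stddef.h>

#define HISTORY_DEPTH 4   /* assumed size of the address queue */

typedef struct {
    unsigned long next_addr[HISTORY_DEPTH]; /* address + 1 of past reads */
    size_t head;                            /* next slot to overwrite */
} stream_detector;

/* Record the address following this read, and report whether the
 * read itself continues a previously observed sequence. */
bool observe_read(stream_detector *d, unsigned long addr)
{
    bool stream = false;
    for (size_t i = 0; i < HISTORY_DEPTH; i++)
        if (d->next_addr[i] == addr)
            stream = true;            /* sequential pattern detected */
    d->next_addr[d->head] = addr + 1; /* remember the successor */
    d->head = (d->head + 1) % HISTORY_DEPTH;
    return stream;
}
```

A read of address N followed by a read of N + 1 would trip the detector on the second read, which is the event the abstract uses to trigger prefetching into a FIFO.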
122 Citations
20 Claims
1. A method of buffering data read from a memory coupled to a CPU, wherein said memory is configured into one of a plurality of interleave patterns with other memories also coupled to said CPU, comprising the steps of:
storing an address sequentially following the address used for a read request made to said memory by said CPU;
detecting if a subsequent read request is made using an address which is equal to the stored sequential address, and, if so, generating a stream detect signal;
in response to said stream detect signal, fetching data from said memory at addresses following the stored sequential address and storing said data in a buffer, the maximum number of blocks of said data fetched from said memory and stored in said buffer being inversely proportional to the number of memories interleaved according to said interleave pattern; and
if said CPU sends a read request to said memory for data and said requested data is in said buffer, sending said data from said buffer to said CPU without accessing said memory for said requested data.
View Dependent Claims (2, 3, 4, 5)
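The interleave-dependent limit in claim 1 (prefetch depth inversely proportional to the number of interleaved memories) can be sketched as a one-line policy function. The base depth of 8 and the exact relation are assumptions; the claim requires only inverse proportionality:

```c
/* Sketch of claim 1's variable-size prefetch: the more memory
 * modules are interleaved, the fewer blocks each module's stream
 * buffer prefetches, since sequential addresses rotate across
 * modules. BASE_PREFETCH is an assumed depth for the
 * non-interleaved case. */
#define BASE_PREFETCH 8

unsigned max_prefetch_blocks(unsigned interleave_ways)
{
    /* inversely proportional: 1-way -> 8, 2-way -> 4, 4-way -> 2 */
    return BASE_PREFETCH / interleave_ways;
}
```

The intuition is that with 4-way interleaving, only every fourth block of a sequential stream lands in a given memory, so each memory's buffer needs to hold proportionally fewer blocks to cover the same stream.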
6. A system for reading data from a memory coupled to a CPU in response to read requests received from said CPU, wherein said memory is configured into one of a plurality of interleave patterns with other memories also coupled to said CPU, comprising:
a read buffer including a plurality of FIFOs, each FIFO having a plurality of entries;
an address queue for receiving and storing an address sequentially following the address of a read request sent by said CPU to said memory during a period of said requests;
a stream detect circuit for producing a stream detect signal in response to a read request having an address equal to the sequential address stored in said address queue;
means responsive to said stream detect signal for selecting one of said FIFOs of said read buffer for storing sequential data;
means for fetching data from said memory at addresses following the sequential address stored in said address queue and loading said fetched data into said selected FIFO, the maximum number of blocks of said data fetched from said memory and stored in said buffer being inversely proportional to the number of memories interleaved according to said interleave pattern; and
means, responsive to a read request from said CPU for data in said memory, for sending said requested data from said selected FIFO to said CPU without accessing said memory if said requested data is in said selected FIFO.
View Dependent Claims (7, 8, 9, 10)
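Claim 6's FIFO-selection means is described in the abstract as a least-recently-used algorithm. A minimal sketch, assuming a timestamp per FIFO (real hardware would more likely use age bits than counters):

```c
/* Sketch of LRU selection among the stream-buffer FIFOs: the FIFO
 * whose last hit is oldest is reallocated for a newly detected
 * stream. NUM_FIFOS, last_used, and the timestamp scheme are
 * assumed details, not from the claim text. */
#include <stddef.h>

#define NUM_FIFOS 4

unsigned long last_used[NUM_FIFOS]; /* pseudo-time of each FIFO's last hit */

size_t select_lru_fifo(void)
{
    size_t victim = 0;
    for (size_t i = 1; i < NUM_FIFOS; i++)
        if (last_used[i] < last_used[victim])
            victim = i;             /* older hit -> better victim */
    return victim;
}
```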
11. A computer system, comprising:
(a) a CPU coupled to a memory by a system bus, the CPU sending memory read requests to said memory by said system bus, wherein said memory is configured into one of a plurality of interleave patterns with other memories also coupled to said CPU by said system bus;
(b) a memory controller coupled between said memory and said system bus, said memory controller including:
a read buffer, the read buffer having a plurality of FIFOs, each FIFO having a plurality of entries;
an address queue for receiving and storing the address of a read request sent by said CPU to said memory during a period of said requests;
a stream detector for producing a stream detect signal in response to a subsequent read request having an address following the sequential address stored in said address queue and loading said fetched data into said selected FIFO, the maximum number of blocks of said data fetched from said memory and stored in said buffer being inversely proportional to the number of memories interleaved according to said interleave pattern; and
means, responsive to a read request received from said CPU for data in said memory, for sending said requested data from said selected FIFO to said CPU without accessing said memory if said requested data is in said selected FIFO.
View Dependent Claims (12, 13, 14, 15, 16)
17. A memory system, coupled to a central processor unit (CPU), for providing data to said CPU in response to a plurality of read requests from said CPU, said memory system comprising:
a memory interleaved with one or more other memories;
a stream buffer coupled to said memory;
means, responsive to said read requests from said CPU, for detecting a sequential relationship between addresses of successive read requests;
means, upon detecting said sequential relationship, for fetching one or more blocks of data from said memory from an address following said sequentially related addresses, a maximum number of blocks of data fetched from said memory being inversely proportional to the number of memories interleaved in said memory system;
means for storing said fetched data in said stream buffer;
means for detecting a transaction initiated by said CPU during said fetching; and
means for discontinuing said fetching upon detecting said transaction.
View Dependent Claims (18, 19, 20)
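Claim 17 adds a discontinue-on-transaction behavior: prefetching proceeds only until a new CPU transaction is detected, which keeps the page-mode prefetch reads transparent to the CPU. A minimal sketch, with a hypothetical `pending` callback standing in for the bus-snoop logic:

```c
/* Sketch of claim 17's abort condition: issue up to max_blocks
 * page-mode prefetch reads, but stop as soon as the CPU starts a
 * new transaction. prefetch_blocks and the callback are assumed
 * names, not from the patent. */
#include <stdbool.h>
#include <stddef.h>

size_t prefetch_blocks(size_t max_blocks, bool (*pending)(void))
{
    size_t fetched = 0;
    while (fetched < max_blocks && !pending()) {
        /* ... one page-mode DRAM read would be issued here ... */
        fetched++;
    }
    return fetched; /* number of blocks actually buffered */
}

/* demo stub: a CPU transaction arrives after two prefetch beats */
static int beats;
static bool demo_pending(void) { return ++beats > 2; }
```

With `demo_pending`, a request for eight blocks stops after two, modeling the prefetch yielding the memory to the CPU as soon as real demand traffic appears.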
Specification