Pipeline utilizing an integral cache for transferring data to and from a register
Abstract
A load/store pipeline in a computer processor for loading data to registers and storing data from the registers has a cache memory within the pipeline for storing data. The pipeline includes buffers which support multiple outstanding read request misses. Data from outside the pipeline, corresponding to the request misses, is obtained independently of the operation of the pipeline; the cache memory can then be filled with the requested data. The provision of a cache memory within the pipeline, and the buffers which support it, speeds up loading operations for the computer processor.
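The abstract's non-blocking miss handling can be sketched as a minimal Python model. All names here (the class, the queues) are illustrative assumptions, not the patent's implementation: an output FIFO queues outstanding miss requests so the pipeline can keep issuing loads, and an input buffer absorbs fill data independently of pipeline operation.

```python
from collections import deque

class MissHandlingCache:
    """Illustrative sketch of a data cache with multiple outstanding misses."""

    def __init__(self):
        self.data = {}                 # data cache: address -> value
        self.output_fifo = deque()     # outstanding miss requests leaving the pipeline
        self.input_buffer = deque()    # fill data returned from outside the pipeline

    def load(self, addr):
        if addr in self.data:
            return self.data[addr]     # hit: serviced within the pipeline
        self.output_fifo.append(addr)  # miss: request is queued, pipeline continues
        return None                    # the load remains pending

    def fill(self, addr, value):
        self.input_buffer.append((addr, value))

    def drain_fills(self):
        # Fills are absorbed independently of the pipeline's own operation.
        while self.input_buffer:
            addr, value = self.input_buffer.popleft()
            self.data[addr] = value
```

Two loads can miss back to back without stalling each other; both requests sit in the output FIFO until the corresponding fills arrive.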
2 Claims
1. A load/store pipeline in a scalar processor for loading data to registers and storing data from said registers, the pipeline comprising:

a) a register file which holds the results of an arithmetic logic operation;
b) a translation buffer and a cache tag look up which are both coupled in parallel to the register file and receive a virtual address from the register file, said translation buffer performing a translation of the virtual address into a physical address, the cache tag look up performing a look up on untranslated bits of said virtual address;
c) a comparator for comparing an output of said cache tag look up and an output of said translation buffer and producing a hit or miss signal based on said comparison;
d) a data cache coupled to said register file which stores or retrieves data when there is a hit signal generated by said comparator;
e) an output fifo coupled to said translation buffer which sends information out of said pipeline when there is a miss signal generated by said comparator to request data which is to be filled into said data cache;
f) an input buffer coupled to said data cache which receives data from out of said pipeline which is to be filled into said data cache; and
g) a memory reference tag which sends the tag corresponding to the data received by the input buffer to be stored in said cache tag look up.
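The parallel address path of claim 1 (elements b and c) can be sketched in Python. The key idea is that the low, untranslated bits of the virtual address index the cache tags while the translation buffer converts the high bits to a physical page number, so both lookups proceed concurrently and the comparator checks their outputs. All names and sizes below are illustrative assumptions, not taken from the patent.

```python
PAGE_BITS = 12  # assumed 4 KiB pages: bits [11:0] are identical in VA and PA

class LoadStorePath:
    """Illustrative model of a parallel translation-buffer / cache-tag lookup."""

    def __init__(self):
        self.tlb = {}         # virtual page number -> physical page number
        self.cache_tags = {}  # cache index (untranslated bits) -> stored physical tag

    def access(self, vaddr):
        vpn = vaddr >> PAGE_BITS
        index = vaddr & ((1 << PAGE_BITS) - 1)  # untranslated bits of the VA
        # In hardware both lookups happen in parallel; modelled sequentially here.
        ppn = self.tlb.get(vpn)                 # translation buffer output
        tag = self.cache_tags.get(index)        # cache tag look up output
        hit = ppn is not None and tag == ppn    # comparator
        return "hit" if hit else "miss"
```

Because only untranslated bits index the tags, the tag lookup never has to wait for the translation to finish; the comparator resolves hit or miss once both outputs are available.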
2. A pipeline in a computer processor for loading data from a cache memory to registers and storing data from the registers to the cache memory, the pipeline comprising:

a) an address generator operating to generate address information entries serially, each one of the address information entries relating to a corresponding one of a series of data entries;
b) a cache tag look-up and comparator device coupled to the address generator to receive the address information entries serially from the address generator and operating to perform a cache tag look-up and tag compare operation, serially, for each one of the address information entries, the cache tag look-up and comparator device outputting one of a hit or miss signal as a result of the cache tag look-up and compare operation;
c) a data cache including an input coupled to each of the address generator, to receive the address information entries serially from the address generator, and the cache tag look-up and comparator device, to receive the one of a hit or miss signal, and an output coupled to the registers;
d) the data cache operating to selectively perform, in respect of each of the data entries corresponding to the address information entries received from the address generator, one of a load of each of the data entries from the cache memory to the registers and a store of each one of the data entries from the registers to the cache memory, only if the cache tag look-up and comparator device generates a hit for the address information entry received from the address generator corresponding to the data entry; and
e) the coupling between the data cache and the address generator including a delay device operating to delay the receiving of each one of the address information entries by the data cache relative to the receiving of each of the address information entries by the cache tag look-up and comparator device by at least one cycle time, said at least one cycle time equal to at least an access time of the cache tag look-up and comparator device.
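The delay device of claim 2 (element e) can be sketched as a one-cycle register between the tag lookup and the data cache, so the hit/miss result is already resolved when the data cache sees each address. The one-cycle latency and all names below are illustrative assumptions.

```python
def run_pipeline(addresses, cached_tags):
    """Illustrative two-stage model: tag compare one cycle ahead of the data cache.

    addresses:   address information entries, issued serially
    cached_tags: set of addresses currently resident in the cache
    Returns the (address, hit) pairs in the order the data cache processes them.
    """
    results = []
    delayed = None  # delay device: holds (address, hit) for one cycle
    for addr in addresses + [None]:     # one extra cycle to drain the delay stage
        if delayed is not None:
            results.append(delayed)     # data cache acts on the delayed entry
        if addr is not None:
            hit = addr in cached_tags   # cache tag look-up and compare
            delayed = (addr, hit)       # delay device captures the result
        else:
            delayed = None
    return results
```

Each address reaches the data cache exactly one cycle after the tag lookup examined it, which is the minimum delay the claim requires (at least the access time of the tag look-up and comparator device).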
Specification