Instruction storage and cache miss recovery in a high speed multiprocessing parallel processing apparatus
First Claim
1. A cache miss engine for a data processing system having an instruction cache, said instruction cache storing, in a distributed fashion, a plurality of instruction fields making up an instruction word, an interleaved memory system comprising a plurality of memory controllers, each controller controlling a plurality of memory banks, and said memory system able to output a plurality of data words each machine cycle, at least one control processing unit, and said instruction words being stored in a variable length compacted format in said memory system and in a fixed length format in the instruction cache, said variable length format including a decoding key and a plurality of fixed length non-zero instruction fields, said cache miss engine comprising means for reading said decoding key, means for reading said instruction fields in a block mode for transmission to said instruction cache, means for decoding said decoding key for generating destination tags for each of said read instruction fields, and means for associating a said destination tag with each read instruction field for denoting a storage destination of said instruction field in the distributed instruction cache.
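The destination-tag generation recited in the claim can be illustrated with a small sketch. This is not the patented hardware; it is a hypothetical software model in which the decoding key is treated as a bit mask, each set bit marking a slot of the fixed-length instruction word whose (non-zero) field was actually stored. The function names and the 8-slot width are illustrative assumptions.

```python
def tag_fields(decoding_key: int, packed_fields: list[int], width: int = 8):
    """Pair each packed field with its destination slot (tag) in the
    fixed-length instruction word held by the distributed cache.

    Bit i of decoding_key set -> slot i held a non-zero field that was
    stored; stored fields appear in packed_fields in ascending slot order.
    """
    tagged = []
    remaining = iter(packed_fields)
    for slot in range(width):
        if decoding_key & (1 << slot):          # field for this slot was stored
            tagged.append((slot, next(remaining)))
    return tagged

# Key 0b00100101: fields were stored for slots 0, 2, and 5.
print(tag_fields(0b00100101, [0xA, 0xB, 0xC]))  # [(0, 10), (2, 11), (5, 12)]
```

Each `(slot, field)` pair models a destination tag routed with the field to the corresponding partition of the distributed instruction cache.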
Abstract
A method and apparatus for storing an instruction word in a compacted form on a storage medium, the instruction word having a plurality of instruction fields. A mask word, having a length in bits at least equal to the number of instruction fields in the instruction word, is associated with each instruction word. Each instruction field is associated with one bit of the mask word, so that, using the mask word, only non-zero instruction fields need to be stored in memory. The instruction compaction method is advantageously used in a high speed cache miss engine for refilling portions of the instruction cache after a cache miss occurs.
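The mask-based compaction described in the abstract can be sketched as a round trip: compaction records one mask bit per field and stores only the non-zero fields; expansion rebuilds the fixed-length word from the mask. This is a minimal software model, not the claimed apparatus, and the function names are illustrative.

```python
def compact(instruction_word: list[int]) -> tuple[int, list[int]]:
    """Produce a mask word plus the list of non-zero fields only."""
    mask = 0
    stored = []
    for i, field in enumerate(instruction_word):
        if field != 0:
            mask |= 1 << i          # bit i set -> field i is non-zero and stored
            stored.append(field)
    return mask, stored

def expand(mask: int, stored: list[int], width: int) -> list[int]:
    """Rebuild the fixed-length instruction word from mask and stored fields."""
    fields = iter(stored)
    return [next(fields) if mask & (1 << i) else 0 for i in range(width)]

word = [7, 0, 0, 3, 0, 9, 0, 0]
mask, stored = compact(word)
assert expand(mask, stored, len(word)) == word   # lossless round trip
```

For a word that is mostly zero fields, only the mask and the few non-zero fields occupy memory, which is the variable-length compacted format the claims contrast with the fixed-length format held in the instruction cache.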
3 Claims
Specification