Method and mechanism to use a cache to translate from a virtual bus to a physical bus
Abstract
A multi-processor computer architecture reduces processing time and bus bandwidth during snoop processing. The architecture includes processors and local caches. Each local cache corresponds to one of the processors. The architecture includes one or more virtual busses coupled to the local caches and the processors, and one or more intermediary caches, where at least one intermediary cache is coupled to each virtual bus. Each intermediary cache includes a memory array and means for ensuring the intermediary cache is inclusive of associated local caches. The architecture further includes a main memory having a plurality of memory lines accessible by the processors.
22 Claims
1. A method for translating from a virtually-addressed bus to a physically-addressed bus, comprising:
presenting a virtual address for a memory line on the virtually-addressed bus;
initiating snoop processing of an intermediary inclusive storage device coupled to the virtually-addressed bus, the intermediary inclusive storage device capable of storing information related to the memory line from a main memory coupled to the physically-addressed bus;
storing in the intermediary inclusive storage device a pre-fetched memory line including an address tag, data, and a pre-fetch status bit, wherein the pre-fetch status bit includes an ON and an OFF indication;
switching the pre-fetch status bit to OFF when the virtual address for the pre-fetched memory line is presented on the virtually-addressed bus;
receiving one of a snoop hit and a snoop miss;
if a snoop hit, initiating further snoop processing on local caches coupled to the virtually-addressed bus; and
if a snoop miss, accessing a memory location in the main memory. (Dependent claims 2, 3, and 4 not shown.)
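The claim 1 flow can be sketched in a few lines of Python. This is an illustrative model only, under assumed names (`IntermediaryCache`, `translate_access`); the patent does not specify an implementation.

```python
# Hypothetical sketch of the claim 1 method: an inclusive intermediary cache
# is snooped first; a pre-fetched line carries a pre-fetch status bit that is
# switched OFF the first time its virtual address appears on the bus.

class IntermediaryCache:
    """Inclusive intermediary storage device keyed by virtual address tag."""

    def __init__(self):
        # tag -> {"data": ..., "prefetch": bool}
        self.lines = {}

    def store_prefetched(self, tag, data):
        # A pre-fetched memory line enters with its pre-fetch status bit ON.
        self.lines[tag] = {"data": data, "prefetch": True}

    def snoop(self, tag):
        """Return True on a snoop hit; clear the pre-fetch bit on first use."""
        line = self.lines.get(tag)
        if line is None:
            return False              # snoop miss
        if line["prefetch"]:
            # The virtual address was presented on the virtually-addressed
            # bus: switch the pre-fetch status bit to OFF.
            line["prefetch"] = False
        return True                   # snoop hit


def translate_access(cache, local_caches, main_memory, vaddr):
    """Snoop the intermediary cache, then local caches or main memory."""
    if cache.snoop(vaddr):
        # Snoop hit: only now initiate further snooping of the local caches.
        return [lc.get(vaddr) for lc in local_caches if vaddr in lc]
    # Snoop miss: access the physically-addressed main memory directly.
    return main_memory[vaddr]
```

Because the intermediary cache is inclusive, a miss there proves no local cache can hold the line, so the local caches are never disturbed on a miss.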
5. A method for reducing processing time and bus bandwidth during snoop processing of a multi-processor computer architecture, the architecture comprising higher level caches and intermediary caches, the method comprising:
establishing the intermediary caches as inclusive caches, wherein an inclusive intermediary cache includes at least all memory lines of corresponding higher level caches;
presenting a virtual address for a memory line on a virtually-addressed bus;
initiating snoop processing of the intermediary caches;
if receiving a snoop hit, initiating snoop processing on the higher level caches; and
if receiving a snoop miss, accessing main memory. (Dependent claims 6, 7, and 8 not shown.)
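The inclusion property at the heart of claim 5 can be sketched as follows. Names (`InclusiveHierarchy`, `fill`, `snoop`) are assumptions for illustration, not the patent's terminology.

```python
# Sketch of claim 5: the intermediary cache holds at least all memory lines
# of its corresponding higher-level caches, so a single intermediary lookup
# filters snoop traffic to every higher-level cache.

class InclusiveHierarchy:
    def __init__(self, num_higher):
        self.higher = [set() for _ in range(num_higher)]  # higher-level caches
        self.intermediary = set()                          # inclusive superset

    def fill(self, cache_id, tag):
        # Any line brought into a higher-level cache is also installed in the
        # intermediary cache, preserving the inclusion invariant.
        self.higher[cache_id].add(tag)
        self.intermediary.add(tag)

    def snoop(self, tag):
        """One intermediary lookup stands in for snooping every higher cache."""
        if tag not in self.intermediary:
            return "miss -> main memory"   # no higher cache can hold the line
        # Snoop hit: only now probe the higher-level caches.
        return [i for i, cache in enumerate(self.higher) if tag in cache]
```

The bandwidth saving comes from the miss path: without inclusion, every snoop would have to probe every higher-level cache.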
9. A multi-processor computer architecture for reducing processing time and bus bandwidth during snoop processing, comprising:
a plurality of processors;
a plurality of local caches, each local cache corresponding to one of the processors;
one or more virtual busses coupled to the local caches and the processors;
one or more intermediary caches, wherein at least one intermediary cache is coupled to each virtual bus, each intermediary cache comprising:
a memory array, and means for ensuring the intermediary cache is inclusive of associated local caches; and
a main memory having a plurality of memory lines accessible by the processors. (Dependent claims 10 through 18 not shown.)
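The structural relationships recited in claim 9 can be modeled as plain data types. This is a minimal composition sketch; every class name here is an assumed illustration, not the patent's nomenclature.

```python
# Structural sketch of claim 9: processors, per-processor local caches,
# virtual busses, inclusive intermediary caches, and a shared main memory.
from dataclasses import dataclass, field


@dataclass
class Processor:
    pid: int


@dataclass
class LocalCache:
    owner: Processor                          # one local cache per processor
    lines: set = field(default_factory=set)


@dataclass
class InclusiveCache:
    lines: set = field(default_factory=set)   # the memory array

    def ensure_inclusive(self, local_caches):
        # Stand-in for the "means for ensuring the intermediary cache is
        # inclusive of associated local caches".
        for lc in local_caches:
            self.lines |= lc.lines


@dataclass
class VirtualBus:
    processors: list
    local_caches: list
    intermediary: InclusiveCache              # at least one per virtual bus


@dataclass
class Architecture:
    busses: list
    main_memory: dict                         # lines accessible by processors
```

A usage example: after filling two local caches and calling `ensure_inclusive`, the intermediary cache's line set is a superset of each local cache's.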
19. A mechanism for translating from a virtual bus to a physical interconnect, comprising:
a main memory storing memory lines;
processors coupled to the main memory and capable of accessing the memory lines; and
means for reducing processing time and bus bandwidth during snoop processing by the processors. (Dependent claims 20, 21, and 22 not shown.)
Specification