Fully connected cache coherent multiprocessing systems
Abstract
Fully connected multiple FCU-based architectures reduce requirements for Tag SRAM size and memory read latencies. A preferred embodiment of a symmetric multiprocessor system includes a switched fabric (switch matrix) for data transfers that provides multiple concurrent buses that enable greatly increased bandwidth between processors and shared memory. A high-speed point-to-point Channel couples command initiators and memory with the switch matrix and with I/O subsystems.
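The topology the abstract describes can be sketched as a small model: each flow control unit (FCU) owns a data switch joining its local point-to-point channels (processors and memory), and a third channel links the two switches so either set of processors can reach the other's memory. This is a minimal illustrative sketch, not the patent's implementation; the names `Fcu` and `route` are assumptions.

```python
# Hypothetical model of the two-FCU architecture: local channels connect
# processors and memory to an FCU's data switch; a third point-to-point
# connection links the two switches. Names are illustrative, not from the patent.
from dataclasses import dataclass

@dataclass
class Fcu:
    name: str
    processors: list            # local processor channel ids
    memory: str                 # local memory channel id
    peer: "Fcu | None" = None   # the third point-to-point connection

    def route(self, src_processor: str, target_memory: str) -> list:
        """Return the data path from a local processor to a memory."""
        if target_memory == self.memory:
            # first/second data paths: the local switch interconnects them
            return [src_processor, self.name, target_memory]
        # third data paths: cross the inter-FCU channel to the peer's switch
        return [src_processor, self.name, self.peer.name, target_memory]

fcu0 = Fcu("FCU0", ["P0", "P1"], "MEM0")
fcu1 = Fcu("FCU1", ["P2", "P3"], "MEM1")
fcu0.peer, fcu1.peer = fcu1, fcu0

print(fcu0.route("P0", "MEM0"))  # → ['P0', 'FCU0', 'MEM0']
print(fcu0.route("P1", "MEM1"))  # → ['P1', 'FCU0', 'FCU1', 'MEM1']
```

Because each channel is point-to-point and the switch provides concurrent paths, transfers such as P0→MEM0 and P1→MEM1 above can proceed simultaneously rather than contending for one shared bus.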
10 Claims
1. A multi-processor shared memory system comprising:
a first set of point-to-point connections;
a first set of processors each coupled to one of the first set of point-to-point connections;
a first memory coupled to one of the first set of point-to-point connections;
a first flow control unit including a first data switch coupled to the first set of point-to-point connections wherein the first data switch is configured to interconnect the first set of point-to-point connections to provide first data paths between the first memory and the first set of processors;
a second set of point-to-point connections;
a second set of processors each coupled to one of the second set of point-to-point connections;
a second memory coupled to one of the second set of point-to-point connections;
a second flow control unit including a second data switch coupled to the second set of point-to-point connections wherein the second data switch is configured to interconnect the second set of point-to-point connections to provide second data paths between the second memory and the second set of processors; and
a third point-to-point connection coupled to the first data switch and to the second data switch wherein the first data switch is configured to interconnect the first set of point-to-point connections to the third point-to-point connection and the second data switch is configured to interconnect the second set of point-to-point connections to the third point-to-point connection to provide third data paths between the second memory and the first set of processors and between the first memory and the second set of processors.
2. The system of claim 1 wherein:
the first set of processors include first caches;
the second set of processors include second caches;
the first flow control unit is configured to maintain cache coherency between the first memory, the first caches, and the second caches; and
the second flow control unit is configured to maintain cache coherency between the second memory, the first caches, and the second caches.
3. The system of claim 2 wherein:
the first flow control unit is configured to maintain first duplicate tags for the first caches; and
the second flow control unit is configured to maintain second duplicate tags for the second caches.
4. The system of claim 1 wherein:
the first flow control unit is configured to provide a first system serialization point for the first memory; and
the second flow control unit is configured to provide a second system serialization point for the second memory.
5. The system of claim 1 wherein the first set of processors and the second set of processors transfer packets that indicate one of the first memory and the second memory.
6. A method of operating a multi-processor shared memory system comprising a first set of point-to-point connections, a first set of processors each coupled to one of the first set of point-to-point connections, a first memory coupled to one of the first set of point-to-point connections, a first flow control unit including a first data switch coupled to the first set of point-to-point connections, a second set of point-to-point connections, a second set of processors each coupled to one of the second set of point-to-point connections, a second memory coupled to one of the second set of point-to-point connections, a second flow control unit including a second data switch coupled to the second set of point-to-point connections, and a third point-to-point connection coupled to the first data switch and to the second data switch, the method comprising:
interconnecting the first set of point-to-point connections in the first data switch to provide first data paths between the first memory and the first set of processors;
interconnecting the second set of point-to-point connections in the second data switch to provide second data paths between the second memory and the second set of processors; and
interconnecting the first set of point-to-point connections to the third point-to-point connection in the first data switch and interconnecting the second set of point-to-point connections to the third point-to-point connection in the second data switch to provide third data paths between the second memory and the first set of processors and between the first memory and the second set of processors.
7. The method of claim 6 further comprising:
in the first flow control unit, maintaining cache coherency between the first memory, the first caches, and the second caches; and
in the second flow control unit, maintaining cache coherency between the second memory, the first caches, and the second caches.
8. The method of claim 7 wherein maintaining the cache coherency comprises:
in the first flow control unit, maintaining first duplicate tags for the first caches; and
in the second flow control unit, maintaining second duplicate tags for the second caches.
9. The method of claim 6 further comprising:
in the first flow control unit, providing a first system serialization point for the first memory; and
in the second flow control unit, providing a second system serialization point for the second memory.
10. The method of claim 6 further comprising transferring packets that indicate one of the first memory and the second memory from the first set of processors and the second set of processors.
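The duplicate-tag and serialization-point claims (2-4 and 7-9) can be sketched together: each FCU keeps duplicate tags for its local caches and acts as the serialization point for its own memory, so a request arriving for that memory is placed in a total order and the duplicate tags identify which caches must be snooped. This is an illustrative sketch under assumed names (`FlowControlUnit`, `record_fill`, `access`), not the patented mechanism.

```python
# Hypothetical sketch: an FCU orders requests to its memory (serialization
# point) and consults duplicate tags to find local caches needing a snoop.
# All names are illustrative assumptions, not language from the patent.
class FlowControlUnit:
    def __init__(self, name, memory_lines):
        self.name = name
        self.memory = dict.fromkeys(memory_lines, 0)
        self.dup_tags = {}   # cache line -> set of processors caching it
        self.order = []      # serialization point: total order of requests

    def record_fill(self, processor, line):
        """Duplicate-tag update when a local cache fills a line."""
        self.dup_tags.setdefault(line, set()).add(processor)

    def access(self, processor, line, is_write):
        """Serialize one request to this FCU's memory; return caches to snoop."""
        self.order.append((processor, line, is_write))
        sharers = self.dup_tags.get(line, set()) - {processor}
        if is_write:
            # writer becomes the only cacher; sharers must be invalidated
            self.dup_tags[line] = {processor}
            return sorted(sharers)
        self.record_fill(processor, line)
        return sorted(sharers)   # caches that may hold (and supply) the line

fcu = FlowControlUnit("FCU0", ["A", "B"])
fcu.record_fill("P0", "A")
fcu.record_fill("P1", "A")
print(fcu.access("P2", "A", is_write=True))  # → ['P0', 'P1'] need invalidation
print(fcu.order[-1])                         # → ('P2', 'A', True)
```

Keeping the tags at the FCU rather than broadcasting every request is what lets the architecture reduce Tag SRAM pressure and avoid snooping caches that cannot hold the line, consistent with the abstract's stated goals.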
Specification