Novel massively parallel supercomputer
Abstract
A novel massively parallel supercomputer of hundreds-of-teraOPS scale includes node architectures based upon System-On-a-Chip technology, i.e., each processing node comprises a single Application Specific Integrated Circuit (ASIC). Within each ASIC node is a plurality of processing elements, each of which consists of a central processing unit (CPU) and a plurality of floating point processors, to enable an optimal balance of computational performance, packaging density, low cost, and power and cooling requirements. The plurality of processors within a single node may be used individually or simultaneously to work on any combination of computation or communication as required by the particular algorithm being solved or executed at any point in time. The system-on-a-chip ASIC nodes are interconnected by multiple independent networks that maximize packet communications throughput and minimize latency. In the preferred embodiment, the multiple networks include three high-speed networks for parallel algorithm message passing: a Torus, a Global Tree, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. For particular classes of parallel algorithms, or parts of parallel calculations, this architecture exhibits exceptional computational performance and may be enabled to perform calculations for new classes of parallel algorithms. Additional networks are provided for external connectivity and are used for Input/Output, System Management and Configuration, and Debug and Monitoring functions. Special node packaging techniques implementing midplane and other hardware devices facilitate partitioning of the supercomputer into multiple independent networks for optimizing supercomputing resources.
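The Torus network described above gives every node a fixed set of nearest neighbors with wraparound links in each dimension. As a rough illustration only, a 3-D torus's neighbor addressing can be sketched as below; the dimension sizes and coordinate scheme are illustrative assumptions, not details taken from this patent.

```python
# Hypothetical sketch of nearest-neighbor addressing on a 3-D torus.
# Dimensions and coordinates are illustrative, not from the patent.

def torus_neighbors(coord, dims):
    """Return the six nearest neighbors of `coord` on a 3-D torus.

    Links wrap around in each dimension, so every node has exactly
    six point-to-point neighbors regardless of its position.
    """
    x, y, z = coord
    dx, dy, dz = dims
    return [
        ((x + 1) % dx, y, z), ((x - 1) % dx, y, z),
        (x, (y + 1) % dy, z), (x, (y - 1) % dy, z),
        (x, y, (z + 1) % dz), (x, y, (z - 1) % dz),
    ]

print(torus_neighbors((0, 0, 0), (4, 4, 4)))
```

The wraparound (`% dx`) is what distinguishes a torus from a plain mesh: boundary nodes have the same link count as interior nodes, which keeps point-to-point routing uniform.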
321 Citations
52 Claims
1. A massively parallel computing structure comprising:

a plurality of processing nodes interconnected by multiple independent networks, each node including one or more processing elements for performing computation or communication activity as required when performing parallel algorithm operations; and

said multiple independent networks comprising networks for enabling point-to-point, global tree communications and global barrier and notification operations among said nodes or independent partitioned subsets thereof, wherein combinations of said multiple independent networks interconnecting said nodes are collaboratively or independently utilized according to bandwidth and latency requirements of an algorithm for optimizing algorithm processing performance. (Dependent claims: 2-43)
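The global tree communications recited in claim 1 perform combining operations (such as a global sum) by passing partial results up a tree of nodes. A minimal software sketch of that idea follows; the binary-tree shape and node numbering are illustrative assumptions, since the claim does not fix a tree arity.

```python
# Illustrative sketch of a global combining (reduction) operation on a
# tree network: each node sums its children's partial results and
# forwards a single value upward. Tree shape is an assumption.

def tree_reduce(values, node=0):
    """Recursively combine `values` up a binary tree rooted at `node`."""
    n = len(values)
    total = values[node]
    for child in (2 * node + 1, 2 * node + 2):
        if child < n:
            total += tree_reduce(values, child)
    return total

print(tree_reduce([1, 2, 3, 4, 5, 6, 7]))  # same result as sum(...)
```

The benefit claimed for a dedicated tree network is that such a reduction completes in a number of steps proportional to the tree depth (logarithmic in node count) rather than requiring all-to-one traffic over the point-to-point network.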
44. A scalable, massively parallel computing structure comprising:

a plurality of processing nodes interconnected by independent networks, each node including one or more processing elements for performing computation or communication activity as required when performing parallel algorithm operations; and

a first independent network comprising an n-dimensional torus network including communication links interconnecting said nodes in a manner optimized for providing high-speed, low latency point-to-point and multicast packet communications among said nodes or sub-sets of nodes of said network;

a second of said multiple independent networks comprising a scalable global tree network comprising nodal interconnections that facilitate simultaneous global operations among nodes or sub-sets of nodes of said network; and

partitioning means for dynamically configuring one or more combinations of independent processing networks according to needs of one or more algorithms, each independent network including a configurable sub-set of processing nodes interconnected by divisible portions of said first and second networks, wherein each of said configured independent processing networks is utilized to enable simultaneous collaborative processing for optimizing algorithm processing performance. (Dependent claims: 45-47)
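The partitioning means in claim 44 divides the machine into sub-sets of nodes whose torus links close back on themselves, so each partition runs as an independent smaller torus. A one-dimensional sketch of that carving, with illustrative block sizes (real partitioning is done by hardware such as midplanes and link devices, not software as here):

```python
# Hedged sketch of partitioning one torus dimension into independent
# blocks, each of which can be closed into its own ring (sub-torus).
# Block sizes are illustrative assumptions.

def partition_1d(extent, block):
    """Split a dimension of `extent` nodes into blocks of `block` nodes."""
    assert extent % block == 0, "blocks must tile the dimension evenly"
    return [list(range(start, start + block))
            for start in range(0, extent, block)]

print(partition_1d(8, 4))  # two independent 4-node rings
```

Because each block's wraparound is closed locally, traffic in one partition never crosses into another, which is what lets the configured sub-networks run algorithms simultaneously and independently.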
48. In a massively parallel computing structure comprising a plurality of processing nodes interconnected by multiple independent networks, each processing node comprising:

a system-on-chip Application Specific Integrated Circuit (ASIC) comprising two or more processing elements each capable of performing computation or message passing operations; and

means enabling rapid coordination of processing and message passing activity at each said processing element, wherein one or both of the processing elements perform calculations needed by the algorithm, while the other, or both, processing elements perform message passing activities for communicating with other nodes of said network, as required when performing particular classes of algorithms.
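The mode of operation in claim 48, where one processing element computes while the other handles message passing, can be sketched with two threads standing in for the two on-chip processing elements. The work items, queue protocol, and sentinel value are illustrative assumptions, not details from the patent.

```python
# Sketch of the dual-element mode of claim 48: one element computes
# while the other drains incoming messages. Python threads stand in
# for the two on-chip CPUs; the workload is an illustrative assumption.
import queue
import threading

def node_step(work_items, inbox):
    results, received = [], []

    def compute():                      # "computation" processing element
        for item in work_items:
            results.append(item * item)

    def communicate():                  # "message passing" element
        while True:
            msg = inbox.get()
            if msg is None:             # sentinel: no more messages
                break
            received.append(msg)

    t1 = threading.Thread(target=compute)
    t2 = threading.Thread(target=communicate)
    t1.start(); t2.start()
    inbox.put("halo-exchange")          # message arriving mid-computation
    inbox.put(None)
    t1.join(); t2.join()
    return results, received

print(node_step([1, 2, 3], queue.Queue()))
```

The point of the split is overlap: communication latency is hidden behind useful computation, which is why the claim allows either element to take either role as the algorithm's phase demands.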
49. A scalable, massively parallel computing system comprising:

a plurality of processing nodes interconnected by links to form a torus network, each processing node being connected by a plurality of links including links to all adjacent processing nodes;

communication links for interconnecting said processing nodes to form a global combining tree network, and a similar combining tree for communicating global signals including interrupt signals; and

link means for receiving signals from said torus and global tree networks, and said global interrupt signals, and for redirecting said signals between different ports of the link means to enable the computing system to be partitioned into multiple, logically separate computing systems. (Dependent claims: 50, 51)
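The link means in claim 49 partitions the machine by redirecting signals between its ports: in one mode a signal passes straight through to the next node group, in another it is turned back so each side becomes a logically separate system. A toy model of that port redirection, with port names and mode labels as illustrative assumptions:

```python
# Toy model of the link function in claim 49: a link device either
# passes signals through or loops them back, splitting the machine
# into separate partitions. Port/mode names are assumptions.

def route(port_in, mode):
    """Map an input port to an output port under a partitioning mode."""
    if mode == "pass-through":          # ports bridged: one large machine
        return {"left": "right", "right": "left"}[port_in]
    if mode == "loopback":              # ports isolated: two partitions
        return port_in                  # signal re-enters its own side
    raise ValueError(f"unknown mode: {mode}")

print(route("left", "pass-through"))  # → right
```

Because the redirection happens in the link device rather than in the nodes, a partition can be reconfigured without rewiring or involving the application software running on the nodes.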
52. A massively parallel computing system comprising:

a plurality of processing nodes interconnected by independent networks, each processing node comprising a system-on-chip Application Specific Integrated Circuit (ASIC) comprising two or more processing elements each capable of performing computation or message passing operations;

a first independent network comprising an n-dimensional torus network including communication links interconnecting said nodes in a manner optimized for providing high-speed, low latency point-to-point and multicast packet communications among said nodes or sub-sets of nodes of said network;

a second of said multiple independent networks comprising a scalable global tree network comprising nodal interconnections that facilitate simultaneous global operations among nodes or sub-sets of nodes of said network; and

partitioning means for dynamically configuring one or more combinations of independent processing networks according to needs of one or more algorithms, each independent network including a configured sub-set of processing nodes interconnected by divisible portions of said first and second networks, and means enabling rapid coordination of processing and message passing activity at each said processing element in each independent processing network, wherein one, or both, of the processing elements performs calculations needed by the algorithm while the other, or both, of the processing elements performs message passing activities for communicating with other nodes of said network, as required when performing particular classes of algorithms, wherein each of said configured independent processing networks and the node processing elements thereof are dynamically utilized to enable collaborative processing for optimizing algorithm processing performance.