Interprocess communications within a network node using switch fabric
Abstract
Systems and methods are provided for network connected computing systems that employ functional multi-processing to optimize bandwidth utilization and accelerate system performance. In one embodiment, the network connected computing system may include a switch based computing system. The switch employed in the system may be a switch fabric. The system may further include an asymmetric multi-processor system configured in a staged pipeline manner. The network connected computing system may be utilized in one embodiment as a network endpoint system that provides content delivery.
150 Citations
135 Claims
1. A method of using a switch fabric to communicate control data between different processes at a network node, comprising the steps of:
- connecting a first processor to a switch fabric, using a first switch fabric interface;
- connecting a second processor to the switch fabric, using a second switch fabric interface; and
- using the switch fabric interface to recognize data units as containing data messages or control messages.
(Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9)
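The recognition step of claim 1 can be sketched as follows. This is an illustrative model only: the patent does not specify a data-unit encoding, so the header field and type codes below are assumptions.

```python
from dataclasses import dataclass

# Assumed type codes; the patent does not define an encoding.
DATA_MESSAGE = 0x0
CONTROL_MESSAGE = 0x1

@dataclass
class DataUnit:
    """A unit carried over the switch fabric: a type header plus payload."""
    msg_type: int
    payload: bytes

def recognize(unit: DataUnit) -> str:
    """Recognize a data unit as containing a data message or a control message."""
    return "control" if unit.msg_type == CONTROL_MESSAGE else "data"

print(recognize(DataUnit(CONTROL_MESSAGE, b"link-status")))  # -> control
print(recognize(DataUnit(DATA_MESSAGE, b"payload")))         # -> data
```

In this sketch the distinction rides in a header field so the receiving switch fabric interface can route control traffic to a management path without inspecting the payload.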
10. A switch fabric interface for connecting a processor to a switch fabric at a network node, comprising:
- a physical interface for connecting the switch fabric to a communications medium;
- a bus interface for connecting the switch fabric to the processor; and
- a logic unit for differentiating data units containing data messages from data units containing control messages.
(Dependent claims: 11, 12, 13, 14, 16, 17, 18, 19, 20)
15. A network node system for processing network data transmitted and received via a network, comprising:
- a first processor programmed to receive and transmit network data on the public network;
- a second processor programmed to communicate network data to and from the first processor;
- a switch fabric interface associated with each processor, the switch fabric interface having a bus interface at the processor side and a physical interface at the switch fabric side; and
- a switch fabric for directly connecting the first processor to the second processor.
21. A network endpoint system for processing network data transmitted and received via a network, comprising:
- a network processor programmed to receive and transmit network data on the public network;
- at least one processing unit programmed to communicate network data to and from the network processor;
- a switch fabric interface associated with each processing unit, the switch fabric interface having a bus interface at the processing unit side and a physical interface at the switch fabric side; and
- a switch fabric for directly connecting the network processor to the processing unit.
(Dependent claims: 22, 23, 24, 25, 26, 27, 29, 30, 31)
28. A method for processing network data at a network endpoint system, comprising the steps of:
- using a network processor at the front end of the system to receive network data;
- using one or more processing units to receive data from the network processor and to execute network applications programming; and
- communicating the data from the network processor to the processing units with a switch fabric.
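The front-end/back-end split of claim 28 can be modeled as a two-stage pipeline. A minimal sketch, assuming a queue stands in for the switch fabric path and an uppercase transform stands in for the network applications programming; the names are illustrative, not from the patent.

```python
import queue

# The queue models the switch fabric path from the front-end
# network processor to a processing unit.
fabric = queue.Queue()

def network_processor(packets):
    """Front end: receive network data and push it onto the fabric."""
    for pkt in packets:
        fabric.put(pkt)
    fabric.put(None)  # end-of-stream marker

def processing_unit():
    """Receive data from the network processor and run application code on it."""
    results = []
    while (pkt := fabric.get()) is not None:
        results.append(pkt.upper())  # stand-in for applications programming
    return results

network_processor(["get /index.html", "get /logo.png"])
print(processing_unit())  # -> ['GET /INDEX.HTML', 'GET /LOGO.PNG']
```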
32. A network endpoint system, comprising:
- at least one system processor performing endpoint functionality processing;
- a system interface connection configured to be coupled to a network;
- at least one network processor, the network processor coupled to the system interface connection to receive data from the network; and
- a switch fabric coupled between the system processor and the network processor so that the network processor may analyze data provided from the network and process the data at least in part and then forward the data to the interconnection so that other processing may be performed on the data within the system.
(Dependent claims: 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43)
44. A method of operating a network endpoint system, the method comprising:
- providing a network processor within the network endpoint system, the network processor being configured to be coupled to an interface which couples the network endpoint system to a network;
- processing data passing through the interface with the network processor;
- forwarding data from the network processor to a system processor through a switch fabric; and
- performing at least some endpoint functionality upon the data within the system processor.
(Dependent claims: 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 79, 80, 81, 82, 83, 84, 85, 91)
58. A network endpoint system, comprising:
- a first processor engine, the first processor engine configured to receive data from a network;
- a second processor engine, the second processor engine performing at least some endpoint functionality, the first processor engine performing tasks different from the endpoint functionality tasks performed by the second processor engine; and
- an interconnect coupling the first and second processor engines, wherein the network endpoint system is configured in at least one manner to provide accelerated performance.
78. A method of providing a network endpoint termination through the use of a network endpoint system, comprising:
- providing a plurality of separate processor engines, the processor engines being assigned separate tasks in an asymmetrical multi-processor configuration;
- providing an interface connection to at least one of the processor engines to couple the network endpoint system to a network;
- communicating between the plurality of separate processor engines through a switch fabric having fixed latencies, the plurality of separate processors and the switch fabric being formed in a single chassis; and
- generating an accelerated data flow through the network endpoint system.
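The asymmetrical configuration of claim 78 assigns each engine a distinct task, with all inter-engine traffic crossing the fabric. A minimal sketch, assuming queues model the fabric links and threads model the separate engines; the engine names and payloads are hypothetical.

```python
import queue
import threading

# Fabric links between the separately tasked engines.
rx_to_app = queue.Queue()
app_to_tx = queue.Queue()

def interface_engine(requests):
    """Network-facing engine: pushes received requests onto the fabric."""
    for r in requests:
        rx_to_app.put(r)
    rx_to_app.put(None)  # end-of-stream marker

def application_engine():
    """Separately tasked engine: consumes requests, emits responses."""
    while (r := rx_to_app.get()) is not None:
        app_to_tx.put(f"response:{r}")
    app_to_tx.put(None)

def collect_responses():
    """Drain the outbound fabric link."""
    out = []
    while (r := app_to_tx.get()) is not None:
        out.append(r)
    return out

threads = [threading.Thread(target=interface_engine, args=(["a", "b"],)),
           threading.Thread(target=application_engine)]
for t in threads:
    t.start()
responses = collect_responses()
for t in threads:
    t.join()
print(responses)  # -> ['response:a', 'response:b']
```

Because each engine only ever touches its own task and its fabric queues, data moves through the system in stages rather than contending for one shared processor.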
86. The method of claim 84, wherein the network processor is contained within a network interface engine, the other processing engines comprising a storage processor engine and an application processor engine.
92. A method of providing a content delivery system through the use of a network connectable computing system, comprising:
- providing a plurality of separate processor engines, the processor engines being assigned separate tasks in an asymmetrical multi-processor configuration;
- providing a storage processor engine, the storage processor engine being one of the plurality of separate processor engines;
- providing a switch fabric for communication between the plurality of separate processor engines and the storage processor engine;
- providing a network interface connection to at least one of the processor engines to couple the content delivery system to a network;
- providing a storage interface connection to the storage processor engine to couple the storage processor engine to a content storage system; and
- accelerating content delivery through the content delivery system.
(Dependent claims: 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103)
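The content-delivery flow of claim 92 can be sketched end to end. This is a minimal model, assuming an in-memory dict as the content storage system; the engine names and the HTTP-style framing are illustrative, not from the patent.

```python
# Assumed stand-in for the content storage system behind the storage interface.
CONTENT_STORE = {"/index.html": b"<html>hello</html>"}

def storage_engine(path: str):
    """Storage processor engine: fetch content over the storage interface."""
    return CONTENT_STORE.get(path)

def network_interface_engine(path: str) -> bytes:
    """Network-facing engine: forward the request across the switch fabric to
    the storage engine, then deliver the response to the network."""
    body = storage_engine(path)
    if body is None:
        return b"HTTP/1.0 404 Not Found\r\n\r\n"
    return b"HTTP/1.0 200 OK\r\n\r\n" + body

print(network_interface_engine("/index.html"))
```

The point of the split is that network handling and storage retrieval run on separately tasked engines, with the switch fabric carrying only the request and the retrieved content between them.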
104. A network connectable computing system, comprising:
- a first processor engine;
- a second processor engine, the second processor engine being assigned types of tasks different from the types of tasks assigned to the first processor engine;
- a third processor engine, the third processor engine being assigned types of tasks that are different from the types of tasks assigned to the first and second processor engines; and
- a switch fabric coupled to the first, second and third processor engines, the tasks of the first, second and third processor engines being assigned such that the system operates in a staged pipeline manner through the distributed interconnection.
(Dependent claims: 105, 106, 107, 108, 109, 110)
111. A network connectable content delivery system, comprising:
- a first processor engine;
- a second processor engine, the second processor engine being assigned types of tasks different from the types of tasks assigned to the first processor engine;
- a storage processor engine, the storage processor engine being assigned types of tasks that are different from the types of tasks assigned to the first and second processor engines, the storage processor engine being configured to be coupled to a content storage system; and
- a switch fabric coupled to the first and second processor engines and the storage processor engine, the tasks of these engines being assigned such that the system operates in a staged pipeline manner through the distributed interconnection.
(Dependent claims: 112, 113, 114, 115, 116, 117, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135)
118. A network connectable content delivery system, comprising:
- a first processor engine;
- a second processor engine, the second processor engine being assigned types of tasks different from the types of tasks assigned to the first processor engine;
- a storage processor engine, the storage processor engine being assigned types of tasks that are different from the types of tasks assigned to the first and second processor engines, the storage processor engine being configured to be coupled to a content storage system; and
- a switch fabric coupled to the first and second processor engines and the storage processor engine, the tasks of these engines being assigned such that the system operates in a staged pipeline manner through the switch fabric, wherein the first processor engine, the second processor engine, the storage processor engine and the switch fabric are all contained within a single chassis, and wherein at least one of the first or second processor engines performs system management functions so as to off-load management functions from the other processor engines.
Specification