Traffic switching using multi-dimensional packet classification
Abstract
A method and system for conveying an arbitrary mixture of high and low latency traffic streams across a common switch fabric implements a multi-dimensional traffic classification scheme, in which multiple orthogonal traffic classification methods are successively implemented for each traffic stream traversing the system. At least two diverse paths are mapped through the switch fabric, each path being optimized to satisfy respective different latency requirements. A latency classifier is adapted to route each traffic stream to a selected path optimized to satisfy latency requirements most closely matching a respective latency requirement of the traffic stream. A prioritization classifier independently prioritizes traffic streams in each path. A fairness classifier at an egress of each path can be used to enforce fairness between responsive and non-responsive traffic streams in each path. This arrangement enables traffic streams having similar latency requirements to traverse the system through a path optimized for those latency requirements.
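The classification scheme the abstract describes can be sketched in code. This is a minimal, hypothetical illustration only: the class names, the latency threshold, and the priority-queue model are assumptions introduced here for clarity, not details taken from the patent.

```python
import heapq

LOW_LATENCY_THRESHOLD_MS = 10  # assumed cut-off for the low-latency path

class Path:
    """One of the diverse paths through the switch fabric, with its own
    prioritization classifier (modeled here as a simple priority queue)."""
    def __init__(self, name):
        self.name = name
        self._queue = []
        self._seq = 0  # tie-breaker so equal priorities dequeue in arrival order

    def enqueue(self, stream, priority):
        heapq.heappush(self._queue, (priority, self._seq, stream))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._queue)[2] if self._queue else None

def latency_classifier(stream, low_path, high_path):
    """Route a stream to the path whose latency optimization most closely
    matches the stream's latency requirement."""
    if stream["latency_req_ms"] <= LOW_LATENCY_THRESHOLD_MS:
        low_path.enqueue(stream, stream["priority"])
    else:
        high_path.enqueue(stream, stream["priority"])

# Each incoming stream is first routed by latency, then prioritized
# independently within its chosen path.
low_path = Path("low-latency")
high_path = Path("high-latency")
for s in [{"id": "voice", "latency_req_ms": 5, "priority": 0},
          {"id": "bulk", "latency_req_ms": 200, "priority": 2},
          {"id": "video", "latency_req_ms": 8, "priority": 1}]:
    latency_classifier(s, low_path, high_path)
```

The two classification dimensions stay orthogonal: the latency classifier never inspects priority, and each path's prioritization classifier never inspects the other path's queue.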
24 Claims
1. A system for conveying an arbitrary mixture of high and low latency traffic streams across a common switch fabric, the system comprising:
at least two diverse paths mapped through the switch fabric from a common input interface to a common output interface, each path being optimized to satisfy respective different traffic latency requirements;

a latency classifier adapted to route each one of the traffic streams received at the input interface to one of the at least two diverse paths based upon a latency requirement of each traffic stream most closely matching the respective traffic latency of each of the at least two diverse paths;

at least two prioritization classifiers, each one of the prioritization classifiers associated with one of the at least two diverse paths, each prioritization classifier independently prioritizing traffic being conveyed through the respective path; and

wherein each one of the traffic streams received at the common input interface is routed to one of the at least two diverse paths by the latency classifier and each of the at least two diverse paths is processed independently by the respective prioritization classifiers before transport through the switch fabric to the common output interface. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15)
16. A method of conveying an arbitrary mixture of high and low latency traffic streams across a common switch fabric, the method comprising the steps of:
mapping at least two diverse paths through the switch fabric from a common input interface to a common output interface, each path being optimized to satisfy respective different traffic latency requirements;

routing each traffic stream received at the common input interface to a selected one of the at least two diverse paths, the selected path being optimized to satisfy latency requirements most closely matching a respective latency of the traffic stream; and

processing traffic in each of the at least two diverse paths by independently prioritizing the traffic to be conveyed through the switch fabric by each respective path to a common output interface. - View Dependent Claims (17, 18, 19, 20, 21, 22, 23, 24)
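The abstract also mentions a fairness classifier at the egress of each path, enforcing fairness between responsive and non-responsive traffic streams. A minimal sketch of one way such a classifier could work is a weighted round-robin between the two traffic types, so non-responsive streams cannot starve responsive ones. The function name, weights, and queue model below are illustrative assumptions, not details from the patent.

```python
from collections import deque

def fair_egress(responsive, non_responsive,
                weight_responsive=2, weight_non_responsive=1):
    """Interleave packets from responsive (e.g. congestion-reacting) and
    non-responsive streams by weighted round-robin at a path egress.
    Hypothetical sketch; weights are assumed, not specified by the patent."""
    responsive = deque(responsive)
    non_responsive = deque(non_responsive)
    out = []
    while responsive or non_responsive:
        for _ in range(weight_responsive):
            if responsive:
                out.append(responsive.popleft())
        for _ in range(weight_non_responsive):
            if non_responsive:
                out.append(non_responsive.popleft())
    return out
```

With a 2:1 weighting, two responsive packets are emitted for every non-responsive packet while both queues are non-empty, and whichever queue remains drains at full rate.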
Specification