Flow pinning in a server on a chip
Abstract
Various embodiments provide for a system on a chip or a server on a chip that performs flow pinning, where packets or streams of packets are enqueued to specific queues, each queue associated with a respective core of the multi-core system on a chip or server on a chip. With each stream of packets, or flow, assigned to a particular processor, the server on a chip can receive and process packets from multiple queues, carrying multiple streams from the same single Ethernet interface, in parallel. Each queue can issue interrupts to its assigned processor, allowing the processors to receive packets from their respective queues at the same time. Packet processing speed is therefore increased by receiving and processing packets for different streams in parallel.
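The flow-pinning pipeline the abstract describes — extract flow metadata from the packet, map every packet of a flow to the same per-core queue, and enqueue a descriptor indicating the packet's presence and location in memory — can be sketched roughly as follows. This is an illustrative C sketch, not the patented implementation: the 5-tuple key, the multiply-xor hash, and the ring-buffer queue layout are all assumptions made for the example.

```c
#include <stdint.h>

#define NUM_CORES   4
#define QUEUE_DEPTH 8

/* Descriptor message: indicates a packet is present and where it lives in memory. */
typedef struct {
    int      valid;
    uint64_t pkt_addr;
} descriptor_t;

/* One receive queue per core, modeled as a simple ring buffer. */
typedef struct {
    descriptor_t ring[QUEUE_DEPTH];
    int head, tail;
} queue_t;

static queue_t core_queues[NUM_CORES];

/* Hypothetical flow metadata: the classic 5-tuple from the packet headers. */
typedef struct {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
} flow_key_t;

/* Deterministic hash: every packet of a given flow lands on the same queue,
 * and therefore on the same core. */
static int flow_to_queue(const flow_key_t *k)
{
    uint32_t h = k->src_ip * 2654435761u;
    h ^= k->dst_ip * 2246822519u;
    h ^= (((uint32_t)k->src_port << 16) | k->dst_port) * 3266489917u;
    h ^= k->proto;
    return (int)(h % NUM_CORES);
}

/* "DMA engine" step: pin the packet to its flow's queue and enqueue a
 * descriptor; returns the queue index chosen, or -1 if the queue is full. */
static int enqueue_packet(const flow_key_t *k, uint64_t pkt_addr)
{
    int qi = flow_to_queue(k);
    queue_t *q = &core_queues[qi];
    int next = (q->tail + 1) % QUEUE_DEPTH;
    if (next == q->head)
        return -1;
    q->ring[q->tail].valid = 1;
    q->ring[q->tail].pkt_addr = pkt_addr;
    q->tail = next;
    return qi;
}
```

Because the queue index depends only on the flow key, packets of one stream are never spread across cores, which is what lets each core drain its own queue in parallel without cross-core synchronization on a flow.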
20 Claims
1. A server on a chip, comprising:
a first data structure, executed by a processor, configured for extracting a metadata string from a packet;
a second data structure, executed by the processor, configured for associating the packet with a result database based on the metadata string; and
an Ethernet direct memory access engine configured for assigning the packet to a queue based on the result database, wherein the queue is associated with a respective core of a multiprocessor, the Ethernet direct memory access engine further configured for enqueuing a descriptor message in the queue, and the descriptor message comprises data that indicates a presence of the packet and a location of the packet in a memory.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)
13. A computer implemented method for performing flow pinning of a packet stream to a core of a multiprocessor, comprising:
extracting, by a processor executing a first data structure, a metadata string from a packet of the packet stream;
associating, by the processor executing a second data structure, the packet with a respective result database based on the metadata string;
assigning, by an Ethernet direct memory access engine, the packet to a queue based on the result database, wherein the queue is associated with a respective core of a multiprocessor; and
enqueuing, by the Ethernet direct memory access engine, a descriptor message in the queue, the descriptor message comprising information that indicates a presence of the packet and a location of the packet in a memory.
- View Dependent Claims (14, 15, 16, 17, 18, 19)
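The associating step of claim 13 — matching the packet's extracted metadata string against a result database that in turn yields the queue — can be pictured as a small lookup table. The following is a hedged C sketch, assuming a fixed-size table and round-robin queue assignment for previously unseen flows; both choices, and the metadata-string format, are illustrative assumptions rather than details from the patent.

```c
#include <string.h>
#include <stdio.h>

#define DB_SLOTS   16
#define NUM_QUEUES 4

/* Hypothetical "result database" entry: a flow's metadata string and the
 * queue (core) it was pinned to when first seen. */
typedef struct {
    char metadata[64];   /* e.g. "10.0.0.1:1234->10.0.0.2:80/tcp" */
    int  queue_id;       /* -1 marks an unused slot */
} result_entry_t;

static result_entry_t result_db[DB_SLOTS];
static int next_queue;   /* round-robin pointer for new flows */

static void result_db_init(void)
{
    for (int i = 0; i < DB_SLOTS; i++)
        result_db[i].queue_id = -1;
    next_queue = 0;
}

/* Associate a metadata string with a result-database entry: a known flow
 * keeps its queue; a new flow is recorded and given the next queue in
 * round-robin order. Returns the queue id, or -1 if the database is full. */
static int classify(const char *metadata)
{
    int free_slot = -1;
    for (int i = 0; i < DB_SLOTS; i++) {
        if (result_db[i].queue_id < 0) {
            if (free_slot < 0)
                free_slot = i;
        } else if (strcmp(result_db[i].metadata, metadata) == 0) {
            return result_db[i].queue_id;      /* existing flow */
        }
    }
    if (free_slot < 0)
        return -1;
    snprintf(result_db[free_slot].metadata,
             sizeof result_db[free_slot].metadata, "%s", metadata);
    result_db[free_slot].queue_id = next_queue;
    next_queue = (next_queue + 1) % NUM_QUEUES;
    return result_db[free_slot].queue_id;
}
```

The stable mapping from metadata string to queue id is what preserves per-flow ordering: repeated lookups for the same string always return the queue recorded at first sight.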
20. A server on a chip, comprising:
means for extracting a metadata string from a packet;
means for associating the packet with a respective core of a multiprocessor based on the metadata string;
means for assigning the packet to a queue associated with the processor; and
means for enqueuing a descriptor message in the queue, the descriptor message comprising data that indicates a presence of the packet and a location of the packet in a memory.
Specification