Method and apparatus for the management of queue pointers by multiple processors in a digital communications network
Abstract
A method for managing a buffer queue that stores a data queue, wherein the data queue comprises a set of n data elements, n being at least zero. A head pointer is stored at a first location, which may be in a cache controlled by a first processor. The head pointer indicates a head buffer of the buffer queue. The first processor reads the head pointer to determine the head buffer of the buffer queue when a data element is to be removed from the data queue. The first processor reads a next pointer of the head buffer to determine whether the data queue is empty. The first processor determines that the data queue is empty when the next pointer has a first value, which indicates that the head buffer is a dummy buffer.
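The emptiness test described in the abstract can be sketched in a few lines. This is an illustrative model only, not code from the patent: the `Buffer` class and `queue_is_empty` names are assumptions, and `None` stands in for the "first value" that marks the head buffer as a dummy buffer.

```python
class Buffer:
    """One buffer of the buffer queue: holds a data element and a next
    pointer linking it to the following buffer, as in the claims."""
    def __init__(self, data=None, next=None):
        self.data = data
        self.next = next  # None plays the role of the "first value"

def queue_is_empty(head_buffer):
    # The first processor reads the head buffer's next pointer; the data
    # queue is empty exactly when that pointer holds the sentinel value,
    # meaning the head buffer is the dummy buffer.
    return head_buffer.next is None
```

Note that the first processor decides emptiness without ever reading the tail pointer, which is what lets the head and tail pointers live in independent caches.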
21 Claims
1. A service module, comprising:
a buffer queue that stores a data queue, the buffer queue comprising a plurality of buffers being linked to one another as a linked list using next pointers;
a first processor including a first cache coupled to the buffer queue; and
a second processor including a second cache coupled to the buffer queue, wherein the first processor is configured to store a head pointer of the buffer queue in the first cache and the second processor is configured to store a tail pointer of the buffer queue in the second cache, the first cache being independent of the second cache. - View Dependent Claims (2)
3. A circuit, comprising:
a buffer queue that stores a data queue;
a first processor coupled to the buffer pool; and
a second processor coupled to the buffer pool, wherein the first processor includes a first memory location that stores a head pointer of the buffer queue and the second processor includes a second memory location that stores a tail pointer of the buffer queue, the head pointer and the tail pointer always pointing to a buffer, the first memory location being independent of the second memory location. - View Dependent Claims (4, 5)
6. A method for managing a buffer queue that stores a data queue, wherein the data queue comprises a set of n data elements, n being at least zero, the method comprising the steps of:
storing a head pointer at a first location, the head pointer indicating a head buffer of the buffer queue;
storing a tail pointer in a second location, the tail pointer indicating a tail buffer of the buffer queue;
reading the head pointer with a first processor to determine the head buffer of the buffer queue when a data element is to be removed from the data queue, the buffer queue comprising a plurality of buffers for storing the data elements, the buffers being linked to one another as a linked list using next pointers;
reading a next pointer of the head buffer with the first processor;
using the first processor to set the head pointer to point to a new head buffer when the next pointer has a first value indicative of the new head buffer, to read a first data element of the data queue stored by the new head buffer, and to maintain the head pointer to indicate the new head buffer, otherwise determining that the data queue is empty when the next pointer has a second value which indicates that the head buffer is a dummy buffer;
adding a new data element to the data queue with a second processor by adding a new tail buffer to the buffer queue, the new tail buffer storing the new data element; and
updating the tail pointer using the second processor to indicate the new tail buffer as being the tail buffer. - View Dependent Claims (7, 8, 9, 10, 11)
7. The method of claim 6, further comprising the steps of:

initially determining whether the data queue is empty; and
reading using the first processor a first data element stored by the head buffer prior to the step of reading the next pointer of the head buffer.
8. The method of claim 6, wherein the next pointer is stored as part of the head buffer, the step of reading the next pointer comprising the step of reading the head buffer.
9. The method of claim 6, wherein the step of storing the head pointer in a first location comprises the step of storing the head pointer in a first cache memory controlled by the first processor.
10. The method of claim 6, wherein the first processor sets an empty flag to indicate that the data queue is empty upon determining that the data queue is empty.
11. The method of claim 6, wherein the step of storing the tail pointer in a second location comprises the step of storing the tail pointer in a second cache memory controlled by the second processor.
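The method of claims 6 through 11 can be modeled as a single-producer, single-consumer linked-list queue with a dummy buffer. The sketch below is illustrative, not the patented implementation: class and method names are assumptions, and a single-threaded model stands in for the two processors. The point it demonstrates is that `enqueue` (the second processor) touches only the tail pointer and `dequeue` (the first processor) touches only the head pointer.

```python
class Buffer:
    def __init__(self, data=None):
        self.data = data
        self.next = None  # None is the end-of-list sentinel ("first value")

class TwoPointerQueue:
    def __init__(self):
        dummy = Buffer()   # the queue always contains at least the dummy
        self.head = dummy  # read/written only by the dequeuing processor
        self.tail = dummy  # read/written only by the enqueuing processor

    def enqueue(self, data):
        """Second processor: link a new tail buffer, then update tail."""
        new_tail = Buffer(data)
        self.tail.next = new_tail  # link into the list first
        self.tail = new_tail       # then advance the private tail pointer

    def dequeue(self):
        """First processor: advance head via the head buffer's next pointer."""
        nxt = self.head.next
        if nxt is None:       # head buffer is the dummy: data queue is empty
            return None
        self.head = nxt       # old head buffer is retired
        data = nxt.data       # read the data element stored by the new head
        nxt.data = None       # new head now serves as the dummy buffer
        return data
```

Because neither side ever writes the other side's pointer, the head and tail pointers can sit in separate caches (claims 9 and 11) without coherence traffic between the processors.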
12. A circuit for managing a buffer queue that stores a data queue comprising a set of n data elements, the circuit comprising:
a first memory storing a head pointer indicating a head data element of the queue;
a first processor coupled to the first memory for removing data elements from the data queue, for reading the head pointer to determine a head buffer of the buffer queue when a data element is to be removed from the data queue, for reading a next pointer of the head buffer, for determining that the data queue is empty when the next pointer has a first value which indicates that the head buffer is a dummy buffer, for setting the head pointer to point to a new head buffer when the next pointer has a second value indicative of the new head buffer, for reading a first data element of the data queue stored by the new head buffer, and for maintaining the head pointer to indicate the new head buffer;
a second memory for storing a tail pointer, the tail pointer indicating a tail data element of the queue; and
a second processor for adding a new data element to the queue, for updating the tail pointer to indicate a new tail buffer that stores the new data element as being the tail buffer, wherein the first processor updates the head pointer to indicate the new data element as being the head data element in response to detecting that the next pointer has the second value, wherein the buffer queue comprises a plurality of buffers for storing the data elements, the buffers being linked to one another as a linked list using next pointers. - View Dependent Claims (13, 14)
15. A circuit for managing a buffer queue for storing a data queue comprising a set of n data elements, wherein n is at least zero, the circuit comprising:
a buffer pool including a plurality of buffers being linked to one another as a linked list using next pointers;
a first processor coupled to the buffer pool;
a first memory location coupled to and accessed only by the first processor, the first memory location storing a head pointer indicative of a head buffer of the buffer queue, wherein the head pointer always points to a buffer;
a second processor coupled to the buffer pool;
a second memory location coupled to and accessed only by the second processor, the second memory location storing a tail pointer indicative of a tail buffer of the buffer queue, wherein the second processor adds buffers of the buffer pool to the buffer queue and the first processor removes buffers from the buffer queue and returns them to the buffer pool. - View Dependent Claims (16, 17, 18)
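Claim 15 adds a buffer pool from which the second processor draws buffers and to which the first processor returns them. A minimal free-list sketch of that recycling pattern follows; the `BufferPool` class and its method names are hypothetical, chosen only to mirror the claim language.

```python
class Buffer:
    def __init__(self):
        self.data = None
        self.next = None

class BufferPool:
    """Free list of pre-allocated buffers. The second (enqueuing) processor
    takes buffers from the pool to extend the buffer queue; the first
    (dequeuing) processor returns retired head buffers to the pool."""
    def __init__(self, size):
        self.free = [Buffer() for _ in range(size)]

    def allocate(self):
        # Hand out a free buffer, or None if the pool is exhausted.
        return self.free.pop() if self.free else None

    def release(self, buf):
        # Clear the buffer before recycling so stale links cannot leak
        # back into the queue.
        buf.data = None
        buf.next = None
        self.free.append(buf)
```

In a two-processor realization the free list itself would need its own synchronization or per-processor sub-pools; the claims leave that mechanism open.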
19. A service module comprising:
buffering means for storing a data queue, the buffering means comprising a plurality of buffers being linked to one another as a linked list using next pointers;
first processor means including a first cache coupled to the buffering means; and
second processor means including a second cache coupled to the buffering means, wherein the first processor means is configured to store a head pointer of the buffering means in the first cache and the second processor means is configured to store a tail pointer of the buffering means in the second cache, the first cache being independent of the second cache. - View Dependent Claims (20)
21. A circuit, comprising:
buffering means for storing a data queue;
first processor means coupled to the buffering means; and
second processor means coupled to the buffering means, wherein the first processor means includes a first memory means that stores a head pointer of the buffering means and the second processor means includes a second memory means that stores a tail pointer of the buffering means, the head pointer and the tail pointer always pointing to a buffer, the first memory means being independent of the second memory means.
Specification