Real-time channel-based reflective memory based upon timeliness requirements
Abstract
A computer network guarantees timeliness to distributed real-time applications by allowing an application to specify its timeliness requirements and by ensuring that a data source can meet the specified requirements. A reflective memory area is established by either a data source or an application. A data source maps onto this reflective memory area and writes data into it. In order to receive data from this data source, an application requests attachment to the reflective memory area to which the data source is mapped and specifies timeliness requirements. The application may specify that it needs data either periodically or upon occurrence of some condition. The application allocates buffers at its local node to receive data. The data source then establishes a data push agent thread at its local node, and a virtual channel over the computer network between the data push agent thread and the application attached to its reflective memory area. The data push agent thread transmits data to the application over the virtual channel according to the timeliness requirements specified by the application. Such a channel-based reflective memory system simplifies data sharing and communication by exploiting the typically unidirectional pattern of data sharing and communication. For example, plant data typically is sent from a plant controller to an operator station, and control data typically is sent from an operator station to a plant controller. Additionally, a single-writer, multiple-reader model of communication is typically sufficient; that is, all of the data does not need to be transmitted to all of the nodes in a computer network all of the time. Thus, flexibility, switchability and scalability are provided by using channels between reader and writer groups, which control data reflection and represent the unidirectional access pattern.
By using an asynchronous transfer mode network, flexibility in channel establishment and cost reduction may be achieved.
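The attach-and-push flow described in the abstract can be sketched in a single process; this is a minimal model (queues standing in for virtual channels, a thread for the data push agent), with all class and method names illustrative assumptions rather than terms from the patent:

```python
import threading
import time
import queue

class ReflectiveMemoryArea:
    def __init__(self):
        self._lock = threading.Lock()
        self._value = None
        self._readers = []  # (channel, period_seconds) pairs

    def write(self, value):
        # Data source writes into its local reflective memory area.
        with self._lock:
            self._value = value

    def attach(self, period):
        # A reader attaches with a periodic timeliness requirement and gets
        # back a "channel" (a queue standing in for a virtual channel).
        channel = queue.Queue()
        self._readers.append((channel, period))
        return channel

    def start_push_agent(self, stop):
        # The data push agent thread pushes the current value to every
        # attached reader, then sleeps until the shortest period elapses.
        def agent():
            while not stop.is_set():
                with self._lock:
                    value = self._value
                for channel, _period in self._readers:
                    channel.put(value)
                time.sleep(min(period for _, period in self._readers))
        threading.Thread(target=agent, daemon=True).start()
```

Note the direction of flow: the agent pushes on its own schedule rather than waiting to be polled, which is the point of the timeliness guarantee.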
35 Claims
1. A method of distributing information over a communications network, comprising the steps of:
writing information to a first memory located at a first node on a communications network;
associating a second memory with the first memory, the second memory being located at a second node, different than the first node, on the communication network;
associating a timeliness requirement with the second memory;
associating a required number of histories of the written information with the second memory, wherein the second memory is a circular buffer sized to have a number of buffers equal to the required number of histories plus two in order to avoid locking of said circular buffer due to conflicting reads, conflicting writes, or a conflicting read and write;
establishing a process at the first node for pushing the information from the first memory to the second memory in accordance with the associated timeliness requirement;
establishing a channel over the communications network between the first memory and the second memory;
pushing the written information from the first memory to the second memory in accordance with the established process via the channel.
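The sizing rule in the claim (number of buffers = required histories + 2) can be checked with a small sketch. Assuming 0-based slot indices (an illustrative convention, not from the claim), the slot being written and the next write target are both excluded from the readable window, so reads and writes never collide and no lock is needed:

```python
# Sizing rule from the claim: for H required histories, use a circular
# buffer of N = H + 2 slots (illustrative sketch, 0-based indices).

def buffer_size(histories):
    # Number of buffers = required histories + 2.
    return histories + 2

def read_slots(writer_slot, histories):
    # Slots a reader may safely read while the writer occupies writer_slot:
    # the H completed slots starting two past the writer. This window never
    # contains writer_slot or the next write target (writer_slot + 1),
    # which is why no locking is required.
    n = buffer_size(histories)
    return [(writer_slot + 2 + k) % n for k in range(histories)]
```

For example, with one required history (N = 3) and the writer in slot 0, the only readable slot is slot 2; slots 0 (being written) and 1 (written next) are left alone.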
6. A method according to claim 1, further comprising the step of:
selecting, as the timeliness requirement, one of pushing the written information (i) responsive to passage of a particular period of time since last pushing information from the first memory to the second memory, (ii) responsive to the writing of the information to the first memory, and (iii) responsive to an occurrence of a particular condition unrelated to the writing of the information to the first memory.
7. A method according to claim 6, further comprising the step of:
identifying a particular period as a maximum delay period for the pushing of the written information.
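The three timeliness modes enumerated above (periodic, on-write, on-condition) can be sketched as predicates a push agent might evaluate before each push; the function names and signatures are illustrative assumptions:

```python
# Predicates for the three timeliness modes (illustrative sketch).

def periodic_due(now, last_push, period):
    # (i) push when a particular period has passed since the last push.
    return now - last_push >= period

def on_write_due(write_count, pushed_count):
    # (ii) push whenever information has been written but not yet pushed.
    return write_count > pushed_count

def on_condition_due(condition):
    # (iii) push upon occurrence of a condition unrelated to the writing.
    return condition()
```

Claim 7's "maximum delay period" corresponds to the `period` argument of the periodic predicate.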
8. A method according to claim 1, wherein the process includes a data push agent thread.
9. A method according to claim 8, further comprising the steps of:
associating a third memory with the first memory, the third memory being located at a third node, different than the first node and the second node, on the communications network;
associating a timeliness requirement with the third memory;
multiplexing the established process at the first node to push the information from the first memory to the third memory in accordance with the timeliness requirement associated with the third memory;
establishing another channel over the communications network between the first memory and the third memory; and
pushing the information from the first memory to the third memory in accordance with the multiplexed established process via the other channel.
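One way to picture the multiplexed push process of claim 9 is a single agent serving several destination memories, each with its own period, by always taking the earliest next deadline. This deadline-heap sketch is an assumption about one possible realization, not the patented implementation:

```python
import heapq

def schedule_pushes(periods, horizon):
    # periods: destination -> push period. Returns the (time, destination)
    # sequence one multiplexed agent would emit up to the time horizon,
    # always serving the earliest next deadline first.
    heap = [(period, dest) for dest, period in periods.items()]
    heapq.heapify(heap)
    pushes = []
    while heap and heap[0][0] <= horizon:
        t, dest = heapq.heappop(heap)
        pushes.append((t, dest))
        heapq.heappush(heap, (t + periods[dest], dest))
    return pushes
```

With different periods the destinations are served asynchronously (claim 10); with equal periods their deadlines coincide and they are served together (claim 11).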
10. A method according to claim 9, wherein:
the timeliness requirement associated with the third memory is different than the timeliness requirement associated with the second memory; and
the multiplexed process at the first node pushes the information from the first memory to the third memory and from the first memory to the second memory asynchronously.
11. A method according to claim 9, wherein:
the timeliness requirement associated with the third memory is the same as the timeliness requirement associated with the second memory; and
the multiplexed process at the first node pushes the information from the first memory to the third memory and from the first memory to the second memory synchronously.
12. A method according to claim 1, wherein the established process at the first node is further established to read the written information from the first memory, and further comprising the step of:
reading the written information from the first memory using the established process;
wherein the read information is pushed from the first memory to the second memory in accordance with the established process via the channel.
13. A method according to claim 1, wherein:
the information is written to a particular area of the first memory having a first address;
the written information is pushed to a particular area of the second memory having a second address.
14. A method according to claim 1, further comprising the step of:
establishing a definition table for the first memory having:
a respective identifier of each defined reflective memory area of the first memory;
a respective identifier of each network node authorized to write information to any defined memory area, each respective write node identifier being associated with the respective identifier of each defined memory area to which it is authorized to write;
respective update periods, each representing a different time period at which one or more of the defined memory areas are to be updated, each respective update period being associated with the respective identifier of each defined memory area having the represented update time period; and
a respective identifier of each network node authorized to read information from one or more of the defined memory areas, each respective read node identifier being associated with the respective identifier of each defined memory area from which it is authorized to read.
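The definition table of claim 14 can be modeled as a mapping from area identifiers to authorized writer nodes, authorized reader nodes, and an update period. The field names are illustrative, and the node names are borrowed from the abstract's plant-controller/operator-station example:

```python
# Illustrative model of the claim's definition table (field and node
# names are assumptions, not from the patent text).

definition_table = {
    "plant_data": {
        "writers": ["plant_controller"],
        "readers": ["operator_station_1", "operator_station_2"],
        "update_period_ms": 100,
    },
    "control_data": {
        "writers": ["operator_station_1"],
        "readers": ["plant_controller"],
        "update_period_ms": 50,
    },
}

def may_write(node, area):
    # A node may write only to areas listing it as an authorized writer.
    return node in definition_table[area]["writers"]

def may_read(node, area):
    # A node may read only from areas listing it as an authorized reader.
    return node in definition_table[area]["readers"]
```

The per-area update period is what associates a timeliness requirement with each defined memory area.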
15. A method according to claim 1, wherein the second memory includes a plurality of buffers arranged in a logically circular disposition, each of the plurality of buffers having a respective index number I, where I={1,2, . . . ,N} and further comprising the steps of:
selecting a first of the plurality of buffers to which the information is to be written, the first buffer having an index number I1; and
selecting a second of the plurality of buffers from which previously stored information is to be read, the second buffer having an index number I2 = (I1 + 2) mod N.
16. A method according to claim 15, further comprising the step of:
selecting a third of the plurality of buffers from which the information is to be read, the third buffer having an index number I3 = (I1 − 1) mod N; and
reading the third buffer after reading the second buffer.
17. The method according to claim 16, wherein:
N=3; and
I2=I3.
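The index arithmetic of claims 15-17 is easy to verify; the sketch below uses 0-based indices to match Python's `%` operator (the claims use indices {1, ..., N}):

```python
# While the writer occupies buffer i1, a reader starts at
# i2 = (i1 + 2) mod n and ends at i3 = (i1 - 1) mod n
# (0-based rendering of claims 15 and 16).

def read_range(i1, n):
    i2 = (i1 + 2) % n
    i3 = (i1 - 1) % n
    return i2, i3
```

With N = 3 the start and end indices coincide ((i1 + 2) and (i1 − 1) are congruent mod 3), which is exactly claim 17's statement that I2 = I3.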
18. A method according to claim 1, wherein the second memory includes a plurality of buffers arranged in a logically circular disposition, each of the plurality of buffers having a respective index number I, where I={1,2, . . . ,N}, and further comprising the steps of:
selecting a first of the plurality of buffers to which the information is to be written, the first buffer having an index number I1; and
selecting a second of the plurality of buffers to which an update of the information is to be written, the second buffer having an index number I2 = (I1 + 1) mod N.
19. A method according to claim 1, further comprising the steps of:
writing other information to the first memory;
writing the pushed information in its entirety in the second memory; and
selecting one of (i) reading the pushed information written to the second memory after the other information is written to the first memory and prior to the other information being pushed to the second memory and (ii) blocking the reading of the information written to the second memory after the other information is written to the first memory and prior to the other information being pushed to the second memory.
20. A system for distributing information, comprising:
a communication network connecting a plurality of network nodes;
a first memory located at a first of the plurality of nodes and configured to have information written thereto in accordance with a first timeliness requirement;
a second memory located at a second of the plurality of nodes, different than the first node, and configured to have the information written thereto in accordance with a second timeliness requirement, wherein a required number of histories of the written information is associated with the second memory, and wherein the second memory is a circular buffer sized to have a number of buffers equal to the required number of histories plus two in order to avoid locking of said circular buffer due to conflicting reads, conflicting writes, or a conflicting read and write;
a reflective memory located at a third of the plurality of nodes, different than the first node and the second node, and configured to have the information written thereto;
a server associated with the reflective memory and configured to establish a first channel over the network between the reflective memory and the first memory, to establish a second channel over the network between the reflective memory and the second memory, to push the information from the reflective memory to the first memory via the first channel in accordance with the first timeliness requirement, and to push the information from the reflective memory to the second memory via the second channel in accordance with the second timeliness requirement.
21. A system according to claim 20, wherein:
the first memory is further configured to have the information written thereto in accordance with a first quality of service requirement which includes the first timeliness requirement;
the second memory is further configured to have the information written thereto in accordance with a second quality of service requirement which includes the second timeliness requirement; and
the server is further configured to push the information from the reflective memory to the first memory via the first channel in accordance with the first quality of service requirement, and to push the information from the reflective memory to the second memory via the second channel in accordance with the second quality of service requirement.
24. A system according to claim 20, further comprising:
a third memory located at a fourth of the plurality of nodes, different than the first node, the second node and the third node, and configured to have the information written thereto in accordance with a third timeliness requirement;
wherein the server is further configured to establish a third channel over the network between the reflective memory and the third memory, and to push the information from the reflective memory to the third memory via the third channel in accordance with the third timeliness requirement;
wherein the first timeliness requirement requires the information to be pushed from the reflective memory to the first memory responsive to passage of a particular period of time since a last pushing of information from the reflective memory to the first memory in accordance with the first timeliness requirement;
wherein the second timeliness requirement requires the information to be pushed from the reflective memory to the second memory responsive to the writing of the information to the reflective memory; and
wherein the third timeliness requirement requires the information to be pushed from the reflective memory to the third memory responsive to an occurrence of a particular condition unrelated to the writing of the information to the reflective memory.
25. A system according to claim 20, wherein:
the first memory is further configured to have the information written thereto in accordance with a first quality of service requirement which includes the first timeliness requirement; and
the first quality of service requirement includes a required number of histories of the information to be written to the first memory.
26. A system according to claim 25, wherein the first memory is a circular buffer having a number of buffers equal to the required number of histories plus two.
27. A system according to claim 20, wherein the server pushes the information under the direction of a data push agent thread.
28. A system according to claim 20, wherein:
the server is further configured to push the information from the reflective memory to the first memory via the first channel and to the second memory via the second channel by multiplexing a communication.
29. A system according to claim 28, wherein:
the first timeliness requirement is different than the second timeliness requirement; and
the server multiplexes the communication so that the information is pushed from the reflective memory to the first memory and from the reflective memory to the second memory asynchronously.
30. A system according to claim 28, wherein:
the first timeliness requirement is the same as the second timeliness requirement; and
the server multiplexes the communication so that the information is pushed from the reflective memory to the first memory and from the reflective memory to the second memory synchronously.
31. A system according to claim 20, wherein:
the server includes a definition table for the reflective memory having:
a respective identifier of each defined reflective memory area;
a respective identifier of each of the plurality of network nodes authorized to write information to any defined memory area, each respective write node identifier being associated with the respective identifier of each defined memory area to which it is authorized to write;
respective update periods, each representing a different time period at which one or more of the defined memory areas are to be updated, each respective update period being associated with the respective identifier of each defined memory area having the represented update time period; and
a respective identifier of each of the plurality of network nodes authorized to read information from one or more of the defined memory areas, each respective read node identifier being associated with the respective identifier of each defined memory area from which it is authorized to read.
32. A system according to claim 20, wherein:
the first memory includes a plurality of buffers arranged in a logically circular disposition, each of the plurality of buffers having a respective index number I, where I={1,2, . . . ,N}; and
a first of the plurality of buffers to which the information is to be written has an index number I1; and
a second of the plurality of buffers from which a reading of previously written information is to start has an index number I2 = (I1 + 2) mod N.
33. A system according to claim 32, wherein:
a third of the plurality of buffers at which a reading of the previously written information is to end has an index number I3 = (I1 − 1) mod N.
34. A system according to claim 33, wherein:
N=3; and
I2=I3.
35. A system according to claim 20, wherein:
the second memory includes a plurality of buffers arranged in a logically circular disposition, each of the plurality of buffers having a respective index number I, where I={1,2, . . . ,N};
a first of the plurality of buffers to which the pushed information is written has an index number I1; and
a second of the plurality of buffers to which an update of the pushed information is to be written has an index number I2 = (I1 + 1) mod N.
Specification