Dynamically processing an event using an extensible data model
Abstract
Systems and methods of dynamically processing an event using an extensible data model are disclosed. One embodiment includes specifying attributes of the event in a data model, the data model being extensible to add properties to the event as the dataset is streamed from the source to the sink.
169 Citations
20 Claims
1. A method for dynamically processing an event including a dataset that is streamed from a source to a sink via nodes, the method comprising:

recording, at a respective node, the event including the dataset in a memory of the respective node using a data model;
extracting key-value data from the dataset;
annotating the event by adding or updating one or more attributes associated with the event in the data model based on analyzing the extracted key-value data, the data model being extensible to add additional attributes to the event by a subsequent node which is configured to further process the dataset as the event is streamed from the source to the sink; and
specifying, based on the key-value data, one or more fields of the event in the data model, wherein the one or more fields include one or more of:
a timestamp field, a source machine field, a body field, or a priority field.
Dependent claims: 2, 3, 4, 5, 6, 7, 8.
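The per-node annotation flow recited in claim 1 can be illustrated with a minimal sketch. This is not the patented implementation: the `Event` class, its field names, the `k=v` parsing scheme, and the `processed_by_*` attribute keys are all assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """Hypothetical event record: fixed fields plus an open attribute map."""
    body: bytes
    timestamp: float
    source_machine: str
    priority: str = "INFO"
    attributes: dict = field(default_factory=dict)  # extensible by later nodes

def extract_key_values(body: bytes) -> dict:
    """Extract 'key=value' pairs from the raw dataset body."""
    pairs = {}
    for token in body.decode(errors="replace").split():
        key, sep, value = token.partition("=")
        if sep:  # only tokens that actually contain '='
            pairs[key] = value
    return pairs

def annotate(event: Event, node_name: str) -> Event:
    """Record the event at a node: analyze key-value data and add attributes."""
    event.attributes.update(extract_key_values(event.body))
    # each node may add its own attributes without changing the schema
    event.attributes["processed_by_" + node_name] = "true"
    return event

event = Event(body=b"user=alice action=login", timestamp=1.0,
              source_machine="web01")
annotate(event, "agent")      # a source-side node annotates first
annotate(event, "collector")  # a subsequent node extends the same event
```

Keeping the fixed fields (timestamp, source machine, body, priority) alongside an open attribute map is what makes the model extensible: downstream nodes add attributes without altering the schema the upstream nodes wrote.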
9. A non-transitory computer readable medium storing a plurality of instructions which, upon execution by a processor, cause the processor to perform a method for dynamically processing an event including a dataset that is streamed from a source to a sink via nodes, the method comprising:

recording, at a respective node, the event including the dataset in a memory of the respective node using a data model;
extracting key-value data from the dataset;
annotating the event by adding or updating one or more attributes associated with the event in the data model based on analyzing the extracted key-value data, the data model being extensible to add additional attributes to the event by a subsequent node which is configured to further process the dataset as the event is streamed from the source to the sink; and
specifying, based on the key-value data, one or more fields of the event in the data model, wherein the one or more fields include one or more of:
a timestamp field, a source machine field, a body field, or a priority field.
Dependent claims: 10, 11, 12, 13.
14. A system having a processor and a memory, the memory storing a plurality of instructions which, when executed by the processor, cause the processor to perform a method for dynamically processing an event including a dataset that is streamed from a source to a sink via nodes, the method comprising:

recording, at a respective node, the event including the dataset in a memory of the respective node using a data model;
extracting key-value data from the dataset;
annotating the event by adding or updating one or more attributes associated with the event in the data model based on analyzing the extracted key-value data, the data model being extensible to add additional attributes to the event by a subsequent node which is configured to further process the dataset as the event is streamed from the source to the sink; and
specifying, based on the key-value data, one or more fields of the event in the data model, wherein the one or more fields include one or more of:
a timestamp field, a source machine field, a body field, or a priority field.
Dependent claims: 15, 16, 17, 20.
18. A method for collecting and aggregating datasets for storage in a file system with fault tolerance, the file system including an agent node, a collector node, and a master node, the method comprising:

collecting the datasets via an agent node operating in a remote machine, wherein the datasets include a batch of messages written by the remote machine, and wherein the batch of messages is processed by the agent node when the size or an elapsed time of the batch of messages reaches a selected threshold;
generating, by the agent node, a batch identifier (ID) for the batch of messages;
assigning, by the agent node, an event tag to the batch of messages;
computing, by the agent node, a checksum for the batch of messages;
writing, by the agent node, the batch of messages along with the batch ID, the event tag, and the checksum as an entry in a write-ahead-log (WAL) storage maintained by the agent node on the remote machine;
transmitting, by the agent node, the datasets to a collector machine, wherein the datasets are transmitted in a data model as an event, the data model being extensible to add additional attributes to the event by a subsequent node which is configured to further process the dataset as the event is streamed from a source to a sink;
verifying the checksum by a collector node operating in the collector machine;
if the checksum is verified, adding, by the collector node, a tag to a map of tags, wherein the map of tags is associated with multiple tags assigned to multiple batches of messages from the datasets;
writing, by the collector node, the datasets to a destination location; and
if the batch of messages has been successfully written to the destination location, publishing, by the collector node, the tag to the master node.
Dependent claims: 19.
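The agent/collector exchange recited in claim 18 can be sketched as follows. This is a hypothetical illustration rather than the patented implementation: the UUID-based batch ID scheme, the SHA-256 checksum, and the in-memory stand-ins for the WAL entry, tag map, and destination are all assumptions.

```python
import hashlib
import uuid

def agent_make_batch(messages):
    """Agent side: assign a batch ID and event tag, compute a checksum,
    and build the entry that would be appended to the write-ahead log."""
    batch_id = uuid.uuid4().hex                # hypothetical ID scheme
    tag = "tag-" + batch_id[:8]                # hypothetical tag format
    payload = "\n".join(messages).encode()
    checksum = hashlib.sha256(payload).hexdigest()
    return {"batch_id": batch_id, "tag": tag,
            "checksum": checksum, "messages": messages}

def collector_receive(wal_entry, tag_map, destination):
    """Collector side: verify the checksum before accepting the batch;
    on success, record the tag and write the messages, then return the
    tag that would be published to the master node."""
    payload = "\n".join(wal_entry["messages"]).encode()
    if hashlib.sha256(payload).hexdigest() != wal_entry["checksum"]:
        return None  # corrupt batch: do not acknowledge
    tag_map[wal_entry["tag"]] = wal_entry["batch_id"]
    destination.extend(wal_entry["messages"])
    return wal_entry["tag"]

entry = agent_make_batch(["m1", "m2"])
tag_map, dest = {}, []
published_tag = collector_receive(entry, tag_map, dest)
```

Because the batch stays in the agent's WAL until its tag is published back through the master, a batch whose checksum fails verification (returning `None` above) can simply be retransmitted from the log, which is the fault-tolerance property the claim describes.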
Specification