Method and system for accelerated stream processing
Abstract
Disclosed herein are methods and systems for hardware-accelerating various data processing operations in a rule-based decision-making system such as a business rules engine, an event stream processor, or a complex event stream processor. Preferably, incoming data streams are checked against a plurality of rule conditions. The hardware-accelerated data processing operations include rule condition check operations, filtering operations, and path merging operations. The rule condition check operations generate rule condition check results for the processed data streams, wherein the rule condition check results are indicative of any rule conditions which have been satisfied by the data streams. Generating such results with low latency gives enterprises the ability to perform timely decision-making based on the data present in received data streams.
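The rule condition checking described above can be illustrated with a minimal software sketch. This is an assumption-laden analogue of the hardware-accelerated operation, not the patented implementation: the record fields, rule names, and thresholds below are all invented for illustration.

```python
# Hypothetical rule set: each named rule condition is a predicate over a record.
RULES = {
    "high_price": lambda rec: rec["price"] > 100.0,
    "large_size": lambda rec: rec["size"] >= 1_000,
    "flagged_symbol": lambda rec: rec["symbol"] in {"XYZ", "ABC"},
}

def check_conditions(record):
    """Return the set of rule conditions satisfied by one stream record."""
    return {name for name, cond in RULES.items() if cond(record)}

# A toy data stream of two records.
stream = [
    {"symbol": "XYZ", "price": 101.5, "size": 200},
    {"symbol": "DEF", "price": 99.0, "size": 5_000},
]

# One condition-check result per record, indicating which rules fired.
results = [check_conditions(rec) for rec in stream]
# results[0] -> {"high_price", "flagged_symbol"}; results[1] -> {"large_size"}
```

In hardware, each predicate would be evaluated concurrently by dedicated logic rather than sequentially as here; the software loop only shows what result the condition checks produce.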
346 Citations
50 Claims
1. A system for applying parallelism to process streaming data at low latency and high throughput, the streaming data comprising data arranged in a plurality of fields, the system comprising:

at least one member of the group consisting of (1) a reconfigurable logic device, (2) a graphics processor unit (GPU), and (3) a chip multi-processor (CMP);

wherein a processing pipeline is deployed on the at least one member for receiving and processing the streaming data, the processing pipeline including a plurality of parallel paths, each of a plurality of the parallel paths including pipelined logic for performing different processing operations on the streaming data;

wherein each of a plurality of the parallel paths includes field selection logic that filters which fields of the streaming data the downstream pipelined logic in that parallel path will process, wherein a plurality of the parallel paths include field selection logic that filters for different fields of the streaming data so that the processing pipeline is thereby configured to parallel process different fields of the streaming data in different parallel paths with different processing operations.

Dependent claims: 2-19, 45-50.
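The structure of claim 1 can be sketched in plain Python: each "parallel path" pairs field selection logic (the fields it keeps) with a different downstream processing operation. The field names and operations below are illustrative assumptions, and threads stand in loosely for hardware parallelism.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical paths: (fields this path's selection logic passes, operation).
PATHS = [
    ({"price"}, lambda fields: {"half_price": fields["price"] * 0.5}),
    ({"symbol"}, lambda fields: {"upper": fields["symbol"].upper()}),
]

def run_path(select_fields, operation, record):
    # Field selection logic: filter the record down to this path's fields.
    selected = {k: v for k, v in record.items() if k in select_fields}
    # Downstream pipelined logic: the path-specific processing operation.
    return operation(selected)

def process(record):
    # Different fields of the same record are processed in different
    # parallel paths with different operations.
    with ThreadPoolExecutor(max_workers=len(PATHS)) as pool:
        futures = [pool.submit(run_path, f, op, record) for f, op in PATHS]
        return [fut.result() for fut in futures]

out = process({"symbol": "xyz", "price": 10.0, "size": 100})
# out -> [{"half_price": 5.0}, {"upper": "XYZ"}]
```

On a reconfigurable logic device the paths would be physically separate pipelines clocked in lockstep; the thread pool here only mimics the fan-out of one record to all paths.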
20. A method for applying parallelism to process streaming data at low latency and high throughput, the streaming data comprising data arranged in a plurality of fields, the method comprising:

receiving the streaming data at a processing pipeline deployed on at least one member of the group consisting of (1) a reconfigurable logic device, (2) a graphics processor unit (GPU), and (3) a chip multi-processor (CMP), wherein the processing pipeline includes a plurality of parallel paths, each of a plurality of the parallel paths including field selection logic and downstream pipelined logic; and

processing the streaming data through the processing pipeline, wherein the processing step comprises, for each of a plurality of the parallel paths: (1) the field selection logic filtering which fields of the streaming data the downstream pipelined logic in that parallel path will process, and (2) the downstream pipelined logic performing a plurality of different processing operations on the filtered streaming data so that the processing pipeline thereby parallel processes different fields of the streaming data in different parallel paths with different processing operations.

Dependent claims: 21-44.
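The method of claim 20, together with the path merging mentioned in the abstract, can be sketched as a per-record fan-out and merge. Everything concrete here is an assumption for illustration: the paths, the chain of operations inside each path, and the merge of per-path outputs into one result per record.

```python
# Hypothetical paths: (fields the selection logic keeps, chain of operations).
PATHS = [
    ({"price"}, [lambda f: {"price_x2": f["price"] * 2}]),
    ({"size"}, [lambda f: {"size_ok": f["size"] >= 100}]),
]

def process_stream(stream):
    for record in stream:
        merged = {}
        for select_fields, operations in PATHS:
            # Step (1): field selection logic filters the record's fields.
            selected = {k: v for k, v in record.items() if k in select_fields}
            # Step (2): downstream pipelined logic applies its operations.
            for op in operations:
                merged.update(op(selected))
        # Path merging: per-path outputs are combined into one result.
        yield merged

results = list(process_stream([{"price": 3.0, "size": 250}]))
# results -> [{"price_x2": 6.0, "size_ok": True}]
```

In hardware the two paths would run concurrently and a merge stage would recombine their outputs; the sequential loop above only traces the data flow of the method steps.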
Specification