Scalable tuning engine

  • US 9,785,951 B1
  • Filed: 02/28/2006
  • Issued: 10/10/2017
  • Est. Priority Date: 02/28/2006
  • Status: Active Grant
First Claim

1. A computer implemented method for processing sales data on a network of a plurality of computers, comprising:

  • receiving data including point of sale data, cost data and product data;

  • partitioning the received data into data portions each residing in a different section of a data source;

  • defining a dataflow comprising transformational and numerical steps on a first computer of the plurality of computers;

  • decomposing, using the first computer, the dataflow along process domains on the first computer by decomposing the dataflow into one or more from a group of distinct executable segments for parallel execution accepting inputs that are the same and distinct executable segments for parallel execution accepting inputs lacking dependencies between each other, wherein the distinct executable segments along process domains include more than one econometric operation and further wherein one process domain includes a modeling segment for generation of a demand model;

  • decomposing, using the first computer, the dataflow along data domains on the first computer by decomposing the dataflow into distinct executable segments for parallel execution based on dependencies between records within the data indicated by identifying operators and dividing the distinct executable segments along the data domains by demand groups, and wherein demand groups are groupings of highly substitutable products;

  • executing the distinct executable segments in parallel on a second computer of the plurality of computers and a third computer of the plurality of computers, wherein executing the distinct executable segments comprises:

      • interpreting a script with programming language statements and generating from an interpretation of the script an executable graph of the dataflow comprising the transformational and numerical steps to process the received data;

      • executing the executable graph via a graph execution engine that distributes the distinct executable segments among the second computer and the third computer;

      • reading, in a non-sequential manner during a single reading operation, the data portions from the different sections of the data source to corresponding ones of a plurality of data buffers in parallel to increase a speed of data access, wherein each data buffer corresponds to a distinct executable segment;

      • monitoring an amount of data in each of the data buffers to determine when a data buffer becomes filled for processing by the corresponding distinct executable segment, wherein at least two of the data buffers become filled and are accessed by the corresponding distinct executable segments at different times;

      • executing the distinct executable segments on the second computer and the third computer in parallel, wherein each distinct executable segment is responsive to the monitoring and retrieves and processes data from a corresponding data buffer independent of other distinct executable segments when that corresponding data buffer is filled; and

      • receiving the processed data from each of the distinct executable segments in parallel.
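
Taken together, the decomposition steps of the claim describe splitting a dataflow into distinct executable segments along two axes: process domains (steps that can run in parallel because their inputs lack dependencies on one another) and data domains (records partitioned by demand group, i.e. groupings of highly substitutable products). The following is a minimal sketch of that idea in Python, not the patented implementation; every name in it (Step, decompose_by_process, decompose_by_demand_group, and the example steps and records) is hypothetical.

```python
# Minimal sketch of decomposing a dataflow into independently executable
# segments along process domains and data domains. All names here are
# hypothetical and are not taken from the patent.
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class Step:
    """One transformational or numerical step in the dataflow."""
    name: str
    inputs: set = field(default_factory=set)  # names of steps this one depends on


def decompose_by_process(steps):
    """Group steps into levels; steps within a level lack dependencies on
    each other, so they are candidates for parallel execution."""
    resolved, levels, remaining = set(), [], list(steps)
    while remaining:
        ready = [s for s in remaining if s.inputs <= resolved]
        if not ready:
            raise ValueError("cyclic dependency in dataflow")
        levels.append(ready)
        resolved |= {s.name for s in ready}
        remaining = [s for s in remaining if s not in ready]
    return levels


def decompose_by_demand_group(records, demand_group_of):
    """Partition records along the data domain: one partition per demand
    group (a grouping of highly substitutable products), so partitions can
    be processed without record-level dependencies between each other."""
    partitions = defaultdict(list)
    for record in records:
        partitions[demand_group_of(record)].append(record)
    return dict(partitions)


if __name__ == "__main__":
    dataflow = [
        Step("load_point_of_sale"),
        Step("load_costs"),
        Step("join_inputs", {"load_point_of_sale", "load_costs"}),
        Step("fit_demand_model", {"join_inputs"}),      # modeling segment
        Step("optimize_prices", {"fit_demand_model"}),  # further econometric step
    ]
    for level in decompose_by_process(dataflow):
        print([step.name for step in level])

    sales = [{"sku": "cola_2l", "group": "soda"},
             {"sku": "tortilla_chips", "group": "snacks"}]
    print(decompose_by_demand_group(sales, lambda r: r["group"]))
```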
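The buffer-related steps (reading data portions into per-segment buffers in parallel, monitoring how full each buffer is, and letting each segment consume its own buffer independently) read like a producer/consumer arrangement. Below is a minimal sketch under that assumption, using only the Python standard library; the buffer capacity, segment logic, and sample data are invented for illustration.

```python
# Illustrative producer/consumer sketch of the buffer steps: data portions
# are read into per-segment buffers in parallel, and each segment processes
# its own buffer independently once data is available. Names, sizes and
# sample data are invented for illustration only.
import queue
import threading

BUFFER_CAPACITY = 4   # assumed per-segment buffer size
SENTINEL = object()   # marks the end of a segment's data portion


def read_portion(portion, buffer):
    """Read one data portion into its dedicated buffer; one reader runs per
    portion, so reads proceed in parallel rather than sequentially."""
    for record in portion:
        buffer.put(record)        # blocks only while the buffer is full
    buffer.put(SENTINEL)


def run_segment(name, buffer, results):
    """A distinct executable segment: consumes its own buffer as soon as
    data arrives, independently of the other segments."""
    total = 0
    while True:
        record = buffer.get()     # waits until the buffer holds data
        if record is SENTINEL:
            break
        total += record["units"]  # stand-in for a numerical/econometric step
    results[name] = total


if __name__ == "__main__":
    # Two data portions (e.g. two demand groups) feeding two segments.
    portions = {
        "soda":   [{"units": 3}, {"units": 5}],
        "snacks": [{"units": 2}, {"units": 7}, {"units": 1}],
    }
    buffers = {name: queue.Queue(maxsize=BUFFER_CAPACITY) for name in portions}
    results = {}

    threads = [threading.Thread(target=read_portion, args=(portion, buffers[name]))
               for name, portion in portions.items()]
    threads += [threading.Thread(target=run_segment, args=(name, buffers[name], results))
                for name in portions]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(results)   # processed data collected from each segment
```

In the claimed method this distribution happens across multiple computers via a graph execution engine; the threads above merely stand in for that distribution on a single machine.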
