Methods and systems for predictive engine evaluation, tuning, and replay of engine performance
First Claim
1. A system for evaluating and tuning a predictive engine, comprising:
a processor;
a computer-readable working memory;
an engine variant of the predictive engine stored in the working memory, wherein the engine variant is determined by an engine parameter set specifying a plurality of algorithms utilized by the engine variant and a plurality of algorithm parameters; and
a non-transitory, computer-readable storage medium for storing program code which, when executed by the processor, causes the processor to perform a process to:
deploy an initial engine variant of the predictive engine based on an initial engine parameter set;
receive one or more queries to the initial engine variant from an end-user device;
in response to the queries, generate, via the initial engine variant, one or more predicted results;
receive one or more actual results corresponding to the predicted results;
associate the queries, the predicted results, and the actual results with a replay tag, and record them with the corresponding initial engine variant;
evaluate a performance of the initial engine variant by computing one or more evaluation results based on at least one evaluation metric with the queries, the predicted results, and the actual results;
generate a new engine parameter set based on tuning of one or more parameters of the initial engine parameter set, according to the evaluation results of one or more engine variants selected from the group consisting of the initial engine variant and previous engine variants, wherein one or more parameters of the new engine parameter set are selected from the group consisting of a data source, an algorithm, an algorithm parameter, and a business rule;
deploy a new engine variant of the predictive engine based on the new engine parameter set, and replace the initial engine variant with the new engine variant;
receive, from an operator, a replay request specifying one or more identifiers of at least one currently or previously deployed engine variant; and
in response to the replay request, replay at least one item selected from the group consisting of the queries, the corresponding predicted results, the actual results, and the evaluation results.
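The deploy–query–evaluate–tune cycle recited in claim 1 can be sketched in Python as follows. This is an illustrative toy only, not the patented implementation: every name (`EngineParameterSet`, `EngineVariant`, `evaluate`) is hypothetical, and the threshold "algorithm" and accuracy metric are stand-ins for whatever algorithms and evaluation metrics an actual engine would use.

```python
from dataclasses import dataclass

@dataclass
class EngineParameterSet:
    # Tunable elements named in the claim: an algorithm and its
    # parameters (data source and business rules omitted for brevity).
    algorithm: str
    algorithm_params: dict

class EngineVariant:
    """A deployed engine variant determined by one parameter set."""
    def __init__(self, params: EngineParameterSet):
        self.params = params
        self.records = []  # (replay_tag, query, predicted, actual)

    def query(self, x: float) -> int:
        # Toy "algorithm": classify by a tunable threshold.
        return int(x >= self.params.algorithm_params["threshold"])

def evaluate(records):
    """Accuracy over recorded (tag, query, predicted, actual) tuples --
    one possible evaluation metric."""
    return sum(p == a for _, _, p, a in records) / len(records)

# Deploy an initial variant, serve queries, and record each query with
# its predicted and (later-arriving) actual result under a replay tag.
initial = EngineVariant(EngineParameterSet("threshold", {"threshold": 0.5}))
queries = [0.2, 0.4, 0.6, 0.9]
actuals = [0, 1, 1, 1]
for q, a in zip(queries, actuals):
    initial.records.append(("replay-001", q, initial.query(q), a))

score = evaluate(initial.records)  # 3 of 4 correct -> 0.75

# Tune: derive a new parameter set from the evaluation result, deploy a
# new variant, and replace the initial one.
new_variant = EngineVariant(EngineParameterSet("threshold", {"threshold": 0.3}))
```

Lowering the hypothetical threshold from 0.5 to 0.3 corrects the one misclassified query, illustrating how an evaluation result can drive the generation of a new engine parameter set.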
Abstract
Disclosed are methods and systems of creating, evaluating, and tuning a predictive engine for machine learning, including steps to deploy the predictive engine with an initial parameter set; receive queries to the deployed engine variant and in response, generate predicted results; receive corresponding actual results; associate the queries, the predicted results, and the actual results with a replay tag; evaluate the performance of the deployed engine variant; generate a new engine parameter set based on tuning of one or more parameters of the initial engine parameter set, according to the evaluation results; deploy the new engine variant to replace the initial engine variant; receive a replay request from an operator specifying the currently or a previously deployed engine variant; and in response to the replay request, replay at least one of the queries, the corresponding predicted results, the actual results, and the evaluation results.
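The replay mechanism summarized in the abstract, recording queries, predicted results, actual results, and evaluation results under a replay tag keyed to an engine-variant identifier, then replaying selected items on an operator's request, might be modeled as below. All names and the dictionary record layout are assumptions for illustration, not the disclosed design.

```python
from collections import defaultdict

# Replay store keyed by engine-variant identifier; each entry bundles a
# replay tag with the query, predicted result, actual result, and
# (optionally) an evaluation result.
replay_store = defaultdict(list)

def record(variant_id, replay_tag, query, predicted, actual, evaluation=None):
    """Associate a query and its results with a replay tag and the
    deployed variant that served it."""
    replay_store[variant_id].append(
        {"tag": replay_tag, "query": query, "predicted": predicted,
         "actual": actual, "evaluation": evaluation})

def replay(variant_ids, fields=("query", "predicted", "actual", "evaluation")):
    """Serve a replay request naming one or more currently or
    previously deployed variants; return only the requested items."""
    out = []
    for vid in variant_ids:
        for rec in replay_store[vid]:
            out.append({k: rec[k] for k in fields})
    return out

record("variant-1", "tag-A", {"user": 7}, "itemX", "itemY", 0.0)
record("variant-2", "tag-A", {"user": 7}, "itemY", "itemY", 1.0)
replayed = replay(["variant-2"], fields=("query", "predicted"))
```

Keying the store by variant identifier is what lets a replay request target a previously deployed variant even after it has been replaced by a newer one.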
20 Claims
1. A system for evaluating and tuning a predictive engine, comprising the elements recited above under First Claim. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10)
11. A method of evaluating and tuning a predictive engine, comprising:
deploying two or more initial engine variants of the predictive engine based on two or more initial engine parameter sets;
receiving one or more queries from one or more end-users and allocating each query to one of the initial engine variants;
in response to the queries, generating, via the corresponding deployed engine variant, one or more predicted results;
receiving one or more actual results corresponding to the predicted results;
associating the queries, the predicted results, and the actual results with a replay tag, and recording them with the corresponding deployed engine variant;
evaluating a performance of the deployed engine variants by computing one or more evaluation results based on at least one evaluation metric with the queries, the predicted results, and the actual results;
generating one or more new engine parameter sets based on tuning of one or more parameters of the initial engine parameter sets, according to the evaluation results of one or more engine variants selected from the group consisting of the deployed engine variants and previously deployed variants, wherein one or more parameters of the new engine parameter sets are selected from the group consisting of a data source, an algorithm, an algorithm parameter, and a business rule;
deploying one or more new engine variants of the predictive engine based on the new engine parameter sets, wherein one or more of the new engine variants may replace some of the initial engine variants;
receiving, from an operator, a replay request specifying one or more identifiers of at least one currently or previously deployed engine variant; and
in response to the replay request, replaying at least one item selected from the group consisting of the queries, the corresponding predicted results, the actual results, and the evaluation results. - View Dependent Claims (12, 13, 14, 15, 16, 17, 18, 19, 20)
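Claim 11's allocation of each incoming query to one of several concurrently deployed variants resembles a deterministic A/B split. A minimal sketch, assuming hash-based assignment by end-user identifier (the claim itself does not specify an allocation scheme, and all names here are hypothetical):

```python
import hashlib

# Two engine variants deployed side by side; each incoming query is
# allocated to one of them by hashing an end-user identifier, so a
# given end-user is consistently served by the same variant.
variant_ids = ["variant-A", "variant-B"]

def allocate(user_id: str, ids) -> str:
    """Stable hash-based split across the deployed variants."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    ordered = sorted(ids)
    return ordered[int(digest, 16) % len(ordered)]

assignment = {u: allocate(u, variant_ids) for u in ("alice", "bob", "carol")}
```

A stable split like this keeps each variant's recorded queries and results internally consistent, so the per-variant evaluation results that drive tuning compare like with like.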
Specification