One Click Universal Probability Calculator

Abstract
An apparatus and a method to assist people who are not experts in statistics to calculate probabilities when in possession of a set of data. The purpose of the “One Click Universal Probability Calculator” is to be a practical and simple tool to calculate probabilities given a data set with continuous or discrete values, not requiring statistical knowledge from the user. The tool is one-click based, requiring minimum actions from the user. It also provides an estimate for the uncertainty of the calculated probability in an intuitive way for the user. All the related statistical concepts are treated in the background by our new method. The tool can be presented to the user in different ways: website/software, executable file, code library file (.dll) for integration with other software, and finally, embedded into an electronic pocket calculator.
3 Claims
 1. A product that puts together the following features:
 1.1 Calculate probabilities for continuous and discrete data.
 1.2 Return an estimation of the quality/accuracy of the answer (confidence level).
 1.3 Based on a one-click procedure, requiring the user to perform only the following actions:
 a. Provide the sample data by importing a file or pasting/typing the data.
 b. Enter a value of the cutoff point x for which the probability is to be calculated and the desired math symbol (<, ≤, >, ≥, =).
 c. Click on a button (or equivalent trigger) as described in Section 3.1.
 Note that step b may be optional: if the user does not specify these values, the tool can compute probabilities for different values of x and return all of them to the user.
 1.4 Calculate probabilities without requiring statistical knowledge from the user, meaning a tool that requires none of the following actions from the user:
 a) Normality test.
 b) Goodness-of-fit test to identify which distribution function better fits the data set.
 c) Use of transformation methods such as Johnson's family of distributions.
 d) Knowledge of the type of the probability function (gamma, lognormal, exponential and others).
 e) Knowledge of the nature of the variable: continuous or discrete.
 f) Frequency table.
 g) Utilization of an assistant in the interface of the tool where the user provides answers to a set of questions to guide him in the utilization of the correct statistical method.
 3. A product benefiting from the method described in Section 3.2, applied to continuous and discrete distributions, based on the following milestones:
 a. Method described in Section 3.2.1.1 allowing the split of a value between two adjacent intervals of the frequency table.
 b. Utilization of piecewise functions formed by two polynomial equations to estimate the cumulative function directly from the frequency table (Section 3.2.1.2).
 c. Utilization of a method that performs the calculations for different numbers of bins and, based on a quality score, combines the results of the best ones into a final result (Algorithms 1 and 2).
1 Specification
This invention has been granted a license under 35 U.S.C. 184 with number U.S. 62/587,501. Foreign Filing License Granted: Dec. 15, 2017. Now we file a non-provisional application for patent as described in this document.
In terms of technical field of invention, the present invention relates to statistics and probability, in particular, to a method and apparatus for assisting users who are not experts in statistics to be able to calculate probabilities in a practical and intuitive way.
The real-life environment is probabilistic by nature, and the ability to make decisions based on probabilities is important not only in business but also in everyday life. It is common to have a decision maker in possession of a set of data who wishes to assess risks by calculating the probability of obtaining a number greater or less than a specific value. An example of a common situation is given by a worker commuting to the office every day. He has a data set comprised of actual travel times from home to office and he wishes to know the probability of having a travel time shorter than a desired amount of time. But considering he does not have a statistical tool, or even the statistical knowledge to use such a tool, how could he perform such a calculation? Situations like that are faced by people frequently, and because there isn't a simple and immediate way to answer these questions (from the perspective of a person with no statistical knowledge), and considering the person usually needs an answer, he is forced to estimate a number based on his intuition or on averages, without properly considering the variation of the phenomenon he is trying to make an inference about.
In terms of the state of the prior art, available solutions in the market are able to compute probabilities for a given data sample, but they demand significant knowledge of statistics. Many people, including administrators of small companies and salespeople in stores, deal with decisions involving variation, which implies probability calculations, and they do not have a tool that allows them to perform such calculations without having to worry about statistical concepts and assumptions. The invention offers a solution to this problem.
The invention is a practical tool to calculate probabilities given a data set comprised of continuous or discrete values, without requiring statistical knowledge from the user, such as: normality assumptions, goodness-of-fit tests, transformations, the type of the probability distribution (gamma, lognormal, exponential, binomial, others), frequency tables and other concepts. If the user has a data set and wishes to calculate the probability of taking a number less than a specified value (cutoff point), he just needs to click on a single button in the interface of the product. It also provides an estimate for the uncertainty of the calculated probability in an intuitive way for the user. All the related statistical concepts are treated in the background by our new method.
Ultimately the product aims to make probability calculations more inclusive, allowing people with no statistical knowledge and people who are not experts in statistics to make those calculations in their everyday life or business.
Section 2 provides a brief description of the drawings; Section 3 gives detailed information about the product and the method. Our claims are based on two things. One is the product itself, including its variants, which is described in Section 3.1 with focus on how the user interacts with the product. The other is the method (how the probabilities are computed), which is described in Section 3.2 with focus on the specific procedures used to compute the probabilities.
Once the product is in the market, we'd like to protect our unique interface based on one-click calculation and also to protect the method used to perform such calculations. Our claims are described in Section 4.
The figures listed below are explained in more details at Section 3.
The “One Click Universal Probability Calculator” is a product able to process a data set of values given by the user and able to return the probability of taking a value less/greater than the specified cutoff point. The process is seen in
The product can be seen as a machine that will process the data set using a welldefined and replicable method and then return the answer to the user. By answer we mean the probability P(X≤x) that represents the odds of getting a number smaller or equal to a cutoff point x. It also includes the probability of getting a number between two cutoff points, P(x_{1}≤X≤x_{2}), and math symbols: <, ≤, >, ≥. The other output is the confidence level which in this context means an estimate of how far the calculated probability might be from the true answer. The data set comprises the sample data, the value(s) of the cutoff point(s) and the math symbol.
All the statistical knowledge necessary to perform the calculation is embedded in the product and applied while processing the data set, not requiring such knowledge from the user. The key is having a simple interface and an intelligent method to process the input using proper statistical concepts. There are three features that differentiate the product from others:
 1. The product is designed to require a minimum number of actions from the user. As shown in FIG. 1, once the data is entered, it is only necessary to press a button.
 2. The product is designed not to require statistical knowledge from the user. Other tools on the market require one or more of the following actions from the user to return the same probability calculations, while our product requires none of them:
 Normality test.
 Goodness-of-fit test to identify which probability function better fits the data set.
 Use of transformation methods such as Johnson's family of distributions.
 Knowledge of the type or shape of the distribution function (gamma, lognormal, exponential distributions and others) and if the data is continuous or discrete.
 Interaction with a “virtual assistant”. It happens when the user needs to answer questions from an “assistant” of the tool in order to be guided through the process.
 3. The product output gives information on how far the calculated probability might be from the true probability value.
3.1) Modes of Utilization of the Product (Versions)
The “One Click Universal Probability Calculator” is a tangible product that can be made available in the market in different forms/versions, such as: an executable file, a website, or embedded into a scientific calculator. Details are provided in the next sections.
3.1.1) Mode 1: Executable File
The product can be commercialized as an “executable file” without a user interface (no windows), where the input is a text file (or equivalent) and the output is another text file (or equivalent) with the results of the calculation. This mode aims to give the client two different ways of utilization.
In one way, the user can just click on the executable file, and after that an output file is generated with the result. In another way, it allows interaction of the product with other tools/software, where a client software or program can call the executable file of the product by using something equivalent to the function “system(command)” in C++ and other computer languages; after that, the program can import the result of the calculation from the output file.
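As an illustration of this integration path, the following sketch shows how a client program might prepare the input file, call the executable, and read the result back. The file layout, executable name and output format are hypothetical, since the specification does not fix them.

```python
import subprocess

def format_input(samples, cutoff, symbol="<="):
    """Build the text-file content for the (hypothetical) calculator:
    one line of sample values, one line with the math symbol and cutoff."""
    lines = [" ".join(str(v) for v in samples), f"{symbol} {cutoff}"]
    return "\n".join(lines) + "\n"

def parse_output(text):
    """Read the probability back from the calculator's output file content
    (assumed here to carry the probability on its first line)."""
    return float(text.strip().splitlines()[0])

def call_calculator(exe_path, in_path, out_path):
    """Equivalent of system(command) in C++: run the tool, after which the
    caller imports the result from the output file via parse_output."""
    subprocess.run([exe_path, in_path, out_path], check=True)
```

In a client program, one would write `format_input(...)` to a file, invoke `call_calculator(...)`, and then apply `parse_output(...)` to the generated output file.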
Because the probability calculation is strongly influenced by the size of the sample, we also provide the estimated range for the actual probability in the output file. Naturally, the larger the sample size, the more accurate the answer, and it is fair to give the user an estimate of that accuracy. This information is also extended to the other forms of utilization.
Deriving from this form of utilization, in terms of integration with other software, instead of having an .exe file, the computer program implementing our method can be compiled as a code library file (.dll).
3.1.2) Mode 2: Software or Website
Another version of the product consists of a software, opened through an executable file (.exe) or a website with an interface that allows the user to perform the actions listed in
In terms of market, the software form, opened through an executable file, can be seen as a product: the customer buys or downloads the files and runs them from his computer. The website form can be seen as a service, where the operations are performed on a server, also allowing access management.
3.1.3) Mode 3: Embedded into a Calculator
Another form of the product is given by embedding it into an electronic pocket calculator or scientific calculator, where the user performs the actions of
The developed methods consist of two approaches: one based on empirical distributions and the other based on theoretical distributions. The outputs of these approaches can be combined based on studied criteria in order to return the final probability value to the user.
The method builds a cumulative frequency table, uses it to determine piecewise functions that estimate the cumulative function, and then calculates the probability P(X≤x). The frequency table is strongly influenced by the number of bins used to build it. Because it is not possible to know the ideal number of bins, we build frequency tables with different numbers of bins, then we evaluate the quality of the frequency tables and combine the probability calculations from the best-evaluated tables in order to have a final output.
The terminology is given as follows: S is a set with the sample values x_{1} to x_{n}, b^{r} is a reference number of bins, Q is the quantity of bin sizes to be evaluated. The functions min(S), max(S), mean(S), dev(S) compute the minimum, maximum, mean and standard deviation of a given set S. We also have the data set D with the values of the sample data, the cutoff point and data structures used by the algorithm. This method is summarized in Algorithm 1.
In Algorithm 1, lines 1 to 4 initialize variables used within the loop, where p1 and p2 are parameters of the algorithm determined experimentally. Line 7 computes the number of bins and line 8 the width of the bin, both used in line 9 to build the relative frequency table (T1) and the cumulative frequency table (T2). In line 10, the function TableScore evaluates the quality of the relative frequency table, returning a penalty score. In line 11, the function ComputePDF calculates the required probability by determining piecewise functions from the cumulative frequency table and then estimating the cumulative function to compute the probabilities. The final result is returned by the function ComputeFinalPDF in line 13, combining the results from each iteration of the main loop.
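The loop structure just described can be sketched as follows. This is a minimal stand-in, not the patented method: TableScore is reduced to counting empty bins, ComputePDF to linear interpolation of the cumulative table (the invention fits piecewise cubic regressions instead), and the bin-count schedule and weight conversion are assumptions.

```python
def build_tables(sample, b):
    """Line 9 stand-in: relative (T1) and cumulative (T2) frequency tables
    for b bins, using plain counting (the continuous split of
    Section 3.2.1.1 is omitted here)."""
    lo, hi = min(sample), max(sample)
    w = (hi - lo) / b or 1.0
    t1 = [0.0] * b
    for x in sample:
        i = min(int((x - lo) / w), b - 1)
        t1[i] += 1.0
    t1 = [f / len(sample) for f in t1]
    t2, acc = [], 0.0
    for f in t1:
        acc += f
        t2.append(acc)
    return lo, w, t1, t2

def table_score(t1):
    """Line 10 stand-in for TableScore: fraction of empty bins (higher = worse)."""
    return t1.count(0.0) / len(t1)

def compute_pdf(lo, w, t2, x):
    """Line 11 stand-in for ComputePDF: linear interpolation of the
    cumulative table."""
    pos = (x - lo) / w
    if pos <= 0.0:
        return 0.0
    if pos >= len(t2):
        return 1.0
    i = int(pos)
    left = t2[i - 1] if i > 0 else 0.0
    return left + (t2[i] - left) * (pos - i)

def one_click_probability(sample, x, b_ref=8, Q=5):
    """Main loop of Algorithm 1: try Q bin counts around b_ref, penalize
    each table, and combine the estimates with penalty-derived weights."""
    probs, weights = [], []
    for q in range(Q):
        b = max(2, b_ref - Q // 2 + q)            # line 7: number of bins
        lo, w, t1, t2 = build_tables(sample, b)   # line 9
        pen = table_score(t1)                     # line 10
        probs.append(compute_pdf(lo, w, t2, x))   # line 11
        weights.append(1.0 - pen)                 # lower penalty -> larger weight
    total = sum(weights) or 1.0
    return sum(p * wt for p, wt in zip(probs, weights)) / total  # line 13
```

For a uniform sample such as the integers 1 to 20, the returned value near the median is close to the empirical proportion, and it degenerates cleanly to 0 or 1 outside the sample range.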
In line 9 of Algorithm 1, we build the relative frequency table (T1). Differently from traditional tables, which are based on discrete numbers obtained by counting the frequency of occurrences in each interval, our table relies on continuous numbers.
Initially we build the intervals as follows: let LB_{i }and UB_{i }be the lower bound and upper bound for the interval i, respectively. We have LB_{i}=UB_{i−1 }if i>1 and LB_{i}=min(S)−w/2 if i=1, where k_{0}, k_{f }and w are already described in Algorithm 1. We also have UB_{i}=LB_{i}+w. The frequency for interval i using the traditional approach (F_{i}^{t}) is given by counting the number of occurrences in the sample within the bounds of the respective interval, meaning that F_{i}^{t }is always a discrete number.
In our method, the frequency F_{i} is calculated by allowing an occurrence to be split between two adjacent intervals, which results in a continuous number. We do that as follows: let m_{i}=(LB_{i}+UB_{i})/2 be the middle point of the interval i, and let f1 (a fraction of the occurrence, given as a function of the distance from x to m_{i}) and f2=1−f1 be the portions assigned to intervals i and j, where u=UB_{i} and j=i+1 if x>m_{i} and x<UB_{i}, or u=LB_{i} and j=i−1 if x≤m_{i} and x≥LB_{i}. By doing that, the relative frequency F_{i}=F_{i}+f1 and F_{j}=F_{j}+f2, where F_{i} is initialized with zero for all intervals i before the procedure. Therefore, a given x from the sample S is counted in the interval i as a whole only if x=m_{i}; otherwise the occurrence is proportionally split between the interval i and the interval closest to x.
An interesting consequence of this method is that the number of intervals with zero occurrences or equal occurrences is reduced, which might be beneficial especially for small samples. Another point is that the method does not change the total number of occurrences.
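A sketch of the splitting procedure follows. The exact expression for f1 is given by an equation not reproduced here, so the linear weight f1 = 1 − |x − m_i|/w below is an assumption; it gives f1 = 1 at the midpoint and an even 0.5/0.5 split at a boundary, consistent with the properties described above, and it preserves the total count.

```python
def split_frequencies(sample, lb0, w, b):
    """Continuous frequency table: each value x contributes f1 to its own
    interval i and f2 = 1 - f1 to the adjacent interval nearest to x.
    The linear weight f1 = 1 - |x - m_i| / w is an assumption."""
    F = [0.0] * b
    for x in sample:
        i = min(int((x - lb0) / w), b - 1)
        m = lb0 + (i + 0.5) * w           # midpoint of interval i
        f1 = 1.0 - abs(x - m) / w
        j = i + 1 if x > m else i - 1     # adjacent interval nearest to x
        F[i] += f1
        if 0 <= j < b:
            F[j] += 1.0 - f1
        else:
            F[i] += 1.0 - f1              # keep total count unchanged at edges
    return F
```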
We give a numerical example to illustrate the method using the data set from Table 1.
Assuming 7 intervals, the frequency table is seen in Table 2 where we see the bounds for each interval as well as the frequency using the traditional method (F_{i}^{t}) and our method (F_{i}).
Tables T1 and T2 are created in line 9, and they are formed by b points (#bins) with values x_{i}, i=1 . . . b. For line 11 of Algorithm 1, we determine the piecewise function ƒ(x) that estimates the cumulative probability function. The function ƒ(x) is formed by two functions, as described in equation (1).
where ƒ_{1}(x) estimates the left side of the cumulative function and ƒ_{2 }(x) the right side. Note there is an overlap in the interval LB_{i−1}≤x≤LB_{i}. The truncation point LB_{i}=x_{i }is given by the lower bound of the
In equation (1), ƒ_{1}(x) and ƒ_{2}(x) are third-degree polynomial regressions of the points x_{i} from the cumulative frequency table (T2). The use of a piecewise function has shown to be superior to a single function when estimating the cumulative function in preliminary experiments.
Once ƒ(x) is determined, the probability P(X≤x) can be calculated at any value x using equation (2).
where p=(x−LB_{i−1})/(LB_{i}−LB_{i−1}). The equation ƒ_{3}(x) is a combination of ƒ_{1}(x) and ƒ_{2}(x) and it works in the region of the truncation point: LB_{i}≤x≤LB_{i+1 }
Building on the data set from Table 1, the cumulative frequency table is seen in Table 3 where CF is the cumulative frequency, CF % is the cumulative frequency expressed in percentage and CF %′ is the cumulative frequency estimated by the polynomial regressions (set of equations 2).
Applying equation (2) with truncation points i=4 and i+1=5, we have:
ƒ_{1}(x)=0.000x^{3}+0.002x^{2}−0.206x+5.983 if x<97.0 (3a)
ƒ_{2}(x)=0.000x^{3}+0.006x^{2}−0.596x+18.534 if x>106.3 (3b)
ƒ_{3}(x)=(1−p)ƒ_{1}(x)+pƒ_{2}(x) if 97.0≤x≤106.3 (3c)
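Equations (3a)-(3c) can be evaluated directly, with p = (x − 97.0)/(106.3 − 97.0) in the blend region. Note the cubic coefficients are printed as 0.000 because of rounding, so the values produced below only illustrate the blending mechanics, not the actual cumulative estimates.

```python
def f1(x):  # left-side cubic regression, coefficients as printed in (3a)
    return 0.000 * x**3 + 0.002 * x**2 - 0.206 * x + 5.983

def f2(x):  # right-side cubic regression, coefficients as printed in (3b)
    return 0.000 * x**3 + 0.006 * x**2 - 0.596 * x + 18.534

def f(x, lo=97.0, hi=106.3):
    """Piecewise estimate of the cumulative function with the blended
    overlap region of equation (3c)."""
    if x < lo:
        return f1(x)
    if x > hi:
        return f2(x)
    p = (x - lo) / (hi - lo)            # blend weight in [0, 1]
    return (1 - p) * f1(x) + p * f2(x)  # equation (3c)
```

At the boundaries of the blend region the combined function reduces exactly to the corresponding one-sided polynomial, so the estimate is continuous across the truncation points.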
Using
Still in Algorithm 1, the function TableScore in line 10 evaluates the quality of the relative frequency table, returning a penalty score tPenal(q) for each bin size in the main loop. This is done by measuring the presence of three features:
 1. Presence of consecutive bins with relative frequency equal to zero. The higher the presence, the worse, i.e. the higher the penalty.
 2. The maximum difference between two consecutive cumulative probabilities (CF. Est. (%)_{i+1}−CF. Est. (%)_{i}). The higher, the worse.
 3. For bins to the left of the median of the sample, count the occurrences of situations where Rel.Freq_{i}>Rel.Freq_{i+1}. Analogously, for bins to the right of the median of the sample, count the occurrences of situations where Rel.Freq_{i}<Rel.Freq_{i+1}. The more occurrences, the worse.
Finally, the final result is returned by the function ComputeFinalPDF in line 13, combining the results from each iteration of the main loop. The vector m(q) stores the calculated probability P(X≤x) and tPenal(q) stores the penalties from evaluating the quality of the frequency tables, for each bin size q in the main loop. The final result is given by the weighted probability Σ_{q=1}^{q=Q}m(q)*tPenal(q), where Σ_{q=1}^{q=Q}tPenal(q)=1 and 0≤tPenal(q)≤1 for q=1, . . . , Q.
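A sketch of this combination step follows. The text states the weights sum to 1 and lie in [0, 1]; how raw penalty scores (where higher is worse) are converted into those normalized weights is not detailed, so the inverse-penalty conversion below is an assumption.

```python
def combine_results(probs, penalties):
    """ComputeFinalPDF stand-in: turn raw penalty scores into normalized
    weights (lower penalty -> larger weight; the exact conversion is an
    assumption) and return the weighted probability sum."""
    raw = [1.0 / (1.0 + p) for p in penalties]  # assumed inverse-penalty weight
    total = sum(raw)
    weights = [r / total for r in raw]          # sums to 1, each in [0, 1]
    return sum(m * w for m, w in zip(probs, weights))
```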
The method builds a cumulative frequency table to obtain an empirical cumulative distribution function, and then compares it with a set of theoretical distributions to pick the one with the best approximation. Because the frequency table is strongly influenced by the number of bins used to build it, we devise different frequency tables with different numbers of bins. Note that one difference here is the fact that most of the methods in the related literature use goodness-of-fit tests such as Kolmogorov-Smirnov and Chi-squared, where the comparison is made using the empirical distribution that comes directly from the sample, not from cumulative frequency tables.
This strategy is summarized in steps described in Algorithm 2. The terminology is the same previously used in Algorithm 1.
Algorithm 2 is similar to Algorithm 1 considering that the framework of the strategy is to explore different cumulative frequency tables that come from different numbers of bins. The difference here is in line 10, where for a given cumulative table we execute the function “getBestFit”, which compares the probability from the current table with a set of theoretical distributions.
The function “getBestFit” works as follows: for each theoretical distribution function d, for each value x_{i }from the cumulative frequency table, we calculate the mean error E_{d}=[Σ_{i=1}^{i=Q}abs(F^{E}(x_{i})−F^{T }(x_{i}))]/Q, where Q is the number of bins, F^{E }is the empirical cumulative probability function and F^{T }is the theoretical cumulative function. The error E_{d }is computed for each one of the following distributions: Normal, LogNormal, Gamma, Exponential and Student. After that we update tScore_{d}, where tScore_{d}=tScore_{d}+1 for the two smallest E_{d}.
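The error measure E_d can be sketched as follows, using the Normal distribution (whose CDF has a closed form via erf) as an example candidate; the other candidate CDFs would be plugged in the same way.

```python
import math

def normal_cdf(x, mu, sigma):
    """Closed-form Normal cumulative distribution function via erf."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def mean_fit_error(points, empirical_cdf, theoretical_cdf):
    """E_d: mean absolute difference between the empirical cumulative
    probabilities (from the frequency table) and a candidate theoretical CDF,
    evaluated at the table points x_i."""
    return sum(abs(empirical_cdf[i] - theoretical_cdf(x))
               for i, x in enumerate(points)) / len(points)
```

In the full procedure, E_d would be computed for each candidate distribution and the scores tScore_d incremented for the two smallest errors at each bin size.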
In line 12 of Algorithm 2, we select one theoretical probability function using tScore_{d} and a criterion c (parameter). If c=1, we select the function with the best score tScore_{d}; if c=2, we add a penalty to E_{d} by doing E_{d}=E_{d}+pen*D, where pen is a parameter and D is the Kolmogorov-Smirnov test statistic: D=max(abs(G(x_{i})−F^{T}(x_{i}))), where G(x_{i}) is the empirical cumulative distribution function. Finally, in line 13, once we have selected the distribution function, we can compute the desired probability P(X≤x).
In our method we devise an approach combining the approach using empirical distributions (Section 3.2.1) with the approach using theoretical distributions (Section 3.2.2). We start with Algorithm 2, and in line 10, function “getBestFit”, while computing the error E_{d}, we also compute OE_{d}, which is the overall error for each distribution function d along all bin sizes in the main loop. If min(OE_{d})>trigger, where trigger is a parameter, then we switch to Algorithm 1, using the empirical method. Otherwise, we return the output given by Algorithm 2.
When computing P(X≤x), if x<min(S) or x>max(S), the theoretical approach (Algorithm 2) is used, where S is the sample given by the user. All the parameters of the method were determined by massive computational experiments using an optimization algorithm developed by ourselves (not part of this invention).
Here we summarize the results for experiments performed with the developed method aiming to demonstrate the quality of our method (part of the invention) by comparing it with other methods from the related literature, listed as follows:
 1. Empirical cumulative probability function: the simplest approach, where F^{E}(x)=q/Q, q being the number of occurrences smaller than or equal to x and Q the sample size.
 2. Johnson system of distributions.
 3. Burr type XII distribution.
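The empirical cumulative probability function of benchmark 1 is straightforward to implement:

```python
def empirical_cdf(sample, x):
    """F(x) = q / Q: the fraction of sample values smaller than or equal to x."""
    q = sum(1 for v in sample if v <= x)
    return q / len(sample)
```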
We chose the Johnson and Burr distributions as benchmarks because they are very popular among professionals, researchers and products in the field. In order to test the developed method, we devised 9 instances with populations of 100,000 values with the following features:
 Population 1: Normal distribution, with μ=100.12 and σ=19.74
 Population 2: Lognormal distribution, with μ=100.12 and σ=20.12
 Population 3: Lognormal distribution, with μ=100.12 and σ=39.89
 Population 4: Gamma distribution, with μ=100.02 and σ=100.05
 Population 5: Exponential distribution, with μ=100.30 and σ=20.06
 Population 6: Weibull distribution, with μ=100.10 and σ=20.06
 Population 7: Weibull distribution, with μ=100.53 and σ=49.46
 Population 8: Logistic distribution, with μ=99.78 and σ=20.32
 Population 9: Logistic distribution, with μ=100.01 and σ=58.84
Considering that the accuracy of the calculation of the probability P(X≤x) is also related to the distance from x to the mean, each population is evaluated at 13 cutoff points: from μ−3σ to μ+3σ with increments of 0.5σ. Three different sample sizes (n) are also used: 20, 30, 50. For each method, 17,550 probability calculations are performed: 9 instances, 3 sample sizes, 50 replications (different samples), 13 cutoff points (values for x). The accuracy of the methods in the experiments is measured by the mean absolute percentage error (MAPE), which expresses accuracy as a percentage of the error.
Table 3 presents the results, reporting the overall mean of the error and the 95th percentile. We see that the developed method shows errors significantly smaller than the others, both for the overall mean and for the 95th percentile.
When calculating the probability P (X≤x), we also compute an empirical confidence level to give the user an estimation of the accuracy of the answer (how far the calculated probability might be from the true probability). In order to estimate this accuracy, we devised an experiment similar to the one described in Section 3.2.4. The computational experiment was designed using the same 9 instances, but with more replicas (200) and more values for the distance from the mean and for the sample size in order to map a broader space of combinations. For the distance from mean, we used cutoff points in the interval [−5, . . . , +5] with increment equal to 0.2 standard deviation units; and for the sample size we used values in the interval [3, . . . , 200, . . . 1000] with increment equal to 1 unit from 3 to 200 and equal to 50 units from 200 to 1000. For each combination of cutoff point and sample size, we executed 200 probability calculations (replications), measured the errors and counted the number of calculations within a given error interval among the 9 instances.
For example, to know the confidence level of having an error up to 5 percentage points, for a given distance from the mean and sample size, we counted the number of occurrences where the absolute error was smaller than 5 and divided it by 1800 (total number of calculations obtained from 9 instances and 200 replicas).
For inputs from the user where the cutoff point and sample size differ from the tested combinations, we use an interpolation of the results of the experiment.
An example of the utilization of this confidence level is seen in
3.2.6) Case with Discrete Variables
If the data entered by the user is discrete, we devise a method similar to the ones described in the previous sections, with some adjustments. We have a set of discrete theoretical distributions: Binomial, Geometric, Negative Binomial and Poisson. As described in Section 3.2.3 (combined approach), if the best approximation by a theoretical distribution returns an error greater than a trigger (parameter), we use an empirical distribution as described in Algorithm 1, with a few adjustments to deal with the integer nature of a discrete variable.
In order to illustrate the usefulness of the product, we show an example involving the travel time of a given worker from home to office, mentioned in Section 1 while describing the background of the invention. We assume the worker has a data set comprised of 20 values of actual travel times from home to office (Table 4) and he wishes to know the odds of having a travel time shorter than 47.5 minutes.
Considering the mode of utilization 1 (Section 3.1.1), the user just needs to provide the sample from Table 4 in the text format as seen in
Here we illustrate the usefulness of the product with real field data from the electronics industry. Data from a manufacturing plant is gathered and analyzed. The small company has an assembly line for one specific model of sensor used in refrigerators. It is a new model of sensor with no historic data. According to the specification of the sensor, it has to be activated when the temperature is 80.4 degrees Celsius (° C.). An analyst collected a sample of 20 units and the manager wants to know the probability of taking a sensor that will be activated outside the specification range, that is, P(X<80.4). The analyst has no idea of the shape of the distribution and no statistical knowledge to go deeper into this analysis.
The machine is able to automatically reject the sensors activated outside the specification. It is important to estimate the yield of this model because it defines the expected level of rework the operation will have to do, affecting the cost and the planning of the operation. The data is in Table 5.
Considering all samples had values greater than the specified value, a very basic analysis indicates that P(X<80.4)=0/20=0%. Table 6 gives the results using the proposed method and the benchmark (here, the Johnson system of distributions). During 1 month, the analyst counted the number of rejected and approved sensors in the machine. After this time, 1534 units had been produced and 339 rejected, so the actual rejection rate was 22.1%.
Table 6 shows the probability calculated and the errors based on the actual rejection. Naturally, the yield during the month depends on other variables such as raw material, equipment maintenance, setup of the machine by the user and others, but it is a reference to analyze how accurate the probability calculation was. Another point is that even for such a small sample size (only 20), the tool returned a very plausible answer.
3.2.9) Comparison with Other Tools
Here we focus on differentiating our invention from others. Basically, we want to show the features of the One Click Universal Probability Calculator that make it unique, besides our proposed method:
 No need of statistical knowledge from the user.
 Oneclick based: minimum actions required.
 Confidence level: output returning not only the probability value but also an estimate of the level of uncertainty of the result, assisting the user in the decision-making process.
From our search we list similar/related products in Table 7 (ID 1 to ID 5) and our invention (ID 6):
In order to better show differences among the tools, we refer to the following problem: assume we measured the lifetime of 40 hard drive discs (data sample). What is the probability of having a disc lasting longer than 1900 hours?
Despite the fact that tools ID1, ID2 and ID3 are probability calculators, they are not able to solve the proposed problem, at least not completely. ID1 computes a probability where it is assumed the user already knows the distribution is normal. Note that this analysis would be part of the problem solving. Our invention does not require from the user knowing the type of distributions of the data. ID2 provides a “Probability Calculator” that computes the probability of a selected event based on probability of other events, which is not our case. They also have a “Gamma Function Calculator” that assumes the user already knows the data follows a Gamma distribution. They have equivalent calculators for other types of distribution. ID3 provides the “Binomial Distribution Calculator” and the “T distribution calculator”, also assuming the user knows the distribution type and the distribution parameters.
It is possible to give some answer to the proposed problem using tools ID4 and ID5, and we demonstrate how to answer the problem with each of them. Naturally, different people may follow a different procedure when performing probability calculations with these tools, but we are going to use common procedures employed by many professionals in the field.
Here we demonstrate how to solve the problem using the website prototype version of our invention:
- Step 1: Select the math symbol on the dropdown list and type the desired value, as shown in FIG. 10.
- Step 2: Copy the data from the table and paste it into the text box (FIG. 11).
Note that there are more values to the right of the field, not all shown in the figure.
After clicking on "Calculate", the output is displayed in the figure.
Note that the user receives not only the calculated probability value, but also complementary information about the confidence of the result and a tip on how to improve it.
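The user-facing contract described above can be sketched as a single function: sample data in, probability plus a confidence estimate out. The function name, field names, and the internal estimate below are illustrative stand-ins, not the patented method; our actual distribution handling (Sections 3.2.1 to 3.2.5) is replaced here by a plain empirical calculation.

```python
import numpy as np

def one_click_probability(data, symbol=None, x=None):
    """Illustrative one-click interface: data in, probability + confidence out.

    Stand-in logic only: uses the empirical fraction of sample values
    satisfying the condition, not the invention's internal method.
    """
    values = np.asarray(data, dtype=float)
    ops = {"<": np.less, "<=": np.less_equal, ">": np.greater,
           ">=": np.greater_equal, "=": np.equal}
    n = len(values)
    if symbol is None or x is None:
        # Optional step b skipped: return P(X <= x) for every observed x.
        xs = np.sort(values)
        return {float(v): k / n for k, v in enumerate(xs, start=1)}
    p = float(np.mean(ops[symbol](values, x)))
    # Crude confidence half-width from the binomial standard error (illustrative).
    half_width = 1.96 * np.sqrt(max(p * (1 - p), 1e-12) / n)
    return {"probability": p,
            "interval": (max(0.0, p - half_width), min(1.0, p + half_width))}
```

As in claim 1.3, omitting the symbol and cutoff returns probabilities for all observed values instead of a single answer.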
Excel menu: Data>Data Analysis>Descriptive Statistics; select the data sample from Table 8, which gives the results in Table 9.
Kurtosis and skewness are not close to zero (though not too far either), but in this case it is safer not to assume the distribution is normal.
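The same informal check, inspecting skewness and excess kurtosis before trusting a normal model, can be reproduced outside Excel. The sample below is a made-up placeholder, not the data from Table 8, and scipy's estimators use slightly different bias corrections than Excel's SKEW/KURT.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder lifetimes (Weibull-shaped), NOT the actual Table 8 sample.
sample = rng.weibull(25.9, size=40) * 2042.6

skew = stats.skew(sample)                 # ~0 for normally distributed data
excess_kurtosis = stats.kurtosis(sample)  # Fisher definition: ~0 for normal
# Rule of thumb used above: values well away from 0 argue against normality.
```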
In Excel there is no straightforward method to deal with non-normal data. One alternative is to assume the data is not far from normal and use the Student's t distribution: the probability of exceeding the cutoff is 1 − T.DIST(−0.719, 39, TRUE), resulting in 76.2%.
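The Excel step can be reproduced with scipy. T.DIST(−0.719, 39, TRUE) is the lower-tail CDF of the t distribution, so the probability of exceeding the cutoff is its complement:

```python
from scipy import stats

# t-score and degrees of freedom taken from the Excel step above.
lower_tail = stats.t.cdf(-0.719, df=39)  # Excel's T.DIST(-0.719, 39, TRUE)
p_greater = 1.0 - lower_tail             # P(X > 1900) under the t model
```

This matches the 76.2% quoted in the text.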
Another alternative is to use an Empirical Distribution Function (EDF), as shown in the next step.
A table with the empirical distribution is shown as follows:
In the empirical distribution table, the first column contains the data sorted in ascending order. The second column gives, for each value, the number of values smaller than or equal to the current value (which coincides with the row number). The third column divides the second column by the sample size, giving a cumulative frequency. Finally, the fourth column is the complement of the third.
We want to calculate the probability of having a value greater than 1900. In the table, the value 1900 falls between rows 6 and 7 (1899.61 and 1909.35), so it is possible to say that the probability is between 82.5% and 85%. Note that there is no guarantee the true value is within this interval, but for non-normal data this is a simple method to get a notion of the probability.
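The four-column construction above is mechanical and easy to script; here is a sketch (the function names are ours, and the sample passed in would be the Table 8 data):

```python
import numpy as np

def empirical_table(data):
    """Columns: sorted value, count <= value, cumulative freq, complement."""
    x = np.sort(np.asarray(data, dtype=float))
    count = np.arange(1, len(x) + 1)   # row number = # of values <= x
    cum = count / len(x)               # empirical P(X <= x)
    return np.column_stack([x, count, cum, 1.0 - cum])

def edf_prob_greater(data, cutoff):
    """P(X > cutoff) estimated directly from the sample."""
    x = np.asarray(data, dtype=float)
    return float(np.mean(x > cutoff))
```

With 40 values and the cutoff falling between rows 6 and 7, the complement column brackets the answer between 1 − 7/40 = 82.5% and 1 − 6/40 = 85%, as in the text.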
Initially we perform a goodness-of-fit test for a normal distribution. On Minitab: Stat, Basic Statistics, Normality Test, selecting the Anderson-Darling (AD) and Kolmogorov-Smirnov (KS) tests, whose results are shown in the figure.
For Anderson-Darling the null hypothesis of normality is rejected (p-value < 0.05). Therefore, it is not plausible to assume the distribution is normal. Because the distribution is not normal, we need to estimate the type of the distribution. Minitab menu: Stat, Quality Tools, Individual Distribution Identification. By doing so, we get the table "Goodness of Fit Test" (see the figure).
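The same two tests are available outside Minitab; a sketch with a placeholder sample follows. Note two caveats: scipy's `anderson` returns critical values rather than a p-value, and `kstest` here is fed normal parameters estimated from the same sample, which makes its p-value only approximate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sample = rng.weibull(25.9, size=40) * 2042.6  # placeholder data, not Table 8

# Anderson-Darling: reject normality when the statistic exceeds the
# critical value at the chosen significance level (5% here).
ad = stats.anderson(sample, dist="norm")
idx_5pct = list(ad.significance_level).index(5.0)
reject_ad = ad.statistic > ad.critical_values[idx_5pct]

# Kolmogorov-Smirnov against a normal with parameters estimated from the sample.
ks_stat, ks_p = stats.kstest(sample, "norm",
                             args=(sample.mean(), sample.std(ddof=1)))
reject_ks = ks_p < 0.05
```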
In our case, the first is "Johnson Transformation", then "Box-Cox Transformation", and after that, "Weibull". Because the first two are transformations rather than native distributions, and because there is no straightforward way to use them in Minitab, we pick the Weibull distribution.
Along with that table, Minitab also reports the estimated parameters of each candidate distribution.
In the next step, on the Minitab menu: Calc, Probability Distributions, Weibull. Select "Cumulative probability", type the two parameter values and, in the field "Input constant", type the value 1900. By doing so, we have the answer shown in the figure.
We want the probability of having values greater than 1900, so we have 1 − 0.2098 = 0.7902 = 79.02%. Finally, an answer!
First, we mention the source of the data: we generated 20,000 values using the software Matlab, function wblrnd(2042.6, 25.8773, 20000, 1), producing a population with a Weibull distribution, mean 2000.3 and standard deviation 97.192. From that population, we collected our 40 samples at random. Because we generated the population, we know the correct answer. A summary of the results is shown in Table 11.
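Given the generating parameters quoted above, the "correct answer" can be reproduced directly from the Weibull survival function (scipy's `weibull_min`, with `c` as the shape and `scale` as in the Matlab call):

```python
from scipy import stats

# Population parameters as in wblrnd(2042.6, 25.8773, 20000, 1).
scale, shape = 2042.6, 25.8773
population = stats.weibull_min(c=shape, scale=scale)

true_p = population.sf(1900)  # P(lifetime > 1900 hours) for the known population
mean = population.mean()      # ~2000, close to the quoted 2000.3
```

The survival value is about 85.8%, consistent with the comparison in Table 11, where results near 85% come out closest to the truth.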
We already mentioned that ID1, ID2 and ID3 cannot solve the problem. Regarding the other tools, we see that in both Excel (ID4) and Minitab (ID5) the assumption of normality was rejected. Because Excel does not provide a straightforward method for non-normal distributions, we proposed using the Empirical Distribution Function just to get an idea of the probability, obtaining a value between 82.5% and 85%, which, compared with the correct answer, is a plausible value.
Using ID5, after considerable work identifying a suitable distribution type and its parameters and performing the calculation, we got a result of 79.02%.
For ID6 (this invention), the probability is 85.09%, with 79% confidence that the true value is between 80.09% and 90.09%. The error is smaller than with Excel and Minitab, and the true value is within the estimated interval.
This example shows how complicated these analyses can become. It is complicated to calculate the probability, and even after that, you still do not know the uncertainty of the result. The One Click Universal Probability Calculator makes this calculation much easier and also gives an estimate of the uncertainty involved. For example, we see that using ID5, the calculated probability is 79.02%. It is likely that the decision maker would believe this result (79.02%) and make his decision. Tool ID4 (and also tool ID5) does nothing to make the user aware of how far the result might be from the true probability value.
Another point is that the user does not need to worry about the many statistical assumptions and tricky details; everything is handled by our algorithm (using the method proposed in Sections 3.2.1 to 3.2.5) in the background.
Once the product is in the market, we'd like to protect our unique interface based on one-click calculation and also protect the method used to perform such calculations.