This is our second entry on the smoothing functionality in NumXL. In an earlier entry, we discussed each smoothing function, highlighted its assumptions and parameters, and demonstrated its application through examples.
What's new?
Since we published the first entry, we have significantly enhanced the exponential smoothing functions by introducing an option to auto-calibrate the values of the smoothing parameters.
The exponential smoothing functions (simple, linear, double, and triple exponential smoothing) have a built-in optimizer that, if enabled, searches for the optimal values of the smoothing parameters.
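To make the idea concrete, here is a minimal sketch in Python of how such an auto-calibration could work for simple exponential smoothing, assuming a grid search that minimizes the in-sample sum of squared one-step-ahead errors. This is only an illustration of the technique; it is not NumXL's implementation, and all function names below are ours.

```python
# Illustrative sketch only -- not NumXL's implementation. We pick alpha for
# simple exponential smoothing by minimizing the in-sample sum of squared
# one-step-ahead forecast errors over a grid of candidate values.
import numpy as np

def ses(x, alpha):
    """Smoothed series for a given alpha; the level is initialized to x[0]."""
    s = np.empty(len(x))
    s[0] = x[0]
    for t in range(1, len(x)):
        s[t] = alpha * x[t] + (1.0 - alpha) * s[t - 1]
    return s

def sse(x, alpha):
    """Sum of squared one-step-ahead errors: s[t-1] is the forecast of x[t]."""
    x = np.asarray(x, dtype=float)
    s = ses(x, alpha)
    return float(np.sum((x[1:] - s[:-1]) ** 2))

def optimize_alpha(x, grid=np.linspace(0.01, 0.99, 99)):
    """Grid-search stand-in for the built-in optimizer."""
    return min(grid, key=lambda a: sse(x, a))
```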
Why did we add this enhancement?
Smoothing functions are probably the most used functions in NumXL. Since the release of those functions, we have had numerous requests from customers to further streamline the smoothing process, and, as always, we’ve listened and made the needed enhancements. The smoothing functions have not changed; we just made them easier to use.
Why should I care?
Aside from the time and effort saved in finding the optimal values in Excel (refer to our first entry), you can now conduct back-testing of exponential smoothing with your data.
What does “back-testing” mean?
This means that the built-in optimizer in the smoothing functions tries to find the best values for the smoothing parameters using only the given input data set.
Example: to compute the smoothing function value at time $t_n$, we pass the input data set $\{x_1,x_2,\cdots,x_n\}$, and the built-in optimizer uses only those $n$ observations to find the best values for the smoothing parameters. Next, to compute the smoothing function value at time $t_{n+1}$, we pass the data set $\{x_1,x_2,\cdots,x_{n+1}\}$. The optimizer now uses $n+1$ observations to find the best values, which may differ slightly from those found earlier. In other words, the values of the smoothing parameters are re-evaluated at each time step, using all prior observations.
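Continuing the sketch above (again, illustrative Python rather than NumXL's API), the expanding-window re-optimization described in this example might look like the following:

```python
# Back-testing sketch: at each step n, the smoothing parameter is re-estimated
# on the expanding window {x_1, ..., x_n}, and the last smoothed level is used
# as the one-step-ahead forecast of x_{n+1}. Uses ses() and optimize_alpha()
# from the earlier sketch.
def backtest(x, min_window=10):
    forecasts = {}
    for n in range(min_window, len(x)):
        window = x[:n]                       # only data available at time t_n
        alpha_n = optimize_alpha(window)     # may differ slightly from the previous step
        forecasts[n] = ses(window, alpha_n)[-1]  # one-step-ahead forecast of x[n]
    return forecasts
```

For example, `backtest(np.asarray(series))` returns a dictionary keyed by the forecast origin, with the parameter re-calibrated at every origin.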
Why not use the same (fixed) values?
In the earlier entry, we used the full data set to find the optimal values of the smoothing parameters, but this is not useful for evaluating how the functions would fare in an out-of-sample situation. Using the approach above, the function only uses the information available up to that point in time; as new information becomes available, the calculation is re-evaluated.
This is what we often refer to as back-testing.
What are the starting values of the smoothing parameters?
You now have the option of passing the starting values through the same arguments (e.g., alpha, beta, and gamma), or you can omit one or more of them, in which case the functions fill in a fixed plug-in value (i.e., 0.33). In a future implementation, we are considering data-driven starting values based on best practices in the literature.
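As a hypothetical illustration of that argument handling (the names, defaults, and optimizer below are ours, not NumXL's signatures), an omitted starting value could fall back to the fixed plug-in value before seeding a local optimizer:

```python
# Sketch of the starting-value behavior described above (not NumXL's API).
# If alpha is omitted, the fixed plug-in value 0.33 seeds the optimizer.
from scipy.optimize import minimize

def calibrate_alpha(x, alpha=None):
    start = 0.33 if alpha is None else alpha            # plug-in default when omitted
    res = minimize(lambda a: sse(x, a[0]),               # sse() from the earlier sketch
                   x0=[start], bounds=[(0.001, 0.999)], method="L-BFGS-B")
    return float(res.x[0])
```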