# NxLOCREG - Local or moving polynomial regression

Calculates the local/moving non-parametric regression (i.e. LOESS, LOWESS, etc.) forecast.

## Syntax

NxLOCREG(X, Y, P, Kernel, Alpha, Optimize, Target, Return)
X
is the x-component of the input data table (a one dimensional array of cells (e.g. rows or columns)).
Y
is the y-component (i.e. function) of the input data table (a one dimensional array of cells (e.g. rows or columns)).
P
is the polynomial order (0 = constant, 1 = linear, 2 = quadratic, 3 = cubic, etc.). If missing, P = 0.
Kernel
is the weighting kernel function used with the KNN-regression method: 0 (or missing) = Uniform, 1 = Triangular, 2 = Epanechnikov, 3 = Quartic, 4 = Triweight, 5 = Tricube, 6 = Gaussian, 7 = Cosine, 8 = Logistic, 9 = Sigmoid, 10 = Silverman.
| Value | Kernel |
|-------|--------|
| 0 | Uniform Kernel (default) |
| 1 | Triangular Kernel |
| 2 | Epanechnikov Kernel |
| 3 | Quartic Kernel |
| 4 | Triweight Kernel |
| 5 | Tricube Kernel |
| 6 | Gaussian Kernel |
| 7 | Cosine Kernel |
| 8 | Logistic Kernel |
| 9 | Sigmoid Kernel |
| 10 | Silverman Kernel |
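As a sketch of what the table above means, each kernel can be viewed as a weight function of the normalized distance u = (x − x0)/h, where h is the bandwidth; the compactly supported kernels assign zero weight for |u| > 1. The Python snippet below (illustrative only, not NumXL's implementation) shows a few of them with their standard normalizing constants:

```python
import numpy as np

# Weight functions for a subset of the kernels in the table above,
# written in terms of the normalized distance u = (x - x0) / h.
def uniform(u):      return np.where(np.abs(u) <= 1, 0.5, 0.0)
def triangular(u):   return np.where(np.abs(u) <= 1, 1 - np.abs(u), 0.0)
def epanechnikov(u): return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)
def quartic(u):      return np.where(np.abs(u) <= 1, 15/16 * (1 - u**2)**2, 0.0)
def tricube(u):      return np.where(np.abs(u) <= 1, 70/81 * (1 - np.abs(u)**3)**3, 0.0)
def gaussian(u):     return np.exp(-0.5 * np.asarray(u)**2) / np.sqrt(2 * np.pi)

u = np.linspace(-1, 1, 5)
print(epanechnikov(u))  # weights peak at u = 0 and fall to 0 at |u| = 1
```

All kernels above decay with distance except the uniform kernel, which weights every point in the window equally; the Gaussian kernel has unbounded support, so every observation receives some weight.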
Alpha
is the fraction of the total number (n) of data points that are used in each local fit (between 0 and 1). If missing or omitted, Alpha = 0.333.
Optimize
is a flag (TRUE/FALSE) for searching for and using the optimal integer value of K (i.e. the number of data points in each local fit). If missing or omitted, Optimize is assumed FALSE.
Target
is the desired x-value(s) to interpolate for (a single value or a one-dimensional array of cells (e.g. rows or columns)).
Return
is a number that determines the type of return value: 0 = Forecast (default), 1 = Errors, 2 = Smoothing parameter (bandwidth), 3 = RMSE (CV). If missing or omitted, NxLOCREG returns the forecast/regression value(s).

| Return | Description |
|--------|-------------|
| 0 | Forecast/Regression value(s) (default) |
| 1 | Forecast/Regression error(s) |
| 2 | Smoothing parameter (bandwidth) |
| 3 | RMSE (cross-validation) |
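For intuition about the X, Y, Alpha, and Target arguments, the same semantics can be approximated with the LOWESS smoother in Python's statsmodels, whose `frac` argument plays the role of Alpha (this is an analogue for illustration, not NumXL's implementation):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)                       # X: explanatory variable
y = np.sin(x) + rng.normal(scale=0.2, size=x.size)  # Y: noisy response

# frac plays the role of Alpha (share of points in each local fit),
# and xvals plays the role of Target (where to evaluate the fit).
target = np.array([2.5, 5.0, 7.5])
fit = lowess(y, x, frac=0.333, xvals=target)
print(fit)  # smoothed y-values at the three target points
```

As the Remarks below note, larger `frac` values produce smoother fits that track the data less closely, while smaller values let the fit follow local fluctuations.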

## Remarks

1. Local regression is a non-parametric regression method that combines multiple regression models in a k-nearest-neighbor-based meta-model.
2. Local regression or local polynomial regression, also known as moving regression, is a generalization of moving average and polynomial regression.
3. Its most common methods, initially developed for scatterplot smoothing, are LOESS (locally estimated scatterplot smoothing) and LOWESS (locally weighted scatterplot smoothing).
4. Outside econometrics, LOESS is commonly referred to as the Savitzky–Golay filter, which was proposed 15 years before LOESS.
5. $\alpha$ is called the smoothing parameter because it controls the flexibility of the LOESS regression function. Large values of $\alpha$ produce the smoothest functions that wiggle the least in response to fluctuations in the data. The smaller $\alpha$ is, the closer the regression function will conform to the data. Using too small a value of the smoothing parameter is not desirable, however, since the regression function will eventually start to capture the random error in the data.
6. The number of rows of the response variable (Y) must be equal to the number of rows of the explanatory variable (X).
7. Observations (i.e. rows) with missing values in X or Y are removed.
8. NxLOCREG is related to NxKREG, but NxLOCREG uses the nearest K points (K-NN) to set the kernel bandwidth and perform its local regression.
9. The time series may include missing values (e.g. #N/A) at either end.
10. The NxLOCREG() function is available starting with version 1.66 PARSON.
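To make remarks 1, 5, and 8 concrete, the sketch below implements KNN-based local polynomial regression in Python: for each target point it takes the K nearest neighbors (K ≈ Alpha · n), weights them with a tricube kernel scaled by the distance to the K-th neighbor, and fits a weighted polynomial of order P. The parameter names mirror NxLOCREG's, but the tricube choice and the details are illustrative assumptions, not NumXL's actual implementation:

```python
import numpy as np

def locreg(x, y, targets, p=1, alpha=0.333):
    """Local polynomial regression with a K-NN tricube kernel (sketch)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    k = max(p + 1, int(np.ceil(alpha * x.size)))  # K from Alpha
    out = np.empty(len(targets))
    for i, t in enumerate(targets):
        d = np.abs(x - t)
        idx = np.argsort(d)[:k]            # K nearest neighbors of t
        h = d[idx].max() or 1.0            # bandwidth = distance to K-th neighbor
        u = d[idx] / h
        w = np.where(u < 1, (1 - u**3)**3, 0.0)  # tricube weights
        # Weighted least-squares fit centered at t; polyfit's w multiplies
        # the residuals, so pass sqrt of the kernel weights.
        coef = np.polyfit(x[idx] - t, y[idx], p, w=np.sqrt(w))
        out[i] = coef[-1]                  # local polynomial evaluated at t
    return out

x = np.linspace(0, 10, 50)
print(locreg(x, x**2, [5.0], p=2, alpha=0.5))  # close to [25.]
```

Because the bandwidth is the distance to the K-th neighbor, it adapts to the local density of the data, which is what distinguishes this K-NN scheme from a fixed-bandwidth kernel regression.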

## References

• Pagan, A.; Ullah, A. (1999). Nonparametric Econometrics. Cambridge University Press. ISBN 0-521-35564-8.
• Simonoff, Jeffrey S. (1996). Smoothing Methods in Statistics. Springer. ISBN 0-387-94716-7.
• Li, Qi; Racine, Jeffrey S. (2007). Nonparametric Econometrics: Theory and Practice. Princeton University Press. ISBN 0-691-12161-3.
• Henderson, Daniel J.; Parmeter, Christopher F. (2015). Applied Nonparametric Econometrics. Cambridge University Press. ISBN 978-1-107-01025-3.
• Gyorfi, L., Kohler, M., Krzyzak, A. & Walk, H. (2002), A distribution-free Theory of Nonparametric Regression, Springer, New York.
• Hastie, T. & Tibshirani, R. (1990), Generalized Additive Models, Chapman and Hall, London.
• Hastie, T., Tibshirani, R. & Friedman, J. (2009), The Elements of Statistical Learning; Data Mining, Inference, and Prediction, Springer, New York. Second edition.
• Johnstone, I. (2011), Gaussian estimation: Sequence and wavelet models, Under contract to Cambridge University Press. Online version at http://www-stat.stanford.edu/~imj.
• Kim, S.-J., Koh, K., Boyd, S. & Gorinevsky, D. (2009), ‘ℓ1 trend filtering’, SIAM Review 51(2), 339–360.
• Kimeldorf, G. & Wahba, G. (1970), ‘A correspondence between Bayesian estimation on stochastic processes and smoothing by splines’, Annals of Mathematical Statistics 41(2), 495–502.
• Lin, Y. & Zhang, H. H. (2006), ‘Component selection and smoothing in multivariate nonparametric regression’, Annals of Statistics 34(5), 2272–2297.
• Mallat, S. (2008), A wavelet tour of signal processing, Academic Press, San Diego. Third edition.
• Mammen, E. & van de Geer, S. (1997), ‘Locally adaptive regression splines’, Annals of Statistics 25(1), 387–413.
• Raskutti, G., Wainwright, M. & Yu, B. (2012), ‘Minimax-optimal rates for sparse additive models over kernel classes via convex programming’, Journal of Machine Learning Research 13, 389–427.
• Ravikumar, P., Liu, H., Lafferty, J. & Wasserman, L. (2009), ‘Sparse additive models’, Journal of the Royal Statistical Society: Series B 75(1), 1009–1030.
• Scholkopf, B. & Smola, A. (2002), Learning with Kernels, MIT Press, Cambridge.
• Silverman, B. (1984), ‘Spline smoothing: the equivalent variable kernel method’, Annals of Statistics 12(3), 898–916.
• Steidl, G., Didas, S. & Neumann, J. (2006), ‘Splines in higher-order TV regularization’, International Journal of Computer Vision 70(3), 214–255.
• Stone, C. (1985), ‘Additive regression models and other nonparametric models’, Annals of Statistics 13(2), 689–705.
• Tibshirani, R. J. (2014), ‘Adaptive piecewise polynomial estimation via trend filtering’, Annals of Statistics 42(1), 285–323.
• Tsybakov, A. (2009), Introduction to Nonparametric Estimation, Springer, New York.
• Wahba, G. (1990), Spline Models for Observational Data, Society for Industrial and Applied Mathematics, Philadelphia.
• Wang, Y., Smola, A. & Tibshirani, R. J. (2014), ‘The falling factorial basis and its statistical properties’, International Conference on Machine Learning 31.
• Wasserman, L. (2006), All of Nonparametric Statistics, Springer, New York.