Nonlinear regression
Overview
In statistics, nonlinear regression is the problem of inference for a model
- <math> y = f(x,\theta) + \varepsilon </math>
based on multidimensional <math>x</math>, <math>y</math> data, where <math>f</math> is some nonlinear function with respect to unknown parameters <math>\theta</math>. At a minimum, we may wish to obtain the parameter values associated with the best-fitting curve (usually, in the least-squares sense). (See the article on curve fitting.) In addition, statistical inference may be needed, such as confidence intervals for the parameters, or a test of whether the fitted model agrees well with the data.
The scope of nonlinear regression is clarified by considering the case of polynomial regression, which is actually best not treated as a case of nonlinear regression. When <math>f</math> takes a form such as
- <math>f(x) = a x^2 + bx + c </math>
our function <math>f</math> is nonlinear as a function of <math>x</math> but linear as a function of the unknown parameters <math>a</math>, <math>b</math>, and <math>c</math>. The latter is the sense of "linear" in the context of statistical regression modeling. The appropriate computational procedures for polynomial regression are therefore those of (multiple) linear regression, with the two predictor variables <math>x</math> and <math>x^2</math>. However, it is occasionally suggested that nonlinear regression is needed for fitting polynomials. Practical consequences of this misunderstanding include the use of an iterative nonlinear optimization procedure when the solution is actually available in closed form. In addition, the linear regression capabilities of some software are likely to be more comprehensive than its nonlinear regression capabilities.
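As a minimal sketch of this point, the quadratic model above can be fitted by ordinary linear least squares on the predictors <math>x^2</math> and <math>x</math>. The Python example below uses NumPy; the data, coefficient values, and noise level are synthetic and purely illustrative.

```python
import numpy as np

# Synthetic, purely illustrative data from a quadratic trend with additive noise.
rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 50)
y = 1.5 * x**2 - 0.8 * x + 2.0 + rng.normal(scale=0.3, size=x.size)

# Design matrix with columns x^2, x, 1: the model is linear in (a, b, c),
# so ordinary linear least squares yields the solution in closed form.
X = np.column_stack([x**2, x, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(X, y, rcond=None)
print(a, b, c)  # estimates of a, b, c
```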
General
Linearization
Some nonlinear regression problems can be linearized by a suitable transformation of the model formulation.
For example, consider the nonlinear regression problem (ignoring the error):
- <math> y = a e^{b x}. \,\!</math>
If we take the logarithm of both sides, it becomes
- <math> \ln{(y)} = \ln{(a)} + b x, \,\!</math>
suggesting estimation of the unknown parameters by a linear regression of ln(y) on x, a computation that does not require iterative optimization. However, linearization should be used with caution: the influences of the individual data values will change, as will the error structure of the model and the interpretation of any inferential results, and these may not be desired effects. On the other hand, depending on the largest source of error, linearization may make the errors normally distributed; for instance, if the error in <math>y</math> is multiplicative and log-normally distributed, taking logarithms yields an additive, normally distributed error. The choice of whether to linearize must therefore be informed by modeling considerations.
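A minimal sketch of this linearization, assuming NumPy and synthetic data with multiplicative log-normal error (the case in which the log transform produces additive, normally distributed errors):

```python
import numpy as np

# Synthetic data following y = a * exp(b x) with multiplicative log-normal noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 2, 40)
y = 2.0 * np.exp(0.9 * x) * rng.lognormal(sigma=0.1, size=x.size)

# Linear regression of ln(y) on x: the slope estimates b, the intercept ln(a).
slope, intercept = np.polyfit(x, np.log(y), 1)
a_hat, b_hat = np.exp(intercept), slope
print(a_hat, b_hat)
```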
"Linearization" as used here is not to be confused with the local linearization involved in standard algorithms such as the Gauss-Newton algorithm. Similarly, the methodology of generalized linear models does not involve linearization for parameter estimation.
Ordinary and weighted least squares
The best-fit curve is often assumed to be that which minimizes the sum of squared residuals (SSR). This is the ordinary least squares (OLS) approach. However, in cases where the errors have different variances, a weighted sum of squared residuals (SSWR) may be minimized instead; this is the weighted least squares (WLS) criterion. In practice, the variance may depend on the fitted mean, in which case the weights are recomputed at each iteration, in an iteratively reweighted least squares algorithm.
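The sketch below illustrates one form of iteratively reweighted least squares, assuming SciPy's curve_fit and a hypothetical variance model in which the error standard deviation is proportional to the fitted mean; the exponential model and all numeric values are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(b * x)

# Synthetic heteroscedastic data: error standard deviation grows with the mean.
rng = np.random.default_rng(2)
x = np.linspace(0, 2, 40)
mean = 2.0 * np.exp(0.9 * x)
y = mean + rng.normal(scale=0.1 * mean)

# Iteratively reweighted least squares: the weights come from the current
# fitted mean, so they are recomputed after each fit until the estimates settle.
params = np.array([1.0, 1.0])          # starting guess
for _ in range(5):
    sigma = 0.1 * model(x, *params)    # assumed variance model: sd proportional to mean
    params, _ = curve_fit(model, x, y, p0=params, sigma=sigma)
print(params)
```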
In general, there is no closed-form expression for the best-fitting parameters, as there is in linear regression, so numerical optimization algorithms are applied to determine them. Again in contrast to linear regression, the objective function may have many local optima. In practice, initial guesses for the parameters are supplied to the optimization algorithm in an attempt to find the global optimum.
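A sketch of this dependence on starting values, using SciPy's least_squares to minimize the sum of squared residuals directly (synthetic data; the two starting guesses are arbitrary):

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(theta, x, y):
    a, b = theta
    return y - a * np.exp(b * x)

# Synthetic data for an exponential model.
rng = np.random.default_rng(3)
x = np.linspace(0, 2, 40)
y = 2.0 * np.exp(0.9 * x) + rng.normal(scale=0.2, size=x.size)

# The optimizer only guarantees a local optimum of the sum of squared
# residuals, so the starting guess matters; trying several is common practice.
for guess in ([1.0, 1.0], [10.0, -1.0]):
    fit = least_squares(residuals, guess, args=(x, y))
    print(guess, "->", fit.x, "SSR:", 2 * fit.cost)  # fit.cost is SSR / 2
```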
Monte Carlo estimation of errors
If the error of each data point is known, then the reliability of the parameters can be estimated by Monte Carlo simulation. Each data point is randomized according to its mean and standard deviation, the curve is fitted, and the parameters recorded. The data points are then randomized again and new parameters determined. In this way, many sets of parameters are generated, and their mean and standard deviation can be calculated.[1][2]
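A minimal sketch of this procedure, assuming SciPy's curve_fit, synthetic data, and known per-point standard deviations (all values illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(b * x)

# Synthetic observed data with a known standard deviation for every point.
rng = np.random.default_rng(4)
x = np.linspace(0, 2, 30)
sd = np.full_like(x, 0.2)
y_obs = 2.0 * np.exp(0.9 * x) + rng.normal(scale=sd)

# Monte Carlo: repeatedly perturb each point by its stated error, refit,
# and summarize the spread of the fitted parameters.
fits = []
for _ in range(1000):
    y_sim = y_obs + rng.normal(scale=sd)
    popt, _ = curve_fit(model, x, y_sim, p0=[1.0, 1.0])
    fits.append(popt)
fits = np.array(fits)
print("parameter means:", fits.mean(axis=0))
print("parameter standard deviations:", fits.std(axis=0))
```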
References
- Seber, G. A. F. and Wild, C. J. (1989). Nonlinear Regression. New York: John Wiley and Sons.
- Bethea, R. M., Duran, B. S. and Boullion, T. L. (1985). Statistical Methods for Engineers and Scientists. New York: Marcel Dekker. ISBN 0-8247-7227-X.
External links
- NLINLS, Nonlinear least squares by differential evolution method of global optimization: a Fortran program
- ISAT, Nonlinear regression with explicit error control
- Zunzun.com, Online curve and surface fitting
- NLREG, a proprietary program
- MATLAB statistics
- simplemax.net, online optimization service