Monthly Archives: March 2007

The Subtleties of OpenCV Matrix Pointers

Suppose we define a matrix:
CvMat* uv = cvCreateMat(16, 2, CV_32FC1);
float* temp = (float*)(uv->data.ptr + 8);

Case 1:
float* temp1 = (float*)(uv->data.ptr + 8 + 4 * 2);

Case 2:
float* temp2 = temp + 8;

My first reaction was that these two cases are identical, and that the second form even saves a little computation by avoiding one multiplication. That is exactly what I did, deliberately writing it the second way.

But in fact, temp1 is only 8 bytes past temp, because uv->data.ptr is an unsigned char* and the arithmetic happens in bytes, while temp2 is 8 * sizeof(float) = 32 bytes past temp, because temp is a float* and the arithmetic happens in floats. The two pointers land in very different places. That was the lesson from an afternoon of debugging.

Linear least squares

http://en.wikipedia.org/wiki/Linear_least_squares

Limitations
The least squares approach relies on the calculation of the pseudoinverse (A^T A)^{-1} A^T. The pseudoinverse is guaranteed to exist for any full-rank matrix A. However, in some cases the matrix A^T A is ill-conditioned; this occurs when the measurements are only marginally related to the estimated parameters. In these cases, the least squares estimate amplifies the measurement noise, and may be grossly inaccurate. This may occur even when the pseudoinverse itself can be accurately calculated numerically. Various regularization techniques can be applied in such cases, the most common of which is called Tikhonov regularization. If further information about the parameters is known, for example, a range of possible values of x, then minimax techniques can also be used to increase the stability of the solution.

Another drawback of the least squares estimator is the fact that it seeks to minimize the norm of the measurement error, ||Ax - b||. In many cases, one is truly interested in obtaining small error in the parameter x, e.g., a small value of ||x - x_hat||. However, since x is unknown, this quantity cannot be directly minimized. If a prior probability on x is known, then a Bayes estimator can be used to minimize the mean squared error, E{||x - x_hat||^2}. The least squares method is often applied when no prior is known. Surprisingly, however, better estimators can be constructed, an effect known as Stein's phenomenon. For example, if the measurement error is Gaussian, several estimators are known which dominate, or outperform, the least squares technique; the most common of these is the James-Stein estimator.

Nonlinear Least Squares Fitting
http://mathworld.wolfram.com/NonlinearLeastSquaresFitting.html
http://statpages.org/nonlin.html
http://www.itl.nist.gov/div898/strd/general/bkground.html