Circle fitting algorithms

Reference: adapted from https://people.cas.uab.edu/~mosya/cl/CPPcircle.html

Geometric circle fits

Levenberg-Marquardt fit in the "full" (a,b,R) space
    (perhaps the best geometric circle fit)
    https://people.cas.uab.edu/~mosya/cl/CircleFitByLevenbergMarquardtFull.cpp

Levenberg-Marquardt fit in the "reduced" (a,b) space
    (may be a little faster than the above in favorable cases)
    https://people.cas.uab.edu/~mosya/cl/CircleFitByLevenbergMarquardtReduced.cpp

Chernov-Lesort fit
    (designed to converge from any initial guess,
    but slower than the Levenberg-Marquardt fits)
    https://people.cas.uab.edu/~mosya/cl/CircleFitByChernovLesort.cpp

Chernov-Houssam fit
    (designed to converge from any initial guess;
    employs numerically stable formulas for large circles)
    https://people.cas.uab.edu/~mosya/cl/CircleFitByChernovHoussam.cpp

Note: every geometric fit must be supplied with an initial guess.
Use an algebraic fit for this purpose; we recommend the Taubin fit (see the usage sketch below).
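
A minimal usage sketch of this pipeline (assuming the Data and Circle classes and the fitting routines listed further down this page are compiled together; the sample points are made up for illustration):

int main()
{
    // example data, made up for illustration: points roughly on the unit circle
    reals x[] = { 1.00, 0.10, -0.95, -1.02, 0.05, 0.98 };
    reals y[] = { 0.05, 1.02,  0.20, -0.15, -1.00, -0.30 };
    Data data(6, x, y);

    Circle guess = CircleFitByTaubin(data);     // algebraic fit: needs no initial guess;
                                                // it also computes data.meanX/meanY, used by the LM routine
    Circle best;
    int code = CircleFitByLevenbergMarquardtFull(data, guess, 0.001, best);

    if (code == 0) best.print();                // center (a,b), radius r, sigma s, iteration counts
    return code;
}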

   

Algebraic circle fits

Kasa fit
    (the simplest and fastest fit, but biased toward smaller circles when an incomplete arc is observed)
    https://people.cas.uab.edu/~mosya/cl/CircleFitByKasa.cpp

Pratt fit
    (more robust than the Kasa fit, but a little slower)
    https://people.cas.uab.edu/~mosya/cl/CircleFitByPratt.cpp

Taubin fit
    (similar to the Pratt fit, but a bit faster and a bit more accurate;
    perhaps the best algebraic circle fit)
    https://people.cas.uab.edu/~mosya/cl/CircleFitByTaubin.cpp

Hyper fit
    (a new fit: a combination of the Pratt and Taubin fits that eliminates the essential bias;
    the speed is the same as that of the Pratt fit)
    https://people.cas.uab.edu/~mosya/cl/CircleFitByHyper.cpp

Levenberg-Marquardt fit in the "full" (a,b,R) space
int CircleFitByLevenbergMarquardtFull (Data& data, Circle& circleIni, reals LambdaIni, Circle& circle)
/*                                     <------------------ Input ------------------->  <-- Output -->

       Geometric circle fit to a given set of data points (in 2D)
		
       Input:  data     - the class of data (contains the given points):
		
	       data.n   - the number of data points
	       data.X[] - the array of X-coordinates
	       data.Y[] - the array of Y-coordinates
		          
               circleIni - parameters of the initial circle ("initial guess")
		        
	       circleIni.a - the X-coordinate of the center of the initial circle
	       circleIni.b - the Y-coordinate of the center of the initial circle
	       circleIni.r - the radius of the initial circle
		        
	       LambdaIni - the initial value of the control parameter "lambda"
	                   for the Levenberg-Marquardt procedure
	                   (common choice is a small positive number, e.g. 0.001)
		        
       Output:
	       integer function value is a code:
	                  0:  normal termination, the best fitting circle is 
	                      successfully found
	                  1:  the number of outer iterations exceeds the limit (99)
	                      (indicator of a possible divergence)
	                  2:  the number of inner iterations exceeds the limit (99)
	                      (another indicator of a possible divergence)
	                  3:  the coordinates of the center are too large
	                      (a strong indicator of divergence)
		                   
	       circle - parameters of the fitting circle ("best fit")
		        
	       circle.a - the X-coordinate of the center of the fitting circle
	       circle.b - the Y-coordinate of the center of the fitting circle
 	       circle.r - the radius of the fitting circle
 	       circle.s - the root mean square error (the estimate of sigma)
 	       circle.i - the total number of outer iterations (updating the parameters)
 	       circle.j - the total number of inner iterations (adjusting lambda)
 		        
       Algorithm:  Levenberg-Marquardt running over the full parameter space (a,b,r)
                         
       See a detailed description in Section 4.5 of the book by Nikolai Chernov:
       "Circular and linear regression: Fitting circles and lines by least squares"
       Chapman & Hall/CRC, Monographs on Statistics and Applied Probability, volume 117, 2010.
         
		Nikolai Chernov,  February 2014
*/
{
    int code,i,iter,inner,IterMAX=99;
    
    reals factorUp=10.,factorDown=0.04,lambda,ParLimit=1.e+6;
    reals dx,dy,ri,u,v;
    reals Mu,Mv,Muu,Mvv,Muv,Mr,UUl,VVl,Nl,F1,F2,F3,dX,dY,dR;
    reals epsilon=3.e-8;
    reals G11,G22,G33,G12,G13,G23,D1,D2,D3;
    
    Circle Old,New;
    
//       starting with the given initial circle (initial guess)
    
    New = circleIni;
    
//       compute the root-mean-square error via function Sigma; see Utilities.cpp

    New.s = Sigma(data,New);
    
//       initializing lambda, iteration counters, and the exit code
    
    lambda = LambdaIni;
    iter = inner = code = 0;
    
NextIteration:
	
    Old = New;
    if (++iter > IterMAX) {code = 1;  goto enough;}
    
//       computing moments
    
    Mu=Mv=Muu=Mvv=Muv=Mr=0.;
    
    for (i=0; i<data.n; i++)
    {
        dx = data.X[i] - Old.a;
        dy = data.Y[i] - Old.b;
        ri = sqrt(dx*dx + dy*dy);
        u = dx/ri;
        v = dy/ri;
        Mu += u;
        Mv += v;
        Muu += u*u;
        Mvv += v*v;
        Muv += u*v;
        Mr += ri;
    }
    Mu /= data.n;
    Mv /= data.n;
    Muu /= data.n;
    Mvv /= data.n;
    Muv /= data.n;
    Mr /= data.n;
    
//       computing matrices
    
    F1 = Old.a + Old.r*Mu - data.meanX;
    F2 = Old.b + Old.r*Mv - data.meanY;
    F3 = Old.r - Mr;
    
    Old.g = New.g = sqrt(F1*F1 + F2*F2 + F3*F3);
    
try_again:
	
    UUl = Muu + lambda;
    VVl = Mvv + lambda;
    Nl = One + lambda;
    
//         Cholesky decomposition
    
    G11 = sqrt(UUl);
    G12 = Muv/G11;
    G13 = Mu/G11;
    G22 = sqrt(VVl - G12*G12);
    G23 = (Mv - G12*G13)/G22;
    G33 = sqrt(Nl - G13*G13 - G23*G23);
    
    D1 = F1/G11;
    D2 = (F2 - G12*D1)/G22;
    D3 = (F3 - G13*D1 - G23*D2)/G33;
    
    dR = D3/G33;
    dY = (D2 - G23*dR)/G22;
    dX = (D1 - G12*dY - G13*dR)/G11;
    
    if ((abs(dR)+abs(dX)+abs(dY))/(One+Old.r) < epsilon) goto enough;
    
//       updating the parameters
    
    New.a = Old.a - dX;
    New.b = Old.b - dY;
    
    if (abs(New.a)>ParLimit || abs(New.b)>ParLimit) {code = 3; goto enough;}
    
    New.r = Old.r - dR;
    
    if (New.r <= 0.)
    {
        lambda *= factorUp;
        if (++inner > IterMAX) {code = 2;  goto enough;}
        goto try_again;
    }
    
//       compute the root-mean-square error via function Sigma; see Utilities.cpp

    New.s = Sigma(data,New);   
    
//       check if improvement is gained
    
    if (New.s < Old.s)    //   yes, improvement
    {
        lambda *= factorDown;
        goto NextIteration;
    }
    else                       //   no improvement
    {
        if (++inner > IterMAX) {code = 2;  goto enough;}
        lambda *= factorUp;
        goto try_again;
    }
    
    //       exit
    
enough:
	
    Old.i = iter;    // total number of outer iterations (updating the parameters)
    Old.j = inner;   // total number of inner iterations (adjusting lambda)
    
    circle = Old;
    
    return code;
}
Levenberg-Marquardt fit in the "reduced" (a,b) space
 int CircleFitByLevenbergMarquardtReduced (Data& data, Circle& circleIni, reals LambdaIni, Circle& circle)
/*                                        <------------------ Input ------------------->  <-- Output -->

       Geometric circle fit to a given set of data points (in 2D)
		
       Input:  data     - the class of data (contains the given points):
		
	       data.n   - the number of data points
	       data.X[] - the array of X-coordinates
	       data.Y[] - the array of Y-coordinates
		          
               circleIni - parameters of the initial circle ("initial guess")
		        
	       circleIni.a - the X-coordinate of the center of the initial circle
	       circleIni.b - the Y-coordinate of the center of the initial circle
	       circleIni.r - the radius of the initial circle
		        
	       LambdaIni - the initial value of the control parameter "lambda"
	                   for the Levenberg-Marquardt procedure
	                   (common choice is a small positive number, e.g. 0.001)
		        
       Output:
	       integer function value is a code:
	                  0:  normal termination, the best fitting circle is 
	                      successfully found
	                  1:  the number of outer iterations exceeds the limit (99)
	                      (indicator of a possible divergence)
	                  2:  the number of inner iterations exceeds the limit (99)
	                      (another indicator of a possible divergence)
	                  3:  the coordinates of the center are too large
	                      (a strong indicator of divergence)
		                   
	       circle - parameters of the fitting circle ("best fit")
		        
	       circle.a - the X-coordinate of the center of the fitting circle
	       circle.b - the Y-coordinate of the center of the fitting circle
 	       circle.r - the radius of the fitting circle
 	       circle.s - the root mean square error (the estimate of sigma)
 	       circle.i - the total number of outer iterations (updating the parameters)
 	       circle.j - the total number of inner iterations (adjusting lambda)
 		        
       Algorithm:  Levenberg-Marquardt running over the "reduced" parameter space (a,b)
                   (the radius r is always set to its optimal value)
                         
       See a detailed description in Section 4.6 of the book by Nikolai Chernov:
       "Circular and linear regression: Fitting circles and lines by least squares"
       Chapman & Hall/CRC, Monographs on Statistics and Applied Probability, volume 117, 2010.
         
		Nikolai Chernov,  February 2014
*/
{
    int code,i,pivot,iter,inner,IterMAX=99;

    reals factorUp=10.,factorDown=0.04,lambda,ParLimit=1.e+6;
    reals dx,dy,ri,u,v;
    reals Mu,Mv,Muu,Mvv,Muv,Mr,A11,A12,A22,F1,F2,dX,dY;
    reals epsilon=3.e-8;
    reals G11,G12,G22,D1,D2;

    Circle Old,New;

    data.means();   // Compute x- and y-means (via a function in class "data") 
    
//       starting with the given initial circle (initial guess)

    New = circleIni;
    
//     compute the root-mean-square error via function SigmaReduced; see Utilities.cpp

    New.s = SigmaReduced(data,New);

//     initializing lambda, iteration counters, and the exit code

    lambda = LambdaIni;
    iter = inner = code = 0;

NextIteration:

    Old = New;
    if (++iter > IterMAX) {code = 1;  goto enough;}

//     computing moments

    Mu=Mv=Muu=Mvv=Muv=Mr=0.;

    for (i=0; i<data.n; i++)
    {
        dx = data.X[i] - Old.a;
        dy = data.Y[i] - Old.b;
        ri = sqrt(dx*dx + dy*dy);
        u = dx/ri;
        v = dy/ri;
        Mu += u;
        Mv += v;
        Muu += u*u;
        Mvv += v*v;
        Muv += u*v;
        Mr += ri;
    }
    Mu /= data.n;
    Mv /= data.n;
    Muu /= data.n;
    Mvv /= data.n;
    Muv /= data.n;
    Mr /= data.n;

//    computing matrices

    F1 = Old.a + Mu*Mr - data.meanX;
    F2 = Old.b + Mv*Mr - data.meanY;

try_again:

    A11 = (Muu - Mu*Mu) + lambda;
    A22 = (Mvv - Mv*Mv) + lambda;
    A12 = Muv - Mu*Mv;

    if (A11<epsilon || A22<epsilon || A11*A22-A12*A12<epsilon)
    {
        lambda *= factorUp;
        goto try_again;
    }

//      Cholesky decomposition with pivoting

    pivot=0;
    if (A11 < A22)
    {
       	swap(A11,A22);
    	swap(F1,F2);
        pivot=1;
    }

    G11 = sqrt(A11);
    G12 = A12/G11;
    G22 = sqrt(A22 - G12*G12);

    D1 = F1/G11;
    D2 = (F2 - G12*D1)/G22;

//    updating the parameters

    dY = D2/G22;
    dX = (D1 - G12*dY)/G11;
    
    if ((abs(dX)+abs(dY))/(One+abs(Old.a)+abs(Old.b)) < epsilon) goto enough;

    if (pivot != 0) swap(dX,dY);

    New.a = Old.a - dX;
    New.b = Old.b - dY;

    if (abs(New.a)>ParLimit || abs(New.b)>ParLimit) {code = 3; goto enough;}
    
//    compute the root-mean-square error via function SigmaReduced; see Utilities.cpp

    New.s = SigmaReduced(data,New);

//      check if improvement is gained

    if (New.s < Old.s)    //    yes, improvement
    {
        lambda *= factorDown;
        goto NextIteration;
    }
    else                  //  no improvement
    {
        if (++inner > IterMAX) {code = 2;  goto enough;}
        lambda *= factorUp;
        goto try_again;
    }

//       exit

enough:
    
//    compute the optimal radius via a function in Utilities.cpp

    Old.r = OptimalRadius(data,Old);
    Old.i = iter;
    Old.j = inner;
    
    circle = Old;
    
    return code;
}

Chernov-Lesort fit
 int CircleFitByChernovLesort (Data& data, Circle& circleIni, reals LambdaIni, Circle& circle) 
/*                            <------------------ Input ------------------->  <-- Output -->

       Geometric circle fit to a given set of data points (in 2D)
		
       Input:  data     - the class of data (contains the given points):
		
	       data.n   - the number of data points
	       data.X[] - the array of X-coordinates
	       data.Y[] - the array of Y-coordinates
		          
               circleIni - parameters of the initial circle ("initial guess")
		        
	       circleIni.a - the X-coordinate of the center of the initial circle
	       circleIni.b - the Y-coordinate of the center of the initial circle
	       circleIni.r - the radius of the initial circle
		        
	       LambdaIni - the initial value of the control parameter "lambda"
	                   for the Levenberg-Marquardt procedure
	                   (common choice is a small positive number, e.g. 0.001)
		        
       Output:
	       integer function value is a code:
	                  0:  normal termination, the best fitting circle is 
	                      successfully found
	                  1:  the number of outer iterations exceeds the limit (99)
	                      (indicator of a possible divergence)
	                  2:  the number of inner iterations exceeds the limit (99)
	                      (another indicator of a possible divergence)
	                  3:  the coordinates of the center are too large
	                      (a strong indicator of divergence)
		                   
	       circle - parameters of the fitting circle ("best fit")
		        
	       circle.a - the X-coordinate of the center of the fitting circle
	       circle.b - the Y-coordinate of the center of the fitting circle
 	       circle.r - the radius of the fitting circle
 	       circle.s - the root mean square error (the estimate of sigma)
 	       circle.i - the total number of outer iterations (updating the parameters)
 	       circle.j - the total number of inner iterations (adjusting lambda)
 		        
       Algorithm by Nikolai Chernov and Claire Lesort
                         
       See a detailed description in the journal paper:
       
       N. Chernov and C. Lesort, "Least squares fitting of circles"
          in J. Math. Imag. Vision, volume 23, (2005), pages 239-251.
          
       the algorithm is designed to converge from any initial guess,
       but it is complicated and generally very slow
         
		Nikolai Chernov,  February 2014
*/
{
    int code,i,iter,inner,IterMAX=99;
    
    reals factorUp=10.,factorDown=0.04,lambda;
    reals Aold,Fold,Told,Anew,Fnew,Tnew,DD,H,aabb;
    reals Xi,Yi,Zi,Ui,Vi,Gi,CT,ST,D,ADF,SQ,DEN,FACT,DGDAi,DGDFi,DGDTi;
    reals H11,H12,H13,H22,H23,H33,F1,F2,F3,dA,dF,dT;
    reals epsilon=3.e-8;  
    reals G11,G22,G33,G12,G13,G23,D1,D2,D3;
    reals Xshift=0.,Yshift=0.,dX=One,dY=0.,aTemp,bTemp,rTemp;
    
    Circle Old,New;
    
//       starting with the given initial circle (initial guess)
    
    New = circleIni;
    
//       compute the root-mean-square error via function Sigma; see Utilities.cpp

    New.s = Sigma(data,New);
    
    Anew = One/Two/New.r;
    aabb = New.a*New.a + New.b*New.b;
    Fnew = (aabb - New.r*New.r)*Anew;
    Tnew = acos(-New.a/sqrt(aabb));
    if (New.b > 0.) Tnew = Two*Pi - Tnew;
    
    if (One+Four*Anew*Fnew < epsilon) 
    {
        Xshift += dX;
        Yshift += dY;
        
        New.a += dX;
        New.b += dY;
        aabb = New.a*New.a + New.b*New.b;
        Fnew = (aabb - New.r*New.r)*Anew;
        Tnew = acos(-New.a/sqrt(aabb));
        if (New.b > 0.) Tnew = Two*Pi - Tnew;
    }
    
//       initializing lambda, iteration counters, and the exit code
    
    lambda = LambdaIni;
    iter = inner = code = 0;
    
NextIteration:
    
    Aold = Anew;
    Fold = Fnew;
    Told = Tnew;
    Old = New;
    
    if (++iter > IterMAX) {code = 1;  goto enough;}
    
//       computing moments
    
shiftXY:
	
    DD = One + Four*Aold*Fold;
    D = sqrt(DD);
    CT = cos(Told);
    ST = sin(Told);
    
    H11=H12=H13=H22=H23=H33=F1=F2=F3=0.;
    
    for (i=0; i<data.n; i++)
    {
        Xi = data.X[i] + Xshift;
        Yi = data.Y[i] + Yshift;
        Zi = Xi*Xi + Yi*Yi;
        Ui = Xi*CT + Yi*ST;
        Vi =-Xi*ST + Yi*CT;
        
        ADF = Aold*Zi + D*Ui + Fold;
        SQ = sqrt(Four*Aold*ADF + One);
        DEN = SQ + One;
        Gi = Two*ADF/DEN;
        FACT = Two/DEN*(One - Aold*Gi/SQ);
        DGDAi = FACT*(Zi + Two*Fold*Ui/D) - Gi*Gi/SQ;
        DGDFi = FACT*(Two*Aold*Ui/D + One);
        DGDTi = FACT*D*Vi;
        
        H11 += DGDAi*DGDAi;
        H12 += DGDAi*DGDFi;
        H13 += DGDAi*DGDTi;
        H22 += DGDFi*DGDFi;
        H23 += DGDFi*DGDTi;
        H33 += DGDTi*DGDTi;
        
        F1 += Gi*DGDAi;
        F2 += Gi*DGDFi;
        F3 += Gi*DGDTi;
    }
    Old.g = New.g = sqrt(F1*F1 + F2*F2 + F3*F3);
    
try_again:
	
//        Cholesky decomposition
	
    G11 = sqrt(H11 + lambda);
    G12 = H12/G11;
    G13 = H13/G11;
    G22 = sqrt(H22 + lambda - G12*G12);
    G23 = (H23 - G12*G13)/G22;
    G33 = sqrt(H33 + lambda - G13*G13 - G23*G23);
    
    D1 = F1/G11;
    D2 = (F2 - G12*D1)/G22;
    D3 = (F3 - G13*D1 - G23*D2)/G33;
    
    dT = D3/G33;
    dF = (D2 - G23*dT)/G22;
    dA = (D1 - G12*dF - G13*dT)/G11;
    
//       updating the parameters
    
    Anew = Aold - dA;
    Fnew = Fold - dF;
    Tnew = Told - dT;
    
    if (One+Four*Anew*Fnew < epsilon) 
    {
        Xshift += dX;
        Yshift += dY;
        
        H = sqrt(One+Four*Aold*Fold);
        aTemp = -H*cos(Told)/(Aold+Aold) + dX;
        bTemp = -H*sin(Told)/(Aold+Aold) + dY;
        rTemp = One/abs(Aold+Aold);
        
        Aold = One/(rTemp + rTemp);
        aabb = aTemp*aTemp + bTemp*bTemp;
        Fold = (aabb - rTemp*rTemp)*Aold;
        Told = acos(-aTemp/sqrt(aabb));
        if (bTemp > 0.) Told = Two*Pi - Told;
        
        lambda *= factorUp;
        inner++;
        goto shiftXY;
    }
    
    H = sqrt(One+Four*Anew*Fnew);
    New.a = -H*cos(Tnew)/(Anew+Anew) - Xshift;
    New.b = -H*sin(Tnew)/(Anew+Anew) - Yshift;
    New.r = One/abs(Anew+Anew);
    New.s = Sigma(data,New);
    
    if ((abs(New.a-Old.a) + abs(New.b-Old.b) + abs(New.r-Old.r))/(One + Old.r) < epsilon) goto enough;
    
//       check if improvement is gained
    
    if (New.s < Old.s)    //   yes, improvement
    {
    	lambda *= factorDown;
        goto NextIteration;
    }
    else                       //   no improvement
    {
        if (++inner > IterMAX) {code = 2;  goto enough;}
        lambda *= factorUp;
        goto try_again;
    }
    
    //       exit
    
enough:
	
    Old.i = iter;
    Old.j = inner;
    
    circle = Old;
    
    return code;
}

//****************** Perturb *********************************

Circle Perturb (Circle& New, Circle& Old, reals range)
{
    Circle Perturbed;

    if (range==0.) return New;

    Perturbed.a = New.a + (New.a - Old.a)*(range*rand()/RAND_MAX - range/Two);
    Perturbed.b = New.b + (New.b - Old.b)*(range*rand()/RAND_MAX - range/Two);
    Perturbed.r = New.r + (New.r - Old.r)*(range*rand()/RAND_MAX - range/Two);

    return Perturbed;
}

Chernov-Houssam fit
 void eigen2x2(reals a, reals b, reals c, reals& d1, reals& d2, reals& Vx, reals& Vy)
/*            <------- Input ----------> <--------------- Output ----------------->

       Eigendecomposition of a symmetric 2x2 matrix
          faster and more accurate than the library function
       
       Input:  a,b,c - components of the matrix [a c
                                                 c b]
       Output:  d1,d2 - eigenvalues
                Vx,Vy - unit eigenvector for d1
                
                The eigenvector for d2 need not be computed, it is (-Vy,Vx)
                
       Note:  d1 is the leading eigenvalue, i.e., |d1|>=|d2|
       
       Nikolai Chernov,  June 2012

*/
{
    reals disc,f;

    disc = pythag(a-b,Two*c);    // discriminant

    d1 = (a+b > 0.) ? (a + b + disc)/Two : (a + b - disc)/Two;
    d2 = (a*b - c*c)/d1;

    if (abs(a-d1) > abs(b-d1))
    {
        if ((f=pythag(c,d1-a))==0.) 
        {
            Vx = One; Vy = 0.;  return;
        }
        else       
        {
            Vx = c/f;  Vy = (d1-a)/f;
        }
    }
    else
    {
        if ((f=pythag(c,d1-b))==0.) 
        {
            Vx = One; Vy = 0.;  return;
        }
        else       
        {
            Vx = (d1-b)/f;  Vy = c/f;
        }
    }
    
    return;
}
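
For reference, a short derivation of the formulas used above: the eigenvalues of the symmetric 2x2 matrix [a c; c b]
follow from its characteristic polynomial,

    d_{1,2} = ( (a+b) +/- sqrt( (a-b)^2 + 4*c^2 ) ) / 2

The routine evaluates the square root with pythag() to avoid overflow, picks the sign of the root matching the sign
of a+b (so d1 is the root of larger magnitude and no cancellation occurs), and recovers the second eigenvalue from
the product of the eigenvalues, d1*d2 = det = a*b - c*c.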

reals SigmaWithLargeCircleOption (Data& data, Circle& circle)
/*                                <-------- Input --------->

		Compute the objective function	for the geometric circle fitting problem
		
		Input:  data     - the class of data (contains the given points):
		
		        data.n   - the number of data points
		        data.X[] - the array of X-coordinates
		        data.Y[] - the array of Y-coordinates
		        data.meanX - the mean X-coordinate
		        data.meanY - the mean Y-coordinate
		          (the last two must be precomputed)
		          
		        circle    - the class of circle parameters:
		        
		        circle.a - the X-coordinate of the center
		        circle.b - the Y-coordinate of the center
		        
		Output:
		        the value of the objective function
		        (more precisely, the square root of the average square of the distance) 
          
		Nikolai Chernov,  January 2013
*/
{
    int i,n=data.n;
    reals sum=0.,dx,dy,r,D[n];
    reals LargeCircle=Ten,a0,b0,del,s,c,x,y,z,p,t,g,W,Z;
   
    if (abs(circle.a)<LargeCircle && abs(circle.b)<LargeCircle)   // main case (not a large circle)
    {
    	for (i=0; i<n; i++)
    	{
    		dx = data.X[i] - circle.a;
    		dy = data.Y[i] - circle.b;
    		D[i] = sqrt(dx*dx+dy*dy);
    		sum += D[i];
    	}
    	r = sum/n;
    	
    	for (sum=0., i=0; i<n; i++)  sum += SQR(D[i] - r);
    	
    	return sum/n;
    }
    else   //  case of a large circle
    {
    	a0 = circle.a-data.meanX;  b0 = circle.b-data.meanY;
    	del = One/sqrt(a0*a0 + b0*b0);
    	s = b0*del;  c = a0*del;
    	
    	for (W=Z=0.,i=0; i<n; i++)
    	{
    		x = data.X[i] - data.meanX;
    		y = data.Y[i] - data.meanY;
    		z = x*x + y*y;
    		p = x*c + y*s;
    		t = del*z - Two*p;
    		g = t/(One+sqrt(One+del*t));
    		W += (z+p*g)/(Two+del*g);
    		Z += z;
    	}
    	W /= n;
    	Z /= n;
    	
    	return Z-W*(Two+del*del*W);
    }
}

void GradientHessian (Data& data, Circle& circle, reals& F1, reals& F2, reals& A11, reals& A22, reals& A12)
/*                    <-------- Input --------->  <----------------------- Output ----------------------->

		Compute the gradient vector and the Hessian matrix of the objective function
		        for the geometric circle fitting problem
		
		Input:  data     - the class of data (contains the given points):
		
		        data.n   - the number of data points
		        data.X[] - the array of X-coordinates
		        data.Y[] - the array of Y-coordinates
		        data.meanX - the mean X-coordinate
		        data.meanY - the mean Y-coordinate
		          (the last two must be precomputed)
		          
		        circle    - the class of circle parameters:
		        
		        circle.a - the X-coordinate of the center
		        circle.b - the Y-coordinate of the center
		        
		Output:
		        [F1 F2]  - the coordinates of the gradient vector
		        
		        A11 A12  - the components of the Hessian matrix
		        A12 A22    (it is symmetric, so only three are computed)
          
		Nikolai Chernov,  January 2013
*/
{
	int i,n=data.n;
    reals LargeCircle=Ten,dx,dy,r,u,v,Mr,Mu,Mv,Muu,Mvv,Muv,Muur,Mvvr,Muvr;
	reals a0,b0,del,dd,s,c,x,y,a,b,z,p,t,w,g,g1,gg1,gg2;
	reals X,Y,R,U,V,T,W,AA,BB,AB,AG,BG,GG,UUR,VVR,UVR;
    
    if (abs(circle.a)<LargeCircle && abs(circle.b)<LargeCircle)   // main case (not a large circle)
    {
    	for (Mr=Mu=Mv=Muu=Mvv=Muv=Muur=Mvvr=Muvr=0.,i=0; i<n; i++)
    	{
    		dx = data.X[i] - circle.a;
    		dy = data.Y[i] - circle.b;
    		r = sqrt(dx*dx + dy*dy);
    		u = dx/r;
    		v = dy/r;
    		Mr += r;
    		Mu += u;
    		Mv += v;
    		Muu += u*u;
    		Mvv += v*v;
    		Muv += u*v;
    		Muur += u*u/r;
    		Mvvr += v*v/r;
    		Muvr += u*v/r;
    	}
    	Mr /= n;
    	Mu /= n;
    	Mv /= n;
    	Muu /= n;
    	Mvv /= n;
    	Muv /= n;
    	Muur /= n;
    	Mvvr /= n;
    	Muvr /= n;
    	
    	F1 = circle.a + Mu*Mr - data.meanX;
    	F2 = circle.b + Mv*Mr - data.meanY;
    	
    	A11 = One - Mu*Mu - Mr*Mvvr;
    	A22 = One - Mv*Mv - Mr*Muur;
    	A12 = -Mu*Mv + Mr*Muvr;
    }
    else   //  case of a large circle
    {
    	
    	a0 = circle.a-data.meanX;  b0 = circle.b-data.meanY;
    	del = One/sqrt(a0*a0 + b0*b0);  dd = del*del;
    	s = b0*del;  c = a0*del;
    	
    	for (X=Y=R=T=W=AA=BB=AB=AG=BG=GG=0.,i=0; i<n; i++)
    	{
    		x = data.X[i] - data.meanX;
    		y = data.Y[i] - data.meanY;
    		z = x*x + y*y;
    		p = x*c + y*s;
    		t = Two*p-del*z;
    		w = sqrt(One-del*t);
    		g = -t/(One+w);
    		g1 = One/(One+del*g);
    		gg1 = g*g1;
    		gg2 = g/(Two+del*g);
    		a = (x+g*c)/w;
    		b = (y+g*s)/w;
    		X += x*gg1;
    		Y += y*gg1;
    		R += z + t*gg2;
    		T += t*gg1;
    		W += t*gg1*gg2;
    		AA += a*a*g1;
    		BB += b*b*g1;
    		AB += a*b*g1;
    		AG += a*gg1;
    		BG += b*gg1;
    		GG += g*gg1;
    	}
    	X /= n;
    	Y /= n;
    	R /= n;
    	T /= n;
    	W /= n;
    	AA /= n;
    	BB /= n;
    	AB /= n;
    	AG /= n;
    	BG /= n;
    	GG /= n;
    	
    	U = (T-del*W)*c/Two - X + R*c/Two; 
    	V = (T-del*W)*s/Two - Y + R*s/Two;
    	
//         compute the components of the gradient vector
    	
    	F1 = del*((dd*R*U - del*W*c + T*c)/Two - X);
    	F2 = del*((dd*R*V - del*W*s + T*s)/Two - Y);
    	
    	UUR = ((GG-R/Two)*c + Two*(AG-U))*c + AA;
    	VVR = ((GG-R/Two)*s + Two*(BG-V))*s + BB;
    	UVR = ((GG-R/Two)*c + (AG-U))*s + (BG-V)*c + AB;
    	
//         compute the components of the Hessian matrix
    	
    	A11 = dd*(U*(Two*c - dd*U) - R*s*s/Two - VVR*(One + dd*R/Two));
    	A22 = dd*(V*(Two*s - dd*V) - R*c*c/Two - UUR*(One + dd*R/Two));
    	A12 = dd*(U*s + V*c + R*s*c/Two - dd*U*V + UVR*(One + dd*R/Two));
	}
}

int CircleFitByChernovHoussam (Data& data, Circle& circleIni, reals LambdaIni, Circle& circle)
/*                             <------------------ Input ------------------->  <-- Output -->

       Geometric circle fit to a given set of data points (in 2D)
		
       Input:  data     - the class of data (contains the given points):
		
	       data.n   - the number of data points
	       data.X[] - the array of X-coordinates
	       data.Y[] - the array of Y-coordinates
		          
               circleIni - parameters of the initial circle ("initial guess")
		        
	       circleIni.a - the X-coordinate of the center of the initial circle
	       circleIni.b - the Y-coordinate of the center of the initial circle
	       circleIni.r - the radius of the initial circle
		        
	       LambdaIni - the initial value of the control parameter "lambda"
	                   for the Levenberg-Marquardt procedure
	                   (common choice is a small positive number, e.g. 0.001)
		        
       Output:
	       integer function value is a code:
	                  0:  normal termination, the best fitting circle is 
	                      successfully found
	                  1:  the number of outer iterations exceeds the limit (99)
	                      (indicator of a possible divergence)
	                  2:  the number of inner iterations exceeds the limit (99)
	                      (another indicator of a possible divergence)
		          3:  convergence to a point where the Hessian matrix 
		              is NOT positive definite. This indicates that 
		              the fitting circle may correspond to a maximum
		              or to a saddle point of the objective function
		                   
	       circle - parameters of the fitting circle ("best fit")
		        
	       circle.a - the X-coordinate of the center of the fitting circle
	       circle.b - the Y-coordinate of the center of the fitting circle
 	       circle.r - the radius of the fitting circle
 	       circle.s - the root mean square error (the estimate of sigma)
 	       circle.i - the total number of outer iterations (updating the parameters)
 	       circle.j - the total number of inner iterations (adjusting lambda)
 	       
       Algorithm is based on the paper by H. Abdul-Rahman and N. Chernov
       "Fast and numerically stable circle fit" 
       in Journal of Mathematical Imaging and Vision (2013)
          
		Nikolai Chernov,  January 2013
*/
{
    int i,n=data.n,iter,inner,IterMAX=200,check_line=1,code;
    
    reals lambda;
    reals F1,F2,A11,A22,A12,dX,dY,Mxx,Myy,Mxy,Mxxy,dx,dy;
    reals d1,d2,dmin=One,Vx,Vy,dL1,dL2,VLx,VLy,aL,bL,R,G1,G2,sBest,gBest,AB,progress;

//          control parameters (have been optimized empirically):

    reals ParLimit2=100.;
    reals epsilon=1.e+7*REAL_EPSILON;
    reals factor1=32.,factor2=32.;
    reals ccc=0.4;
    reals factorUp=10.,factorDown=0.1;
    
    Circle Old,New;
    
    data.means();   // Compute x- and y-means (via a function in class "data") 
    
    //    starting with the given initial guess
    
    New = circleIni;            //  initialize the circle
    New.s = SigmaWithLargeCircleOption(data,New);    //  compute the root-mean-square error
    GradientHessian(data,New,F1,F2,A11,A22,A12);  // compute the gradient vector and Hessian matrix
    New.Gx = F1;  New.Gy = F2;   New.g = sqrt(F1*F1 + F2*F2);   //  the gradient vector and its norm 
    
    lambda = LambdaIni;         //    initialize lambda
    iter = inner = code = 0;    //    initialize iteration counters and the exit code
    sBest = gBest = progress = REAL_MAX;   //  set control variables to LARGE values
    //if (lpr==1) cout << iter <<"  ("<<New.a<<","<<New.b<<")  s="<<New.s<<"  g="<< New.g<<"  L="<<lambda << endl;
    
NextIteration:   //  starting point of the current iteration of Newton's method
	
    if (iter>0) progress = (abs(New.a-Old.a)+abs(New.b-Old.b))/(SQR(Old.a)+SQR(Old.b)+One);
               // evaluate the progress made during the previous iteration
    Old = New;
    if (++iter > IterMAX) goto enough;   //  termination due to going over the limit

    eigen2x2(A11,A22,A12,d1,d2,Vx,Vy);  //  eigendecomposition of the Hessian matrix
    dmin =  (d1<d2) ? d1 : d2;          //  recording the smaller e-value
    
	AB=sqrt(SQR(Old.a)+SQR(Old.b)) + One;   //  approximation to the circle size
	
//     main stopping rule: terminate iterations if 
//          the gradient vector is small enough and the progress is not substantial 
    if ((Old.g < factor1*REAL_EPSILON)&&(progress<epsilon))
    {
    		//if (lpr==1) cout << "++++ gradient is small enough ++++" << endl;
    		goto enough;
    }
    
//     secondary stopping rule (prevents some stupid cycling)
    if ((Old.s >= sBest)&&(Old.g >= gBest))
    {
    		//if (lpr==1) cout << "++++ iterations stabilize (best results repeated) ++++" << endl;
    		goto enough;
    }
   
    if (sBest > Old.s) sBest = Old.s;  //  updating the smallest value of the objective function found so far
    if (gBest > Old.g) gBest = Old.g;  //  updating the smallest length of the gradient vector found so far
	
	G1 = Vx*F1 + Vy*F2;  //  rotating the gradient vector
	G2 = Vx*F2 - Vy*F1;  //  (expressing it in the eigensystem of the Hessian matrix)

try_again:   //  starting point of an "inner" iteration (adjusting lambda)

//           enforcing a lower bound on lambda that guarantees that
//           (i)  the augmented Hessian matrix is positive definite
//           (ii) the step is not too big (does not exceed a certain fraction of the circle size;
//                the fraction is defined by the factor "ccc")
	if (lambda < abs(G1)/AB/ccc - d1)  lambda = abs(G1)/AB/ccc - d1;
	if (lambda < abs(G2)/AB/ccc - d2)  lambda = abs(G2)/AB/ccc - d2;

//           computing the step (dX,dY) by using the current value of lambda

    dX = Old.Gx*(Vx*Vx/(d1+lambda)+Vy*Vy/(d2+lambda)) + Old.Gy*Vx*Vy*(One/(d1+lambda)-One/(d2+lambda));
    dY = Old.Gx*Vx*Vy*(One/(d1+lambda)-One/(d2+lambda)) + Old.Gy*(Vx*Vx/(d2+lambda)+Vy*Vy/(d1+lambda));
   
//           updating the circle parameters

    New.a = Old.a - dX;
    New.b = Old.b - dY;
    
    if ((New.a==Old.a)&&(New.b==Old.b))   // if no change, terminate iterations    
    {
    		//if (lpr==1) cout << "++++ iterations stabilize (no change in center) ++++" << endl;
    		goto enough;
    }

//       check if the circle is very large

    if (abs(New.a)>ParLimit2 || abs(New.b)>ParLimit2)
    {
//          when the circle is very large for the first time, check if 
//          the best fitting line gives the best fit

    		if (check_line)  //  initially, check_line=1, then it is set to zero
    		{
    			//if (lpr==1) cout << "  Linear zone 1st  iter=" << iter << "  (" << New.a << "," << New.b << ")" << endl;
    		
   //                compute scatter matrix
   
  	  		for (Mxx=Myy=Mxy=0.,i=0; i<n; i++)   
    			{
    				dx = data.X[i] - data.meanX;
    				dy = data.Y[i] - data.meanY;
    				Mxx += dx*dx;
    				Myy += dy*dy;
    				Mxy += dx*dy;
    			}
    	
    			eigen2x2(Mxx,Myy,Mxy,dL1,dL2,VLx,VLy);  //  eigendecomposition of scatter matrix

//                   compute the third mixed moment (after rotation of coordinates)

    			for (Mxxy=0.,i=0; i<n; i++)
    			{
    				dx = (data.X[i] - data.meanX)*VLx + (data.Y[i] - data.meanY)*VLy;
    				dy = (data.Y[i] - data.meanY)*VLx - (data.X[i] - data.meanX)*VLy;
    				Mxxy += dx*dx*dy;
    			}
//              check if the line is the best fit

    			//if (Mxxy == 0.) { cout << "  Line " << endl; cin.ignore(); }  //  need to finish this block...

//              rough estimate of the center to be used later to recover from the wrong valley

			R = (Mxxy>0.) ? ParLimit2 : -ParLimit2;
			aL = -VLy*R;
			bL =  VLx*R;                 
    			check_line = 0;              //  set to zero to prevent further checks for line 
    		}
		
		if ((New.a*VLy - New.b*VLx)*R>0.)  // check if the circle is in the wrong valley
		{
	    		//if (lpr==1) cout << "  Linear zone foul  iter=" << iter << "  (" << New.a << "," << New.b << ")" << endl;
			New.a = aL;                        //  switch to the rough circle estimate
			New.b = bL;                        //    (precomputed earlier)
			New.s = SigmaWithLargeCircleOption(data,New);           //  compute the root-mean-square error
			GradientHessian(data,New,F1,F2,A11,A22,A12);  // compute the gradient vector and Hessian matrix
			New.Gx = F1;  New.Gy = F2;   New.g = sqrt(F1*F1 + F2*F2);   //  the gradient vector and its norm 
	    		lambda = LambdaIni;                //  reset lambda
	    		sBest = gBest = REAL_MAX;          //  reset best circle characteristics 
	    		//if (lpr==1) cout << "  Linear zone flip  iter=" << iter << "  (" << New.a << "," << New.b << ")" << endl;
			goto NextIteration;      //  restart the Newton's iteration
		}
    }
    	
    New.s = SigmaWithLargeCircleOption(data,New);      //  compute the root-mean-square error
    GradientHessian(data,New,F1,F2,A11,A22,A12);  // compute the gradient vector and Hessian matrix
    New.Gx = F1;  New.Gy = F2;   New.g = sqrt(F1*F1 + F2*F2);   //  the gradient vector and its norm 
    
    //if (lpr==1) cout << setprecision(15)<<iter <<"  ("<<New.a<<","<<New.b<<"  s="<<New.s<<"  g="<< New.g<<"  L="<<lambda<<endl;
    	
//                check if improvement is gained
    	
    if (New.s < sBest*(One+factor2*REAL_EPSILON))    //    yes, improvement
    {
    		lambda *= factorDown;     //  reduce lambda
    		goto NextIteration;       //  proceed to the next Newton's iteration
    }
    else                                             //  no improvement
    {
    		//if (lpr==1) cout << "   repeat with higher lambda" << endl;
    		if (++inner > IterMAX) goto enough;   //  termination due to going over the limit
    		lambda = factorUp*lambda;             //  increase lambda
    		goto try_again;                       //  make another inner iteration                 
    }
    
enough:                   //           exit
	
    Old.r = OptimalRadius(data,Old);
    Old.i = iter;
    Old.j = inner;
    
    circle = Old;    //  output circle class
    
    if (iter  > IterMAX) code = 1;    //  error code set to 1
    if (inner > IterMAX) code = 2;    //  error code set to 2
    if ((dmin <= 0.)&&(code==0)) 
    { 
    		//cout << " negative e-value=" << setprecision(20) << dmin << " iter=" << iter <<"  ("<<Old.a<<","<<Old.b<< ")" <<  endl; 
    		code = 3;     //  error code set to 3
    }
    return code;
}

Kasa fit
 Circle CircleFitByKasa (Data& data)
/*  
      Circle fit to a given set of data points (in 2D)
      
      This is an algebraic fit, discovered and rediscovered by many people. 
      One of the earliest publications is due to Kasa:
     
      I. Kasa, "A curve fitting procedure and its error analysis",
      IEEE Trans. Inst. Meas., Vol. 25, pages 8-14, (1976)
		
      Input:  data     - the class of data (contains the given points):
		
	      data.n   - the number of data points
	      data.X[] - the array of X-coordinates
	      data.Y[] - the array of Y-coordinates
     
     Output:	       
               circle - parameters of the fitting circle:
		        
	       circle.a - the X-coordinate of the center of the fitting circle
	       circle.b - the Y-coordinate of the center of the fitting circle
 	       circle.r - the radius of the fitting circle
 	       circle.s - the root mean square error (the estimate of sigma)
 	       circle.j - the total number of iterations
             
     The method is based on the minimization of the function
      
                 F = sum [(x-a)^2 + (y-b)^2 - R^2]^2
                 
     This is perhaps the simplest and fastest circle fit. 
     
     It works well when data points are sampled along an entire circle
     or a large part of it (at least half circle).
     
     It does not work well when data points are sampled along a small arc 
     of a circle. In that case the method is heavily biased, it returns
     circles that are too often too small.

     It is the oldest algebraic circle fit (first published in 1972?).
     For 20-30 years it has been the most popular circle fit, at least
     until the more robust Pratt fit (1987) and Taubin fit (1991) were invented.
     
       Nikolai Chernov  (September 2012)
*/
{
    int i;

    reals Xi,Yi,Zi;
    reals Mxy,Mxx,Myy,Mxz,Myz;
    reals B,C,G11,G12,G22,D1,D2;
    
    Circle circle;

    data.means();   // Compute x- and y- sample means (via a function in the class "data") 

//     computing moments 

    Mxx=Myy=Mxy=Mxz=Myz=0.;

    for (i=0; i<data.n; i++)
    {
        Xi = data.X[i] - data.meanX;   //  centered x-coordinates
        Yi = data.Y[i] - data.meanY;   //  centered y-coordinates
        Zi = Xi*Xi + Yi*Yi;

        Mxx += Xi*Xi;
        Myy += Yi*Yi;
        Mxy += Xi*Yi;
        Mxz += Xi*Zi;
        Myz += Yi*Zi;
    }
    Mxx /= data.n;
    Myy /= data.n;
    Mxy /= data.n;
    Mxz /= data.n;
    Myz /= data.n;

//    solving system of equations by Cholesky factorization

    G11 = sqrt(Mxx);
    G12 = Mxy/G11;
    G22 = sqrt(Myy - G12*G12);

    D1 = Mxz/G11;
    D2 = (Myz - D1*G12)/G22;

//    computing parameters of the fitting circle

    C = D2/G22/Two;
    B = (D1 - G12*C)/G11/Two;

//       assembling the output

    circle.a = B + data.meanX;
    circle.b = C + data.meanY;
    circle.r = sqrt(B*B + C*C + Mxx + Myy);
    circle.s = Sigma(data,circle);
    circle.i = 0;
    circle.j = 0;
    
    return circle;
}
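
A brief sketch of why the Kasa fit reduces to a linear least-squares problem (in the centered coordinates Xi, Yi
used above): write the circle as x^2 + y^2 = 2*a*x + 2*b*y + d with d = r^2 - a^2 - b^2. Minimizing
sum (Zi - 2*a*Xi - 2*b*Yi - d)^2 gives the 2x2 normal system

    Mxx*a + Mxy*b = Mxz/2
    Mxy*a + Myy*b = Myz/2

and d = Mxx + Myy (the centered means of Xi and Yi vanish). This is the system solved by the Cholesky factorization
above, and the radius follows from r^2 = a^2 + b^2 + d, which is the expression under the square root in the output.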

Pratt fit
 Circle CircleFitByPratt (Data& data)
/*  
      Circle fit to a given set of data points (in 2D)
      
      This is an algebraic fit, due to Pratt, based on the journal article
     
      V. Pratt, "Direct least-squares fitting of algebraic surfaces",
      Computer Graphics, Vol. 21, pages 145-152 (1987)
		
      Input:  data     - the class of data (contains the given points):
		
	      data.n   - the number of data points
	      data.X[] - the array of X-coordinates
	      data.Y[] - the array of Y-coordinates
     
     Output:	       
               circle - parameters of the fitting circle:
		        
	       circle.a - the X-coordinate of the center of the fitting circle
	       circle.b - the Y-coordinate of the center of the fitting circle
 	       circle.r - the radius of the fitting circle
 	       circle.s - the root mean square error (the estimate of sigma)
 	       circle.j - the total number of iterations
             
     The method is based on the minimization of the function
     
                 F = sum [(x-a)^2 + (y-b)^2 - R^2]^2 / R^2
                 
     This method is more balanced than the simple Kasa fit.
        
     It works well whether data points are sampled along an entire circle or
     along a small arc. 
     
     It still has a small bias and its statistical accuracy is slightly
     lower than that of the geometric fit (minimizing geometric distances). 
     
     It provides a good initial guess for a subsequent geometric fit. 
     
       Nikolai Chernov  (September 2012)

*/
{
    int i,iter,IterMAX=99;

    reals Xi,Yi,Zi;
    reals Mz,Mxy,Mxx,Myy,Mxz,Myz,Mzz,Cov_xy,Var_z;
    reals A0,A1,A2,A22;
    reals Dy,xnew,x,ynew,y;
    reals DET,Xcenter,Ycenter;
    
    Circle circle;

    data.means();   // Compute x- and y- sample means (via a function in the class "data") 

//     computing moments 

    Mxx=Myy=Mxy=Mxz=Myz=Mzz=0.;

    for (i=0; i<data.n; i++)
    {
        Xi = data.X[i] - data.meanX;   //  centered x-coordinates
        Yi = data.Y[i] - data.meanY;   //  centered y-coordinates
        Zi = Xi*Xi + Yi*Yi;

        Mxy += Xi*Yi;
        Mxx += Xi*Xi;
        Myy += Yi*Yi;
        Mxz += Xi*Zi;
        Myz += Yi*Zi;
        Mzz += Zi*Zi;
    }
    Mxx /= data.n;
    Myy /= data.n;
    Mxy /= data.n;
    Mxz /= data.n;
    Myz /= data.n;
    Mzz /= data.n;

//    computing coefficients of the characteristic polynomial

    Mz = Mxx + Myy;
    Cov_xy = Mxx*Myy - Mxy*Mxy;
    Var_z = Mzz - Mz*Mz;

    A2 = Four*Cov_xy - Three*Mz*Mz - Mzz;
    A1 = Var_z*Mz + Four*Cov_xy*Mz - Mxz*Mxz - Myz*Myz;
    A0 = Mxz*(Mxz*Myy - Myz*Mxy) + Myz*(Myz*Mxx - Mxz*Mxy) - Var_z*Cov_xy;
    A22 = A2 + A2;

//    finding the root of the characteristic polynomial
//    using Newton's method starting at x=0  
//     (it is guaranteed to converge to the right root)

    for (x=0.,y=A0,iter=0; iter<IterMAX; iter++)  // usually, 4-6 iterations are enough
    {
        Dy = A1 + x*(A22 + 16.*x*x);
        xnew = x - y/Dy;
        if ((xnew == x)||(!isfinite(xnew))) break;
        ynew = A0 + xnew*(A1 + xnew*(A2 + Four*xnew*xnew));
        if (abs(ynew)>=abs(y))  break;
        x = xnew;  y = ynew;
    }

//    computing parameters of the fitting circle

    DET = x*x - x*Mz + Cov_xy;
    Xcenter = (Mxz*(Myy - x) - Myz*Mxy)/DET/Two;
    Ycenter = (Myz*(Mxx - x) - Mxz*Mxy)/DET/Two;

//       assembling the output

    circle.a = Xcenter + data.meanX;
    circle.b = Ycenter + data.meanY;
    circle.r = sqrt(Xcenter*Xcenter + Ycenter*Ycenter + Mz + x + x);
    circle.s = Sigma(data,circle);
    circle.i = 0;
    circle.j = iter;  //  return the number of iterations, too
    
    return circle;
}
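
For reference: the Newton iteration above solves the Pratt characteristic polynomial
P(x) = 4*x^4 + A2*x^2 + A1*x + A0; the quantity Dy in the loop is its derivative
P'(x) = 16*x^3 + 2*A2*x + A1 = A1 + x*(A22 + 16*x*x), which is where the constant 16. comes from.
The root x found by the iteration then enters the center formulas through DET and the radius through Mz + 2*x.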

Taubin fit
 Circle CircleFitByTaubin (Data& data)
/*  
      Circle fit to a given set of data points (in 2D)
      
      This is an algebraic fit, due to Taubin, based on the journal article
     
      G. Taubin, "Estimation Of Planar Curves, Surfaces And Nonplanar
                  Space Curves Defined By Implicit Equations, With 
                  Applications To Edge And Range Image Segmentation",
                  IEEE Trans. PAMI, Vol. 13, pages 1115-1138, (1991)

      Input:  data     - the class of data (contains the given points):
		
	      data.n   - the number of data points
	      data.X[] - the array of X-coordinates
	      data.Y[] - the array of Y-coordinates
     
     Output:	       
               circle - parameters of the fitting circle:
		        
	       circle.a - the X-coordinate of the center of the fitting circle
	       circle.b - the Y-coordinate of the center of the fitting circle
 	       circle.r - the radius of the fitting circle
 	       circle.s - the root mean square error (the estimate of sigma)
 	       circle.j - the total number of iterations
             
     The method is based on the minimization of the function
     
                  sum [(x-a)^2 + (y-b)^2 - R^2]^2
              F = -------------------------------
                      sum [(x-a)^2 + (y-b)^2]
                 
     This method is more balanced than the simple Kasa fit.
        
     It works well whether data points are sampled along an entire circle or
     along a small arc. 
     
     It still has a small bias and its statistical accuracy is slightly
     lower than that of the geometric fit (minimizing geometric distances),
     but slightly higher than that of the very similar Pratt fit. 
     Besides, the Taubin fit is slightly simpler than the Pratt fit
     
     It provides a very good initial guess for a subsequent geometric fit. 
     
       Nikolai Chernov  (September 2012)

*/
{
    int i,iter,IterMAX=99;
    
    reals Xi,Yi,Zi;
    reals Mz,Mxy,Mxx,Myy,Mxz,Myz,Mzz,Cov_xy,Var_z;
    reals A0,A1,A2,A22,A3,A33;
    reals Dy,xnew,x,ynew,y;
    reals DET,Xcenter,Ycenter;
    
    Circle circle;
    
    data.means();   // Compute x- and y- sample means (via a function in the class "data")

//     computing moments 

	Mxx=Myy=Mxy=Mxz=Myz=Mzz=0.;
    
    for (i=0; i<data.n; i++)
    {
        Xi = data.X[i] - data.meanX;   //  centered x-coordinates
        Yi = data.Y[i] - data.meanY;   //  centered y-coordinates
        Zi = Xi*Xi + Yi*Yi;
        
        Mxy += Xi*Yi;
        Mxx += Xi*Xi;
        Myy += Yi*Yi;
        Mxz += Xi*Zi;
        Myz += Yi*Zi;
        Mzz += Zi*Zi;
    }
    Mxx /= data.n;
    Myy /= data.n;
    Mxy /= data.n;
    Mxz /= data.n;
    Myz /= data.n;
    Mzz /= data.n;
    
//      computing coefficients of the characteristic polynomial
    
    Mz = Mxx + Myy;
    Cov_xy = Mxx*Myy - Mxy*Mxy;
    Var_z = Mzz - Mz*Mz;
    A3 = Four*Mz;
    A2 = -Three*Mz*Mz - Mzz;
    A1 = Var_z*Mz + Four*Cov_xy*Mz - Mxz*Mxz - Myz*Myz;
    A0 = Mxz*(Mxz*Myy - Myz*Mxy) + Myz*(Myz*Mxx - Mxz*Mxy) - Var_z*Cov_xy;
    A22 = A2 + A2;
    A33 = A3 + A3 + A3;

//    finding the root of the characteristic polynomial
//    using Newton's method starting at x=0  
//     (it is guaranteed to converge to the right root)
    
    for (x=0.,y=A0,iter=0; iter<IterMAX; iter++)  // usually, 4-6 iterations are enough
    {
    	    Dy = A1 + x*(A22 + A33*x);
        xnew = x - y/Dy;
        if ((xnew == x)||(!isfinite(xnew))) break;
        ynew = A0 + xnew*(A1 + xnew*(A2 + xnew*A3));
        if (abs(ynew)>=abs(y))  break;
        x = xnew;  y = ynew;
    }
     
//       computing parameters of the fitting circle
    
    DET = x*x - x*Mz + Cov_xy;
    Xcenter = (Mxz*(Myy - x) - Myz*Mxy)/DET/Two;
    Ycenter = (Myz*(Mxx - x) - Mxz*Mxy)/DET/Two;

//       assembling the output

    circle.a = Xcenter + data.meanX;
    circle.b = Ycenter + data.meanY;
    circle.r = sqrt(Xcenter*Xcenter + Ycenter*Ycenter + Mz);
    circle.s = Sigma(data,circle);
    circle.i = 0;
    circle.j = iter;  //  return the number of iterations, too
    
    return circle;
}
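
For reference: in the Taubin fit the characteristic polynomial is the cubic
P(x) = A3*x^3 + A2*x^2 + A1*x + A0 with A3 = 4*Mz, and Dy in the loop is its derivative
P'(x) = 3*A3*x^2 + 2*A2*x + A1 = A1 + x*(A22 + A33*x). As in the Pratt fit, Newton's method starting at x = 0
converges to the root used in the center and radius formulas.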

Hyper fit
 Circle CircleFitByHyper (Data& data)
/*  
      Circle fit to a given set of data points (in 2D)
      
      This is an algebraic fit based on the journal article
     
      A. Al-Sharadqah and N. Chernov, "Error analysis for circle fitting algorithms",
      Electronic Journal of Statistics, Vol. 3, pages 886-911, (2009)
      
      It is an algebraic circle fit with "hyperaccuracy" (with zero essential bias). 
      The term "hyperaccuracy" first appeared in papers by Kenichi Kanatani around 2006
		
      Input:  data     - the class of data (contains the given points):
		
	      data.n   - the number of data points
	      data.X[] - the array of X-coordinates
	      data.Y[] - the array of Y-coordinates
     
     Output:	       
               circle - parameters of the fitting circle:
		        
	       circle.a - the X-coordinate of the center of the fitting circle
	       circle.b - the Y-coordinate of the center of the fitting circle
 	       circle.r - the radius of the fitting circle
 	       circle.s - the root mean square error (the estimate of sigma)
 	       circle.j - the total number of iterations

     This method combines the Pratt and Taubin fits to eliminate the essential bias.
        
     It works well whether data points are sampled along an entire circle or
     along a small arc. 
     
     Its statistical accuracy is theoretically higher than that of the Pratt fit 
     and Taubin fit, but practically they all return almost identical circles
     (unlike the Kasa fit that may be grossly inaccurate). 
     
     It provides a very good initial guess for a subsequent geometric fit. 
     
       Nikolai Chernov  (September 2012)

*/
{
    int i,iter,IterMAX=99;

    reals Xi,Yi,Zi;
    reals Mz,Mxy,Mxx,Myy,Mxz,Myz,Mzz,Cov_xy,Var_z;
    reals A0,A1,A2,A22;
    reals Dy,xnew,x,ynew,y;
    reals DET,Xcenter,Ycenter;
    
    Circle circle;

    data.means();   // Compute x- and y- sample means (via a function in the class "data") 

//     computing moments 

    Mxx=Myy=Mxy=Mxz=Myz=Mzz=0.;

    for (i=0; i<data.n; i++)
    {
        Xi = data.X[i] - data.meanX;   //  centered x-coordinates
        Yi = data.Y[i] - data.meanY;   //  centered y-coordinates
        Zi = Xi*Xi + Yi*Yi;

        Mxy += Xi*Yi;
        Mxx += Xi*Xi;
        Myy += Yi*Yi;
        Mxz += Xi*Zi;
        Myz += Yi*Zi;
        Mzz += Zi*Zi;
    }
    Mxx /= data.n;
    Myy /= data.n;
    Mxy /= data.n;
    Mxz /= data.n;
    Myz /= data.n;
    Mzz /= data.n;

//    computing the coefficients of the characteristic polynomial

    Mz = Mxx + Myy;
    Cov_xy = Mxx*Myy - Mxy*Mxy;
    Var_z = Mzz - Mz*Mz;

    A2 = Four*Cov_xy - Three*Mz*Mz - Mzz;
    A1 = Var_z*Mz + Four*Cov_xy*Mz - Mxz*Mxz - Myz*Myz;
    A0 = Mxz*(Mxz*Myy - Myz*Mxy) + Myz*(Myz*Mxx - Mxz*Mxy) - Var_z*Cov_xy;
    A22 = A2 + A2;

//    finding the root of the characteristic polynomial
//    using Newton's method starting at x=0  
//     (it is guaranteed to converge to the right root)

	for (x=0.,y=A0,iter=0; iter<IterMAX; iter++)  // usually, 4-6 iterations are enough
    {
        Dy = A1 + x*(A22 + 16.*x*x);
        xnew = x - y/Dy;
        if ((xnew == x)||(!isfinite(xnew))) break;
        ynew = A0 + xnew*(A1 + xnew*(A2 + Four*xnew*xnew));
        if (abs(ynew)>=abs(y))  break;
        x = xnew;  y = ynew;
    }

//    computing parameters of the fitting circle

    DET = x*x - x*Mz + Cov_xy;
    Xcenter = (Mxz*(Myy - x) - Myz*Mxy)/DET/Two;
    Ycenter = (Myz*(Mxx - x) - Mxz*Mxy)/DET/Two;

//       assembling the output

    circle.a = Xcenter + data.meanX;
    circle.b = Ycenter + data.meanY;
    circle.r = sqrt(Xcenter*Xcenter + Ycenter*Ycenter + Mz - x - x);
    circle.s = Sigma(data,circle);
    circle.i = 0;
    circle.j = iter;  //  return the number of iterations, too
    
    return circle;
}

Data class (for data points)
 //
//						 data.h
//

/************************************************************************
			DECLARATION OF THE CLASS DATA
************************************************************************/
// Class for Data
// A data has 5 fields: 
//       n (of type int), the number of data points 
//       X and Y (arrays of type reals), arrays of x- and y-coordinates
//       meanX and meanY (of type reals), coordinates of the centroid (x and y sample means)

class Data
{
public:

	int n;
	reals *X;		//space is allocated in the constructors
	reals *Y;		//space is allocated in the constructors
	reals meanX, meanY;

	// constructors
	Data();
	Data(int N);
	Data(int N, reals X[], reals Y[]);

	// routines
	void means(void);
	void center(void);
	void scale(void);
	void print(void);

	// destructors
	~Data();
};


/************************************************************************
			BODY OF THE MEMBER ROUTINES
************************************************************************/
// Default constructor
Data::Data()
{
	n=0;
	X = new reals[n];
	Y = new reals[n];
	for (int i=0; i<n; i++)
	{
		X[i]=0.;
		Y[i]=0.;
	}
}

// Constructor with assignment of the field N
Data::Data(int N)
{
	n=N;
	X = new reals[n];
	Y = new reals[n];

	for (int i=0; i<n; i++)
	{
		X[i]=0.;
		Y[i]=0.;
	}
}

// Constructor with assignment of each field
Data::Data(int N, reals dataX[], reals dataY[])
{
	n=N;
	X = new reals[n];
	Y = new reals[n];

	for (int i=0; i<n; i++)
	{
		X[i]=dataX[i];
		Y[i]=dataY[i];
	}
}

// Routine that computes the x- and y- sample means (the coordinates of the centroid)

void Data::means(void)
{
	meanX=0.; meanY=0.;
	
	for (int i=0; i<n; i++)
	{
		meanX += X[i];
		meanY += Y[i];
	}
	meanX /= n;
	meanY /= n;
}

// Routine that centers the data set (shifts the coordinates to the centroid)

void Data::center(void)
{
	reals sX=0.,sY=0.;  
	int i;
	
	for (i=0; i<n; i++)
	{
		sX += X[i];
		sY += Y[i];
	}
	sX /= n;
	sY /= n;
	
	for (i=0; i<n; i++)
	{
		X[i] -= sX;
		Y[i] -= sY;
	}
	meanX = 0.;
	meanY = 0.;
}

// Routine that scales the coordinates (makes them of order one)

void Data::scale(void)
{
	reals sXX=0.,sYY=0.,scaling;  
	int i;
	
	for (i=0; i<n; i++)
	{
		sXX += X[i]*X[i];
		sYY += Y[i]*Y[i];
	}
	scaling = sqrt((sXX+sYY)/n/Two);
	
	for (i=0; i<n; i++)
	{
		X[i] /= scaling;
		Y[i] /= scaling;
	}
}

// Printing routine

void Data::print(void)
{
	cout << endl << "The data set has " << n << " points with coordinates :"<< endl;
	
	for (int i=0; i<n-1; i++) cout << setprecision(7) << "(" << X[i] << ","<< Y[i] << "), ";
	
	cout << "(" << X[n-1] << ","<< Y[n-1] << ")\n";
}

// Destructor
Data::~Data()
{
	delete[] X;
	delete[] Y;
}

Circle class (for circle parameters)
 //
//						 circle.h
//
/************************************************************************
			DECLARATION OF THE CLASS CIRCLE
************************************************************************/
// Class for Circle
// A circle has 9 fields: 
//     a, b, r (of type reals), the circle parameters
//     s (of type reals), the estimate of sigma (standard deviation)
//     g (of type reals), the norm of the gradient of the objective function
//     Gx, Gy (of type reals), the components of the gradient vector
//     i and j (of type int), the iteration counters (outer and inner, respectively)

class Circle
{
public:

	// The fields of a Circle
	reals a, b, r, s, g, Gx, Gy;
	int i, j;

	// constructors
	Circle();
	Circle(reals aa, reals bb, reals rr);

	// routines
	void print(void);

	// no destructor: we didn't allocate memory by hand.
};


/************************************************************************
			BODY OF THE MEMBER ROUTINES
************************************************************************/
// Default constructor

Circle::Circle()
{
	a=0.; b=0.; r=1.; s=0.; i=0; j=0;
}

// Constructor with assignment of the circle parameters only

Circle::Circle(reals aa, reals bb, reals rr)
{
	a=aa; b=bb; r=rr;
}

// Printing routine

void Circle::print(void)
{
	cout << endl;
	cout << setprecision(10) << "center (" <<a <<","<< b <<")  radius "
		 << r << "  sigma " << s << "  gradient " << g << "  iter "<< i << "  inner " << j << endl;
}

Top header file
 #include <iostream>
#include <cmath>
#include <limits>
#include <iomanip>
#include <cstdlib>

using namespace std;

//  define precision by commenting out one of the two lines:

typedef double reals;       //  defines reals as double (standard for scientific calculations)
//typedef long double reals;  //  defines reals as long double 

//   Note: long double is an 80-bit format (more accurate, but more memory demanding and slower)

typedef long long integers;

//   next define some frequently used constants:

const reals One=1.0,Two=2.0,Three=3.0,Four=4.0,Five=5.0,Six=6.0,Ten=10.0;
//const reals One=1.0L,Two=2.0L,Three=3.0L,Four=4.0L,Five=5.0L,Six=6.0L,Ten=10.0L;
const reals Pi=3.141592653589793238462643383L;
const reals REAL_MAX=numeric_limits<reals>::max();
const reals REAL_MIN=numeric_limits<reals>::min();
const reals REAL_EPSILON=numeric_limits<reals>::epsilon();

//   next define some frequently used functions:

template<typename T>
inline T SQR(T t) { return t*t; }

reals pythag(reals a, reals b)
{
	reals absa=abs(a),	absb=abs(b);
	if (absa > absb) return absa*sqrt(One+SQR(absb/absa));
	else return (absb == 0.0 ? 0.0 : absb*sqrt(One+SQR(absa/absb)));
}

//#include "RandomNormalPair.cpp"   //  routine for generating normal random numbers

Auxiliary functions
 void RandomNormalPair( reals& x, reals& y )
/*
    Generator of pseudo-random numbers
    with standard normal distribution
    based on Box-Muller transformation
    
    "reals" can be replaced by "float", 
    "double", or "long double"; or it 
    can be predefined as one of these types
    
    Input:  none
    Output:  two real values, x and y,
    that are random, independent, and 
    have standard normal distribution 
    (with mean 0 and variance 1)
    
    Call:
    
        RandomNormalPair(x,y);
        
    Uses standard C++ random generator rand()
    
    To reseed the generator rand(), call  
    
        srand ( (unsigned)time(NULL) );
    
    before you start calling this function
    
       Nikolai Chernov, November 2011
*/
{
    reals rand1,rand2,wrand;
/*
//       version 1, by direct calculation (slower)
       
    reals pi=3.141592653589793;
    rand1 = (reals)rand()/RAND_MAX;
    rand2 = (reals)rand()/RAND_MAX;
    x = sqrt(-Two*log(rand1))*cos(Two*pi*rand2);
    y = sqrt(-Two*log(rand1))*sin(Two*pi*rand2);
*/
//       version 2, in polar form 
//         (faster and more stable numerically)

    do {
         rand1 = Two*rand()/RAND_MAX - One;
         rand2 = Two*rand()/RAND_MAX - One;
         wrand = rand1*rand1 + rand2*rand2;
       } while (wrand >= One);
    wrand = sqrt( (-Two*log(wrand) ) / wrand );
    x = rand1 * wrand;
    y = rand2 * wrand;

}

//*********************** SimulateArc ******************************

void SimulateArc(Data& data, reals a, reals b, reals R, reals theta1, reals theta2, reals sigma)
/*    
          Simulate data points equally spaced along a circular arc with Gaussian noise

  input:
          a,b         the coordinates of the center of the circle 
          R           the radius of circle
          theta1      first  endpoint of the arc (in radians)
          theta2      second endpoint of the arc (in radians)
          sigma       noise level (standard deviation of residuals)
*/
{
	int N=data.n; reals theta,dx,dy;
	
	for (int i=0; i<N; i++)
	{
		theta = theta1 + (theta2 - theta1)*i/(N-1);

//			isotropic Gaussian noise

		RandomNormalPair(dx,dy);
		data.X[i] = a + R*cos(theta) + sigma*dx;
		data.Y[i] = b + R*sin(theta) + sigma*dy;
	}
}

//********************* SimulateRandom ****************************

void SimulateRandom(Data& data, reals Window)
/*    
          Simulate data points with uniform distribution
          in the square |x|<Window, |y|<Window

  input:
          data     the data class (data.n points will be generated)
          Window   half-width of the square
*/
{
	//Data data(nPoints);

	for (int i=0; i<data.n; i++)
	{
		data.X[i] = Window*(Two*rand()/RAND_MAX - One);
		data.Y[i] = Window*(Two*rand()/RAND_MAX - One);
	}
}

//****************** Sigma ************************************
//
//   estimate of Sigma = square root of RSS divided by N
//   gives the root-mean-square error of the geometric circle fit

reals Sigma (Data& data, Circle& circle)
{
    reals sum=0.,dx,dy;

    for (int i=0; i<data.n; i++)
    {
        dx = data.X[i] - circle.a;
        dy = data.Y[i] - circle.b;
        sum += SQR(sqrt(dx*dx+dy*dy) - circle.r);
    }
    return sqrt(sum/data.n);
}

//****************** SigmaReduced ************************************
//
//   estimate of Sigma = square root of RSS divided by N
//   gives the root-mean-square error of the geometric circle fit
//
//   uses only the center of the circle (a,b), not the radius
//   the function computes the optimal radius here

reals SigmaReduced (Data& data, Circle& circle)
{
    int i,n=data.n;
    reals sum=0.,dx,dy,r,D[n];

    for (i=0; i<n; i++)
    {
        dx = data.X[i] - circle.a;
        dy = data.Y[i] - circle.b;
        D[i] = sqrt(dx*dx+dy*dy);
        sum += D[i];
    }
    r = sum/n;

    for (sum=0., i=0; i<n; i++)  sum += SQR(D[i] - r);
    
    return sqrt(sum/n);
}

//****************** SigmaReducedNearLinearCase ****************
//
//   estimate of Sigma = square root of RSS divided by N

reals SigmaReducedNearLinearCase (Data& data, Circle& circle)
{
    int i,n=data.n;
	reals a0,b0,del,s,c,x,y,z,p,t,g,W,Z;
	
	a0 = circle.a-data.meanX;  b0 = circle.b-data.meanY;
	del = One/sqrt(a0*a0 + b0*b0);
	s = b0*del;  c = a0*del;
	
	for (W=Z=0.,i=0; i<n; i++)
	{
		x = data.X[i] - data.meanX;
		y = data.Y[i] - data.meanY;
		z = x*x + y*y;
		p = x*c + y*s;
		t = del*z - Two*p;
		g = t/(One+sqrt(One+del*t));
		W += (z+p*g)/(Two+del*g);
		Z += z;
	}
	W /= n;
	Z /= n;
  
    return sqrt(Z-W*(Two+del*del*W));
}

//****************** SigmaReducedForCenteredScaled ****************
//
//   estimate of Sigma = square root of RSS divided by N

reals SigmaReducedForCenteredScaled (Data& data, Circle& circle)
{
    int i,n=data.n;
    reals sum=0.,dx,dy,r;

    for (i=0; i<n; i++)
    {
        dx = data.X[i] - circle.a;
        dy = data.Y[i] - circle.b;
        sum += sqrt(dx*dx+dy*dy);
    }
    r = sum/n;
  
    return sqrt(SQR(circle.a)+SQR(circle.b)-r*r+Two);
}

//****************** OptimalRadius ******************************
//
//     compute the optimal radius of a circle, given its center (a,b)

reals OptimalRadius (Data& data, Circle& circle)
{
    reals Mr=0.,dx,dy;

    for (int i=0; i<data.n; i++)
    {
        dx = data.X[i] - circle.a;
        dy = data.Y[i] - circle.b;
        Mr += sqrt(dx*dx + dy*dy);
    }
    return Mr/data.n;
}
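
For reference: with the center (a,b) fixed, the geometric objective F(r) = sum_i (r_i - r)^2, where r_i is the
distance from the i-th data point to the center, is minimized where dF/dr = -2 * sum_i (r_i - r) = 0, i.e. at
r = (1/n) * sum_i r_i. That mean distance is exactly what OptimalRadius returns, and it is why the "reduced"
Levenberg-Marquardt fit and the Chernov-Houssam fit can drop r from the parameter space and restore it only at
the end.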