The algorithm is available in several language versions: Python, C++, MATLAB, and others.


 

COMPASS_SEARCH, a Python library which seeks the minimizer of a scalar function of several variables using compass search, a direct search algorithm that does not use derivatives.

The algorithm, which goes back to Fermi and Metropolis, is easy to describe. The algorithm begins with a starting point X, and a step size DELTA.

For each dimension I, the algorithm considers perturbing X(I) by adding or subtracting DELTA.

If a perturbation is found which decreases the function, this becomes the new X. Otherwise DELTA is halved.

The iteration halts when DELTA reaches a minimal value.

The algorithm is not guaranteed to find a global minimum. It can, for instance, easily be attracted to a local minimum. Moreover, the algorithm can diverge if, for instance, the function decreases as the argument goes to infinity.
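The loop described above can be sketched in a few lines. This is an illustrative sketch, not the library's own implementation (which appears below as `compass_search`); the function being minimized, the tolerances, and the names are all made up for the example.

```python
import numpy as np

def compass_search_sketch(f, x, delta, delta_tol=1e-5, k_max=1000):
    """Minimal compass search: try +/- delta along each axis;
    halve delta whenever no axis direction improves."""
    fx = f(x)
    k = 0
    while k < k_max and delta_tol <= delta:
        k += 1
        improved = False
        for i in range(x.size):
            for s in (+1.0, -1.0):
                xd = x.copy()
                xd[i] += s * delta          # perturb coordinate i by +/- delta
                fxd = f(xd)
                if fxd < fx:                # accept the first decrease found
                    x, fx, improved = xd, fxd, True
                    break
            if improved:
                break
        if not improved:
            delta /= 2.0                    # no direction helped: shrink the step
    return x, fx, k

# Example: minimize the (illustrative) quadratic f(x, y) = (x - 1)^2 + (y + 2)^2
xmin, fmin, steps = compass_search_sketch(
    lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
    np.array([0.0, 0.0]), delta=0.3)
```

On this convex quadratic the sketch walks to within roughly `delta_tol` of the minimizer (1, -2); on a multimodal function it would stop at whichever local minimum attracts it first, as noted above.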


 
   

 

Code and commentary:

 
#! /usr/bin/env python3
#
def beale ( m, x ):

#*****************************************************************************80
#
## BEALE computes the Beale function.
#
#  Discussion:
#
#    This function has a global minimizer:
#
#      X = ( 3.0, 0.5 )
#
#    for which
#
#      F(X) = 0.
#
#    For a relatively easy computation, start the search at
#
#      X = ( 1.0, 1.0 )
#
#    A harder computation starts at
#
#      X = ( 1.0, 4.0 )
#
#  Licensing:
#
#    This code is distributed under the GNU LGPL license.
#
#  Modified:
#
#    23 January 2016
#
#  Author:
#
#    John Burkardt
#
#  Reference:
#
#    Evelyn Beale,
#    On an Iterative Method for Finding a Local Minimum of a Function
#    of More than One Variable,
#    Technical Report 25,
#    Statistical Techniques Research Group,
#    Princeton University, 1958.
#
#    Richard Brent,
#    Algorithms for Minimization with Derivatives,
#    Dover, 2002,
#    ISBN: 0-486-41998-3,
#    LC: QA402.5.B74.
#
#  Parameters:
#
#    Input, integer M, the number of variables.
#
#    Input, real X(M), the argument of the function.
#
#    Output, real F, the value of the function at X.
#
  f1 = 1.5   - x[0] * ( 1.0 - x[1]      )
  f2 = 2.25  - x[0] * ( 1.0 - x[1] ** 2 )
  f3 = 2.625 - x[0] * ( 1.0 - x[1] ** 3 )

  f = f1 ** 2 + f2 ** 2 + f3 ** 2

  return f

def beale_test ( ):

#*****************************************************************************80
#
## BEALE_TEST works with the Beale function.
#
#  Licensing:
#
#    This code is distributed under the GNU LGPL license.
#
#  Modified:
#
#    23 January 2016
#
#  Author:
#
#    John Burkardt
#
  import numpy as np
  import platform

  print ( '' )
  print ( 'BEALE_TEST:' )
  print ( '  Python version: %s' % ( platform.python_version ( ) ) )
  print ( '  Test COMPASS_SEARCH with the Beale function.' )
  m = 2
  delta_tol = 0.00001
  delta = 0.1
  k_max = 20000

  x = np.array ( [ 1.0, 1.0 ] )
  r8vec_print ( m, x, '  Initial point X0:' )
  print ( '' )
  print ( '  F(X0) = %g' % ( beale ( m, x ) ) )

  x, fx, k = compass_search ( beale, m, x, delta_tol, delta, k_max )
  r8vec_print ( m, x, '  Estimated minimizer X1:' )
  print ( '' )
  print ( '  F(X1) = %g, number of steps = %d' % ( fx, k ) )
#
#  Repeat with more difficult start.
#
  x = np.array ( [ 1.0, 4.0 ] )
  r8vec_print ( m, x, '  Initial point X0:' )
  print ( '' )
  print ( '  F(X0) = %g' % ( beale ( m, x ) ) )

  x, fx, k = compass_search ( beale, m, x, delta_tol, delta, k_max )
  r8vec_print ( m, x, '  Estimated minimizer X1:' )
  print ( '' )
  print ( '  F(X1) = %g, number of steps = %d' % ( fx, k ) )
#
#  Demonstrate correct minimizer.
#
  x = np.array ( [ 3.0, 0.5 ] )
  r8vec_print ( m, x, '  Correct minimizer X*:' )
  print ( '' )
  print ( '  F(X*) = %g' % ( beale ( m, x ) ) )
#
#  Terminate.
#
  print ( '' )
  print ( 'BEALE_TEST' )
  print ( '  Normal end of execution.' )
  return

def bohach1 ( m, x ):

#*****************************************************************************80
#
## BOHACH1 evaluates the Bohachevsky function #1.
#
#  Discussion:
#
#    The minimizer is
#
#      X* = [ 0.0, 0.0 ]
#      F(X*) = 0.0
#
#    Suggested starting point:
#
#      X = [ 0.5, 1.0 ]
#
#  Licensing:
#
#    This code is distributed under the GNU LGPL license.
#
#  Modified:
#
#    23 January 2016
#
#  Author:
#
#    John Burkardt
#
#  Reference:
#
#    Zbigniew Michalewicz,
#    Genetic Algorithms + Data Structures = Evolution Programs,
#    Third Edition,
#    Springer Verlag, 1996,
#    ISBN: 3-540-60676-9,
#    LC: QA76.618.M53.
#
#  Parameters:
#
#    Input, integer M, the number of variables.
#
#    Input, real X(M), the argument of the function.
#
#    Output, real F, the value of the function at X.
#
  import numpy as np

  f =       x[0] * x[0] - 0.3 * np.cos ( 3.0 * np.pi * x[0] ) \
    + 2.0 * x[1] * x[1] - 0.4 * np.cos ( 4.0 * np.pi * x[1] ) \
    + 0.7

  return f

def bohach1_test ( ):

#*****************************************************************************80
#
## BOHACH1_TEST works with the Bohachevsky function #1.
#
#  Licensing:
#
#    This code is distributed under the GNU LGPL license.
#
#  Modified:
#
#    23 January 2016
#
#  Author:
#
#    John Burkardt
#
  import numpy as np
  import platform

  print ( '' )
  print ( 'BOHACH1_TEST:' )
  print ( '  Python version: %s' % ( platform.python_version ( ) ) )
  print ( '  Test COMPASS_SEARCH with the Bohachevsky function #1' )
  m = 2
  delta_tol = 0.00001
  delta = 0.3
  k_max = 20000

  x = np.array ( [ 0.5, 1.0 ] )
  r8vec_print ( m, x, '  Initial point X0:' )
  print ( '' )
  print ( '  F(X0) = %g' % ( bohach1 ( m, x ) ) )

  x, fx, k = compass_search ( bohach1, m, x, delta_tol, delta, k_max )
  r8vec_print ( m, x, '  Estimated minimizer X1:' )
  print ( '' )
  print ( '  F(X1) = %g, number of steps = %d' % ( fx, k ) )
#
#  Demonstrate correct minimizer.
#
  x = np.array ( [ 0.0, 0.0 ] )
  r8vec_print ( m, x, '  Correct minimizer X*:' )
  print ( '' )
  print ( '  F(X*) = %g' % ( bohach1 ( m, x ) ) )
#
#  Terminate.
#
  print ( '' )
  print ( 'BOHACH1_TEST' )
  print ( '  Normal end of execution.' )
  return

def bohach2 ( m, x ):

#*****************************************************************************80
#
## BOHACH2 evaluates the Bohachevsky function #2.
#
#  Discussion:
#
#    The minimizer is
#
#      X* = [ 0.0, 0.0 ]
#      F(X*) = 0.0
#
#    Suggested starting point:
#
#      X = [ 0.6, 1.3 ]
#
#  Licensing:
#
#    This code is distributed under the GNU LGPL license.
#
#  Modified:
#
#    23 January 2016
#
#  Author:
#
#    John Burkardt
#
#  Reference:
#
#    Zbigniew Michalewicz,
#    Genetic Algorithms + Data Structures = Evolution Programs,
#    Third Edition,
#    Springer Verlag, 1996,
#    ISBN: 3-540-60676-9,
#    LC: QA76.618.M53.
#
#  Parameters:
#
#    Input, integer M, the number of variables.
#
#    Input, real X(M), the argument of the function.
#
#    Output, real F, the value of the function at X.
#
  import numpy as np

  f =       x[0] * x[0] \
    + 2.0 * x[1] * x[1] \
    - 0.3 * np.cos ( 3.0 * np.pi * x[0] ) * np.cos ( 4.0 * np.pi * x[1] ) \
    + 0.3

  return f

def bohach2_test ( ):

#*****************************************************************************80
#
## BOHACH2_TEST works with the Bohachevsky function #2.
#
#  Licensing:
#
#    This code is distributed under the GNU LGPL license.
#
#  Modified:
#
#    23 January 2016
#
#  Author:
#
#    John Burkardt
#
  import numpy as np
  import platform

  print ( '' )
  print ( 'BOHACH2_TEST:' )
  print ( '  Python version: %s' % ( platform.python_version ( ) ) )
  print ( '  Test COMPASS_SEARCH with the Bohachevsky function #2.' )
  m = 2
  delta_tol = 0.00001
  delta = 0.3
  k_max = 20000

  x = np.array ( [ 0.6, 1.3 ] )
  r8vec_print ( m, x, '  Initial point X0:' )
  print ( '' )
  f = bohach2 ( m, x )
  print ( '  F(X0) = %g' % ( f ) )

  x, fx, k = compass_search ( bohach2, m, x, delta_tol, delta, k_max )
  r8vec_print ( m, x, '  Estimated minimizer X1:' )
  print ( '' )
  print ( '  F(X1) = %g, number of steps = %d' % ( fx, k ) )
#
#  Demonstrate correct minimizer.
#
  x = np.array ( [ 0.0, 0.0 ] )
  r8vec_print ( m, x, '  Correct minimizer X*:' )
  print ( '' )
  f = bohach2 ( m, x )
  print ( '  F(X*) = %g' % ( f ) )
#
#  Terminate.
#
  print ( '' )
  print ( 'BOHACH2_TEST' )
  print ( '  Normal end of execution.' )
  return

def broyden ( m, x ):

#*****************************************************************************80
#
## BROYDEN computes the two dimensional modified Broyden function.
#
#  Discussion:
#
#    This function has a global minimizer:
#
#      X = ( -0.511547141775014, -0.398160951772036 )
#
#    for which
#
#      F(X) = 1.44E-04
#
#    Start the search at
#
#      X = ( -0.9, -1.0 )
#
#  Licensing:
#
#    This code is distributed under the GNU LGPL license.
#
#  Modified:
#
#    23 January 2016
#
#  Author:
#
#    John Burkardt
#
#  Reference:
#
#    Charles Broyden,
#    A class of methods for solving nonlinear simultaneous equations,
#    Mathematics of Computation,
#    Volume 19, 1965, pages 577-593.
#
#    Jorge More, Burton Garbow, Kenneth Hillstrom,
#    Testing unconstrained optimization software,
#    ACM Transactions on Mathematical Software,
#    Volume 7, Number 1, March 1981, pages 17-41.
#
#  Parameters:
#
#    Input, integer M, the number of variables.
#
#    Input, real X(M), the argument of the function.
#
#    Output, real F, the value of the function at X.
#
  f1 = abs ( ( 3.0 -       x[0] ) * x[0] - 2.0 * x[1] + 1.0 )
  f2 = abs ( ( 3.0 - 2.0 * x[1] ) * x[1] -       x[0] + 1.0 )

  p = 3.0 / 7.0

  f = f1 ** p + f2 ** p

  return f

def broyden_test ( ):

#*****************************************************************************80
#
## BROYDEN_TEST works with the Broyden function.
#
#  Licensing:
#
#    This code is distributed under the GNU LGPL license.
#
#  Modified:
#
#    23 January 2016
#
#  Author:
#
#    John Burkardt
#
  import numpy as np
  import platform

  print ( '' )
  print ( 'BROYDEN_TEST:' )
  print ( '  Python version: %s' % ( platform.python_version ( ) ) )
  print ( '  Test COMPASS_SEARCH with the Broyden function.' )
  m = 2
  delta_tol = 0.00001
  delta = 0.3
  k_max = 20000

  x = np.array ( [ -0.9, -1.0 ] )
  r8vec_print ( m, x, '  Initial point X0:' )
  print ( '' )
  print ( '  F(X0) = %g' % ( broyden ( m, x ) ) )

  x, fx, k = compass_search ( broyden, m, x, delta_tol, delta, k_max )
  r8vec_print ( m, x, '  Estimated minimizer X1:' )
  print ( '' )
  print ( '  F(X1) = %g, number of steps = %d' % ( fx, k ) )
#
#  Demonstrate correct minimizer.
#
  x = np.array ( [ -0.511547141775014, -0.398160951772036 ] )
  r8vec_print ( m, x, '  Correct minimizer X*:' )
  print ( '' )
  print ( '  F(X*) = %g' % ( broyden ( m, x ) ) )
#
#  Terminate.
#
  print ( '' )
  print ( 'BROYDEN_TEST' )
  print ( '  Normal end of execution.' )
  return

def compass_search ( function_handle, m, x, delta_tol, delta, k_max ):

#*****************************************************************************80
#
## COMPASS_SEARCH carries out a direct search minimization algorithm.
#
#  Licensing:
#
#    This code is distributed under the GNU LGPL license.
#
#  Modified:
#
#    23 January 2016
#
#  Author:
#
#    John Burkardt
#
#  Reference:
#
#    Tamara Kolda, Robert Michael Lewis, Virginia Torczon,
#    Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods,
#    SIAM Review,
#    Volume 45, Number 3, 2003, pages 385-482.
#
#  Parameters:
#
#    Input, function handle FUNCTION_HANDLE ( M, X ), a handle for the function
#    to be minimized.
#
#    Input, integer M, the number of variables.
#
#    Input, real X(M), a starting estimate for the minimizer.
#
#    Input, real DELTA_TOL, the smallest step size that is allowed.
#
#    Input, real DELTA, the starting step size.
#
#    Input, integer K_MAX, the maximum number of steps allowed.
#
#    Output, real X(M), the estimated minimizer.
#
#    Output, real FX, the function value at X.
#
#    Output, integer K, the number of steps taken.
#
  import numpy as np
  from sys import exit

  k = 0
  fx = function_handle ( m, x )

  if ( delta_tol <= 0 ):
    print ( '' )
    print ( 'COMPASS_SEARCH - Fatal error!' )
    print ( '  DELTA_TOL <= 0.0.' )
    exit ( 'COMPASS_SEARCH - Fatal error!' )

  if ( delta <= delta_tol ):
    print ( '' )
    print ( 'COMPASS_SEARCH - Fatal error!' )
    print ( '  DELTA <= DELTA_TOL.' )
    exit ( 'COMPASS_SEARCH - Fatal error!' )

  while ( k < k_max ):

    k = k + 1
#
#  For each coordinate direction I, seek a lower function value
#  by increasing or decreasing X(I) by DELTA.
#
    decrease = False
    s = + 1.0
    i = 0

    for ii in range ( 0, 2 * m ):

      xd = x.copy ( )
      xd[i] = xd[i] + s * delta
      fxd = function_handle ( m, xd )
#
#  As soon as a decrease is noticed, accept the new point.
#
      if ( fxd < fx ):
        x = xd.copy ( )
        fx = fxd
        decrease = True
        break

      s = - s
      if ( s == + 1.0 ):
        i = i + 1
#
#  If no decrease occurred, reduce DELTA.
#
    if ( not decrease ):
      delta = delta / 2.0
      if ( delta < delta_tol ):
        break

  return x, fx, k

def compass_search_test ( ):

#*****************************************************************************80
#
## COMPASS_SEARCH_TEST tests the COMPASS_SEARCH library.
#
#  Licensing:
#
#    This code is distributed under the GNU LGPL license.
#
#  Modified:
#
#    23 January 2016
#
#  Author:
#
#    John Burkardt
#
  import platform

  print ( '' )
  print ( 'COMPASS_SEARCH_TEST' )
  print ( '  Python version: %s' % ( platform.python_version ( ) ) )
  print ( '  Test the COMPASS_SEARCH library.' )

  beale_test ( )
  bohach1_test ( )
  bohach2_test ( )
  broyden_test ( )
  extended_rosenbrock_test ( )
  goldstein_price_test ( )
  himmelblau_test ( )
  local_test ( )
  mckinnon_test ( )
  powell_test ( )
  rosenbrock_test ( )
#
#  Terminate.
#
  print ( '' )
  print ( 'COMPASS_SEARCH_TEST:' )
  print ( '  Normal end of execution.' )
  return

def extended_rosenbrock ( m, x ):

#*****************************************************************************80
#
## EXTENDED_ROSENBROCK computes the extended Rosenbrock function.
#
#  Discussion:
#
#    The number of dimensions is arbitrary, except that it must be even.
#
#    There is a global minimum at X* = (1,1,...), F(X*) = 0.
#
#    The contours are sharply twisted.
#
#  Licensing:
#
#    This code is distributed under the GNU LGPL license.
#
#  Modified:
#
#    23 January 2016
#
#  Author:
#
#    John Burkardt
#
#  Reference:
#
#    Howard Rosenbrock,
#    An Automatic Method for Finding the Greatest or Least Value of a Function,
#    Computer Journal,
#    Volume 3, 1960, pages 175-184.
#
#  Parameters:
#
#    Input, integer M, the number of variables.
#
#    Input, real X(M), the argument of the function.
#
#    Output, real F, the value of the function at X.
#
  import numpy as np

  r = np.zeros ( m )

  for i in range ( 0, m, 2 ):
    r[i] = 1.0 - x[i]
    r[i+1] = 10.0 * ( x[i+1] - x[i] ** 2 )

  f = np.dot ( r, r )

  return f

def extended_rosenbrock_test ( ):

#*****************************************************************************80
#
## EXTENDED_ROSENBROCK_TEST works with the extended Rosenbrock function.
#
#  Licensing:
#
#    This code is distributed under the GNU LGPL license.
#
#  Modified:
#
#    23 January 2016
#
#  Author:
#
#    John Burkardt
#
  import numpy as np
  import platform

  print ( '' )
  print ( 'EXTENDED_ROSENBROCK_TEST:' )
  print ( '  Python version: %s' % ( platform.python_version ( ) ) )
  print ( '  Test COMPASS_SEARCH with the extended Rosenbrock function.' )
  m = 4
  delta_tol = 0.00001
  delta = 0.3
  k_max = 20000

  x = np.array ( [ -1.2, 1.0, -1.5, 1.2 ] )
  r8vec_print ( m, x, '  Initial point X0:' )
  print ( '' )
  print ( '  F(X0) = %g' % ( extended_rosenbrock ( m, x ) ) )

  x, fx, k = compass_search ( extended_rosenbrock, m, x, delta_tol, delta, k_max )
  r8vec_print ( m, x, '  Estimated minimizer X1:' )
  print ( '' )
  print ( '  F(X1) = %g, number of steps = %d' % ( fx, k ) )
#
#  Demonstrate correct minimizer.
#
  x = np.array ( [ 1.0, 1.0, 1.0, 1.0 ] )
  r8vec_print ( m, x, '  Correct minimizer X*:' )
  print ( '' )
  print ( '  F(X*) = %g' % ( extended_rosenbrock ( m, x ) ) )
#
#  Terminate.
#
  print ( '' )
  print ( 'EXTENDED_ROSENBROCK_TEST' )
  print ( '  Normal end of execution.' )
  return

def goldstein_price ( m, x ):

#*****************************************************************************80
#
## GOLDSTEIN_PRICE evaluates the Goldstein-Price polynomial.
#
#  Discussion:
#
#    The minimizer is
#
#      X* = [ 0.0, -1.0 ]
#      F(X*) = 3.0
#
#    Suggested starting point:
#
#      X = [ -0.5, 0.25 ] (easy convergence)
#      X = [ -4.0, 5.00 ] (harder convergence)
#
#  Licensing:
#
#    This code is distributed under the GNU LGPL license.
#
#  Modified:
#
#    23 January 2016
#
#  Author:
#
#    John Burkardt
#
#  Reference:
#
#    Zbigniew Michalewicz,
#    Genetic Algorithms + Data Structures = Evolution Programs,
#    Third Edition,
#    Springer Verlag, 1996,
#    ISBN: 3-540-60676-9,
#    LC: QA76.618.M53.
#
#  Parameters:
#
#    Input, integer M, the number of variables.
#
#    Input, real X(M), the argument of the function.
#
#    Output, real F, the value of the function at X.
#
  a = x[0] + x[1] + 1.0

  b = 19.0 - 14.0 * x[0] + 3.0 * x[0] * x[0] - 14.0 * x[1] \
    + 6.0 * x[0] * x[1] + 3.0 * x[1] * x[1]

  c = 2.0 * x[0] - 3.0 * x[1]

  d = 18.0 - 32.0 * x[0] + 12.0 * x[0] * x[0] + 48.0 * x[1] \
    - 36.0 * x[0] * x[1] + 27.0 * x[1] * x[1]

  f = ( 1.0 + a * a * b ) * ( 30.0 + c * c * d )

  return f

def goldstein_price_test ( ):

#*****************************************************************************80
#
## GOLDSTEIN_PRICE_TEST works with the Goldstein-Price function.
#
#  Licensing:
#
#    This code is distributed under the GNU LGPL license.
#
#  Modified:
#
#    23 January 2016
#
#  Author:
#
#    John Burkardt
#
  import numpy as np
  import platform

  print ( '' )
  print ( 'GOLDSTEIN_PRICE_TEST:' )
  print ( '  Python version: %s' % ( platform.python_version ( ) ) )
  print ( '  Test COMPASS_SEARCH with the Goldstein-Price function.' )
  m = 2
  delta_tol = 0.00001
  delta = 0.3
  k_max = 20000

  x = np.array ( [ -0.5, 0.25 ] )
  r8vec_print ( m, x, '  Initial point X0:' )
  print ( '' )
  print ( '  F(X0) = %g' % ( goldstein_price ( m, x ) ) )

  x, fx, k = compass_search ( goldstein_price, m, x, delta_tol, delta, k_max )
  r8vec_print ( m, x, '  Estimated minimizer X1:' )
  print ( '' )
  print ( '  F(X1) = %g, number of steps = %d' % ( fx, k ) )
#
#  Repeat with more difficult start.
#
  x = np.array ( [ -4.0, 5.0 ] )
  r8vec_print ( m, x, '  Initial point X0:' )
  print ( '' )
  print ( '  F(X0) = %g' % ( goldstein_price ( m, x ) ) )

  x, fx, k = compass_search ( goldstein_price, m, x, delta_tol, delta, k_max )
  r8vec_print ( m, x, '  Estimated minimizer X1:' )
  print ( '' )
  print ( '  F(X1) = %g, number of steps = %d' % ( fx, k ) )
#
#  Demonstrate correct minimizer.
#
  x = np.array ( [ 0.0, -1.0 ] )
  r8vec_print ( m, x, '  Correct minimizer X*:' )
  print ( '' )
  print ( '  F(X*) = %g' % ( goldstein_price ( m, x ) ) )
#
#  Terminate.
#
  print ( '' )
  print ( 'GOLDSTEIN_PRICE_TEST' )
  print ( '  Normal end of execution.' )
  return

def himmelblau ( m, x ):

#*****************************************************************************80
#
## HIMMELBLAU computes the Himmelblau function.
#
#  Discussion:
#
#    This function has 4 global minima:
#
#      X* = (  3,        2       ), F(X*) = 0.
#      X* = (  3.58439, -1.84813 ), F(X*) = 0.
#      X* = ( -3.77934, -3.28317 ), F(X*) = 0.
#      X* = ( -2.80512,  3.13134 ), F(X*) = 0.
#
#    Suggested starting points are
#
#      (+1,+1),
#      (-1,+1),
#      (-1,-1),
#      (+1,-1),
#
#  Licensing:
#
#    This code is distributed under the GNU LGPL license.
#
#  Modified:
#
#    23 January 2016
#
#  Author:
#
#    John Burkardt
#
#  Reference:
#
#    David Himmelblau,
#    Applied Nonlinear Programming,
#    McGraw Hill, 1972,
#    ISBN13: 978-0070289215,
#    LC: T57.8.H55.
#
#  Parameters:
#
#    Input, integer M, the number of variables.
#
#    Input, real X(M), the argument of the function.
#
#    Output, real F, the value of the function at X.
#
  f = ( x[0] ** 2 + x[1]      - 11.0 ) ** 2 \
    + ( x[0]      + x[1] ** 2 -  7.0 ) ** 2

  return f

def himmelblau_test ( ):

#*****************************************************************************80
#
## HIMMELBLAU_TEST works with the Himmelblau function.
#
#  Licensing:
#
#    This code is distributed under the GNU LGPL license.
#
#  Modified:
#
#    23 January 2016
#
#  Author:
#
#    John Burkardt
#
  import numpy as np
  import platform

  print ( '' )
  print ( 'HIMMELBLAU_TEST:' )
  print ( '  Python version: %s' % ( platform.python_version ( ) ) )
  print ( '  Test COMPASS_SEARCH with the Himmelblau function.' )
  m = 2
  delta_tol = 0.00001
  delta = 0.3
  k_max = 20000

  x = np.array ( [ 1.0, 1.0 ] )
  r8vec_print ( m, x, '  Initial point X0:' )
  print ( '' )
  print ( '  F(X0) = %g' % ( himmelblau ( m, x ) ) )

  x, fx, k = compass_search ( himmelblau, m, x, delta_tol, delta, k_max )
  r8vec_print ( m, x, '  Estimated minimizer X1:' )
  print ( '' )
  print ( '  F(X1) = %g, number of steps = %d' % ( fx, k ) )

  x = np.array ( [ -1.0, 1.0 ] )
  r8vec_print ( m, x, '  Initial point X0:' )
  print ( '' )
  print ( '  F(X0) = %g' % ( himmelblau ( m, x ) ) )

  x, fx, k = compass_search ( himmelblau, m, x, delta_tol, delta, k_max )
  r8vec_print ( m, x, '  Estimated minimizer X1:' )
  print ( '' )
  print ( '  F(X1) = %g, number of steps = %d' % ( fx, k ) )

  x = np.array ( [ -1.0, -1.0 ] )
  r8vec_print ( m, x, '  Initial point X0:' )
  print ( '' )
  print ( '  F(X0) = %g' % ( himmelblau ( m, x ) ) )

  x, fx, k = compass_search ( himmelblau, m, x, delta_tol, delta, k_max )
  r8vec_print ( m, x, '  Estimated minimizer X1:' )
  print ( '' )
  print ( '  F(X1) = %g, number of steps = %d' % ( fx, k ) )

  x = np.array ( [ 1.0, -1.0 ] )
  r8vec_print ( m, x, '  Initial point X0:' )
  print ( '' )
  print ( '  F(X0) = %g' % ( himmelblau ( m, x ) ) )

  x, fx, k = compass_search ( himmelblau, m, x, delta_tol, delta, k_max )
  r8vec_print ( m, x, '  Estimated minimizer X1:' )
  print ( '' )
  print ( '  F(X1) = %g, number of steps = %d' % ( fx, k ) )
#
#  Demonstrate Himmelblau minimizers.
#
  x = np.array ( [ 3.0, 2.0 ] )
  r8vec_print ( m, x, '  Correct Himmelblau minimizer X1*:' )
  print ( '' )
  print ( '  F(X*) = %g' % ( himmelblau ( m, x ) ) )

  x = np.array ( [ 3.58439, -1.84813 ] )
  r8vec_print ( m, x, '  Correct Himmelblau minimizer X2*:' )
  print ( '' )
  print ( '  F(X*) = %g' % ( himmelblau ( m, x ) ) )

  x = np.array ( [ -3.77934, -3.28317 ] )
  r8vec_print ( m, x, '  Correct Himmelblau minimizer X3*:' )
  print ( '' )
  print ( '  F(X*) = %g' % ( himmelblau ( m, x ) ) )

  x = np.array ( [ -2.80512, 3.13134 ] )
  r8vec_print ( m, x, '  Correct Himmelblau minimizer X4*:' )
  print ( '' )
  print ( '  F(X*) = %g' % ( himmelblau ( m, x ) ) )
#
#  Terminate.
#
  print ( '' )
  print ( 'HIMMELBLAU_TEST' )
  print ( '  Normal end of execution.' )
  return

def local ( m, x ):

#*****************************************************************************80
#
## LOCAL computes the local function.
#
#  Discussion:
#
#    This function has a local minimizer:
#
#      X* = ( 0.2858054412..., 0.2793263206...), F(X*) = 5.9225...
#
#    and a global minimizer:
#
#      X* = ( -21.02667179..., -36.75997872...), F(X*) = 0.
#
#    Suggested starting point for local minimizer:
#
#      X = ( 1, 1 ), F(X) = 3.33 * 10^6.
#
#    Suggested starting point for global minimizer:
#
#      X = ( -15, -35), F(X) = 1.49 * 10^8.
#
#  Licensing:
#
#    This code is distributed under the GNU LGPL license.
#
#  Modified:
#
#    23 January 2016
#
#  Author:
#
#    John Burkardt
#
#  Reference:
#
#    David Himmelblau,
#    Applied Nonlinear Programming,
#    McGraw Hill, 1972,
#    ISBN13: 978-0070289215,
#    LC: T57.8.H55.
#
#  Parameters:
#
#    Input, integer M, the number of variables.
#
#    Input, real X(M), the argument of the function.
#
#    Output, real F, the value of the function at X.
#
  f = ( x[0] ** 2 + 12.0 * x[1] - 1.0 ) ** 2 \
    + ( 49.0 * x[0] ** 2 + 49.0 * x[1] ** 2 + 84.0 * x[0] \
    + 2324.0 * x[1] - 681.0 ) ** 2

  return f

def local_test ( ):

#*****************************************************************************80
#
## LOCAL_TEST works with the Local function.
#
#  Licensing:
#
#    This code is distributed under the GNU LGPL license.
#
#  Modified:
#
#    23 January 2016
#
#  Author:
#
#    John Burkardt
#
  import numpy as np
  import platform

  print ( '' )
  print ( 'LOCAL_TEST:' )
  print ( '  Python version: %s' % ( platform.python_version ( ) ) )
  print ( '  Test COMPASS_SEARCH with the Local function.' )
  m = 2
  delta_tol = 0.00001
  delta = 0.3
  k_max = 20000

  x = np.array ( [ 1.0, 1.0 ] )
  r8vec_print ( m, x, '  Initial point X0:' )
  print ( '' )
  print ( '  F(X0) = %g' % ( local ( m, x ) ) )

  x, fx, k = compass_search ( local, m, x, delta_tol, delta, k_max )
  r8vec_print ( m, x, '  Estimated minimizer X1:' )
  print ( '' )
  print ( '  F(X1) = %g, number of steps = %d' % ( fx, k ) )
#
#  Demonstrate local minimizer.
#
  x = np.array ( [ 0.2858054412, 0.2793263206 ] )
  r8vec_print ( m, x, '  Correct local minimizer X*:' )
  print ( '' )
  print ( '  F(X*) = %g' % ( local ( m, x ) ) )
#
#  Try for global minimizer.
#
  x = np.array ( [ -15.0, -35.0 ] )
  r8vec_print ( m, x, '  Initial point X0:' )
  print ( '' )
  print ( '  F(X0) = %g' % ( local ( m, x ) ) )

  x, fx, k = compass_search ( local, m, x, delta_tol, delta, k_max )
  r8vec_print ( m, x, '  Estimated minimizer X1:' )
  print ( '' )
  print ( '  F(X1) = %g, number of steps = %d' % ( fx, k ) )
#
#  Demonstrate global minimizer.
#
  x = np.array ( [ -21.02667179, -36.75997872 ] )
  r8vec_print ( m, x, '  Correct global minimizer X*:' )
  print ( '' )
  print ( '  F(X*) = %g' % ( local ( m, x ) ) )
#
#  Terminate.
#
  print ( '' )
  print ( 'LOCAL_TEST' )
  print ( '  Normal end of execution.' )
  return
1134 
1135 def mckinnon ( m, x ):
1136 
1137 #*****************************************************************************80
1138 #
1139 ## MCKINNON computes the McKinnon function.
1140 #
1141 #  Discussion:
1142 #
1143 #    This function has a global minimizer:
1144 #
1145 #      X* = ( 0.0, -0.5 ), F(X*) = -0.25.
1146 #
1147 #    There are three parameters, TAU, THETA and PHI.
1148 #
1149 #    If 1 < TAU, then F is strictly convex,
1150 #             and F has continuous first derivatives.
1151 #    If 2 < TAU, then F has continuous second derivatives.
1152 #    If 3 < TAU, then F has continuous third derivatives.
1153 #
1154 #    However, this function can cause the Nelder-Mead optimization
1155 #    algorithm to "converge" to a point which is not the minimizer
1156 #    of the function F.
1157 #
1158 #    Sample parameter values which cause problems for Nelder-Mead 
1159 #    include:
1160 #
1161 #      PHI = 10,  TAU = 1, THETA = 15
1162 #      PHI = 60,  TAU = 2, THETA =  6
1163 #      PHI = 400, TAU = 3, THETA =  6
1164 #
1165 #    To get the bad behavior, we also assume the initial simplex has the form
1166 #
1167 #      X1 = (0,0),
1168 #      X2 = (1,1),
1169 #      X3 = (A,B), 
1170 #
1171 #    where 
1172 #
1173 #      A = (1+sqrt(33))/8 =  0.84307...
1174 #      B = (1-sqrt(33))/8 = -0.59307...
1175 #
1176 #  Licensing:
1177 #
1178 #    This code is distributed under the GNU LGPL license.
1179 #
1180 #  Modified:
1181 #
1182 #    23 January 2016
1183 #
1184 #  Author:
1185 #
1186 #    John Burkardt
1187 #
1188 #  Reference:
1189 #
1190 #    Ken McKinnon,
1191 #    Convergence of the Nelder-Mead simplex method to a nonstationary point,
1192 #    SIAM Journal on Optimization,
1193 #    Volume 9, Number 1, 1998, pages 148-158.
1194 #
1195 #  Parameters:
1196 #
1197 #    Input, integer M, the number of variables.
1198 #
1199 #    Input, real X(M), the argument of the function.
1200 #
1201 #    Output, real F, the value of the function at X.
1202 #
1203   global phi
1204   global tau
1205   global theta
1206 
1207   if ( x[0] <= 0.0 ):
1208     f = theta * phi * abs ( x[0] ) ** tau + x[1] * ( 1.0 + x[1] )
1209   else:
1210     f = theta       *       x[0] ** tau   + x[1] * ( 1.0 + x[1] )
1211 
1212   return f
1213 
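A quick interpretation aid (a standalone sketch, not part of the numbered listing): at the global minimizer X* = (0, -0.5) the x[0] term vanishes on both branches of the piecewise formula, so F reduces to x[1] * (1 + x[1]) = -0.25, the value quoted in the discussion above.

```python
# Standalone spot check of the McKinnon formula, using the
# Test 1 parameter values PHI = 10, TAU = 1, THETA = 15.
phi, tau, theta = 10.0, 1.0, 15.0

def mckinnon_value(x0, x1):
    # The PHI factor applies only on the x0 <= 0 branch.
    if x0 <= 0.0:
        return theta * phi * abs(x0) ** tau + x1 * (1.0 + x1)
    return theta * x0 ** tau + x1 * (1.0 + x1)

print(mckinnon_value(0.0, -0.5))   # -0.25 at the global minimizer (0, -0.5)
```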
1214 def mckinnon_test ( ):
1215 
1216 #*****************************************************************************80
1217 #
1218 ## MCKINNON_TEST works with the McKinnon function.
1219 #
1220 #  Licensing:
1221 #
1222 #    This code is distributed under the GNU LGPL license.
1223 #
1224 #  Modified:
1225 #
1226 #    23 January 2016
1227 #
1228 #  Author:
1229 #
1230 #    John Burkardt
1231 #
1232   import numpy as np
1233   import platform
1234 
1235   global phi
1236   global tau
1237   global theta
1238 
1239   print ( '' )
1240   print ( 'MCKINNON_TEST:' )
1241   print ( '  Python version: %s' % ( platform.python_version ( ) ) )
1242   print ( '  Test COMPASS_SEARCH with the McKinnon function.' )
1243   m = 2
1244   delta_tol = 0.00001
1245   delta = 0.3
1246   k_max = 20000
1247 #
1248 #  Test 1
1249 #
1250   a = ( 1.0 + np.sqrt ( 33.0 ) ) / 8.0
1251   b = ( 1.0 - np.sqrt ( 33.0 ) ) / 8.0
1252 
1253   phi = 10.0
1254   tau = 1.0
1255   theta = 15.0
1256 
1257   x = np.array ( [ a, b ] )
1258   r8vec_print ( m, x, '  Initial point X0:' )
1259   print ( '  PHI = %g, TAU = %g, THETA = %g' % ( phi, tau, theta ) )
1260   print ( '' )
1261   print ( '  F(X0) = %g' % ( mckinnon ( m, x ) ) )
1262 
1263   x, fx, k = compass_search ( mckinnon, m, x, delta_tol, delta, k_max )
1264   r8vec_print ( m, x, '  Estimated minimizer X1:' )
1265   print ( '' )
1266   print ( '  F(X1) = %g, number of steps = %d' % ( fx, k ) )
1267 
1268   x = np.array ( [ 0.0, -0.5 ] )
1269   r8vec_print ( m, x, '  Correct minimizer X*:' )
1270   print ( '' )
1271   print ( '  F(X*) = %g' % ( mckinnon ( m, x ) ) )
1272 #
1273 #  Test 2
1274 #
1275   a = ( 1.0 + np.sqrt ( 33.0 ) ) / 8.0
1276   b = ( 1.0 - np.sqrt ( 33.0 ) ) / 8.0
1277 
1278   phi = 60.0
1279   tau = 2.0
1280   theta = 6.0
1281 
1282   x = np.array ( [ a, b ] )
1283   r8vec_print ( m, x, '  Initial point X0:' )
1284   print ( '  PHI = %g, TAU = %g, THETA = %g' % ( phi, tau, theta ) )
1285   print ( '' )
1286   print ( '  F(X0) = %g' % ( mckinnon ( m, x ) ) )
1287 
1288   x, fx, k = compass_search ( mckinnon, m, x, delta_tol, delta, k_max )
1289   r8vec_print ( m, x, '  Estimated minimizer X1:' )
1290   print ( '' )
1291   print ( '  F(X1) = %g, number of steps = %d' % ( fx, k ) )
1292 
1293   x = np.array ( [ 0.0, -0.5 ] )
1294   r8vec_print ( m, x, '  Correct minimizer X*:' )
1295   print ( '' )
1296   print ( '  F(X*) = %g' % ( mckinnon ( m, x ) ) )
1297 #
1298 #  Test 3
1299 #
1300   a = ( 1.0 + np.sqrt ( 33.0 ) ) / 8.0
1301   b = ( 1.0 - np.sqrt ( 33.0 ) ) / 8.0
1302 
1303   phi = 4000.0
1304   tau = 3.0
1305   theta = 6.0
1306 
1307   x = np.array ( [ a, b ] )
1308   r8vec_print ( m, x, '  Initial point X0:' )
1309   print ( '  PHI = %g, TAU = %g, THETA = %g' % ( phi, tau, theta ) )
1310   print ( '' )
1311   print ( '  F(X0) = %g' % ( mckinnon ( m, x ) ) )
1312 
1313   x, fx, k = compass_search ( mckinnon, m, x, delta_tol, delta, k_max )
1314   r8vec_print ( m, x, '  Estimated minimizer X1:' )
1315   print ( '' )
1316   print ( '  F(X1) = %g, number of steps = %d' % ( fx, k ) )
1317 
1318   x = np.array ( [ 0.0, -0.5 ] )
1319   r8vec_print ( m, x, '  Correct minimizer X*:' )
1320   print ( '' )
1321   print ( '  F(X*) = %g' % ( mckinnon ( m, x ) ) )
1322 #
1323 #  Terminate.
1324 #
1325   print ( '' )
1326   print ( 'MCKINNON_TEST' )
1327   print ( '  Normal end of execution.' )
1328   return
1329 
1330 def powell ( m, x ):
1331 
1332 #*****************************************************************************80
1333 #
1334 ## POWELL computes the Powell singular quartic function.
1335 #
1336 #  Discussion:
1337 #
1338 #    This function has a global minimizer:
1339 #
1340 #      X* = ( 0.0, 0.0, 0.0, 0.0 ), F(X*) = 0.
1341 #
1342 #    Start the search at
1343 #
1344 #      X = ( 3.0, -1.0, 0.0, 1.0 )
1345 #
1346 #  Licensing:
1347 #
1348 #    This code is distributed under the GNU LGPL license.
1349 #
1350 #  Modified:
1351 #
1352 #    23 January 2016
1353 #
1354 #  Author:
1355 #
1356 #    John Burkardt
1357 #
1358 #  Reference:
1359 #
1360 #    Michael Powell,
1361 #    An Iterative Method for Finding Stationary Values of a Function
1362 #    of Several Variables,
1363 #    Computer Journal,
1364 #    Volume 5, 1962, pages 147-151.
1365 #
1366 #  Parameters:
1367 #
1368 #    Input, integer M, the number of variables.
1369 #
1370 #    Input, real X(M), the argument of the function.
1371 #
1372 #    Output, real F, the value of the function at X.
1373 #
1374   f1 = x[0] + 10.0 * x[1]
1375   f2 =                            x[2] - x[3]
1376   f3 =               x[1] - 2.0 * x[2]
1377   f4 = x[0]                            - x[3]
1378 
1379   f = f1 * f1 + f2 * f2 + f3 * f3 + f4 * f4
1380  
1381   return f
1382 
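A worked check of the starting value F(X0) = 55 seen in the run below (a standalone sketch, not part of the listing): at X = (3, -1, 0, 1) the four residuals are f1 = 3 - 10 = -7, f2 = 0 - 1 = -1, f3 = -1 - 0 = -1, f4 = 3 - 1 = 2, and 49 + 1 + 1 + 4 = 55.

```python
# Standalone check of the Powell singular function at the
# suggested starting point X = (3, -1, 0, 1).
def powell_value(x):
    f1 = x[0] + 10.0 * x[1]   # -7
    f2 = x[2] - x[3]          # -1
    f3 = x[1] - 2.0 * x[2]    # -1
    f4 = x[0] - x[3]          #  2
    return f1 * f1 + f2 * f2 + f3 * f3 + f4 * f4

print(powell_value([3.0, -1.0, 0.0, 1.0]))  # 55.0
```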
1383 def powell_test ( ):
1384 
1385 #*****************************************************************************80
1386 #
1387 ## POWELL_TEST works with the Powell function.
1388 #
1389 #  Licensing:
1390 #
1391 #    This code is distributed under the GNU LGPL license.
1392 #
1393 #  Modified:
1394 #
1395 #    23 January 2016
1396 #
1397 #  Author:
1398 #
1399 #    John Burkardt
1400 #
1401   import numpy as np
1402   import platform
1403 
1404   print ( '' )
1405   print ( 'POWELL_TEST:' )
1406   print ( '  Python version: %s' % ( platform.python_version ( ) ) )
1407   print ( '  Test COMPASS_SEARCH with the Powell function.' )
1408   m = 4
1409   delta_tol = 0.00001
1410   delta = 0.3
1411   k_max = 20000
1412 
1413   x = np.array ( [ 3.0, -1.0, 0.0, 1.0 ] )
1414   r8vec_print ( m, x, '  Initial point X0:' )
1415   print ( '' )
1416   print ( '  F(X0) = %g' % ( powell ( m, x ) ) )
1417 
1418   x, fx, k = compass_search ( powell, m, x, delta_tol, delta, k_max )
1419   r8vec_print ( m, x, '  Estimated minimizer X1:' )
1420   print ( '' )
1421   print ( '  F(X1) = %g, number of steps = %d' % ( fx, k ) )
1422 #
1423 #  Demonstrate correct minimizer.
1424 #
1425   x = np.array ( [ 0.0, 0.0, 0.0, 0.0 ] )
1426   r8vec_print ( m, x, '  Correct minimizer X*:' )
1427   print ( '' )
1428   print ( '  F(X*) = %g' % ( powell ( m, x ) ) )
1429 #
1430 #  Terminate.
1431 #
1432   print ( '' )
1433   print ( 'POWELL_TEST' )
1434   print ( '  Normal end of execution.' )
1435   return
1436 
1437 def r8vec_print ( n, a, title ):
1438 
1439 #*****************************************************************************80
1440 #
1441 ## R8VEC_PRINT prints an R8VEC.
1442 #
1443 #  Licensing:
1444 #
1445 #    This code is distributed under the GNU LGPL license.
1446 #
1447 #  Modified:
1448 #
1449 #    31 August 2014
1450 #
1451 #  Author:
1452 #
1453 #    John Burkardt
1454 #
1455 #  Parameters:
1456 #
1457 #    Input, integer N, the dimension of the vector.
1458 #
1459 #    Input, real A(N), the vector to be printed.
1460 #
1461 #    Input, string TITLE, a title.
1462 #
1463   print ( '' )
1464   print ( title )
1465   print ( '' )
1466   for i in range ( 0, n ):
1467     print ( '%6d:  %12g' % ( i, a[i] ) )
1468 
1469 def r8vec_print_test ( ):
1470 
1471 #*****************************************************************************80
1472 #
1473 ## R8VEC_PRINT_TEST tests R8VEC_PRINT.
1474 #
1475 #  Licensing:
1476 #
1477 #    This code is distributed under the GNU LGPL license.
1478 #
1479 #  Modified:
1480 #
1481 #    29 October 2014
1482 #
1483 #  Author:
1484 #
1485 #    John Burkardt
1486 #
1487   import numpy as np
1488   import platform
1489 
1490   print ( '' )
1491   print ( 'R8VEC_PRINT_TEST' )
1492   print ( '  Python version: %s' % ( platform.python_version ( ) ) )
1493   print ( '  R8VEC_PRINT prints an R8VEC.' )
1494 
1495   n = 4
1496   v = np.array ( [ 123.456, 0.000005, -1.0E+06, 3.14159265 ], dtype = np.float64 )
1497   r8vec_print ( n, v, '  Here is an R8VEC:' )
1498 #
1499 #  Terminate.
1500 #
1501   print ( '' )
1502   print ( 'R8VEC_PRINT_TEST:' )
1503   print ( '  Normal end of execution.' )
1504   return
1505 
1506 def rosenbrock ( m, x ):
1507 
1508 #*****************************************************************************80
1509 #
1510 ## ROSENBROCK computes the Rosenbrock function.
1511 #
1512 #  Discussion:
1513 #
1514 #    There is a global minimum at X* = (1,1), F(X*) = 0.
1515 #
1516 #    The starting point X = [ -1.2, 1.0 ] is suggested.
1517 #
1518 #    The contours are sharply twisted.
1519 #
1520 #  Licensing:
1521 #
1522 #    This code is distributed under the GNU LGPL license.
1523 #
1524 #  Modified:
1525 #
1526 #    23 January 2016
1527 #
1528 #  Author:
1529 #
1530 #    John Burkardt
1531 #
1532 #  Reference:
1533 #
1534 #    Howard Rosenbrock,
1535 #    An Automatic Method for Finding the Greatest or Least Value of a Function,
1536 #    Computer Journal,
1537 #    Volume 3, 1960, pages 175-184.
1538 #
1539 #  Parameters:
1540 #
1541 #    Input, integer M, the number of variables.
1542 #
1543 #    Input, real X(M), the argument of the function.
1544 #
1545 #    Output, real F, the value of the function at X.
1546 #
1547   f =         ( 1.0 - x[0] ) ** 2 + \
1548       100.0 * ( x[1] - x[0] * x[0] ) ** 2
1549 
1550   return f
1551 
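Similarly, the starting value F(X0) = 24.2 in the Rosenbrock run can be verified by hand (a standalone sketch, not part of the listing): (1 - (-1.2))^2 = 4.84 and 100 * (1 - 1.44)^2 = 19.36, which sum to 24.2.

```python
# Standalone check of the Rosenbrock function at the suggested
# starting point X = (-1.2, 1.0).
def rosenbrock_value(x):
    return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] * x[0]) ** 2

print(rosenbrock_value([-1.2, 1.0]))   # approximately 24.2
```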
1552 def rosenbrock_test ( ):
1553 
1554 #*****************************************************************************80
1555 #
1556 ## ROSENBROCK_TEST works with the Rosenbrock function.
1557 #
1558 #  Licensing:
1559 #
1560 #    This code is distributed under the GNU LGPL license.
1561 #
1562 #  Modified:
1563 #
1564 #    23 January 2016
1565 #
1566 #  Author:
1567 #
1568 #    John Burkardt
1569 #
1570   import numpy as np
1571   import platform
1572 
1573   print ( '' )
1574   print ( 'ROSENBROCK_TEST:' )
1575   print ( '  Python version: %s' % ( platform.python_version ( ) ) )
1576   print ( '  Test COMPASS_SEARCH with the Rosenbrock function.' )
1577   m = 2
1578   delta_tol = 0.00001
1579   delta = 0.3
1580   k_max = 20000
1581 
1582   x = np.array ( [ - 1.2, 1.0 ] )
1583   r8vec_print ( m, x, '  Initial point X0:' )
1584   print ( '' )
1585   print ( '  F(X0) = %g' % ( rosenbrock ( m, x ) ) )
1586 
1587   x, fx, k = compass_search ( rosenbrock, m, x, delta_tol, delta, k_max )
1588   r8vec_print ( m, x, '  Estimated minimizer X1:' )
1589   print ( '' )
1590   print ( '  F(X1) = %g, number of steps = %d' % ( fx, k ) )
1591 #
1592 #  Demonstrate correct minimizer.
1593 #
1594   x = np.array ( [ 1.0, 1.0 ] )
1595   r8vec_print ( m, x, '  Correct minimizer X*:' )
1596   print ( '' )
1597   print ( '  F(X*) = %g' % ( rosenbrock ( m, x ) ) )
1598 #
1599 #  Terminate.
1600 #
1601   print ( '' )
1602   print ( 'ROSENBROCK_TEST' )
1603   print ( '  Normal end of execution.' )
1604   return
1605 
1606 def timestamp ( ):
1607 
1608 #*****************************************************************************80
1609 #
1610 ## TIMESTAMP prints the date as a timestamp.
1611 #
1612 #  Licensing:
1613 #
1614 #    This code is distributed under the GNU LGPL license. 
1615 #
1616 #  Modified:
1617 #
1618 #    06 April 2013
1619 #
1620 #  Author:
1621 #
1622 #    John Burkardt
1623 #
1624 #  Parameters:
1625 #
1626 #    None
1627 #
1628   import time
1629 
1630   t = time.time ( )
1631   print ( time.ctime ( t ) )
1632 
1633   return None
1634 
1635 def timestamp_test ( ):
1636 
1637 #*****************************************************************************80
1638 #
1639 ## TIMESTAMP_TEST tests TIMESTAMP.
1640 #
1641 #  Licensing:
1642 #
1643 #    This code is distributed under the GNU LGPL license. 
1644 #
1645 #  Modified:
1646 #
1647 #    03 December 2014
1648 #
1649 #  Author:
1650 #
1651 #    John Burkardt
1652 #
1653 #  Parameters:
1654 #
1655 #    None
1656 #
1657   import platform
1658 
1659   print ( '' )
1660   print ( 'TIMESTAMP_TEST:' )
1661   print ( '  Python version: %s' % ( platform.python_version ( ) ) )
1662   print ( '  TIMESTAMP prints a timestamp of the current date and time.' )
1663   print ( '' )
1664 
1665   timestamp ( )
1666 #
1667 #  Terminate.
1668 #
1669   print ( '' )
1670   print ( 'TIMESTAMP_TEST:' )
1671   print ( '  Normal end of execution.' )
1672   return
1673 
1674 if ( __name__ == '__main__' ):
1675   timestamp ( )
1676   compass_search_test ( )
1677   timestamp ( )
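The driver above runs the full test suite, but the same calling sequence works for any user-supplied function. The sketch below re-implements the compass-search loop described in the introduction (perturb each coordinate by +/- DELTA, accept any decrease, otherwise halve DELTA) in the listing's calling style; it is an illustration, not the library routine itself, and `quadratic` is a hypothetical user function.

```python
# A self-contained compass-search sketch in the same calling style as the
# listing: (f, m, x, delta_tol, delta, k_max) -> (x, fx, k).
def compass_search_sketch(f, m, x, delta_tol, delta, k_max):
    x = list(x)
    fx = f(m, x)
    k = 0
    while delta_tol < delta and k < k_max:
        k += 1
        improved = False
        for i in range(m):                 # try +/- delta in each coordinate
            for s in (+1.0, -1.0):
                xd = list(x)
                xd[i] += s * delta
                fxd = f(m, xd)
                if fxd < fx:               # accept the first decrease found
                    x, fx, improved = xd, fxd, True
                    break
            if improved:
                break
        if not improved:                   # no decrease in any direction:
            delta = delta / 2.0            # halve the step size
    return x, fx, k

# Hypothetical user function: a shifted quadratic with minimizer (1, 2).
def quadratic(m, x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

x, fx, k = compass_search_sketch(quadratic, 2, [0.0, 0.0], 1.0e-5, 0.3, 20000)
print(x, fx, k)   # x near (1, 2), fx near 0
```

As the introduction warns, this only finds a local minimizer, and the final accuracy is on the order of the terminal DELTA.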
Program output:
  1 Mon Jan 20 18:27:15 2020
  2 
  3 COMPASS_SEARCH_TEST
  4   Python version: 3.6.9
  5   Test the COMPASS_SEARCH library.
  6 
  7 BEALE_TEST:
  8   Python version: 3.6.9
  9   Test COMPASS_SEARCH with the Beale function.
 10 
 11   Initial point X0:
 12 
 13      0:             1
 14      1:             1
 15 
 16   F(X0) = 14.2031
 17 
 18   Estimated minimizer X1:
 19 
 20      0:       3.00013
 21      1:      0.500037
 22 
 23   F(X1) = 3.14079e-09, number of steps = 10685
 24 
 25   Initial point X0:
 26 
 27      0:             1
 28      1:             4
 29 
 30   F(X0) = 4624.45
 31 
 32   Estimated minimizer X1:
 33 
 34      0:      -15.6686
 35      1:       1.05996
 36 
 37   F(X1) = 0.547042, number of steps = 20000
 38 
 39   Correct minimizer X*:
 40 
 41      0:             3
 42      1:           0.5
 43 
 44   F(X*) = 0
 45 
 46 BEALE_TEST
 47   Normal end of execution.
 48 
 49 BOHACH1_TEST:
 50   Python version: 3.6.9
 51   Test COMPASS_SEARCH with the Bohachevsky function #1
 52 
 53   Initial point X0:
 54 
 55      0:           0.5
 56      1:             1
 57 
 58   F(X0) = 2.55
 59 
 60   Estimated minimizer X[1]:
 61 
 62      0:  -6.10352e-06
 63      1:   6.10352e-06
 64 
 65   F(X1) = 1.78466e-09, number of steps = 48
 66 
 67   Correct minimizer X*:
 68 
 69      0:             0
 70      1:             0
 71 
 72   F(X*) = 0
 73 
 74 BOHACH1_TEST
 75   Normal end of execution.
 76 
 77 BOHACH2_TEST:
 78   Python version: 3.6.9
 79   Test COMPASS_SEARCH with the Bohachevsky function #2.
 80 
 81   Initial point X0:
 82 
 83      0:           0.6
 84      1:           1.3
 85 
 86   F(X0) = 4.23635
 87 
 88   Estimated minimizer X1:
 89 
 90      0:             0
 91      1:   6.10352e-06
 92 
 93   F(X1) = 9.56917e-10, number of steps = 37
 94 
 95   Correct minimizer X*:
 96 
 97      0:             0
 98      1:             0
 99 
100   F(X*) = 0
101 
102 BOHACH2_TEST
103   Normal end of execution.
104 
105 BROYDEN_TEST:
106   Python version: 3.6.9
107   Test COMPASS_SEARCH with the Broyden function.
108 
109   Initial point X0:
110 
111      0:          -0.9
112      1:            -1
113 
114   F(X0) = 2.37332
115 
116   Estimated minimizer X1:
117 
118      0:      -0.51156
119      1:     -0.398187
120 
121   F(X1) = 0.0221649, number of steps = 32
122 
123   Correct minimizer X*:
124 
125      0:     -0.511547
126      1:     -0.398161
127 
128   F(X*) = 0.000144114
129 
130 BROYDEN_TEST
131   Normal end of execution.
132 
133 EXTENDED_ROSENBROCK_TEST:
134   Python version: 3.6.9
135   Test COMPASS_SEARCH with the extended Rosenbrock function.
136 
137   Initial point X0:
138 
139      0:          -1.2
140      1:             1
141      2:          -1.5
142      3:           1.2
143 
144   F(X0) = 140.7
145 
146   Estimated minimizer X1:
147 
148      0:       0.99538
149      1:      0.990771
150      2:      0.996094
151      3:      0.992194
152 
153   F(X1) = 3.66233e-05, number of steps = 10154
154 
155   Correct minimizer X*:
156 
157      0:             1
158      1:             1
159      2:             1
160      3:             1
161 
162   F(X*) = 0
163 
164 EXTENDED_ROSENBROCK_TEST
165   Normal end of execution.
166 
167 GOLDSTEIN_PRICE_TEST:
168   Python version: 3.6.9
169   Test COMPASS_SEARCH with the Goldstein-Price function.
170 
171   Initial point X0:
172 
173      0:          -0.5
174      1:          0.25
175 
176   F(X0) = 2738.74
177 
178   Estimated minimizer X1:
179 
180      0:     -0.600031
181      1:     -0.399969
182 
183   F(X1) = 30, number of steps = 67
184 
185   Initial point X0:
186 
187      0:            -4
188      1:             5
189 
190   F(X0) = 3.44437e+07
191 
192   Estimated minimizer X1:
193 
194      0:       1.19999
195      1:           0.8
196 
197   F(X1) = 840, number of steps = 72
198 
199   Correct minimizer X*:
200 
201      0:             0
202      1:            -1
203 
204   F(X*) = 3
205 
206 GOLDSTEIN_PRICE_TEST
207   Normal end of execution.
208 
209 HIMMELBLAU_TEST:
210   Python version: 3.6.9
211   Test COMPASS_SEARCH with the Himmelblau function.
212 
213   Initial point X0:
214 
215      0:             1
216      1:             1
217 
218   F(X0) = 106
219 
220   Estimated minimizer X1:
221 
222      0:       3.00001
223      1:       1.99999
224 
225   F(X1) = 1.2666e-09, number of steps = 39
226 
227   Initial point X0:
228 
229      0:            -1
230      1:             1
231 
232   F(X0) = 130
233 
234   Estimated minimizer X1:
235 
236      0:      -2.80513
237      1:       3.13131
238 
239   F(X1) = 2.65844e-09, number of steps = 41
240 
241   Initial point X0:
242 
243      0:            -1
244      1:            -1
245 
246   F(X0) = 170
247 
248   Estimated minimizer X1:
249 
250      0:       -3.7793
251      1:      -3.28318
252 
253   F(X1) = 3.91873e-09, number of steps = 48
254 
255   Initial point X0:
256 
257      0:             1
258      1:            -1
259 
260   F(X0) = 146
261 
262   Estimated minimizer X1:
263 
264      0:       3.58442
265      1:      -1.84813
266 
267   F(X1) = 1.05849e-09, number of steps = 44
268 
269   Correct Himmelblau minimizer X1*:
270 
271      0:             3
272      1:             2
273 
274   F(X*) = 0
275 
276   Correct Himmelblau minimizer X2*:
277 
278      0:       3.58439
279      1:      -1.84813
280 
281   F(X*) = 7.81168e-08
282 
283   Correct Himmelblau minimizer X3*:
284 
285      0:      -3.77934
286      1:      -3.28317
287 
288   F(X*) = 7.61596e-08
289 
290   Correct Himmelblau minimizer X4*:
291 
292      0:      -2.80512
293      1:       3.13134
294 
295   F(X*) = 3.04269e-08
296 
297 HIMMELBLAU_TEST
298   Normal end of execution.
299 
300 LOCAL_TEST:
301   Python version: 3.6.9
302   Test COMPASS_SEARCH with the Local function.
303 
304   Initial point X0:
305 
306      0:             1
307      1:             1
308 
309   F(X0) = 3.33077e+06
310 
311   Estimated minimizer X1:
312 
313      0:      0.789813
314      1:          0.25
315 
316   F(X1) = 6.88507, number of steps = 44
317 
318   Correct local minimizer X*:
319 
320      0:      0.285805
321      1:      0.279326
322 
323   F(X*) = 5.92256
324 
325   Initial point X0:
326 
327      0:           -15
328      1:           -35
329 
330   F(X0) = 1.49636e+08
331 
332   Estimated minimizer X1:
333 
334      0:      -21.0423
335      1:      -36.7357
336 
337   F(X1) = 0.90452, number of steps = 13385
338 
339   Correct global minimizer X*:
340 
341      0:      -21.0267
342      1:        -36.76
343 
344   F(X*) = 1.42426e-06
345 
346 LOCAL_TEST
347   Normal end of execution.
348 
349 MCKINNON_TEST:
350   Python version: 3.6.9
351   Test COMPASS_SEARCH with the McKinnon function.
352 
353   Initial point X0:
354 
355      0:       0.84307
356      1:      -0.59307
357   PHI = 10, TAU = 1, THETA = 15
358 
359   F(X0) = 12.4047
360 
361   Estimated minimizer X1:
362 
363      0:   1.61316e-05
364      1:     -0.499998
365 
366   F(X1) = -0.249758, number of steps = 35
367 
368   Correct minimizer X*:
369 
370      0:             0
371      1:          -0.5
372 
373   F(X*) = -0.25
374 
375   Initial point X0:
376 
377      0:       0.84307
378      1:      -0.59307
379   PHI = 60, TAU = 2, THETA = 6
380 
381   F(X0) = 4.02327
382 
383   Estimated minimizer X1:
384 
385      0:   1.61316e-05
386      1:     -0.499998
387 
388   F(X1) = -0.25, number of steps = 35
389 
390   Correct minimizer X*:
391 
392      0:             0
393      1:          -0.5
394 
395   F(X*) = -0.25
396 
397   Initial point X0:
398 
399      0:       0.84307
400      1:      -0.59307
401   PHI = 4000, TAU = 3, THETA = 6
402 
403   F(X0) = 3.35402
404 
405   Estimated minimizer X1:
406 
407      0:   1.61316e-05
408      1:     -0.499998
409 
410   F(X1) = -0.25, number of steps = 36
411 
412   Correct minimizer X*:
413 
414      0:             0
415      1:          -0.5
416 
417   F(X*) = -0.25
418 
419 MCKINNON_TEST
420   Normal end of execution.
421 
422 POWELL_TEST:
423   Python version: 3.6.9
424   Test COMPASS_SEARCH with the Powell function.
425 
426   Initial point X0:
427 
428      0:             3
429      1:            -1
430      2:             0
431      3:             1
432 
433   F(X0) = 55
434 
435   Estimated minimizer X1:
436 
437      0:   0.000164795
438      1:  -2.44141e-05
439      2:   1.83105e-05
440      3:   9.76563e-05
441 
442   F(X1) = 2.08244e-08, number of steps = 289
443 
444   Correct minimizer X*:
445 
446      0:             0
447      1:             0
448      2:             0
449      3:             0
450 
451   F(X*) = 0
452 
453 POWELL_TEST
454   Normal end of execution.
455 
456 ROSENBROCK_TEST:
457   Python version: 3.6.9
458   Test COMPASS_SEARCH with the Rosenbrock function.
459 
460   Initial point X0:
461 
462      0:          -1.2
463      1:             1
464 
465   F(X0) = 24.2
466 
467   Estimated minimizer X1:
468 
469      0:       0.99538
470      1:      0.990771
471 
472   F(X1) = 2.13561e-05, number of steps = 4981
473 
474   Correct minimizer X*:
475 
476      0:             1
477      1:             1
478 
479   F(X*) = 0
480 
481 ROSENBROCK_TEST
482   Normal end of execution.
483 
484 COMPASS_SEARCH_TEST:
485   Normal end of execution.
486 Mon Jan 20 18:27:16 2020
posted on 2021-07-07 14:12 海阔凭鱼跃越