Optimization Algorithms 4.0 [Trust Region Method]

Idea: line-search optimization algorithms generally first determine a search direction (a descent direction) and then a step length;
trust region methods determine the iteration step (displacement) directly.

Algorithm Analysis

At iteration \(k\), the step is determined by solving the following problem (the trust region subproblem):

\[\min_{d}\; q_k(d)=g_k^T d+\frac{1}{2}d^T B_k d \]

\[\text{s.t.}\quad \|d\|\leq \Delta_k \]

where \(\Delta_k\) is the trust region radius.
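The subproblem need not be solved exactly; one cheap approximate solution is the Cauchy point, the minimizer of the quadratic model along the steepest descent direction inside the region. A minimal sketch in Python (the function name `cauchy_point` and the NumPy formulation are my own illustration, not from the original post):

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Approximately solve min g^T d + 0.5 d^T B d, s.t. ||d|| <= delta,
    by minimizing the model along -g (the Cauchy point)."""
    gnorm = np.linalg.norm(g)
    gBg = g @ B @ g
    # If the model is non-convex along -g, step all the way to the boundary;
    # otherwise stop at the 1-D minimizer if it lies inside the region.
    tau = 1.0 if gBg <= 0 else min(1.0, gnorm**3 / (delta * gBg))
    return -(tau * delta / gnorm) * g
```

The returned step always satisfies the radius constraint and gives a negative model value, which is all the global convergence theory of the method requires.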

For the computed step \(d_k\), the actual reduction is:

\[\Delta f_k=f(x_k)-f(x_k+d_k) \]

and the predicted reduction is:

\[\Delta q_k=q_k(0)-q_k(d_k) \]

Define the ratio:

\[r_k=\frac{\Delta f_k}{\Delta q_k} \]

In general \(\Delta q_k>0\), so when \(r_k\le 0\) the trial point \(x_k+d_k\) does not reduce \(f\); the trust region must be shrunk and the subproblem re-solved. When \(r_k\) is close to 1, the quadratic model approximates the function well within the region, and the trust region can be enlarged.
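This ratio-driven radius update can be written as a small helper. The threshold and scaling values below (0.25/0.75, halving/doubling) are common defaults and an assumption of mine, not values fixed by the post:

```python
def update_radius(r, delta, delta_max,
                  eta1=0.25, eta2=0.75, tau1=0.5, tau2=2.0):
    """Shrink, keep, or enlarge (capped) the trust region radius
    according to the agreement ratio r = actual / predicted reduction."""
    if r < eta1:                          # poor model agreement: shrink
        return tau1 * delta
    if r > eta2:                          # good agreement: enlarge, capped
        return min(tau2 * delta, delta_max)
    return delta                          # otherwise keep the radius
```

Capping at `delta_max` corresponds to the upper bound \(\tilde{\Delta}\) on the radius in the algorithm's input.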

Algorithm

\(\text{Input: } 0<\eta_1<\eta_2<1,\ 0<\tau_1<1<\tau_2,\ \text{initial point } x,\ \text{initial Hessian approximation } B_0,\ \text{tolerance } \epsilon,\ \text{radius upper bound } \tilde{\Delta},\ \text{initial radius } \Delta_0\in(0,\tilde{\Delta}]\)

maxite=100; maximum number of iterations
g=gfun(x);
k=0;
while k < maxite and \(\|g\| > \epsilon\)
\(\qquad\) solve the trust region subproblem for \(d_k\), and set the trial point \(x_{k+1}=x_k+d_k\);
\(\qquad\) compute \(r_k\);
\(\qquad\) if \(r_k <\eta_1\)
\(\qquad\) \(\qquad\) \(\Delta_{k+1}=\tau_1\Delta_k;\)
\(\qquad\) elif \(\eta_1\leq r_k<\eta_2\)
\(\qquad\) \(\qquad\) \(\Delta_{k+1}=\Delta_{k};\)
\(\qquad\) else
\(\qquad\) \(\qquad\) \(\Delta_{k+1}=\min(\tau_2\Delta_k,\tilde{\Delta});\)
\(\qquad\) endif
\(\qquad\) if \(r_k<\eta_1\)
\(\qquad\) \(\qquad\) \(x_{k+1}=x_k;\) reject the trial point
\(\qquad\) \(\qquad\) \(B_{k+1}=B_k;\)
\(\qquad\) else
\(\qquad\) \(\qquad\) update \(B_k\) to obtain \(B_{k+1};\)
\(\qquad\) endif
\(\qquad\) \(k=k+1;\)
\(\qquad\) \(g=gfun(x_{k+1});\)
endwhile
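The loop above can be sketched in Python. Two simplifications of mine, not the post's exact method: the subproblem is solved only approximately via the Cauchy point, and \(B_k\) is taken from a user-supplied Hessian function `hfun` rather than a quasi-Newton update:

```python
import numpy as np

def trust_region(fun, gfun, hfun, x0, delta0=1.0, delta_max=10.0,
                 eta1=0.25, eta2=0.75, tau1=0.5, tau2=2.0,
                 eps=1e-6, maxite=100):
    x = np.asarray(x0, dtype=float)
    delta = delta0
    for _ in range(maxite):
        g = gfun(x)
        if np.linalg.norm(g) <= eps:
            break
        B = hfun(x)
        # Cauchy point: approximate subproblem solution along -g.
        gnorm = np.linalg.norm(g)
        gBg = g @ B @ g
        tau = 1.0 if gBg <= 0 else min(1.0, gnorm**3 / (delta * gBg))
        d = -(tau * delta / gnorm) * g
        # Agreement ratio r_k = actual reduction / predicted reduction.
        actual = fun(x) - fun(x + d)
        predicted = -(g @ d + 0.5 * (d @ B @ d))
        r = actual / predicted
        # Update the radius, then accept or reject the trial point.
        if r < eta1:
            delta = tau1 * delta              # poor agreement: shrink, reject
        else:
            if r > eta2:
                delta = min(tau2 * delta, delta_max)
            x = x + d                         # accept the step
    return x
```

For example, minimizing \(f(x)=\|x\|^2\) from \(x_0=(2,1)\) drives the gradient below the tolerance within a few iterations.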

\(\text{Output: optimal solution } x\)

posted @ 2020-08-22 14:36  ldfm