Several Iterative Methods for Solving Linear Systems

Conjugate Gradient Method

The conjugate gradient method (Conjugate Gradient method, CG) is an iterative method for solving systems of linear equations.

Basic Idea

Suppose \(A\) is an \(n\times n\) real symmetric positive definite matrix, and we need to solve the linear system

\[Ax=b \]

Define the inner product \(\left(u, v\right) = u^T v\), and suppose \(n\) direction vectors \(d_1, d_2, \cdots, d_n\) satisfy

\[\forall i\neq j, \ \left(d_i, Ad_j\right) = 0 \]

Then \(d_1, \cdots, d_n\) are said to be conjugate with respect to \(A\). Writing the Cholesky factorization \(A=LL^T\), the vectors \(\left\{L^T d_i\right\}\) form an orthogonal basis of \(\mathbb{R}^n\), so \(\left\{d_i\right\}\) are linearly independent.
Let \(x_0\) be the initial guess; the true solution \(x\) can then always be written in the form \(x=x_0+\sum\limits_{i=1}^n \alpha_i d_i\).
Define the iterate at step \(k\) as \(x_k=x_0+\sum\limits_{i=1}^k \alpha_i d_i\), the error as \(e_k=x-x_k=\sum\limits_{i=k+1}^n \alpha_i d_i\), and the residual as \(r_k=b-Ax_k=Ae_k\).

Then

\[\left(r_{k-1}, d_k\right) = \left(e_{k-1}, Ad_k\right) = \alpha_k \left(d_k, Ad_k\right) \]

\[\alpha_k = \dfrac{\left(r_{k-1}, d_k\right)}{\left(d_k, Ad_k\right)} \]

This scheme is called the conjugate direction method (Conjugate Direction Method).

To obtain the vectors \(\left\{d_k\right\}\), take \(n\) linearly independent vectors \(\left\{u_i\right\}\) and set \(d_1=u_1,\ d_k=u_k+\sum\limits_{i=1}^{k-1}\beta_{ki}d_i\).

For \(\left\{d_k\right\}\) to be conjugate with respect to \(A\), the coefficients \(\beta_{ki}\) must satisfy

\[\beta_{ki}=-\dfrac{\left(d_i, Au_k\right)}{\left(d_i, Ad_i\right)}=-\dfrac{\left(u_k, A\left(x_i - x_{i-1}\right)\right)}{\alpha_i \left(d_i, Ad_i\right)}=\dfrac{\left(u_k, r_i - r_{i-1}\right)}{\alpha_i \left(d_i, A d_i\right)} \]

Let \(i\le j\); then

\[\left(d_i, r_j\right)=\left(d_i, A e_j\right)=\left(d_i, A\sum\limits_{k=j+1}^n \alpha_k d_k\right) = 0 \]

Since

\[d_k=u_k+\sum\limits_{i=1}^{k-1}\beta_{ki}d_i \]

we also have, for \(i\le j\),

\[\left(u_i, r_j\right) = \left(d_i - \sum\limits_{k=1}^{i-1}\beta_{ik}d_k, r_j\right) = 0 \]

If we take \(u_k=r_{k-1}\), then

\[\beta_{ki}=\begin{cases} \dfrac{\left(r_i, r_i\right)}{\alpha_i \left(d_i, Ad_i\right)}, & k=i+1\\ 0, & k\gt i+1\end{cases} \]

In particular, there is no need to worry that \(u_{k+1}=r_k=0\) could make \(\left\{u_i\right\}\) linearly dependent: \(r_k=0\) means the iteration has already converged.

When \(k=i+1\),

\[\alpha_k \left(d_k, Ad_k\right) = \left(r_{k-1}, d_k\right) = \left(r_i, r_i + \sum\limits_{j=1}^{k-1}\beta_{kj}d_j\right) = \left(r_i, r_i\right) \]

so

\[\beta_{ki}=\dfrac{\left(r_i, r_i\right)}{\left(r_{i-1}, r_{i-1}\right)} \]

Pseudocode:

r[0] = b - A * x[0]
d[1] = r[0]
k = 0
while not converged
	k += 1
	alpha[k] = (r[k-1], r[k-1]) / (d[k], A * d[k])
	x[k] = x[k-1] + alpha[k] * d[k]
	r[k] = r[k-1] - alpha[k] * A * d[k]
	d[k+1] = r[k] + (r[k], r[k]) / (r[k-1], r[k-1]) * d[k]
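
The pseudocode above translates almost line by line into NumPy; the sketch below is illustrative (the function name, stopping rule, and iteration cap are my own choices, not from the original):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Solve Ax = b for symmetric positive definite A, following the pseudocode above."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    r = b - A @ x            # r[0] = b - A * x[0]
    d = r.copy()             # d[1] = r[0]
    rr = r @ r               # (r[k-1], r[k-1])
    if max_iter is None:
        max_iter = 10 * n
    for _ in range(max_iter):
        if np.sqrt(rr) < tol:            # converged
            break
        Ad = A @ d
        alpha = rr / (d @ Ad)            # alpha[k]
        x += alpha * d
        r -= alpha * Ad
        rr, rr_old = r @ r, rr
        d = r + (rr / rr_old) * d        # beta = (r[k], r[k]) / (r[k-1], r[k-1])
    return x
```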

Convergence Analysis

Define the A-norm of a vector by \(\lVert x\rVert_A = \sqrt{\left(x, Ax\right)}\); then

\[\lVert e_k\rVert_A^2 = \sum\limits_{i=k+1}^n \alpha_i^2 \lVert d_i\rVert_A^2 \]

That is, \(x_k\) is the point of the affine Krylov subspace \(x_0+\operatorname{span}\left\{r_0, Ar_0, \cdots, A^{k-1} r_0\right\}\) that minimizes \(\lVert e_k\rVert_A\). Equivalently, \(x_k\) also minimizes the quadratic \(\varphi\left(x\right)=\frac{1}{2}x^TAx-b^Tx\) over the same subspace.

Let \(\left\{\lambda_i\right\}\) be the eigenvalues of \(A\); since \(A\) is symmetric positive definite, \(\lambda_i\gt 0\). Let \(\left\{v_i\right\}\) be the corresponding orthonormal eigenvectors (with repeated eigenvalues counted separately) and expand the initial error as \(e_0=\sum\limits_{i} \zeta_i v_i\); then

\[e_k = \sum_i P_k\left(\lambda_i\right)\zeta_i v_i \]

where \(P_k\) is a polynomial of degree \(k\) with constant term \(1\), whose remaining coefficients are determined by the \(\alpha_i\).

\[\lVert e_k\rVert_A^2 = \sum_i P_k^2\left(\lambda_i\right)\lambda_i\zeta_i^2 \le \lVert e_0\rVert_A^2 \max\limits_i P_k^2\left(\lambda_i\right) \]

Because \(x_k\) minimizes the A-norm of the error over the Krylov subspace, fixing any particular polynomial \(P_k\) with constant term \(1\) yields a concrete upper bound on \(\lVert e_k\rVert_A^2\).

Let the largest eigenvalue be \(\lambda_1\), the smallest \(\lambda_2\), and let \(\kappa=\dfrac{\lambda_1}{\lambda_2}\) denote the condition number of the matrix.

Take the degree-\(k\) Chebyshev polynomial

\[T_k\left(x\right)=\begin{cases}\dfrac{1}{2}\left[\left(x+\sqrt{x^2-1}\right)^k+\left(x-\sqrt{x^2-1}\right)^k\right], & \lvert x\rvert\ge 1\\ \cos\left(k\arccos x\right), & \lvert x\rvert\lt 1\end{cases} \]
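
As a quick sanity check of the piecewise definition, the closed form can be compared against the standard three-term recurrence \(T_k\left(x\right)=2xT_{k-1}\left(x\right)-T_{k-2}\left(x\right)\) (the function names below are illustrative):

```python
import numpy as np

def cheb_closed(k, x):
    """Degree-k Chebyshev polynomial via the piecewise closed form above."""
    if abs(x) >= 1:
        s = np.sqrt(x * x - 1)
        return 0.5 * ((x + s) ** k + (x - s) ** k)
    return np.cos(k * np.arccos(x))

def cheb_recur(k, x):
    """The same polynomial via the recurrence T_k = 2x T_{k-1} - T_{k-2}."""
    t_prev, t_cur = 1.0, x
    for _ in range(k):
        t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev
    return t_prev
```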

If we take

\[P_k\left(\lambda\right)=\dfrac{T_k\left(\dfrac{\lambda_1 + \lambda_2 - 2\lambda}{\lambda_1 - \lambda_2}\right)}{T_k\left(\dfrac{\lambda_1 + \lambda_2}{\lambda_1 - \lambda_2}\right)} \]

then \(P_k\left(0\right)=1\), so the constant-term condition is satisfied, and

\[\lvert P_k\left(\lambda\right)\rvert\le\dfrac{1}{T_k\left(\dfrac{\kappa+1}{\kappa-1}\right)}=\dfrac{2\left(\kappa-1\right)^k}{\left(\sqrt{\kappa}+1\right)^{2k}+\left(\sqrt{\kappa}-1\right)^{2k}} \]

we obtain

\[\lVert e_k\rVert_A \le 2\left(\dfrac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^k\lVert e_0\rVert_A \]

That is, the convergence factor in the A-norm is at most \(\dfrac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\).

Biconjugate Gradient Method

The biconjugate gradient method (Biconjugate Gradient method, BiCG) handles the case where \(A\) is any nonsingular real square matrix.

Suppose \(A\) is an \(n\times n\) nonsingular real matrix, and consider the pair of linear systems

\[\begin{cases}Ax=b\\ A^Tx^{\star}=b^{\star}\end{cases} \]

As in the symmetric positive definite case, suppose there are two sets of direction vectors \(\left\{d_1, d_2, \cdots, d_n\right\}\) and \(\left\{d_1^{\star}, d_2^{\star}, \cdots, d_n^\star\right\}\) satisfying

\[\forall i\neq j, \left(d_i^{\star}, A d_j\right) = \left(d_i, A^T d_j^{\star}\right) = 0 \]

Write \(x=x_0+\sum\limits_{i=1}^n \alpha_i d_i,\ x^{\star} = x_0^{\star} + \sum\limits_{i=1}^n \alpha_i^{\star} d_i^{\star}\), with \(x_k, x_k^{\star}, e_k, e_k^{\star}, r_k, r_k^{\star}, u_k, u_k^{\star}, \beta_{ki}, \beta_{ki}^{\star}\) defined as before. Then

\[\begin{cases}\alpha_k = \dfrac{\left(d_k^{\star}, r_{k-1}\right)}{\left(d_k^{\star}, Ad_k\right)}\\ \alpha_k^{\star} = \dfrac{\left(d_k, r_{k-1}^{\star}\right)}{\left(d_k, A^Td_k^{\star}\right)}\end{cases} \]

Again let \(u_k = r_{k-1}, u_k^{\star} = r_{k-1}^{\star}\); then for \(k=i+1\),

\[\beta_{ki} = \beta_{ki}^{\star} = \dfrac{\left(r_i^{\star}, r_i\right)}{\left(r_{i-1}^{\star}, r_{i-1}\right)} \]

and \(\beta_{ki} = \beta_{ki}^{\star}=0\) for \(k\neq i+1\).

Normally only one of the two systems needs to be solved and \(x^{\star}\) is not required, so the pseudocode reads:

r1[0] = b - A * x[0]
choose r2[0] such that (r1[0], r2[0]) ≠ 0
d1[1] = r1[0]
d2[1] = r2[0]
k = 0
while not converged
	k += 1
	alpha[k] = (r1[k-1], r2[k-1]) / (d2[k], A * d1[k])
	x1[k] = x1[k-1] + alpha[k] * d1[k]
	r1[k] = r1[k-1] - alpha[k] * A * d1[k]
	r2[k] = r2[k-1] - alpha[k] * A^T * d2[k]
	beta[k] = (r1[k], r2[k]) / (r1[k-1], r2[k-1])
	d1[k+1] = r1[k] + beta[k] * d1[k]
	d2[k+1] = r2[k] + beta[k] * d2[k]
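
A NumPy sketch of the pseudocode (here the shadow residual r2 is simply initialized to a copy of r1, which satisfies the requirement (r1[0], r2[0]) ≠ 0; breakdown handling is omitted, and the function name is illustrative):

```python
import numpy as np

def bicg(A, b, x0=None, tol=1e-10, max_iter=None):
    """Biconjugate gradient for a nonsingular real matrix A, per the pseudocode above."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    r1 = b - A @ x
    r2 = r1.copy()           # shadow residual with (r1, r2) != 0
    d1 = r1.copy()
    d2 = r2.copy()
    rho = r1 @ r2            # (r1[k-1], r2[k-1])
    if max_iter is None:
        max_iter = 10 * n
    for _ in range(max_iter):
        if np.linalg.norm(r1) < tol:
            break
        Ad1 = A @ d1
        alpha = rho / (d2 @ Ad1)
        x += alpha * d1
        r1 -= alpha * Ad1
        r2 -= alpha * (A.T @ d2)
        rho, rho_old = r1 @ r2, rho
        beta = rho / rho_old
        d1 = r1 + beta * d1
        d2 = r2 + beta * d2
    return x
```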

Regarding the convergence of this method:

In the general case nothing is minimised at all, which can be regarded as a theoretical weakness. In practice however, things turn out to be far less serious than we would expect, as numerical experiments have shown.

Replacing \(Ax=b\) by \(A^T Ax=A^T b\) and then applying the conjugate gradient method gives an algorithm with convergence factor at most \(\dfrac{\kappa-1}{\kappa+1}\), but it is far too slow.

Conjugate Gradient Squared Method

The conjugate gradient squared method (Conjugate Gradient Squared method, CGS) is an improvement on the biconjugate gradient method.
In the biconjugate gradient method, \(d_k, d_k^{\star}, r_k, r_k^{\star}\) can be written as

\[\begin{cases}d_k = D_{k-1}\left(A\right)r_0\\d_k^{\star} = D_{k-1}\left(A^T\right)r_0^{\star}\\r_k = R_k\left(A\right)r_0\\r_k^{\star} = R_k\left(A^T\right)r_0^{\star}\end{cases} \]

where \(D_k, R_k\) are polynomials of degree exactly \(k\). Taking \(i=k-1\) and abbreviating \(\beta_{ki}\) as \(\beta_i\), the polynomials \(D_k, R_k\) satisfy the recurrences

\[\begin{cases}D_k\left(A\right) = R_k\left(A\right) + \beta_k D_{k-1}\left(A\right)\\R_k\left(A\right) = R_{k-1}\left(A\right) - \alpha_k A D_{k-1}\left(A\right)\end{cases} \]

Define

\[\begin{cases}p_k = D_k^2\left(A\right)r_0\\g_k = R_k^2\left(A\right)r_0\\u_k = R_k\left(A\right) D_{k}\left(A\right) r_0\\q_k = R_k\left(A\right) D_{k-1}\left(A\right) r_0\end{cases} \]

Then \(p_k, q_k, g_k, u_k\) satisfy the recurrences

\[\begin{cases}q_k=u_{k-1}-\alpha_{k} Ap_{k-1}\\g_k=g_{k-1}-\alpha_k A\left(u_{k-1}+q_k\right)\\u_k = g_k + \beta_k q_k\\p_k = u_k + \beta_k\left(q_k + \beta_k p_{k-1}\right)\end{cases} \]

From \(\left(r_k^{\star}, r_k\right) = \left(R_k\left(A^T\right)r_0^{\star}\right)^T R_k\left(A\right)r_0 = \left(r_0^{\star}, g_k\right)\) and \(\left(d_k^{\star}, Ad_k\right) = \left(r_0^{\star}, AD_{k-1}^2\left(A\right)r_0\right)\), we obtain expressions for \(\alpha_k\) and \(\beta_k\):

\[\begin{cases}\alpha_k = \dfrac{\left(r_0^{\star}, g_{k-1}\right)}{\left(r_0^{\star}, Ap_{k-1}\right)}\\ \beta_k = \dfrac{\left(r_0^{\star}, g_k\right)}{\left(r_0^{\star}, g_{k-1}\right)}\end{cases} \]

In the biconjugate gradient method the update is \(x_k = x_{k-1} + \alpha_k d_k\). Since \(\lVert R_k\left(A\right)r_0\rVert_2\) is most likely larger than \(\lVert R_k^2\left(A\right)r_0\rVert_2\), using

\[x_k = x_{k-1} + \alpha_k\left(u_{k-1} + q_k\right) \]

to update \(x_k\) is likely to give faster convergence.
Pseudocode:

q = 0
p = g = u = b - A * x
choose r such that (r, g) ≠ 0
rg = (r, g)
k = 0
while not converged
	k += 1
	alpha = rg / (r, A * p)
	q = u - alpha * A * p
	g -= alpha * A * (u + q)
	x += alpha * (u + q)
	beta = (r, g) / rg
	rg *= beta
	u = g + beta * q
	p = u + beta * (q + beta * p)
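
A NumPy sketch of the pseudocode (again the shadow vector r is taken to be the initial residual, which satisfies (r, g) ≠ 0; breakdown handling is omitted and the function name is illustrative):

```python
import numpy as np

def cgs(A, b, x0=None, tol=1e-10, max_iter=None):
    """Conjugate gradient squared, following the pseudocode above."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    g = b - A @ x            # g_k = R_k(A)^2 r_0
    r = g.copy()             # fixed shadow vector with (r, g) != 0
    p = g.copy()
    u = g.copy()
    rg = r @ g
    if max_iter is None:
        max_iter = 10 * n
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        Ap = A @ p
        alpha = rg / (r @ Ap)
        q = u - alpha * Ap
        uq = u + q
        g -= alpha * (A @ uq)
        x += alpha * uq
        rg, rg_old = r @ g, rg
        beta = rg / rg_old
        u = g + beta * q
        p = u + beta * (q + beta * p)
    return x
```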

Biconjugate Gradient Stabilized Method

The biconjugate gradient stabilized method (Biconjugate Gradient Stabilized method, BiCGSTAB) is an improvement on the conjugate gradient squared method.

Basic Idea

Looking at the biconjugate gradient method again, one finds that only the weaker condition

\[\forall i\lt j, \left(d_i^{\star}, Ad_j\right) = 0 \]

is actually required. Write

\[\begin{cases}d_k = D_{k-1}\left(A\right)r_0\\ d_k^{\star} = D_{k-1}^{\star}\left(A^T\right)r_0^{\star}\\ r_k = R_k\left(A\right)r_0\end{cases} \]

where \(D_k, R_k, D_k^{\star}\) are polynomials of degree exactly \(k\). From

\[\begin{cases}r_{k-1} = A\sum\limits_{i=k}^n\alpha_i d_i\\ d_k = r_{k-1} + \sum\limits_{i=1}^{k-1}\beta_{ki}d_i\end{cases} \]

we obtain

\[\begin{cases} \alpha_k = \dfrac{\left(d_k^{\star}, r_{k-1}\right)}{\left(d_k^{\star}, Ad_k\right)} = \dfrac{\left(r_0^{\star}, D_{k-1}^{\star}\left(A\right)R_{k-1}\left(A\right)r_0\right)}{\left(r_0^{\star}, D_{k-1}^{\star}\left(A\right)D_{k-1}\left(A\right)Ar_0\right)}\\ \beta_{ki} = -\dfrac{\left(d_i^{\star}, Ar_{k-1}\right)}{\left(d_i^{\star}, Ad_i\right)} = -\dfrac{\left(r_0^{\star}, D_{i-1}^{\star}\left(A\right)R_{k-1}\left(A\right)Ar_0\right)}{\left(r_0^{\star}, D_{i-1}^{\star}\left(A\right)D_{i-1}\left(A\right)Ar_0\right)} \end{cases} \]

The condition above gives

\[\forall i\lt j, \left(d_i^{\star}, Ad_j\right) = \left(r_0^{\star}, D_{i-1}^{\star}\left(A\right)D_{j-1}\left(A\right)Ar_0\right) = 0 \]

\[\forall i\le j, \left(d_i^{\star}, r_j\right) = \left(r_0^{\star}, D_{i-1}^{\star}\left(A\right)R_j\left(A\right)r_0\right) = 0 \]

Hence, for any polynomial \(S\) of degree less than \(j\),

\[\left(r_0^{\star}, S\left(A\right)R_j\left(A\right)r_0\right)=0 \]

When \(i\lt k-1\), \(\beta_{ki}=0\), so we may write \(\beta_i = \beta_{ki}\) for \(i=k-1\).

Therefore \(D_k^{\star}\) can in fact be an arbitrary polynomial of degree \(k\); the choice \(D_k^{\star}=D_k\) made by the conjugate gradient squared method does not exploit this freedom.

If we instead let \(D_k^{\star}\left(A\right) = \left(I-\omega_k A\right)D_{k-1}^{\star}\left(A\right)\), then \(D_k, R_k, D_k^{\star}\) satisfy the recurrences

\[\begin{cases}D_k\left(A\right) = R_k\left(A\right) + \beta_k D_{k-1}\left(A\right)\\ R_k\left(A\right) = R_{k-1}\left(A\right) - \alpha_k A D_{k-1}\left(A\right)\\ D_k^{\star}\left(A\right) = D_{k-1}^{\star}\left(A\right)\left(I-\omega_k A\right)\end{cases} \]

Define

\[\begin{cases}p_k = D_k^{\star}\left(A\right)D_k\left(A\right)r_0\\ g_k = D_k^{\star}\left(A\right)R_k\left(A\right)r_0\\ s_k = D_{k-1}^{\star}\left(A\right)R_k\left(A\right)r_0\end{cases} \]

Then \(p_k, g_k, s_k\) satisfy the recurrences

\[\begin{cases}s_k = g_{k-1} - \alpha_k A p_{k-1}\\g_k = s_k - \omega_k A s_k\\ p_k = g_k + \beta_k\left(p_{k-1} - \omega_k Ap_{k-1}\right)\\\end{cases} \]

where

\[\begin{cases}\alpha_k = \dfrac{\left(r_0^{\star}, g_{k-1}\right)}{\left(r_0^{\star}, Ap_{k-1}\right)}\\ \beta_k = \dfrac{\left(r_0^{\star}, g_k\right)}{\omega_k\left(r_0^{\star}, Ap_{k-1}\right)}\end{cases} \]

By the same device as in the conjugate gradient squared method, updating \(x_k = x_{k-1} + \alpha_k p_{k-1} + \omega_k s_k\) guarantees \(g_k = b - Ax_k\), which again tends to speed up convergence.

Choosing \(\omega_k=\dfrac{\left(s_k, As_k\right)}{\left(As_k, As_k\right)}\) makes \(\lVert g_k\rVert_2\) as small as possible, which improves the stability of the algorithm.

Pseudocode:

g = p = b - A * x
choose r such that (r, g) ≠ 0
while not converged
	alpha = (r, g) / (r, A * p)
	s = g - alpha * A * p
	omega = (s, A * s) / (A * s, A * s)
	g = s - omega * A * s
	x += alpha * p + omega * s
	beta = (r, g) / (omega * (r, A * p))
	p = g + beta * (p - omega * A * p)
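
A NumPy sketch of the pseudocode (the shadow vector r is taken as the initial residual; breakdown of omega or of (r, A p) is not handled, and the function name is illustrative):

```python
import numpy as np

def bicgstab(A, b, x0=None, tol=1e-10, max_iter=None):
    """BiCGSTAB, following the pseudocode above."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    g = b - A @ x
    r = g.copy()             # fixed shadow vector with (r, g) != 0
    p = g.copy()
    if max_iter is None:
        max_iter = 10 * n
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        Ap = A @ p
        rAp = r @ Ap
        alpha = (r @ g) / rAp
        s = g - alpha * Ap
        As = A @ s
        omega = (s @ As) / (As @ As)     # minimises ||g_k||_2
        g = s - omega * As
        x += alpha * p + omega * s
        beta = (r @ g) / (omega * rAp)
        p = g + beta * (p - omega * Ap)
    return x
```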

Biconjugate Gradient Stabilized Method with Preconditioning

Preconditioning can be used to improve the conditioning of the matrix and hence the convergence.

Suppose some preconditioning scheme yields \(A\approx K_1K_2\), where \(K_1, K_2\) are cheap to invert. Apply the stabilized biconjugate gradient method to the equation \(\left(K_1^{-1}AK_2^{-1}\right)x=K_1^{-1}b\) and set

\[\begin{cases} x' = K_2^{-1}x\\ r_0' = K_1r_0\\ r_0^{\star'} = \left(K_1^{-1}\right)^T r_0^{\star}\\ s_k' = K_1 s_k\\ r_k' = K_1 r_k\\ g_k' = K_1 g_k\\ p_k' = K_1 p_k \end{cases} \]

Then \(p_k', g_k', s_k'\) satisfy the recurrences

\[\begin{cases} s_k' = g_{k-1}'-\alpha_k A K_2^{-1} K_1^{-1} p_{k-1}'\\ g_k' = s_k'-\omega_k A K_2^{-1}K_1^{-1} s_k'\\ p_k' = g_k'+\beta_k\left(p_{k-1}'-\omega_k A K_2^{-1}K_1^{-1} p_{k-1}'\right) \end{cases}\]

where

\[\begin{cases} \alpha_k = \dfrac{\left(r_0^{\star'}, g_{k-1}'\right)}{\left(r_0^{\star'}, AK_2^{-1}K_1^{-1}p_{k-1}'\right)}\\ \beta_k = \dfrac{\left(r_0^{\star'}, g_k'\right)}{\omega_k\left(r_0^{\star'}, AK_2^{-1}K_1^{-1}p_{k-1}'\right)}\\ \end{cases}\]

Choosing \(\omega_k=\dfrac{\left(s_k', AK_2^{-1}K_1^{-1}s_k'\right)}{\left(AK_2^{-1}K_1^{-1}s_k', AK_2^{-1}K_1^{-1}s_k'\right)}\) makes \(\lVert g_k'\rVert_2\) as small as possible, which improves the stability of the algorithm.

Pseudocode:

g = p = b - A * x
choose r such that (r, g) ≠ 0
while not converged
	p0 = K2 ^ (-1) * K1 ^ (-1) * p
	alpha = (r, g) / (r, A * p0)
	s = g - alpha * A * p0
	s0 = K2 ^ (-1) * K1 ^ (-1) * s
	omega = (s, A * s0) / (A * s0, A * s0)
	g = s - omega * A * s0
	x += alpha * p0 + omega * s0
	beta = (r, g) / (omega * (r, A * p0))
	p = g + beta * (p - omega * A * p0)
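
A NumPy sketch with the two preconditioner solves folded into a single callback computing \(K_2^{-1}K_1^{-1}v\); the usage below pairs it with a simple Jacobi (diagonal) preconditioner purely as an illustration, and all names are my own:

```python
import numpy as np

def pbicgstab(A, b, precond, x0=None, tol=1e-10, max_iter=None):
    """Preconditioned BiCGSTAB; precond(v) should return K2^{-1} K1^{-1} v."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    g = b - A @ x
    r = g.copy()             # fixed shadow vector with (r, g) != 0
    p = g.copy()
    if max_iter is None:
        max_iter = 10 * n
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p0 = precond(p)                  # p0 = K2^{-1} K1^{-1} p
        Ap0 = A @ p0
        rAp0 = r @ Ap0
        alpha = (r @ g) / rAp0
        s = g - alpha * Ap0
        s0 = precond(s)                  # s0 = K2^{-1} K1^{-1} s
        As0 = A @ s0
        omega = (s @ As0) / (As0 @ As0)
        g = s - omega * As0
        x += alpha * p0 + omega * s0
        beta = (r @ g) / (omega * rAp0)
        p = g + beta * (p - omega * Ap0)
    return x
```

For example, a Jacobi preconditioner corresponds to precond = lambda v: v / np.diag(A).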

Generalized Minimal Residual Method

The generalized minimal residual method (Generalized Minimal Residual method, GMRES) is a conceptually simpler iterative method for solving linear systems.

Suppose \(A\) is an \(n\times n\) nonsingular real matrix, and we need to solve the linear system

\[Ax = b \]

Let \(x_0\) be the initial guess and \(r_0 = b - Ax_0\). In the first \(k\) iterations, orthonormalizing the vectors

\[\left\{r_0, Ar_0, A^2r_0, \cdots, A^{k-1}r_0\right\} \]

yields \(k\) orthonormal vectors

\[\left\{q_1, q_2, \cdots, q_k\right\} \]

Let \(x_k = x_0 + \sum\limits_{i=1}^k \alpha_i q_i\); then \(r_k = r_0 - \sum\limits_{i=1}^k \alpha_i Aq_i\). Writing

\[Q_k = \left[q_1, q_2, \cdots, q_k\right] \]

we have the Arnoldi relation \(AQ_k = Q_{k+1}H_k\), where \(H_k\) is a \(\left(k+1\right)\times k\) upper Hessenberg matrix (if the \(k+1\) vectors become linearly dependent, set the last row to \(0\) and stop after this iteration).
The 2-norm of the residual is

\[\begin{aligned} \lVert r_k\rVert_2 &= \lVert r_0 - AQ_k \alpha\rVert_2\\ &= \lVert \lVert r_0\rVert_2 e_1 - H_k\alpha\rVert_2 \end{aligned} \]

where \(e_1\) is the \(\left(k+1\right)\)-dimensional vector whose first component is \(1\) and whose remaining components are \(0\).
Solving this small least-squares problem then yields \(\alpha\) and hence \(x_k\).
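
The procedure can be sketched in NumPy as follows: Arnoldi builds Q and H column by column, and each step solves the small least-squares problem for \(\alpha\) (called y below). Restarting is omitted, and the function name and tolerances are illustrative:

```python
import numpy as np

def gmres(A, b, x0=None, tol=1e-10):
    """Unrestarted GMRES: Arnoldi plus a small least-squares solve per step."""
    n = len(b)
    x0 = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta < tol:
        return x0.copy()
    Q = np.zeros((n, n + 1))
    H = np.zeros((n + 1, n))
    Q[:, 0] = r0 / beta
    y = np.zeros(0)
    for k in range(n):
        w = A @ Q[:, k]
        for i in range(k + 1):                 # modified Gram-Schmidt
            H[i, k] = Q[:, i] @ w
            w -= H[i, k] * Q[:, i]
        H[k + 1, k] = np.linalg.norm(w)
        rhs = np.zeros(k + 2)                  # beta * e_1
        rhs[0] = beta
        # minimise || beta*e_1 - H_k y ||_2 over y
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], rhs, rcond=None)
        res = np.linalg.norm(rhs - H[:k + 2, :k + 1] @ y)
        if H[k + 1, k] < 1e-14 or res < tol:   # lucky breakdown or converged
            break
        Q[:, k + 1] = w / H[k + 1, k]
    return x0 + Q[:, :len(y)] @ y
```
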
Convergence: suppose \(A\) can be diagonalized as \(A=X\Lambda X^{-1}\), let \(r_k\) be the residual of round \(k\), and let \(P_k\) denote the set of complex polynomials of degree at most \(k\) with constant term \(1\). Then, analogously to the conjugate gradient method,

\[\dfrac{\lVert r_k\rVert_2}{\lVert r_0\rVert_2} \le \inf_{p\in P_k}\lVert p\left(A\right)\rVert_2 \le \kappa\left(X\right)\inf_{p\in P_k}\sup_{\lambda\in\Lambda\left(A\right)}\lvert p\left(\lambda\right)\rvert \]

If the symmetric part \(M = \dfrac{1}{2}\left(A + A^T\right)\) of \(A\) is positive definite, then every eigenvalue of \(A\) has positive real part, and a rather involved derivation gives

\[\dfrac{\lVert r_k\rVert_2}{\lVert r_0\rVert_2} \le \left(1-\dfrac{\lambda_{min}^2\left(M\right)}{\lambda_{max}\left(A^TA\right)}\right)^{k/2} \]

When \(A\) itself is symmetric positive definite, directly taking \(P_k\) to be the Chebyshev polynomial as before gives

\[\dfrac{\lVert r_k\rVert_2}{\lVert r_0\rVert_2}\le 2\left(\dfrac{\sqrt\kappa - 1}{\sqrt\kappa+1}\right)^k \]

Each step of GMRES costs work proportional to the number of previous iterations, so the method may need to be restarted periodically to balance cost against convergence.

posted @ 2024-04-12 20:58  Binary_Search_Tree