Paper Notes (7) - "Local Newton: Reducing Communication Bottleneck for Distributed Learning"

Main idea

The authors propose a second-order (Newton-type) optimization method for distributed learning that exploits local curvature information to reduce communication cost.

Algorithm

They propose two algorithms. In the first algorithm, each worker computes the Newton descent direction only once before communicating, i.e., L = 1:

[Figure: pseudocode of the first algorithm (LocalNewton)]

In the second algorithm, called Adaptive LocalNewton, the number of local iterations L can be greater than one and is adjusted adaptively:

[Figure: pseudocode of the second algorithm (Adaptive LocalNewton)]
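To make this concrete, below is a minimal sketch of one communication round of a LocalNewton-style update, assuming a regularized least-squares local loss so that the local gradient and Hessian are explicit. The function name, learning rate, and toy data are my own illustration, not the paper's code; the second algorithm additionally adapts L across rounds.

```python
import numpy as np

def local_newton_round(w, X_parts, y_parts, L=2, lr=1.0, reg=1e-3):
    """One communication round of a LocalNewton-style scheme (sketch).

    Each worker runs L local Newton steps on its own loss
    f_i(w) = ||X_i w - y_i||^2 / (2 n_i) + (reg/2) ||w||^2,
    then the server averages the resulting local models.
    """
    new_models = []
    for X_i, y_i in zip(X_parts, y_parts):
        w_i = w.copy()
        n_i, d = X_i.shape
        for _ in range(L):
            grad = X_i.T @ (X_i @ w_i - y_i) / n_i + reg * w_i  # local gradient
            hess = X_i.T @ X_i / n_i + reg * np.eye(d)          # local Hessian
            w_i = w_i - lr * np.linalg.solve(hess, grad)        # local Newton step
        new_models.append(w_i)
    return np.mean(new_models, axis=0)                          # server-side averaging

# Toy usage: 4 workers holding i.i.d. splits of the same data (homogeneous setting).
rng = np.random.default_rng(0)
X, w_star = rng.normal(size=(400, 5)), rng.normal(size=5)
y = X @ w_star + 0.01 * rng.normal(size=400)
w = np.zeros(5)
for _ in range(5):
    w = local_newton_round(w, np.split(X, 4), np.split(y, 4), L=2)
print(np.linalg.norm(w - w_star))  # should be small after a few rounds
```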

Assumption and convergence analysis

In their work, only the weighted (global) function $f$ is required to be $M$-smooth and $\kappa$-strongly convex, and the norm of each local gradient $\|\nabla f_i(x)\|$ is assumed to be upper bounded.
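For reference, these are the standard definitions when $f$ is twice differentiable (the paper may state them in an equivalent first-order form):

\[
  \kappa\, I \;\preceq\; \nabla^2 f(w) \;\preceq\; M\, I
  \quad\Longleftrightarrow\quad
  \begin{cases}
    f(v) \le f(w) + \langle \nabla f(w), v-w\rangle + \tfrac{M}{2}\|v-w\|^2 & (M\text{-smoothness}),\\
    f(v) \ge f(w) + \langle \nabla f(w), v-w\rangle + \tfrac{\kappa}{2}\|v-w\|^2 & (\kappa\text{-strong convexity}).
  \end{cases}
\]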

In fact, they work in a homogeneous setting: the local data sets come from the same distribution, and each $X_i$ is a (row) submatrix of $X$. Based on this, they show (Lemma 4.1) that, with probability at least $1-\delta$, each local function $f_i$ is also $M(1+\epsilon)$-smooth and $\kappa(1-\epsilon)$-strongly convex.
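My paraphrase of why such constants arise (the usual Hessian-concentration argument for uniform row sampling, not necessarily the paper's exact statement): if, with probability at least $1-\delta$, every local Hessian is spectrally close to the global one,

\[
  (1-\epsilon)\,\nabla^2 f(w) \;\preceq\; \nabla^2 f_i(w) \;\preceq\; (1+\epsilon)\,\nabla^2 f(w),
\]

then combining this with $\kappa I \preceq \nabla^2 f(w) \preceq M I$ gives $\kappa(1-\epsilon) I \preceq \nabla^2 f_i(w) \preceq M(1+\epsilon) I$, i.e. exactly the constants quoted above.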

As they describe it, the sketch of the proof is the following (a combined bound is sketched right after the list):

  1. Reduce $f(\bar w_{t+1}) - f(\bar w_t)$ to $\frac{1}{K}\sum_i \big(f_i(w^i_{t+1}) - f_i(w^i_t)\big)$.
  2. Bound $f_i(w^i_{t+1}) - f_i(w^i_t) \le -\varphi\,\|g^i_t\|^2$ (Lemma A.2).
  3. The local gradient $g^i_t$ is close to $g_t$, shown via a perturbed iterate analysis.
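Chaining the three steps gives (my schematic reading; the exact constants and the handling of the averaging step are in the paper's appendix, and I write $g_t$ for the global gradient at the averaged iterate):

\[
  f(\bar w_{t+1}) - f(\bar w_t)
  \;\lesssim\; \frac{1}{K}\sum_{i=1}^{K}\big(f_i(w^i_{t+1}) - f_i(w^i_t)\big)
  \;\le\; -\frac{\varphi}{K}\sum_{i=1}^{K}\|g^i_t\|^2
  \;\lesssim\; -c\,\varphi\,\|g_t\|^2,
\]

where the last step uses that each $g^i_t$ is close to $g_t$. Together with the strong-convexity (PL-type) bound $\|g_t\|^2 \ge 2\kappa\big(f(\bar w_t)-f^*\big)$, this yields a per-round decrease of the global suboptimality.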

[Figure: convergence result]

Conclusion

  • The underlying homogeneity (i.i.d. data) assumption is impractical in federated learning.

  • For deep learning, forming and inverting the Hessian matrix is expensive, so Hessian approximations are more practical.

  • Finally, I want to briefly introduce GIANT: the server first broadcasts the aggregated (weighted) gradient g to the workers, and each worker then combines g with its local Hessian to compute a local Newton direction (see the sketch after this list).

    [Figure: GIANT algorithm]
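For contrast with LocalNewton, here is a minimal sketch of one GIANT-style round on the same regularized least-squares loss; the function name, unit step size, and exact local solve are my simplifications, not the paper's implementation.

```python
import numpy as np

def giant_round(w, X_parts, y_parts, reg=1e-3):
    """One GIANT-style round (sketch).

    1. Workers send local gradients; the server averages them into g.
    2. The server broadcasts g; each worker solves H_i p_i = g with its
       local Hessian H_i (an approximate Newton direction).
    3. The server averages the local directions and takes the step.
    """
    d = w.shape[0]
    local_grads = [X_i.T @ (X_i @ w - y_i) / X_i.shape[0] + reg * w
                   for X_i, y_i in zip(X_parts, y_parts)]
    g = np.mean(local_grads, axis=0)                        # global (weighted) gradient
    local_dirs = [np.linalg.solve(X_i.T @ X_i / X_i.shape[0] + reg * np.eye(d), g)
                  for X_i in X_parts]                        # local Newton directions
    return w - np.mean(local_dirs, axis=0)                   # averaged Newton step
```

Compared with LocalNewton, each GIANT round needs extra communication (gradients and directions are aggregated every iteration), but every worker descends along a direction built from the global gradient rather than a purely local one.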

Reference

  • Vipul Gupta, Avishek Ghosh, Michał Dereziński, Rajiv Khanna, Kannan Ramchandran, Michael W. Mahoney. "LocalNewton: Reducing Communication Bottleneck for Distributed Learning." UAI 2021.
  • Shusen Wang, Farbod Roosta-Khorasani, Peng Xu, Michael W. Mahoney. "GIANT: Globally Improved Approximate Newton Method for Distributed Optimization." NeurIPS 2018.
