Paper Notes (7) - "Local Newton: Reducing Communication Bottleneck for Distributed Learning"
Main idea
The authors present a second-order optimization method that incorporates curvature information to reduce the communication cost of distributed learning.
Algorithm
They propose two algorithms. In the first, the descent direction is computed only once per local update, entirely from the worker's own data, i.e., worker $k$ takes local Newton-type steps of the form
$$
w_{t+1}^k \;=\; w_t^k - \eta_t \bigl(\nabla^2 f_k(w_t^k)\bigr)^{-1} \nabla f_k(w_t^k),
$$
and the server only periodically averages the local models.
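To make this structure concrete, here is a minimal sketch of the idea, assuming a regularized logistic-regression loss; the helper names (local_gradient, local_hessian, local_newton_phase, server_round), the fixed step size, and the number of local steps are illustrative choices of mine, not the authors' implementation.

```python
import numpy as np

def local_gradient(w, X, y):
    # Logistic-regression gradient on a worker's local data (illustrative loss).
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

def local_hessian(w, X, y, reg=1e-6):
    # Logistic-regression Hessian plus a small ridge term to keep it invertible.
    p = 1.0 / (1.0 + np.exp(-X @ w))
    d = p * (1.0 - p)
    return (X.T * d) @ X / len(y) + reg * np.eye(X.shape[1])

def local_newton_phase(w, X, y, num_local_steps=5, lr=1.0):
    # One communication round: several purely local Newton-type updates,
    # w <- w - lr * H_k^{-1} g_k, using only this worker's data.
    for _ in range(num_local_steps):
        g = local_gradient(w, X, y)
        H = local_hessian(w, X, y)
        w = w - lr * np.linalg.solve(H, g)
    return w

def server_round(w_global, shards, weights, num_local_steps=5):
    # All workers start from the same global model; the server then averages
    # the locally updated models, weighting each worker by its data share.
    local_models = [local_newton_phase(w_global.copy(), X, y, num_local_steps)
                    for (X, y) in shards]
    return sum(p * wk for p, wk in zip(weights, local_models))

# Example usage with two synthetic shards:
# rng = np.random.default_rng(0)
# X = rng.normal(size=(200, 5)); y = (X @ rng.normal(size=5) > 0).astype(float)
# shards = [(X[:100], y[:100]), (X[100:], y[100:])]
# w = server_round(np.zeros(5), shards, weights=[0.5, 0.5])
```

The point of the multiple local steps is the communication trade-off: more local iterations mean fewer rounds, at the price of local iterates drifting away from each other.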
Assumptions and convergence analysis
In their work, convergence is established only for the weighted objective, where each worker's local loss is weighted by its share of the data (see the display below). Moreover, the analysis is in a homogeneous setting: the local data sets come from the same distribution, so every local function closely approximates the global objective.
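In standard notation for this kind of weighted objective (the symbols $p_k$, $n_k$, $K$ are my own shorthand, not necessarily the paper's):
$$
f(w) \;=\; \sum_{k=1}^{K} p_k\, f_k(w), \qquad p_k \;=\; \frac{n_k}{\sum_{j=1}^{K} n_j},
$$
where $f_k$ is the empirical loss of worker $k$ on its $n_k$ local samples and $K$ is the number of workers.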
As they describe it, the proof sketch is roughly the following:
- Reduce the decrease of the global (weighted) objective to quantities evaluated at each worker's local iterate.
- Bound the deviation of the local iterates from their average.
- Show that each local gradient is close to the corresponding gradient at the averaged iterate via a perturbed iterate analysis (see the display after this list).
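To illustrate the kind of inequality the last two steps combine (a generic smoothness statement under my assumption that each $f_k$ is $L$-smooth, not the paper's exact lemma): with $\bar{w}_t = \sum_k p_k w_t^k$ denoting the averaged iterate,
$$
\bigl\|\nabla f_k(w_t^k) - \nabla f_k(\bar{w}_t)\bigr\| \;\le\; L\,\bigl\|w_t^k - \bar{w}_t\bigr\|,
$$
so once the deviation $\|w_t^k - \bar{w}_t\|$ of the local iterates from their average is controlled, the local gradients can stand in for gradients at the averaged point.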
Conclusion
- The underlying homogeneity assumption is impractical in federated learning, where clients' data are typically non-i.i.d.
- For deep learning, forming and inverting the Hessian exactly is too expensive, so approximate second-order methods are more applicable.
Finally, a brief note on GIANT: the server first aggregates the workers' local gradients into the global (weighted) gradient and broadcasts it back to the workers; each worker then uses its local Hessian to turn this gradient into an approximate Newton direction, and the server averages these directions to update the model.
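For contrast with the local-update scheme above, here is a minimal sketch of one GIANT outer iteration under the same illustrative assumptions (giant_round, grad_fn, and hess_fn are my own names; grad_fn and hess_fn can be the local_gradient and local_hessian helpers from the earlier sketch). GIANT itself solves the local systems inexactly, e.g. with conjugate gradient, and uses a line search; neither refinement is shown here.

```python
import numpy as np

def giant_round(w, shards, grad_fn, hess_fn, lr=1.0):
    # Round 1: workers send local gradients; the server forms the global gradient.
    grads = [grad_fn(w, X, y) for (X, y) in shards]
    g = np.mean(grads, axis=0)

    # Round 2: the server broadcasts g; each worker solves its local system
    # H_k p_k = g and sends p_k back; the server averages the directions.
    directions = [np.linalg.solve(hess_fn(w, X, y), g) for (X, y) in shards]
    p = np.mean(directions, axis=0)

    # Model update; a fixed step size stands in here for GIANT's line search.
    return w - lr * p
```

The contrast with LocalNewton is that GIANT needs two communication rounds per Newton-type step, whereas LocalNewton's direction is computed purely locally.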