LLaMA 2
0 Introduction
What's new
- Rotary Position Embedding (RoPE)
- RMS Norm
- Grouped Query Attention + KV Cache
- SwiGLU
Figure: overall LLaMA 2 architecture diagram.


1 Model Architecture
1.1 Rotary Position Embedding
Paper: RoFormer: Enhanced Transformer with Rotary Position Embedding
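RoPE encodes position by rotating each consecutive pair of query/key dimensions through a position-dependent angle m·θ_i, with θ_i = 10000^(-2i/d); the query-key dot product then depends only on the relative offset between positions. Below is a minimal sketch of that rotation applied to one head's queries or keys; the function name and shapes are illustrative, not LLaMA's actual code.

import torch

def apply_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    # x: (seq_len, head_dim) with head_dim even; rotates dimension pairs (2i, 2i+1)
    seq_len, dim = x.shape
    theta = base ** (-torch.arange(0, dim, 2, dtype=torch.float) / dim)   # (dim/2,)
    angles = torch.arange(seq_len, dtype=torch.float)[:, None] * theta    # (seq_len, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]        # first/second element of each pair
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin     # plain 2D rotation applied per pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out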

1.2 RMS Norm
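RMSNorm simplifies LayerNorm by dropping the mean subtraction and the bias: each vector is rescaled by the root mean square of its features times a learned gain, i.e. y = x / sqrt(mean(x^2) + eps) * g. A minimal sketch consistent with that formula:

import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))  # learned gain g

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # scale by the reciprocal RMS over the feature dimension;
        # no mean subtraction and no bias, unlike LayerNorm
        return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps) * self.weight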
1.3 Grouped Query Attention + KV Cache
<1> Grouped Query Attention
GQA is a trade-off between efficiency and accuracy: several query heads share a single key/value head, placing it between multi-head attention (MHA, one KV head per query head) and multi-query attention (MQA, one KV head for all query heads). A minimal sketch follows the figure reference below.
- Efficiency: MHA < GQA < MQA
- Accuracy: MHA > GQA > MQA
Figures from GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints
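Mechanically, GQA can be sketched by repeating each KV head across its group of query heads and then running ordinary scaled dot-product attention; MHA is the special case n_kv_heads == n_q_heads, and MQA is n_kv_heads == 1. Names and shapes below are illustrative.

import torch

def grouped_query_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # q: (batch, n_q_heads, seq_len, head_dim)
    # k, v: (batch, n_kv_heads, seq_len, head_dim); n_q_heads % n_kv_heads == 0
    group_size = q.shape[1] // k.shape[1]
    # each KV head serves group_size query heads: duplicate it along the head axis
    k = k.repeat_interleave(group_size, dim=1)
    v = v.repeat_interleave(group_size, dim=1)
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v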


<2> KV Cache
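In autoregressive decoding each step attends over all previous tokens, so their keys and values are cached instead of recomputed: a step computes Q/K/V only for the newest token and appends that token's K/V to the cache. Because the cache stores n_kv_heads per layer rather than n_q_heads, GQA directly shrinks its memory footprint. A minimal sketch (class name and shapes are illustrative):

import torch

class KVCache:
    # append-only cache of past keys/values for a single attention layer
    def __init__(self):
        self.k = None  # (batch, n_kv_heads, cached_len, head_dim)
        self.v = None

    def update(self, k_new: torch.Tensor, v_new: torch.Tensor):
        # concatenate the new token's K/V along the sequence axis, return the full cache
        self.k = k_new if self.k is None else torch.cat([self.k, k_new], dim=2)
        self.v = v_new if self.v is None else torch.cat([self.v, v_new], dim=2)
        return self.k, self.v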

1.4 SwiGLU
SwiGLU combines Swish (also known as SiLU) with a Gated Linear Unit (GLU), and is the feed-forward activation used in LLaMA 2, Mistral 7B, and Mixtral 8×7B.
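Concretely, the feed-forward block computes FFN(x) = down_proj(SiLU(gate_proj(x)) * up_proj(x)), where * is elementwise multiplication; the code below implements this.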

import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    def __init__(self, config):
        super().__init__()  # must run before registering submodules
        # LLaMA 2's projections have no bias terms
        self.up_proj = nn.Linear(config.hidden_size, config.intermediate_size, bias=False)
        self.gate_proj = nn.Linear(config.hidden_size, config.intermediate_size, bias=False)
        self.down_proj = nn.Linear(config.intermediate_size, config.hidden_size, bias=False)

    def forward(self, x):
        # SwiGLU: gate with SiLU(gate_proj(x)), multiply elementwise by up_proj(x),
        # then project back down to the hidden size
        hidden_states = self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))
        return hidden_states
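As a quick sanity check, the snippet below runs the MLP with LLaMA 2 7B's published sizes (hidden 4096, intermediate 11008); SimpleNamespace merely stands in for a real config object.

import torch
from types import SimpleNamespace

config = SimpleNamespace(hidden_size=4096, intermediate_size=11008)  # LLaMA 2 7B sizes
mlp = MLP(config)
out = mlp(torch.randn(1, 8, config.hidden_size))  # (batch, seq_len, hidden_size)
print(out.shape)  # torch.Size([1, 8, 4096])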
References
Video 1: Llama 2 Model Architecture Explained (Llama 2 模型结构解析) - CodeLearner | Bilibili
Blog 1: Llama 2 Explained (Llama 2详解) - CodeLearner | Zhihu
Blog 2: Understanding Llama2: KV Cache, Grouped Query Attention, Rotary Embedding and More
Video 2: A Survey of Position Encoding Advances in Transformers (Transformer的位置编码进展梳理)
Blog 3: 2D Rotation Matrices and Vector Rotation (二维旋转矩阵与向量旋转)