Derivation of the Forward and Backward Formulas of Focal Loss

This post gives a mathematical description of the forward and backward passes of Focal Loss. It is formula-heavy, so the derivation is decomposed and carried out step by step, with the goal of being easy to follow.

Focal Loss Forward Computation


Loss(x, class) = -\alpha_{class}(1-\frac {e^{x[class]}} {\sum_j e^{x[j]}} )^\gamma \log{(\frac {e^{x[class]}} {\sum_j e^{x[j]}} )} (1)

where x is the input (the vector of logits) and class is the ground-truth label.

 = \alpha_{class}(1-\frac {e^{x[class]}} {\sum_j e^{x[j]}} )^\gamma \cdot (-x[class] + \log{\sum_j e^{x[j]}}) (2)

= -\alpha_{class}(1- softmax(x)[class] )^\gamma \cdot \log\big(softmax(x)[class]\big) (3)

where softmax(x)[class] = \frac {e^{x[class]}} {\sum_j e^{x[j]}} = p_{class}.
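As a sanity check on (1)–(3), here is a minimal pure-Python sketch of the forward pass for a single sample (the function name and the scalar `alpha` are my own simplifications, not from any framework):

```python
import math

def focal_loss_forward(x, cls, gamma=2.0, alpha=1.0):
    """Focal loss for one sample, following Eq. (1)/(3).

    x     : list of raw logits
    cls   : index of the ground-truth class
    gamma : focusing parameter
    alpha : class weight alpha_class (a scalar here for simplicity)
    """
    # softmax(x)[cls] = e^{x[cls]} / sum_j e^{x[j]}; subtract max(x) for numerical stability
    m = max(x)
    exps = [math.exp(v - m) for v in x]
    p_t = exps[cls] / sum(exps)
    # Eq. (3): -alpha * (1 - p_t)^gamma * log(p_t)
    return -alpha * (1.0 - p_t) ** gamma * math.log(p_t)
```

With gamma = 0 and alpha = 1 this reduces to ordinary softmax cross-entropy, which is a quick way to check the implementation.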

 

Focal Loss Backward Gradient Computation


To compute the gradient of the forward formula (3), first take the derivative of the term \log p_t (its last step uses the derivative of p_t, which is worked out in (5) below):

\begin{aligned} \frac{\partial}{\partial x_i} \log p_t &= \frac{1}{p_t}\cdot\frac{\partial p_t}{\partial x_i} \\ &= \frac{1}{p_t}\cdot\frac{\partial}{\partial x_i} \frac{e^{x_t}}{\sum_j{e^{x_j}}} \\ &= \begin{cases} \frac{1}{p_t}\cdot(p_t-p_t^2) = 1-p_t, & i=t \\ \frac{1}{p_t}\cdot(-p_i \cdot p_t) = -p_i, & i \neq t \end{cases} \end{aligned} (4)

 

Next, compute the derivative of p_t itself:

\begin{aligned} \frac{\partial}{\partial x_i} p_t &= \frac{\partial}{\partial x_i} \frac{e^{x_t}}{\sum_j{e^{x_j}}} \\ &= \begin{cases} \frac{e^{x_t}\cdot \sum_j{e^{x_j}} - e^{x_t}\cdot e^{x_t}}{\sum_j{e^{x_j}} \cdot \sum_j{e^{x_j}}} = p_t - p_t^2, & i=t \\ \frac{-e^{x_t}\cdot e^{x_i}}{\sum_j{e^{x_j}} \cdot \sum_j{e^{x_j}}} = -p_i \cdot p_t, & i \neq t \end{cases} \end{aligned} (5)
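Equations (4) and (5) can be verified numerically with a central finite difference. The pure-Python sketch below (function names are mine) checks (5) directly; (4) then follows by the chain rule through \log p_t:

```python
import math

def softmax(x):
    """Numerically stable softmax of a list of logits."""
    m = max(x)
    e = [math.exp(v - m) for v in x]
    s = sum(e)
    return [v / s for v in e]

def dpt_dxi(x, t, i):
    """Analytic derivative d p_t / d x_i from Eq. (5)."""
    p = softmax(x)
    return p[t] - p[t] ** 2 if i == t else -p[i] * p[t]

# Central finite-difference check of Eq. (5)
x = [0.2, -1.0, 0.7]
t, eps = 2, 1e-6
for i in range(len(x)):
    xp = list(x); xp[i] += eps
    xm = list(x); xm[i] -= eps
    numeric = (softmax(xp)[t] - softmax(xm)[t]) / (2 * eps)
    assert abs(numeric - dpt_dxi(x, t, i)) < 1e-6
```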

 

With (4) and (5) in hand, we can now differentiate (3). For brevity the constant weight \alpha_{class} is dropped below; it simply multiplies the final result.

\because FL(x, t) = -(1-p_t)^{\gamma}\log{p_t}

\begin{aligned} \therefore \frac{\partial{FL(x, t)}}{\partial x_i} &= -\gamma(1-p_t)^{\gamma-1} \cdot \frac{\partial (-p_t)}{\partial x_i}\cdot \log p_t - (1-p_t)^\gamma \cdot \frac{\partial \log p_t}{\partial x_i} \\ &= \gamma(1-p_t)^{\gamma-1} \cdot \log p_t \cdot \frac{\partial p_t}{\partial x_i} - (1-p_t)^\gamma \cdot \frac{\partial \log p_t}{\partial x_i} \\ &= \begin{cases} \gamma(1-p_t)^{\gamma-1} \cdot \log p_t \cdot (p_t-p_t^2) - (1-p_t)^\gamma \cdot (1-p_t), & i=t \\ \gamma(1-p_t)^{\gamma-1} \cdot \log p_t \cdot (-p_i\cdot p_t) - (1-p_t)^\gamma \cdot (-p_i), & i \neq t \end{cases} \end{aligned} (6)

Substituting (4) and (5) into (6) and collecting terms yields (7):

\therefore \frac{\partial{FL(x, t)}}{\partial x_i} = \begin{cases} -(1-p_t)^{\gamma}\cdot(1-p_t-\gamma p_t\log p_t), & i = t \\ p_i\cdot(1-p_t)^{\gamma-1}\cdot(1-p_t-\gamma p_t \log p_t), & i \neq t \end{cases} (7)

 

Equation (7) is the final result for the backward pass of Focal Loss. To implement Focal Loss in frameworks such as TensorFlow or PyTorch, (7) can be used directly to implement backward.
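A minimal pure-Python sketch of such a backward implementation is below, assuming the unweighted form of (7) (i.e. without \alpha_{class}); the names are my own, not a framework API:

```python
import math

def softmax(x):
    """Numerically stable softmax of a list of logits."""
    m = max(x)
    e = [math.exp(v - m) for v in x]
    s = sum(e)
    return [v / s for v in e]

def focal_loss_backward(x, t, gamma=2.0):
    """Gradient dFL/dx_i for every i, following Eq. (7) (alpha omitted)."""
    p = softmax(x)
    pt = p[t]
    # Common factor (1 - p_t - gamma * p_t * log p_t) shared by both cases of Eq. (7)
    common = 1.0 - pt - gamma * pt * math.log(pt)
    # Case i != t: p_i * (1 - p_t)^(gamma - 1) * common
    grad = [p[i] * (1.0 - pt) ** (gamma - 1.0) * common for i in range(len(x))]
    # Case i == t: -(1 - p_t)^gamma * common
    grad[t] = -(1.0 - pt) ** gamma * common
    return grad
```

A useful extra check: the components of (7) sum to zero (as they should for a softmax-based loss, since adding a constant to all logits leaves the loss unchanged), and a finite-difference check against the forward loss confirms the formula.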

posted @ 2019-04-04 16:25 杨国峰