Repost: Why machine learning algorithms are hard to tune and how to fix it

Key points

  • We often optimize a linear combination of losses, hoping to reduce both \(L_1\) and \(L_0\) simultaneously, but this linear combination is actually precarious and treacherous.
  • The authors show that when the Pareto curve is concave, the linear combination ends up minimizing only one of the two losses; it is valid only when the Pareto curve is convex.
  • In practice, we cannot know the shape of the Pareto curve in advance.
  • We can reformulate the linear combination of losses as a constrained optimization problem, e.g., minimizing one loss while restricting \(L_0 \leq \epsilon\).
  • Based on this reformulation, the authors give three possible solutions.
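The concave-front failure above can be illustrated with a small numeric sketch. This is not the article's own code: the quarter-circle trade-off curve and the function names are assumptions chosen so the effect is easy to see. On a concave front, any positive weighting of the two losses is minimized at an endpoint, while the constrained form can reach interior trade-offs.

```python
import numpy as np

# Hypothetical concave Pareto front: achievable loss pairs trace the
# quarter circle (f0, f1) = (cos t, sin t) for t in [0, pi/2].
t = np.linspace(0.0, np.pi / 2, 10001)
f0, f1 = np.cos(t), np.sin(t)

def linear_scalarization(w0, w1):
    """Minimize w0*f0 + w1*f1 over the front; return the (f0, f1) reached."""
    i = np.argmin(w0 * f0 + w1 * f1)
    return f0[i], f1[i]

def constrained(eps):
    """Minimize f1 subject to the constraint f0 <= eps."""
    i = np.argmin(np.where(f0 <= eps, f1, np.inf))
    return f0[i], f1[i]

# Linear combinations only ever land on an endpoint of the concave front:
for w in [0.3, 0.5, 0.7]:
    a, b = linear_scalarization(w, 1 - w)
    print(f"w0={w:.1f}: f0={a:.3f}, f1={b:.3f}")  # always (1,0) or (0,1)

# The constrained reformulation reaches an interior trade-off:
a, b = constrained(0.5)
print(f"eps=0.5: f0={a:.3f}, f1={b:.3f}")
```

On this front, every weighting returns \((1,0)\) or \((0,1)\) because the interior stationary point of the weighted sum is a maximum, whereas `constrained(0.5)` returns \(f_0 = 0.5\), \(f_1 = \sqrt{1 - 0.25} \approx 0.866\).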

posted @ 2021-06-08 10:47 by Neo_DH