Optimization: Introduction

CanChen ggchen@mail.ustc.edu.cn


Recently I have been following the Optimization course lectured by Prof. Zhouwang Yang at USTC. Optimization lies at the heart of machine learning.
Why did I choose his course? I am familiar with USTC courses and have an inner motivation to absorb the knowledge in USTC course slides.
I have now followed most of this course, so I have decided to write a series of blog posts, chapter by chapter, to deepen my understanding and record my learning process along the way.
This first post is the introduction.

 

Definition

We use x to denote the decision variable, S the feasible region, and f the objective function.
Optimization is then the problem of minimizing f(x) over the set S.
If S is the whole space, this is an unconstrained optimization problem.
Otherwise it is a constrained problem: a linear programming problem if the objective and all constraints defining S are linear, and a nonlinear programming problem if the objective or any constraint is nonlinear.
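Written in a standard form (a minimal sketch using the notation above; the symbols g_i and h_j for inequality and equality constraints are my own addition, not the course's), the problem reads:

\[
\min_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.} \quad x \in S, \qquad
S = \{\, x : g_i(x) \le 0,\ i = 1,\dots,m;\ \ h_j(x) = 0,\ j = 1,\dots,p \,\}.
\]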

 

Optimality conditions

 

Unconstrained

The conditions are simple: we only need the first-order and/or second-order derivatives of f, provided they exist.
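As a quick reminder (these are the standard results, stated here for completeness): if f is differentiable, a local minimizer x* must satisfy the first-order condition, and a positive definite Hessian gives a sufficient second-order condition:

\[
\nabla f(x^*) = 0, \qquad \nabla^2 f(x^*) \succ 0 \ \Rightarrow\ x^* \text{ is a strict local minimizer.}
\]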

 

Equality Constraints

Lagrange multipliers. One thing that confused me when I first saw the Lagrangian is that setting its derivative with respect to a multiplier to zero looks like a new condition. Then I realized it simply recovers the original equality constraint.
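To make that observation concrete (using the generic symbols h_j for the equality constraints, which are my own notation):

\[
L(x, \lambda) = f(x) + \sum_{j} \lambda_j h_j(x), \qquad
\frac{\partial L}{\partial \lambda_j} = h_j(x) = 0,
\]

so stationarity in each λ_j is exactly the constraint h_j(x) = 0, while stationarity in x gives \( \nabla f(x) + \sum_j \lambda_j \nabla h_j(x) = 0 \).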

 

Inequality Constraints

The KKT conditions. The key to understanding KKT is determining whether each inequality constraint is active, i.e. whether it holds with equality at the optimal point.
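For reference, the KKT conditions for the standard form above (again with my own notation g_i ≤ 0 and h_j = 0) are:

\[
\nabla f(x^*) + \sum_i \mu_i \nabla g_i(x^*) + \sum_j \lambda_j \nabla h_j(x^*) = 0, \qquad
g_i(x^*) \le 0,\ \ h_j(x^*) = 0, \qquad
\mu_i \ge 0, \qquad \mu_i\, g_i(x^*) = 0.
\]

The complementary slackness condition \( \mu_i\, g_i(x^*) = 0 \) is exactly the "does the inequality become an equality" question: either the constraint is active (g_i(x*) = 0) or its multiplier vanishes (μ_i = 0).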

 

posted @ 2020-05-05 08:32  Klaus-Chen