neural network robustness verification

Several approaches are available. One line of research I have focused on is abstract-interpretation-based methods.

AI2: uses zonotopes as the abstract domain, and leverages the 'join' and 'meet' operators of zonotopes to handle the ReLU activation function. Other activation functions, such as sigmoid and tanh, are not supported.
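To make the zonotope domain concrete, here is a minimal sketch in Python. The class name, method names, and the dense numpy representation are my own assumptions for illustration, not AI2's actual API; AI2's ReLU transformer (meeting with the halfspaces for each sign of a neuron and joining the results) is omitted.

```python
import numpy as np

class Zonotope:
    """x = center + generators @ eps, with each noise symbol eps_i in [-1, 1]."""
    def __init__(self, center, generators):
        self.center = np.asarray(center, dtype=float)          # shape (n,)
        self.generators = np.asarray(generators, dtype=float)  # shape (n, k)

    def bounds(self):
        # Interval concretization: the radius in each dimension is the
        # sum of absolute generator entries in that row.
        radius = np.abs(self.generators).sum(axis=1)
        return self.center - radius, self.center + radius

    def affine(self, W, b):
        # The affine transformer is exact on zonotopes: map the center
        # and every generator through the linear layer.
        W = np.asarray(W, dtype=float)
        return Zonotope(W @ self.center + np.asarray(b, dtype=float),
                        W @ self.generators)

# Example: the box [-1, 1]^2 pushed through one linear layer.
z = Zonotope(center=[0.0, 0.0], generators=np.eye(2))
z2 = z.affine([[1.0, 1.0], [1.0, -1.0]], [0.0, 0.0])
print(z2.bounds())  # interval hull is [-2, 2] x [-2, 2]
```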

DeepZ: also uses zonotopes as the abstract domain. It handles an activation function by introducing at most one fresh noise symbol per neuron and using a parallelogram to over-approximate its behavior. In essence, it bounds the activation function between two parallel lines.
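A minimal sketch of this transformer for ReLU, reusing the Zonotope class above. The function name is mine; the formula (slope lam = u/(u-l), offset mu = -lam*l/2) is the standard minimal-area parallelogram for ReLU.

```python
import numpy as np

def deepz_relu(z: Zonotope) -> Zonotope:
    """DeepZ-style ReLU transformer: each unstable neuron is replaced by
    lam*x + mu + mu*eps_new, i.e. the parallelogram bounded by the two
    parallel lines y = lam*x and y = lam*x + 2*mu."""
    lo, up = z.bounds()
    n = z.center.shape[0]
    center = z.center.copy()
    gens = z.generators.copy()
    new_cols = []
    for i in range(n):
        if up[i] <= 0:                 # provably inactive: output is 0
            center[i] = 0.0
            gens[i, :] = 0.0
        elif lo[i] < 0:                # unstable: over-approximate
            lam = up[i] / (up[i] - lo[i])
            mu = -lam * lo[i] / 2.0
            center[i] = lam * center[i] + mu
            gens[i, :] *= lam
            col = np.zeros(n)
            col[i] = mu                # exactly one fresh noise symbol
            new_cols.append(col)
        # provably active neurons (lo >= 0) pass through unchanged
    if new_cols:
        gens = np.hstack([gens, np.array(new_cols).T])
    return Zonotope(center, gens)
```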

DeepPoly: introduces a new abstract domain, [lexpr, uexpr, lbound, ubound]. In theory, it can be viewed as an extension of DeepZ: it uses two arbitrary (not necessarily parallel) lines to approximate the activation function. Moreover, each abstract element retains the abstractions of all preceding layers; this is used only to tighten the bounds via back-substitution, nothing more. The writing is subtle: although the domain is not exact with respect to the affine transformer, the [lexpr, uexpr] component is exact, which is why the authors state an invariant for the soundness proof.
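The per-neuron ReLU relaxation is easy to state in isolation. Below is a sketch under my own naming; a real DeepPoly implementation keeps lexpr/uexpr as symbolic linear expressions over preceding layers and back-substitutes through them to compute lbound/ubound, which is not shown here.

```python
def deeppoly_relu_relaxation(l, u):
    """Given concrete pre-activation bounds l <= x <= u, return slopes and
    intercepts (ls, li, us, ui) such that
        ls*x + li <= ReLU(x) <= us*x + ui.
    Unlike DeepZ, the two bounding lines need not be parallel."""
    if u <= 0:                      # provably inactive
        return 0.0, 0.0, 0.0, 0.0
    if l >= 0:                      # provably active
        return 1.0, 0.0, 1.0, 0.0
    # Upper line through (l, 0) and (u, u).
    us = u / (u - l)
    ui = -us * l
    # Lower line: y = 0 or y = x, whichever minimizes the area between
    # the two lines (DeepPoly's heuristic).
    ls = 1.0 if u >= -l else 0.0
    return ls, 0.0, us, ui
```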

StarSet: although it looks like a set-based method, it is equivalent to abstract-interpretation-based techniques. It improves on DeepPoly in that it allows more than two lines for approximation. I would expect it to run into scalability issues, but I cannot confirm that yet. In the evaluation, it is more effective than DeepPoly but less efficient. The reasons I can think of are: 1) DeepPoly employs the parallelism in ELINA; 2) StarSet is implemented in MATLAB.
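For comparison with the zonotope sketch above: a star set carries a full conjunction of linear constraints instead of a zonotope's fixed eps-in-[-1,1] box, which is what permits more than two bounding lines per activation. A minimal sketch follows; the field and method names are mine, not the actual MATLAB implementation's API.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Star:
    """x = center + basis @ alpha, subject to C @ alpha <= d.
    The predicate C @ alpha <= d can hold arbitrarily many linear
    constraints, unlike a zonotope's fixed alpha in [-1, 1] box."""
    center: np.ndarray  # (n,)
    basis: np.ndarray   # (n, m)
    C: np.ndarray       # (p, m)
    d: np.ndarray       # (p,)

    def affine(self, W, b):
        # As with zonotopes, affine layers are exact: the predicate on
        # alpha is untouched; only center and basis are transformed.
        return Star(W @ self.center + b, W @ self.basis, self.C, self.d)
```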

RefineZono: not a strong piece of work, in my view. It employs SMT techniques to tighten the bounds.
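For illustration, the per-neuron bound-tightening query can be sketched as a plain LP (a stand-in for the solver queries; the function and its parameters are my own, using scipy):

```python
import numpy as np
from scipy.optimize import linprog

def tighten(i, A, b, n):
    """Tighten the interval of variable x_i subject to A @ x <= b by
    solving two LPs (min and max). Assumes the polytope is bounded."""
    c = np.zeros(n)
    c[i] = 1.0
    lo = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * n).fun
    hi = -linprog(-c, A_ub=A, b_ub=b, bounds=[(None, None)] * n).fun
    return lo, hi
```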

RefinePoly: based on the k-ReLU relaxation. I have not read it yet.

 

If we could adaptively manipulate the abstraction, these techniques would be more powerful.

posted on 2020-07-09 23:41 by nkAmerica