Summary: ## Abstract Background: most existing adversarial attack methods on pre-trained models of code ignore that perturbations should be natural to human judges (the naturalness requirement… Read more (a toy sketch of natural vs. unnatural perturbations follows this entry)
posted @ 2023-09-06 23:34 雪溯
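A minimal sketch of what the naturalness requirement means in practice, not the paper's attack: a semantics-preserving identifier rename that a human reviewer would accept, versus one that is obviously machine-generated. The synonym table, helper names, and the victim snippet are illustrative assumptions.

```python
import random
import re

# Hypothetical table of human-plausible replacement names (an assumption for illustration).
NATURAL_SYNONYMS = {"count": ["total", "num_items", "tally"]}

def rename_identifier(code: str, name: str, new_name: str) -> str:
    """Replace whole-word occurrences of `name` with `new_name` (semantics-preserving)."""
    return re.sub(rf"\b{re.escape(name)}\b", new_name, code)

def natural_perturbation(code: str, name: str) -> str:
    """Rename with a synonym a human judge is likely to find natural."""
    return rename_identifier(code, name, random.choice(NATURAL_SYNONYMS[name]))

def unnatural_perturbation(code: str, name: str) -> str:
    """Rename with a random token: still semantics-preserving, but clearly unnatural."""
    return rename_identifier(code, name, f"qzxv_{random.randint(0, 9999)}")

snippet = (
    "def histogram(xs):\n"
    "    count = {}\n"
    "    for x in xs:\n"
    "        count[x] = count.get(x, 0) + 1\n"
    "    return count\n"
)
print(natural_perturbation(snippet, "count"))
print(unnatural_perturbation(snippet, "count"))
```

Both outputs fool nothing about the program's semantics; the difference the abstract points to is only whether a human reading the perturbed code would notice anything odd.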
Summary: ## Abstract Background: existing approaches (Muffin, Lemon, Cradle) can cover at most 34.1% of layer inputs, 25.9% of layer parameter values, and 15.6% of layer sequences. This paper: COMET Gi… Read more (a coverage-metric sketch follows this entry)
posted @ 2023-09-06 23:03 雪溯
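A rough sketch of one way to read "layer sequence coverage": the fraction of distinct consecutive layer-type pairs exercised by a set of generated models, relative to all pairs over a known layer vocabulary. The vocabulary and the sample models below are assumptions for illustration, not COMET's actual definition.

```python
from itertools import product

# Hypothetical layer vocabulary (an assumption; real frameworks have hundreds of layer types).
LAYER_TYPES = ["Conv2D", "Dense", "ReLU", "BatchNorm", "MaxPool"]
ALL_PAIRS = set(product(LAYER_TYPES, repeat=2))

def layer_sequence_pairs(model_layers):
    """Consecutive layer-type pairs appearing in one generated model."""
    return set(zip(model_layers, model_layers[1:]))

def sequence_coverage(models):
    """Share of all possible layer-type pairs hit by the generated test models."""
    covered = set()
    for layers in models:
        covered |= layer_sequence_pairs(layers)
    return len(covered & ALL_PAIRS) / len(ALL_PAIRS)

generated = [
    ["Conv2D", "ReLU", "MaxPool", "Dense"],
    ["Conv2D", "BatchNorm", "ReLU", "Dense"],
]
print(f"layer sequence coverage: {sequence_coverage(generated):.1%}")
```

Under this toy definition, a test generator that only ever chains Conv2D into ReLU leaves most pairs unexercised, which is the kind of gap the quoted 15.6% figure describes.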
Summary: ## Abstract This paper: IvySyn Task: discover memory error vulnerabilities in DL frameworks BugType: memory safety errors, fatal runtime errors Method: 1. use na… Read more (a fuzz-loop sketch follows this entry)
posted @ 2023-09-06 21:34 雪溯
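A toy illustration of the general idea, not IvySyn's implementation: drive a framework operation with boundary-value arguments and record the inputs that trigger unexpected failures rather than graceful rejections. PyTorch's public `conv2d` stands in for a native kernel here, and the boundary values are assumptions.

```python
import itertools
import torch
import torch.nn.functional as F

# Boundary values a type-aware fuzzer might try (assumed lists for illustration).
BOUNDARY_DIMS = [0, 1, 2, 1024]
BOUNDARY_STRIDES = [0, 1, -1, 2**31 - 1]

suspicious = []
for c, k, s in itertools.product(BOUNDARY_DIMS, BOUNDARY_DIMS, BOUNDARY_STRIDES):
    try:
        x = torch.zeros(1, c, 8, 8)       # input with a boundary channel count
        w = torch.zeros(1, k, 3, 3)       # weight with a possibly mismatched channel count
        F.conv2d(x, w, stride=s)
    except RuntimeError:
        pass                              # graceful rejection by input validation: not a bug
    except Exception as e:
        suspicious.append(((c, k, s), type(e).__name__))  # unexpected error class: worth triaging

print(f"{len(suspicious)} suspicious inputs recorded")
```

A hard crash (e.g. a segmentation fault) would terminate the loop instead of being caught, which is exactly the kind of memory-safety signal the abstract refers to; a real fuzzer would run each case in a separate process to survive and log it.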