
PRODeep: A Platform for Robustness Verification of Deep Neural Networks

Published: 2020-07-02 12:32

Speaker: Prof. Lijun Zhang (Institute of Software, Chinese Academy of Sciences)

Time: 20:00-21:00, Saturday, July 4, 2020

Venue: Tencent Meeting, ID: 991 217 002

Abstract: Deep neural networks (DNNs) have been applied in safety-critical domains such as self-driving cars, aircraft collision avoidance systems, and malware detection. In such scenarios, it is important to give a safety guarantee for the robustness property, namely that outputs are invariant under small perturbations of the inputs. For this purpose, several algorithms and tools have been developed recently. In this talk, we present PRODeep, a platform for robustness verification of DNNs. PRODeep incorporates constraint-based, abstraction-based, and optimisation-based robustness checking algorithms. It has a modular architecture, enabling easy comparison of different algorithms. We report experimental results that illustrate the use of the tool and the easy combination of those techniques.
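For reference, the local robustness property mentioned above is commonly formalised as follows (an illustrative formulation, not necessarily the exact notation used in PRODeep): a classifier $f$ is robust at an input $x_0$ within a perturbation bound $\epsilon$, for example under the $L_\infty$ norm, if

$\forall x.\ \|x - x_0\|_\infty \le \epsilon \implies \arg\max_i f_i(x) = \arg\max_i f_i(x_0)$

that is, every input in the $\epsilon$-ball around $x_0$ is assigned the same label as $x_0$.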

Speaker bio: Lijun Zhang is a research professor at the State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, and a visiting professor at Shenzhen University. Before this, he was an associate professor in the Language-Based Technology section of DTU Compute, Technical University of Denmark, and prior to that a postdoctoral researcher at the University of Oxford. He received a Diploma degree and a PhD (Dr.-Ing.) from Saarland University. His research interests include probabilistic models, simulation reduction, decision algorithms for probabilistic simulation preorders, abstraction, and model checking. Recently, he has been working on combining automata learning techniques with model checking. He leads the development of the model checker IscasMC.
