The 21st International Conference on Formal Engineering Methods

ICFEM 2019

November 5th-9th, 2019, Shenzhen, China

International Workshop on Artificial Intelligence and Formal Methods (AI&FM 2019)

November 5th, 2019

Invited Talks


A Modeling Framework of Intelligent Transportation Systems with Driver Behavior Classification based on Machine Learning
Dongdong An, East China Normal University
    Abstract: Intelligent transportation systems (ITSs) are attracting much attention from industry, academia, and government as the new generation of transportation takes shape. In the coming years, human-driven vehicles and autonomous vehicles will co-exist and interact for a long time in an uncertain environment. How to control the autonomous vehicle efficiently and improve both the accuracy of this interaction and the safety of human drivers is a pressing problem. The safety-critical nature of ITSs demands provably correct guarantees about actions, models, control, and performance. To model and recognize driver behavior, we classify human driver behavior in ITSs with machine learning classification algorithms, based on data collected from the uncertain environment. We introduce a run-time verification method that helps the autonomous vehicle make a "more intelligent" decision in this environment. We define a parameterized modeling language called stohChart(p) (parametric stochastic hybrid Statecharts) to describe the behavior and interaction of agents in ITSs. The results of the driver behavior classification are passed to stohChart(p) as parameters at run time. We then propose a mapping algorithm that transforms stohChart(p) into NPTA (Networks of Probabilistic Timed Automata) and use the statistical model checker UPPAAL-SMC to verify quantitative properties. We illustrate our approach by modeling and analyzing a scenario in which an autonomous vehicle changes to a lane occupied by a human-driven vehicle.
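    To make the classification step concrete, the following is a minimal sketch, not the authors' implementation: it classifies driver behavior from trajectory features with an off-the-shelf classifier, and its prediction probability is the kind of value that could parameterize a stohChart(p) model at run time. The feature names and behavior labels are hypothetical placeholders.

```python
# Minimal sketch (illustrative only): classify driver behavior from
# trajectory features; the predicted class probability could serve as a
# run-time parameter of a stohChart(p)-style model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical features per observed vehicle: [mean speed, speed variance,
# mean headway, lane-change frequency]; labels: 0 = cautious, 1 = aggressive.
X = np.random.rand(200, 4)
y = (X[:, 1] + X[:, 3] > 1.0).astype(int)  # synthetic labels for illustration

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

print("accuracy:", clf.score(X_test, y_test))
print("P(aggressive):", clf.predict_proba(X_test[:1])[0, 1])
```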


AI Ethics and Ethics by Design
Jeff Cao, Tencent Research Institute
    Abstract: With the development and application of artificial intelligence (AI), we are entering an era of intelligence in which ubiquitous data and algorithms are giving birth to new forms of economy and society powered by AI. Meanwhile, problems of privacy protection, disinformation, cyber security, cybercrime, and overuse of electronic products have become the focus of global attention. The ethics of AI, which aims at correcting technology and balancing interests, has come onto the scene. As the renowned AI scientist Fei-Fei Li once said, it is time to make ethics a fundamental part of AI research and development. In this context, a wave of ethical principles for AI has been put forward at a global scale, from governments to industry and academia, because the healthy development of AI must be safeguarded by ethical thinking and ethical guarantees.


Inductive Bias and Meta Learning: A New Perspective of Formal Methods Recommendation
Weipeng Cao, Shenzhen University
    Abstract: No inductive bias, no machine learning. Different inductive biases determine the different characteristics of machine learning algorithms; in other words, the inductive bias determines the application scenarios to which a specific machine learning algorithm is suited. Given a problem that needs to be modeled, how to quickly select an appropriate algorithm is one of the research hotspots in the field of machine learning. Meta-learning provides a solution to this problem. In this talk, I will introduce the concepts of inductive bias and meta-learning, hoping to provide a new perspective on the recommendation of formal methods.
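    As a rough illustration of the recommendation idea (my own toy sketch, not the speaker's system), one can describe each past task by simple meta-features and recommend, for a new task, the algorithm that worked best on the most similar past task. The meta-features and algorithm names below are hypothetical.

```python
# Toy sketch of meta-learning-based algorithm recommendation via nearest
# neighbour in meta-feature space (illustrative assumptions throughout).
import numpy as np

# Hypothetical meta-features per past task: [n_samples, n_features, class_balance]
past_tasks = np.array([
    [1000,  20, 0.5],
    [  50, 500, 0.9],
    [5000,  10, 0.5],
])
best_algorithm = ["random_forest", "linear_svm", "gradient_boosting"]

def recommend(meta_features):
    # Normalize, then pick the algorithm of the closest past task.
    scale = past_tasks.max(axis=0)
    scaled = past_tasks / scale
    query = np.asarray(meta_features) / scale
    idx = np.argmin(np.linalg.norm(scaled - query, axis=1))
    return best_algorithm[idx]

print(recommend([800, 15, 0.6]))   # -> "random_forest"
```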


The Key Challenges of RAMS: When Huawei Full Stack and All Scenarios AI comes
Zheng Hu, Huawei Technologies Co., Ltd.
    Abstract: Artificial intelligence has achieved significant progress in the last five years. Unveiled at Huawei Connect 2018, Huawei’s new vision also calls for a smart world powered by artificial intelligence. From the chipset to high-level developer APIs, Huawei has built all the layers of an AI stack, enabling end-to-end scenarios ranging from smartphones to the public cloud and 5G IoT. Under the growing trend of deploying AI products and services, how to build reliable, highly available, maintainable, and safe AI products and services has become a key subject that poses great challenges. This talk will present critical topics for each layer in the AI stack and the main scenarios, and we will also share our vision of the technical directions in this field.


Defining Robustness in the Input Space Using Linear Regions
Jay Hoon Jung and YoungMin Kwon, State University of New York at Korea
    Abstract: A Piece-wise Linear Neural Network (PLNN) is a deep neural network composed only of Rectified Linear Unit (ReLU) activation functions. Interestingly, even though PLNNs are nonlinear systems in general, we show that PLNNs can be expressed in terms of linear constraints because the ReLU function is piece-wise linear. We prove that all components of linear regions are continuous at the boundary between two different partitions. This continuity implies that the boundary corresponding to a node itself is continuous regardless of the partitions. Furthermore, we also obtain the boundaries of a single-class region, which has the same predicted class throughout its interior. We suggest that the robustness of neural networks (NNs) can be verified by investigating the feasible region of these constraints.
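    The observation that fixing a ReLU activation pattern yields a region described by linear constraints can be illustrated as follows; this is a minimal sketch under my own assumptions (a tiny example network), not the authors' construction. Feasibility of a region reduces to a linear program.

```python
# Sketch: for a fixed ReLU activation pattern, the set of inputs realizing
# that pattern is given by linear constraints; feasibility is an LP check.
import numpy as np
from scipy.optimize import linprog

# A tiny one-hidden-layer ReLU network (weights are arbitrary examples).
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -1.0])

def region_constraints(pattern):
    """Linear constraints A x <= c defining the inputs whose hidden-layer
    activation pattern (1 = active, 0 = inactive) equals `pattern`."""
    A, c = [], []
    for i, active in enumerate(pattern):
        if active:        # W1[i] x + b1[i] >= 0  <=>  -W1[i] x <= b1[i]
            A.append(-W1[i]); c.append(b1[i])
        else:             # W1[i] x + b1[i] <= 0
            A.append(W1[i]);  c.append(-b1[i])
    return np.array(A), np.array(c)

# Check whether the region with both units active is non-empty in [-5, 5]^2.
A, c = region_constraints([1, 1])
res = linprog(np.zeros(2), A_ub=A, b_ub=c, bounds=[(-5, 5)] * 2)
print("region feasible:", res.success)
```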


Robustness Verification of Classification Deep Neural Networks via Linear Programming
Wang Lin, Zhejiang Sci-Tech University
    Abstract: There is a pressing need to verify the robustness of classification deep neural networks (CDNNs), as they are embedded in many safety-critical applications. Existing robustness verification approaches rely on computing an over-approximation of the output set and can hardly scale up to practical CDNNs, as a result of the error accumulation that accompanies approximation. In this paper, we develop a novel method for robustness verification of CDNNs with sigmoid activation functions. It converts the robustness verification problem into the equivalent problem of inspecting the most suspected point in the input region, which constitutes a nonlinear optimization problem. To make this amenable, the nonlinear constraints are relaxed into linear inclusions, refining the problem into a linear program. We conduct comparison experiments on CDNNs trained to classify images from several state-of-the-art benchmarks, showing the advantages of our method in precision and scalability, which enable effective verification of practical CDNNs.
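    The kind of relaxation involved can be sketched as follows; this is my own illustration of enclosing a sigmoid between linear bounds over an interval, not the paper's exact construction. Such linear inclusions replace the nonlinear activation constraints so that a linear program can be solved.

```python
# Illustrative sketch: over [l, u], enclose sigmoid(x) between two parallel
# lines, turning y = sigmoid(x) into linear inclusions usable by an LP.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def linear_inclusion(l, u):
    """Return (k, b_lo, b_hi) with k*x + b_lo <= sigmoid(x) <= k*x + b_hi
    on [l, u]; k is the secant slope, offsets are found by dense sampling
    (sound only up to the sampling resolution)."""
    k = (sigmoid(u) - sigmoid(l)) / (u - l)
    xs = np.linspace(l, u, 1000)
    gap = sigmoid(xs) - k * xs
    return k, gap.min(), gap.max()

k, b_lo, b_hi = linear_inclusion(-1.0, 2.0)
print(f"sigmoid(x) lies in [{k:.3f}*x + {b_lo:.3f}, {k:.3f}*x + {b_hi:.3f}] on [-1, 2]")
```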


Safe Inputs Approximation for Black-Box Systems
Bai Xue, Chinese Academy of Sciences
    Abstract: Given a family of independent and identically distributed samples extracted from the input region and their corresponding outputs, in this paper we propose a method to under-approximate the set of safe inputs that lead a black-box system to respect a given safety specification. Our method falls within the framework of probably approximately correct (PAC) learning. The computed under-approximation comes with statistical soundness provided by the underlying PAC learning process. Such a set, which we call a PAC under-approximation, is obtained by computing a PAC model of the black-box system with respect to the given safety specification. In our method, the PAC model is computed using the scenario approach, which is encoded as a linear program. The linear program is constructed from the given family of input samples and their corresponding outputs. Its size does not depend on the dimension of the state space of the black-box system, which provides scalability. Moreover, the linear program does not depend on the internal mechanism of the black-box system, so the method is applicable to systems that existing methods cannot deal with. Case studies demonstrate these properties, as well as the general performance and usefulness of our approach.
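    A rough sketch of a sample-based (scenario-style) linear program is shown below; it is my reading of the general idea, not the paper's exact encoding. An affine upper bound on the black-box output is fitted from samples, and the set of inputs where that bound stays below a safety threshold would then serve as an under-approximation of the safe inputs, with the statistical guarantee depending on the number of samples. The black-box function and threshold are hypothetical.

```python
# Sketch: scenario-style LP fitting an affine upper bound g(x) = a.x + b on
# the sampled outputs of a black-box system (illustrative assumptions only).
import numpy as np
from scipy.optimize import linprog

def black_box(x):                       # stand-in for the unknown system
    return x[0] ** 2 + 0.5 * np.sin(3 * x[1])

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))   # i.i.d. samples from the input region
y = np.array([black_box(x) for x in X])

# Variables: a1, a2, b, t.  Minimize t subject to
#   a.x_i + b >= y_i        (g over-approximates the output on every sample)
#   a.x_i + b - y_i <= t    (t bounds the over-approximation margin)
n = len(X)
A_ub = np.block([[-X, -np.ones((n, 1)), np.zeros((n, 1))],
                 [ X,  np.ones((n, 1)), -np.ones((n, 1))]])
b_ub = np.concatenate([-y, y])
res = linprog(c=[0, 0, 0, 1], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 4)
a1, a2, b, t = res.x
print(f"PAC-style model: g(x) = {a1:.3f}*x1 + {a2:.3f}*x2 + {b:.3f}, margin t = {t:.3f}")
# Candidate safe inputs: { x : g(x) <= safety threshold }
```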


Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification
Pengfei Yang, Chinese Academy of Sciences
    Abstract: Deep neural networks (DNNs) have been shown to lack robustness: they are vulnerable to small perturbations of their inputs, which has led to safety concerns about applying DNNs in safety-critical domains. Several verification approaches have been developed to automatically prove or disprove safety properties of DNNs. However, these approaches suffer from either the scalability problem, i.e., only small DNNs can be handled, or the precision problem, i.e., the obtained bounds are loose. This paper improves on a recent proposal for analyzing DNNs through the classic abstract interpretation technique with a novel symbolic propagation technique. More specifically, the activation values of neurons are represented symbolically and propagated forward from the input layer to the output layer, on top of abstract domains. We show that our approach achieves significantly higher precision and thus can prove more properties than using abstract domains alone. Moreover, we show that the bounds our approach derives on the hidden neurons, when applied to a state-of-the-art SMT-based verification tool, can improve its performance. We implement our approach in a software tool and validate it on DNNs trained on benchmark datasets such as MNIST.
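    The precision gain from keeping symbolic expressions rather than bare intervals can be seen in a toy example; the following sketch is my own illustration, not the paper's algorithm. When two neurons depend on the same input, interval propagation loses their correlation, while symbolic propagation preserves it.

```python
# Toy illustration: symbolic propagation keeps activation values as linear
# expressions over the inputs, so correlations between neurons are not lost.
import numpy as np

# Input x in [0, 1]; hidden layer h = ReLU([x, x]); output y = h[0] - h[1].
l, u = 0.0, 1.0

# Naive interval propagation: h[0], h[1] in [0, 1] treated independently,
# so y is bounded by [0 - 1, 1 - 0] = [-1, 1].
naive = (l - u, u - l)

# Symbolic propagation: on [0, 1] both ReLUs are provably active, so
# h[0] = x and h[1] = x symbolically, and y = x - x = 0 exactly.
coeff_y = 1.0 - 1.0   # coefficient of x in the symbolic expression for y
symbolic = (min(coeff_y * l, coeff_y * u), max(coeff_y * l, coeff_y * u))

print("interval bound on y:", naive)      # (-1.0, 1.0)
print("symbolic bound on y:", symbolic)   # (0.0, 0.0)
```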