While machine learning-based techniques have been widely applied in security domains, explaining the rationale behind their decision making remains a largely open problem. Recent techniques for interpreting the decisions of neural networks either provide a local explanation for each input instance or approximate the original model from a set of input-output pairs. The quality of the explanations these techniques produce is therefore limited by the scope of the inputs used to generate them. The nature of security research, however, requires us to understand the intrinsic characteristics of a neural network model rather than just parts of its behavior.
In this talk, I will first introduce REINAM as an example of applying machine learning to security research. REINAM is a reinforcement-learning approach that synthesizes probabilistic context-free program input grammars without any seed inputs. Then I will introduce DENAS, a novel input-independent neural-network explanation approach dedicated to security applications. DENAS efficiently generates decision rules that interpret the decision making of a neural network without requiring any concrete input. Finally, I will briefly introduce iRuler, an IoT analysis framework that leverages Satisfiability Modulo Theories (SMT) solving and model checking to discover inter-rule vulnerabilities.
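To make the grammar-synthesis goal concrete, the sketch below shows what a probabilistic context-free grammar of the kind REINAM produces looks like, and how sampling from it yields fuzzing inputs. The grammar itself (a toy arithmetic-expression fragment) and all symbol names are invented for illustration; they are not REINAM's output for any real target program.

```python
import random

# A toy probabilistic context-free grammar: each nonterminal maps to a
# list of (right-hand side, probability) alternatives. This grammar is a
# made-up illustration, not one synthesized by REINAM.
GRAMMAR = {
    "expr": [(["term", "+", "expr"], 0.3), (["term"], 0.7)],
    "term": [(["digit"], 0.8), (["(", "expr", ")"], 0.2)],
    "digit": [([d], 0.1) for d in "0123456789"],
}

def sample(symbol, rng, depth=0, max_depth=20):
    """Expand `symbol` by sampling alternatives with their probabilities."""
    if symbol not in GRAMMAR:  # terminal symbol: emit as-is
        return symbol
    if depth >= max_depth:     # force the shortest alternative to terminate
        rhs = min(GRAMMAR[symbol], key=lambda alt: len(alt[0]))[0]
    else:
        rhss, weights = zip(*GRAMMAR[symbol])
        rhs = rng.choices(rhss, weights=weights)[0]
    return "".join(sample(s, rng, depth + 1, max_depth) for s in rhs)

rng = random.Random(0)
inputs = [sample("expr", rng) for _ in range(5)]
print(inputs)  # candidate fuzzing inputs drawn from the grammar
```

Because each alternative carries a probability, a fuzzer sampling from such a grammar can be biased toward input shapes that the learned grammar considers likely, rather than drawing uniformly over all derivations.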
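The inter-rule vulnerabilities iRuler targets arise when individually benign trigger-action rules chain into a security-sensitive effect. The sketch below illustrates the phenomenon with a naive event-reachability check over hypothetical rules; the rule names and events are invented, and iRuler itself encodes rule semantics as SMT constraints and applies model checking rather than this simple graph search.

```python
# Hypothetical trigger-action rules: (trigger event, resulting event).
# All names below are illustrative, not drawn from any real IoT platform.
RULES = [
    ("motion_detected", "lights_on"),
    ("lights_on", "power_spike"),
    ("power_spike", "breaker_off"),
    ("breaker_off", "alarm_disarmed"),  # security-sensitive action
]

def reachable(start, rules):
    """All events transitively triggered from `start` (worklist search)."""
    seen, frontier = set(), [start]
    while frontier:
        event = frontier.pop()
        for trigger, action in rules:
            if trigger == event and action not in seen:
                seen.add(action)
                frontier.append(action)
    return seen

# An inter-rule vulnerability: a benign event chains to a sensitive one.
chain = reachable("motion_detected", RULES)
print("alarm_disarmed" in chain)  # → True
```

The SMT-based formulation generalizes this idea: instead of a fixed event graph, rule conditions over device states become constraints, and the solver searches for any state sequence in which a chain of rule firings reaches an unsafe state.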