Department of Computer Science and Technology, Nanjing University
Collaborative Innovation Center of Novel Software Technology and Industrialization
Abstract:
The past decade has seen the great potential of applying deep neural network (DNN) based software to safety-critical scenarios, such as autonomous driving. Similar to traditional software, DNNs can exhibit incorrect behaviours caused by hidden defects, leading to severe accidents and losses. Hence, quality and security assurance is very important for deep learning systems, especially those applied in safety- and mission-critical scenarios.
In this talk, I will introduce some of my recent research on testing and security analysis of deep learning models. First, I will introduce DeepHunter (ISSTA'19), a coverage-guided fuzzing framework for testing feedforward neural networks. Second, I will introduce DeepStellar (FSE'19), a quantitative analysis framework for stateful neural networks (e.g., RNNs). Finally, I will introduce DiffChaser (IJCAI'19), a differential testing framework for detecting disagreements in the predictions made by different models, frameworks, or platforms.
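
As background for the last topic, the core idea of differential testing is to run the same input through two versions of a model and flag any input on which their predictions diverge. The following is a minimal sketch of that idea in Python; the function name, interface, and the assumption that each model is a callable returning class probabilities are illustrative only, not DiffChaser's actual algorithm or API.

import numpy as np

def find_disagreements(model_a, model_b, inputs):
    """Return inputs on which two model versions predict different labels.

    model_a / model_b: callables mapping a single input to a vector of
    class probabilities (e.g., a full-precision model and its quantized
    counterpart). Hypothetical interface for illustration.
    """
    disagreements = []
    for x in inputs:
        label_a = int(np.argmax(model_a(x)))
        label_b = int(np.argmax(model_b(x)))
        # A disagreement: the two versions assign different top-1 labels.
        if label_a != label_b:
            disagreements.append((x, label_a, label_b))
    return disagreements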
Speaker Bio:
Xiaofei Xie is a presidential postdoctoral fellow at Nanyang Technological University, Singapore. He received his Ph.D., M.E., and B.E. degrees from Tianjin University. His research mainly focuses on program analysis, loop analysis, traditional software testing, and the security analysis of artificial intelligence. He has published papers on software analysis at top-tier conferences and in journals, including ISSTA, FSE, TSE, IJCAI, and CCS. In particular, he won two ACM SIGSOFT Distinguished Paper Awards (FSE'16 and ASE'19).
Time: December 19 (Thursday), 19:00
Venue: Room 233, Computer Science and Technology Building