Dr. Takeshi Takahashi
The Cybersecurity Laboratory, Cybersecurity Research Institute, National Institute of Information and Communications Technology (NICT), Japan.
Personal page: http://www.taketaka.com/
Speech Title: Toward automated cybersecurity: Visualization and machine learning techniques empower cybersecurity
Abstract: Recently, many cyber incidents have been reported, and the importance of cybersecurity continues to grow for maintaining and developing cyber society. However, we face a shortage of human resources, especially personnel with sufficient skills. To cope with this situation, cybersecurity operations need to be automated or semi-automated.
In this talk, several automation efforts will be presented. The first half of my talk covers automation efforts based on heuristics and operators’ domain knowledge, which aim at facilitating the work of operators. Three of these will be introduced during the talk: our darknet analysis system, which assists in the analysis of global malware activities; our darknet-based alerting system, which provides security alerts to organizations that host infected computers; and our livenet monitoring system, which integrates various security appliances and provides aggregated security alerts to operators along with a method to implement immediate countermeasures. The second half of my talk covers our automation efforts based on machine learning techniques, which aim at facilitating the work of analysts. Three of these will be introduced during the talk: a livenet traffic analysis technique that minimizes the volume of security alerts requiring investigation, a malware analysis technique that automatically analyzes the functions of malware samples, and a traffic analysis technique that detects malware activities as early as possible to minimize the number of infected devices. With these efforts, we contribute to the development of automated cybersecurity techniques.
Biography: Dr. Takeshi Takahashi received the Ph.D. degree in telecommunications from Waseda University in 2005. He was with the Tampere University of Technology as a Researcher from 2002 to 2004 and with Roland Berger Ltd. as a Business Consultant from 2005 to 2009. Since 2009, he has been with the National Institute of Information and Communications Technology, where he is currently a Research Manager. His research interests include cybersecurity and machine learning techniques. He is a member of the Association for Computing Machinery and the Institute of Electronics, Information and Communication Engineers. He serves as the IETF MILE Working Group chair and ITU-T Q.6/17 Associate Rapporteur.
Dr. Truyen Tran
Personal page: https://truyentran.github.io/
Speech Title: Machines that learn to talk about what they see
Abstract: Modern AI has achieved many groundbreaking successes. However, there are problems that are relatively easy for humans but have proved very challenging for machines. One such problem is building a machine that learns to reason about a dynamic scene and talk about it. As it turns out, deep learning, the main workhorse of modern AI, while effective at learning a one-step mapping from an input to an output in a fixed form, has been very limited when facing new forms and multi-step inference, for example, answering unfamiliar compositional questions.
In this talk, I present our research program on learning to reason visually. We aim to design and improve a mental faculty that produces new knowledge from previously acquired visual knowledge in response to a natural question. This task therefore sits at the intersection of four separate AI subfields: computer vision, natural language processing, machine learning, and machine reasoning. A powerful demonstration of such a capability is answering unseen natural questions about an image or a video, especially in a multi-turn dialog. In this setting, the task of visual question answering boils down to learning to acquire and manipulate visual knowledge distributed through space-time and conditioned on compositional linguistic cues. Our approach is based on bridging the gap between symbol grounding and symbolic reasoning through neural networks.
I also discuss several ongoing projects under this research program. One project is to design a dual process of reasoning, which consists of a reactive visual representation module interacting with a deliberative reasoning module. The second project aims to construct a dynamic relational working memory that temporarily links and refines visual concepts related to the question as reasoning proceeds. The third project is to invent a Universal Neural Turing Machine capable of constructing neural programs on the fly in response to a query.
Biography: Dr. Truyen Tran is an Associate Professor at Deakin University, where he leads a research team on deep learning and its applications to computer vision, computational science, biomedicine, and software analytics. Tran has received multiple recognitions, awards, and prizes, including Best Paper Runner-Up at UAI (2009), the Geelong Tech Award (2013), CRESP Best Paper of the Year (2014), Third Prize in the Kaggle Galaxy-Zoo Challenge (2014), the title of Kaggle Master (2014), Best Student Paper Runner-Up at PAKDD (2015) and ADMA (2016), and a Distinguished Paper award at ACM SIGSOFT (2015). He obtained a Bachelor of Science from the University of Melbourne and a PhD in Computer Science from Curtin University in 2001 and 2008, respectively.