NICS 2018
November 23 – 24, 2018, Ho Chi Minh City, Vietnam

Keynote Speakers

Ashwin Ittoo

University of Liège in Belgium


Dr. Ashwin Ittoo is a professor of Information Systems (Analytics/NLP/Machine Learning) at the University of Liège in Belgium. His main research area is NLP, specifically minimally supervised and unsupervised algorithms for semantic relation extraction. His research teams develop various machine learning and sophisticated econometric methods, including deep learning and Lasso and Ridge regressions, which are applied to diverse domains such as finance and marketing.

Among his other activities, Prof. Ittoo is an Associate Editor (NLP, Machine Learning) of the Elsevier journal Computers in Industry, and has served as guest editor for several special issues of the Elsevier journal Data and Knowledge Engineering. In addition, he has served, and continues to serve, as Programme Committee Chair and Organization Chair of numerous conferences.

He obtained his PhD in 2012 from the University of Groningen, The Netherlands, and his master's and bachelor's degrees from Nanyang Technological University, Singapore, and the National University of Singapore.

Web site:


AI & Law

The topic of my talk is one that has only recently started to attract the interest of scientists and of regulatory and governmental authorities, namely Artificial Intelligence (AI) and Law. In particular, I will focus on two sub-domains of the law that have been, and will continue to be, most affected by the emergence of increasingly sophisticated AI technologies.

The first sub-domain is that of competition law. I will describe a recent project in which we are investigating whether pricing agents based on deep reinforcement learning can engage in tacit collusion, i.e. whether they can form cartels just as humans do. I will present various game-theoretic settings that enable us to study the phenomenon of algorithmic tacit collusion.
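To make the setting concrete, the following is a minimal sketch of the kind of game-theoretic environment described above, not the speaker's actual system: two tabular Q-learning pricing agents (standing in for deep RL) repeatedly set prices in a simplified Bertrand-style duopoly. The price grid, demand model, and learning parameters are all illustrative assumptions.

```python
import random

PRICES = [1.0, 1.5, 2.0]   # discrete price grid (illustrative assumption)
COST = 1.0                 # marginal cost (illustrative assumption)

def profits(p1, p2):
    """Winner-take-all demand: the lower price captures the market."""
    if p1 < p2:
        return (p1 - COST), 0.0
    if p2 < p1:
        return 0.0, (p2 - COST)
    return (p1 - COST) * 0.5, (p2 - COST) * 0.5

def train(episodes=20000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    # State for each agent = the rival's previous price index;
    # one Q-table per agent, q[agent][state][action].
    q = [[[0.0] * len(PRICES) for _ in PRICES] for _ in range(2)]
    prev = [0, 0]
    for _ in range(episodes):
        acts = []
        for i in range(2):
            s = prev[1 - i]
            if rng.random() < eps:               # epsilon-greedy exploration
                acts.append(rng.randrange(len(PRICES)))
            else:
                row = q[i][s]
                acts.append(row.index(max(row)))
        r = profits(PRICES[acts[0]], PRICES[acts[1]])
        for i in range(2):
            s = prev[1 - i]
            q[i][s][acts[i]] += alpha * (r[i] - q[i][s][acts[i]])
        prev = acts
    return prev  # price indices played in the final round

final = train()
print([PRICES[a] for a in final])
```

Whether such agents settle on supra-competitive prices is exactly the empirical question the project studies; this toy uses myopic (one-step) updates, whereas collusion studies typically require discounted future rewards.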

The second sub-domain is that of anti-discrimination law. In certain US states, algorithms are being deployed to predict the recidivism risk of defendants. These algorithms are trained to make predictions from past data. However, studies by human-rights groups have shown that these data are inherently biased: they over-represent certain races and profiles. Consequently, algorithms trained on these data also suffer from this bias. The question here is how to remove such bias during training.

The 5th IEEE - NAFOSTED Conference on Information and Computer Science (NICS) 2018