Dr. Xintao Wu is a professor and the Charles D. Morgan/Acxiom Endowed Graduate Research Chair in Database, and leads the Social Awareness and Intelligent Learning (SAIL) Lab in the Computer Science and Computer Engineering Department at the University of Arkansas. He was a faculty member in the College of Computing and Informatics at the University of North Carolina at Charlotte from 2001 to 2014. He received his BS degree in Information Science from the University of Science and Technology of China in 1994, his ME degree in Computer Engineering from the Chinese Academy of Space Technology in 1997, and his Ph.D. in Information Technology from George Mason University in 2001. Dr. Wu's major research interests include data mining, privacy and security, fairness-aware learning, and big data analysis. He has published over 150 scholarly papers and has served on the editorial boards of several international journals and on the program committees of many top international conferences in data mining and AI. He is also a recipient of the NSF CAREER Award (2006) and several paper awards, including the PAKDD'13 Best Application Paper Award, the BIBM'13 Best Paper Award, the CNS'19 Best Paper Award, and the PAKDD'19 Most Influential Paper Award.
Speech title: Robust Machine Learning under Distribution Shift and Adversarial Attack
As big data and AI technologies are deployed to make critical decisions that affect individuals (e.g., employment, college admissions, credit, and health insurance), the public has raised increasing concerns about privacy, fairness, safety, and robustness in data collection, sharing, analytics, and decision making. In this talk, we first give an overview of our social-awareness research, in particular how to mitigate the side effects of enforcing one social concern on another, and how to address multiple social concerns simultaneously. We then focus on the robustness of machine learning under two representative scenarios: distribution shift and adversarial attack. In the former scenario, we present robust learning based on kernel reweighting and Heckman/Green bias correction models. In the latter scenario, we present an adaptive defense that purposely leverages multiple types of adversarial examples to learn contextual information during training. We conclude the talk with some future research directions in trustworthy AI.
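To give a flavor of the kernel-reweighting idea mentioned above, the sketch below corrects for covariate shift by estimating the density ratio between test and training inputs with kernel density estimates, then fitting a weighted regression. This is a minimal illustrative example, not the method presented in the talk; the data, model, and weighting scheme here are assumptions chosen for simplicity.

```python
# Minimal sketch of covariate-shift correction via kernel density-ratio
# reweighting. The data and linear model are hypothetical; only the
# reweighting idea connects to the talk's topic.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Training inputs drawn from N(0, 1); test inputs shifted to N(2, 1).
# The conditional P(y|x) is the same in both domains: y = 3x + noise.
x_train = rng.normal(0.0, 1.0, 500)
x_test = rng.normal(2.0, 1.0, 500)
y_train = 3.0 * x_train + rng.normal(0.0, 0.5, 500)

# Estimate the importance weights w(x) = p_test(x) / p_train(x) with KDEs.
p_train = gaussian_kde(x_train)
p_test = gaussian_kde(x_test)
w = p_test(x_train) / np.clip(p_train(x_train), 1e-12, None)

# Weighted least squares: minimize sum_i w_i * (y_i - a*x_i - b)^2,
# so training points that look like test points count more.
X = np.column_stack([x_train, np.ones_like(x_train)])
W = np.diag(w)
a, b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train)
print(f"slope={a:.2f}, intercept={b:.2f}")  # slope should be near 3
```

Because the reweighted objective emphasizes training points in the region where test inputs concentrate, the fitted model tracks the test distribution even though it never sees test labels.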