Dissertation Defense

Responsible Machine Learning: Fairness and Information Protection

Tongxin Yin
WHERE: 3316 EECS Building

PASSCODE: JJ0327


The rapid growth of machine learning has brought about transformative impacts across diverse societal domains. However, its success introduces complexities and the potential for unintended consequences. This dissertation explores certain ethical aspects of machine learning, specifically focusing on fairness and information protection.

Addressing fairness concerns in machine learning is essential for equity, social justice, and trustworthiness in machine learning technologies. This dissertation emphasizes: 1) further improving the fairness-accuracy trade-off, and 2) highlighting the risks of biases and unfair outcomes that can permeate decision-making processes, by examining the issue of long-term fairness. To improve the fairness-accuracy trade-off, this dissertation studies the gains achievable when the system is allowed to abstain from making a decision. In examining long-term fairness, it recognizes the strategic adaptations individuals may make in response to fair decisions, which can undermine initially equitable outcomes over time. For information protection, the focus is on the concept of differential privacy and its applications in federated learning systems, proposing ways to further improve the privacy-accuracy trade-off.


CHAIR: Professor Mingyan Liu

CO-CHAIR: Armin Sarabi