Socially Responsible Machine Learning: On the Preservation of Individual Privacy and Fairness
This event is free and open to the public.
Machine learning (ML) techniques have seen significant advances and adoption over the last decade. While their social benefits are enormous, they can also inflict harm if not used with care. This thesis focuses on two critical issues in ML systems: fairness and privacy.
On the privacy front, our goal is to preserve individual privacy while maintaining model accuracy. We illustrate two ideas that can improve an algorithm's privacy-accuracy tradeoff: (1) reuse intermediate computations to reduce information leakage; (2) improve algorithmic robustness to accommodate more randomness. We introduce several randomized, privacy-preserving algorithms that leverage these ideas in various contexts; these algorithms significantly improve the privacy-accuracy tradeoff over existing solutions.
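To make the privacy-accuracy tradeoff concrete, here is a minimal sketch of a standard randomized release mechanism (the Laplace mechanism from differential privacy); it is a generic illustration of how calibrated noise trades accuracy for privacy, not the specific algorithms introduced in the thesis. The function name and parameters are illustrative.

```python
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with Laplace noise of scale sensitivity/epsilon.

    Smaller epsilon means a stronger privacy guarantee but a noisier
    answer: the expected absolute error is sensitivity / epsilon.
    """
    scale = sensitivity / epsilon
    # A Laplace(0, scale) sample is the difference of two exponentials.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_value + noise

# Example: a mean over n records in [0, 1] has sensitivity 1/n.
data = [0.2, 0.8, 0.5, 0.9, 0.4]
true_mean = sum(data) / len(data)
private_mean = laplace_mechanism(true_mean, sensitivity=1.0 / len(data), epsilon=1.0)
```

The two ideas above attack this tradeoff from opposite directions: reusing intermediate computations means fewer noisy releases are needed, while a more robust algorithm tolerates a larger noise scale (smaller epsilon) at the same accuracy.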
On the fairness front, our goal is to go beyond the static, one-shot setting typically studied in the literature when imposing fairness criteria to remedy biases in ML systems. Instead, we evaluate the long-term impact of (fair) ML decisions when the ML system and the individuals subjected to its decisions form a feedback loop. We illustrate how ML decisions and individual behavior co-evolve, and how imposing common fairness criteria may nevertheless lead to pernicious long-term effects. Aided by this understanding, we also propose mitigating solutions.
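The feedback loop can be sketched with a deliberately simplified toy model (all dynamics and parameter values below are illustrative assumptions, not the thesis's model): a group's qualification rate shifts each round depending on how accepted individuals fare, so a static fairness constraint such as equal acceptance rates can still let an initial gap between groups grow over time.

```python
def step(qual_rate: float, accept_rate: float,
         repay_boost: float = 0.02, default_drop: float = 0.05) -> float:
    """One round of the decision-population feedback loop (toy model).

    Accepted individuals who succeed raise the group's qualification
    rate; those who fail lower it. Parameters are illustrative.
    """
    success = accept_rate * qual_rate          # accepted and succeed
    failure = accept_rate * (1.0 - qual_rate)  # accepted and fail
    new_rate = qual_rate + repay_boost * success - default_drop * failure
    return min(max(new_rate, 0.0), 1.0)        # keep rate in [0, 1]

# Two groups with different starting qualification rates, given the
# same acceptance rate (a demographic-parity-style constraint).
g_a, g_b = 0.7, 0.5
for _ in range(50):
    g_a = step(g_a, accept_rate=0.8)
    g_b = step(g_b, accept_rate=0.8)
```

In this toy dynamic the initially disadvantaged group falls further behind even though both groups are treated identically, illustrating why one-shot fairness criteria need to be evaluated against the long-term feedback they induce.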
Chair: Mingyan Liu