Machine Unlearning: Addressing Bias, Privacy, and Regulation in LLMs and Multimodal Models

Speaker:
Marija Stanojevic, Lead Applied Machine Learning Scientist, EudAImonia Science

Abstract:

This talk discusses machine unlearning for large language models (LLMs) and multimodal models (MMs) that handle sensitive data. As these AI models gain traction, ensuring adaptable and ethical practices is paramount, especially in domains that handle healthcare, financial, and personal information. Here, we explore the intricacies of machine unlearning dynamics and their impact on bias mitigation, data privacy, legal compliance, and model robustness.

The talk sheds light on recent advancements and seminal research in machine unlearning. Given the growing prevalence of AI regulation and concerns that test data leaks into massive training corpora, machine unlearning emerges as an essential component of unbiased, compliant, and well-evaluated AI systems. We discuss techniques for identifying unwanted data within models and removing it while preserving model performance. We also explore methods for evaluating the success of machine unlearning, verifying that the model forgets the targeted data without compromising its behavior and performance on other data.
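As an illustration of the kind of technique the talk surveys, below is a minimal sketch, assuming a toy PyTorch next-token model: gradient ascent on a forget set combined with gradient descent on a retain set, followed by the forget-vs-retain loss comparison commonly used to judge whether unlearning succeeded. The model, data tensors, and hyperparameters here are hypothetical stand-ins, not the speaker's method.

import torch
import torch.nn as nn

# Hypothetical toy setup: a tiny embedding + linear "language model".
torch.manual_seed(0)
vocab, dim = 100, 32
model = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

# Hypothetical stand-ins for data flagged for removal vs. data to keep.
forget_x, forget_y = torch.randint(0, vocab, (64,)), torch.randint(0, vocab, (64,))
retain_x, retain_y = torch.randint(0, vocab, (64,)), torch.randint(0, vocab, (64,))

def eval_loss(x, y):
    # Cross-entropy on a batch, without tracking gradients.
    with torch.no_grad():
        return loss_fn(model(x), y).item()

before = (eval_loss(forget_x, forget_y), eval_loss(retain_x, retain_y))

for _ in range(10):
    opt.zero_grad()
    # Ascend on the forget set (negated loss) while descending on the
    # retain set, so forgetting does not degrade retained performance.
    loss = -loss_fn(model(forget_x), forget_y) + loss_fn(model(retain_x), retain_y)
    loss.backward()
    opt.step()

after = (eval_loss(forget_x, forget_y), eval_loss(retain_x, retain_y))

# Evaluation: forget-set loss should rise; retain-set loss should not.
print(f"forget loss: {before[0]:.3f} -> {after[0]:.3f}")
print(f"retain loss: {before[1]:.3f} -> {after[1]:.3f}")

The success criterion mirrors the abstract's framing: the forget-set loss should increase (the targeted data is no longer modeled) while the retain-set loss stays roughly flat (overall performance is preserved).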

Machine unlearning gives stakeholders, including customers and data owners, the ability to withdraw their data, and it fosters trust in the responsible development and deployment of LLMs and MMs.