What is adversarial machine learning?

In this episode we discuss the basics of adversarial machine learning, or the ability to 'hack' machine learning models.
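To make "hacking a model" concrete, here is a minimal sketch of one classic adversarial technique, the fast gradient sign method (FGSM), applied to a toy logistic-regression model. The weights, input, and epsilon below are invented for illustration; they are not from the episode.

```python
# FGSM sketch on a toy logistic-regression "model" using only NumPy.
# All values (w, b, x, eps) are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "trained" model: weights w and bias b.
w = np.array([2.0, -3.0, 1.0])
b = 0.1

x = np.array([0.5, -0.2, 0.3])  # a clean input
y = 1.0                          # its true label

# Gradient of the logistic loss with respect to the INPUT (not the
# weights): dL/dx = (sigmoid(w.x + b) - y) * w
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: take a small step in the sign of the gradient, i.e. the
# direction that increases the loss the fastest per coordinate.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv + b)
print(p, p_adv)  # the model's confidence in class 1 drops: ~0.88 -> ~0.63
```

The point of the sketch is that the perturbation is tiny and structured: each coordinate moves by at most `eps`, yet the model's confidence shifts substantially, which is exactly the kind of vulnerability the episode calls 'hacking' a model.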
Comments

It's always problematic to release a model for inference without a continuous fine-tuning plan, especially for open-source models. It makes all deployed instances identical, and since the weights are available online to anyone, it's only a matter of time until a vulnerability is found and exploited. So constant fine-tuning is absolutely necessary, even if it doesn't improve the model's performance significantly.

abdulrahmanelawady

Great episode once again! A couple of questions.
Would the noise an adversary adds differ much between an untargeted attack (pushing the output to anything false) and a targeted attack (pushing it to a specific result the adversary chooses)? And are there ways for the owner to tell that manipulation has taken place?
Also, when building out a new AI design, apart from implementing the correct security controls, are there ways to track issues like hallucinations or data poisoning, or is that not possible until the audit phase?

wreckreational

Nothing good can come of this. Nothing at all. /s Have a like and follow.

ijustawannaprivicie