ChatGPT Data Breach Breakdown

OpenAI has confirmed a data breach involving a vulnerability in an open-source dependency, Redis. The flaw allowed threat actors to see chat history from other active users. But this leads to the bigger question: how can we secure ChatGPT?
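
For context, the reported root cause was a request-mix-up in the Redis client library used for caching. The sketch below is a simplified, hypothetical Python illustration of that bug class, not the actual library code: when a request on a shared, pipelined connection is cancelled after its command is written but before its reply is read, the unread reply stays on the connection and the next caller receives another user's data. All class and function names here (SharedConnection, get_history, etc.) are made up for illustration.

import asyncio


class SharedConnection:
    """Toy stand-in for a pipelined cache connection shared by concurrent requests."""

    def __init__(self):
        self._replies = asyncio.Queue()  # replies arrive in the order commands were sent
        self._store = {"alice": "alice's chat history", "bob": "bob's chat history"}

    async def send_command(self, key):
        # Pretend the command goes out and the server's reply is queued on the wire.
        self._replies.put_nowait(self._store[key])

    async def read_reply(self):
        # Pretend we wait for, then read, the *next* reply off the connection (FIFO).
        await asyncio.sleep(0.01)
        return self._replies.get_nowait()


async def get_history(conn, user):
    await conn.send_command(user)
    return await conn.read_reply()


async def main():
    conn = SharedConnection()

    # Alice's request is cancelled after her command is sent but before her
    # reply is read, so her reply stays queued on the shared connection.
    alice = asyncio.create_task(get_history(conn, "alice"))
    await asyncio.sleep(0.005)  # let Alice's command go out
    alice.cancel()

    # Bob's request now reads the first queued reply: Alice's data, not his own.
    print(await get_history(conn, "bob"))  # prints: alice's chat history


asyncio.run(main())

In the toy example the fix is the same idea as in the real-world patch: a connection left in an unknown state after a cancellation should be discarded rather than reused by the next request.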

In this video I explain my position, using some interesting data: ChatGPT should be part of every organization's threat landscape, and banning ChatGPT won't help the situation.

Please like and subscribe to the GitGuardian channel :)
Comments
Author

ChatGPT ftw

Came to throw shade, stayed to enjoy the video
🤣

AntonNosovitsky
Author

What about GDPR? It's kind of a grey area to me how you pull data back out once it has already been used to train a model, even in a locally-run LLM service, unless no user data is involved. The non-deterministic nature is non-testable without accountability, and only observable without pulling the data out. That kind of goes against the grain of engineering. I lean towards safety first over my own heavy fanboy cognitive bias.

ryanjennings