Data Privacy for LLMs

Many teams are thinking about how to use LLMs while upholding their standards for data privacy. We talked with DeepsenseAI and Opaque about how to do it.

Comments

Thanks, guys, very useful! That's what we'll need to implement in our LLM application.

pavellegkodymov

This is a disaster waiting to happen for anything serious.
How confident are you in your ability to mask sensitive data? What if someone writes, idk, their name l-i-k-e t-h-i-s? What if the sensitive information doesn't conform to any pattern, consists of everyday words, or is context-dependent?
Want to keep your data safe? Host your own models. Treat embeddings as confidential. Treat logs as confidential or don't log at all. Don't train models on confidential data if you intend to expose the model to the users, unless you're prepared to hand-review *all* training data.

michaflak
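
The failure mode michaflak describes is easy to demonstrate. Below is a minimal sketch of a naive regex-based masker (the name and email patterns are hypothetical, not from any real tool): it catches the straightforward form of a name but misses a trivially obfuscated spelling, which is exactly the concern raised above.

```python
import re

# Hypothetical naive masker: redacts anything matching an email pattern
# or a known name. Illustrates the comment's point, not a real tool.
NAME = re.compile(r"\bJohn Smith\b")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def mask(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    return NAME.sub("[NAME]", text)

# The patterns catch the straightforward form...
print(mask("Contact John Smith at john@example.com"))
# → Contact [NAME] at [EMAIL]

# ...but an obfuscated spelling slips through untouched:
print(mask("My name is J-o-h-n S-m-i-t-h"))
# → My name is J-o-h-n S-m-i-t-h
```

This is why pattern-based redaction alone is a weak privacy boundary: it only handles forms you anticipated, while self-hosting and treating embeddings and logs as confidential protect the data regardless of how it is written.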