Can LLMs be Fooled? Investigating Vulnerabilities in LLMs

This paper examines vulnerabilities in Large Language Models, with a focus on medical summarization, and proposes mitigation strategies to strengthen their security and resilience against data leaks.