Can LLMs be Fooled? Investigating Vulnerabilities in LLMs
This paper examines vulnerabilities in Large Language Models (LLMs), focusing on the medical summarization task, and proposes mitigation strategies to strengthen their security and resilience against data leaks.
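To make the idea of a data-leak mitigation concrete, here is a minimal, hypothetical sketch of one widely used safeguard: redacting personally identifiable information (PII) from model output before it reaches the user. The patterns and function below are illustrative assumptions, not the paper's actual method.

```python
import re

# Hypothetical illustration of a common data-leak mitigation:
# scrub PII (emails, phone numbers) from LLM output before display.
# This is a generic sketch, NOT the technique proposed in the paper.

PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),           # email addresses
    re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),  # US-style phone numbers
]

def redact(text: str, placeholder: str = "[REDACTED]") -> str:
    """Replace every PII pattern match with a placeholder token."""
    for pattern in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

A filter like this would sit between the summarization model and the user, so that a summary which accidentally reproduces a patient's contact details never leaves the system intact.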