Deploy a large language model locally | Private LLMs with Langchain and the HuggingFace API


🔴 Privacy has become a major concern when working with models like ChatGPT.
Sending personal data to a third party holds back many users who would rather protect their information than take full advantage of language models.

🟢 What if we could protect our data and still get all the value of AI?

🤖 In my last video we deployed a local LLM (large language model), ensuring that our information never leaves our computer.
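The local setup can be sketched in a few lines with the Transformers `pipeline` helper. The model name here (gpt2) is an assumption, standing in for whichever model the video uses; the point is that the weights are downloaded once and generation then runs entirely on your own machine:

```python
from transformers import pipeline

# Minimal sketch of the local approach. "gpt2" is an assumption -- swap in
# any causal LM from the HuggingFace Hub. After the one-time download,
# everything runs on this machine and no data leaves it.
generator = pipeline("text-generation", model="gpt2")

result = generator("Privacy matters because", max_new_tokens=20)
print(result[0]["generated_text"])
```

Larger models give better answers but need more RAM/VRAM, which is exactly the hardware limit the rest of the video works around.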

Are we limited by hardware?
Not necessarily!
We explore a second option with HuggingFace and its hosted Inference API service.
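With the hosted route, the prompt is sent to HuggingFace's servers instead of running locally, so the hardware requirements move off your machine. A minimal sketch using the `huggingface_hub` client; the model id and the `HF_TOKEN` environment variable are assumptions:

```python
import os

from huggingface_hub import InferenceClient

# Sketch of the hosted alternative. You need a (free) HuggingFace access
# token; here it is assumed to live in the HF_TOKEN environment variable.
token = os.environ.get("HF_TOKEN")
client = InferenceClient(token=token)

if token:
    # The prompt is sent to HuggingFace's servers -- note the privacy
    # trade-off compared with the fully local setup.
    answer = client.text_generation(
        "What is a large language model?",
        model="mistralai/Mistral-7B-Instruct-v0.2",
        max_new_tokens=50,
    )
    print(answer)
else:
    print("Set HF_TOKEN to query the hosted Inference API")
```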

📚 Chapters:
00:00 Introduction to the problem
00:40 Run LLM locally with the Transformers library (HuggingFace)
02:56 Run LLM locally with Langchain
04:43 HuggingFace API
07:13 HuggingFace API with Langchain
08:43 Next steps

☕ To chat or have a coffee:

Thanks for the video. I have a doubt regarding this: how can I make an LLM-based tool and deploy it so that others can also install it and use it offline?

kunalpatil