Deploy your Local LLM ChatGPT-like Chatbot using HuggingFace Text Generation Inference and Chat-UI

In this video, I show the steps to deploy and run a large language model (LLM) chatbot locally. The implementation uses the SantaCoder 1B model, which runs as a chatbot that helps you write Python programs. If you follow and execute the steps as shown, by the end of the video you will have a chatbot similar to ChatGPT.
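For reference, here is a minimal Python sketch (not from the video) that sends a test prompt to a locally running Text Generation Inference server before Chat-UI is connected to it. It assumes the server was launched with the SantaCoder model and is reachable on http://127.0.0.1:8080, the host port used in TGI's own examples; the /generate endpoint and payload shape follow the standard TGI REST API.

import requests

# Assumes a local Text Generation Inference server started with
# --model-id bigcode/santacoder and published on host port 8080.
TGI_URL = "http://127.0.0.1:8080/generate"

payload = {
    "inputs": "def fibonacci(n):",
    "parameters": {"max_new_tokens": 60},
}

response = requests.post(TGI_URL, json=payload, timeout=60)
response.raise_for_status()

# TGI returns the completion under the "generated_text" key.
print(response.json()["generated_text"])

Chat-UI is pointed at the same endpoint in its configuration, so if this request returns a sensible Python completion, the chatbot front end should work as well.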

You can also deploy your own models: when defining the model name, simply provide the path to the model in your local directory, and that is enough to build a chatbot on top of your own model.
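As an illustrative sketch of that idea (the directory path and the docker flags in the trailing comment are assumptions based on TGI's documented usage, not steps from the video), you can stage the model files in a local folder and then pass that folder's mounted path in place of a Hub model id:

from huggingface_hub import snapshot_download

# Download (or copy) the model you want to serve into a local directory.
# Any directory holding the usual config.json / tokenizer / weight files works;
# "./data/santacoder" is just an illustrative choice.
local_dir = snapshot_download(
    repo_id="bigcode/santacoder",
    local_dir="./data/santacoder",
)
print(f"Model files are in: {local_dir}")

# When starting Text Generation Inference, mount ./data into the container and
# pass the mounted path instead of a Hub model id, for example:
#   docker run --gpus all --shm-size 1g -p 8080:80 -v $PWD/data:/data \
#     ghcr.io/huggingface/text-generation-inference:latest \
#     --model-id /data/santacoder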

For any discussions, you can connect with me via the following social links:

Feel free to join the Telegram group for discussions using the following link:

The file containing the steps of execution will be available in the following repository:
Comments

Well done -- thank you for sharing your work -- looking forward to trying this out :)

AI_by_AI_

Great video!! It would be very helpful for us if you could share the various resources you studied before making these videos.

sathyakrishnanthirunavukka

At 25:46, after running the command, I get an error that package.json was not found. How should I resolve this issue?

mitalij

Can we use this local GPT as a chatbot on any website too?

manishtanwar