Talk and Receive Answers from LLMs (Llama) Locally in Real-Time - Open-LLM-VTuber - INSTALL LOCALLY

#llama3.1 #ollama #llm #machinelearning #python
It takes a significant amount of time and energy to create these free video tutorials. You can support my efforts in this way:
- You can also press the Thanks YouTube Dollar button

In this tutorial, we explain how to install and use Open-LLM-VTuber locally. Open-LLM-VTuber is an impressive open-source LLM interface that lets you talk with an LLM and receive answers in real time. In this particular tutorial, we explain how to talk with Ollama and Llama 3.1 in real time and how to animate the answers.
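Under the hood, the real-time chat works by sending prompts to the Ollama server running locally. As a rough sketch (assuming Ollama is running on its default port 11434 and the `llama3.1` model has already been pulled), a single non-streaming request can be sent like this:

```python
import json
import urllib.request

# Ollama's default local REST endpoint for text generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3.1") -> dict:
    """Build the JSON payload for one non-streaming generation request."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str) -> str:
    """Send the prompt to the local Ollama server and return the reply text."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # Ollama returns a JSON object whose "response" field holds the text
        return json.loads(resp.read())["response"]

# Example usage (requires a running Ollama server):
# print(ask("Say hello in one sentence."))
```

Open-LLM-VTuber adds speech recognition, text-to-speech, and the animated avatar on top of this kind of request/response loop.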

Tutorial on how to install Anaconda in Linux Ubuntu:

Tutorial on how to install Ollama and Llama 3.1 in Linux Ubuntu:

The original project is given here:
Comments

Would be great to have something similar but with 3D characters (as in, a non-anime-looking character)

frosti

Thank you! This might help me get started making my own AI assistant, something very similar to that, I guess, but more personalized. I was actually able to make it and it worked very well with GPT-3.5 online, but now I'm trying to switch to Llama 3.1 8B locally and everything is messy and very, very slow. I can't even tell if it's not working or just very slow (my GPU runs actively, so I think it's just slow). Anyway, trying to find a solution, maybe switch to an older version. My GPU is a 3070.

yereem