Open LLM VTuber - Talk to Any LLM With Hands-Free Voice Locally

This video shows how to locally install Open-LLM-VTuber, which lets you talk to any LLM with hands-free voice interaction, voice interruption, a Live2D talking face, and long-term memory, all running locally across platforms.
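
Under the hood, Open-LLM-VTuber handles the voice and Live2D layers while the text replies come from whichever local LLM backend you configure, such as Ollama. As a rough, hedged sketch (not the project's own code), the snippet below queries a local Ollama chat endpoint directly; the model name "llama3" and the default port 11434 are assumptions about your local setup.

```python
# Hedged sketch: talk to a local Ollama backend directly, the same kind of
# backend Open-LLM-VTuber can be configured to use. Assumes Ollama is running
# on its default port 11434 and that a model (here "llama3") has been pulled.
import json
import urllib.request

payload = {
    "model": "llama3",  # assumed model name; use whatever you have pulled
    "messages": [{"role": "user", "content": "Introduce yourself in one line."}],
    "stream": False,    # ask for a single JSON response instead of a stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())
print(reply["message"]["content"])
```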

🔥 Get a 50% discount on any A6000 or A5000 GPU rental; use the following link and coupon:

Coupon code: FahdMirza

#openllmvtuber #aivtuber

PLEASE FOLLOW ME:

RELATED VIDEOS:

All rights reserved © Fahd Mirza
Comments

This guy reads my mind; whenever I want to do something, he posts the explanation.

Pixelo_

Agh, she needs an attitude adjustment lol, but she could be a lot of fun. I'd love to play with her ❤❤❤ Thanks again for sharing this with us 💕💕

ashleyrenee

Thanks! One question: at 4:32 you say that nomic-embed-text is no longer needed. Why? Does Llama include it by default, does Ollama include it by default, or is it just not needed for this example? Also, could you recommend a TTS with good Spanish? Thanks!

SonGoku-pcjl
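
For context on the embedding question: nomic-embed-text is an embedding model served by Ollama, so it generally only matters for features that embed text (for example long-term memory or retrieval), not for the chat replies themselves. If you want to confirm your local Ollama can serve it, here is a small hedged sketch (not part of Open-LLM-VTuber) that calls the embeddings endpoint directly, assuming the default port and that the model has already been pulled:

```python
# Hedged sketch: request an embedding from nomic-embed-text via a local
# Ollama instance to confirm the model is available. Assumes Ollama's
# default port 11434 and that the model has been pulled beforehand.
import json
import urllib.request

payload = {"model": "nomic-embed-text", "prompt": "hello"}
req = urllib.request.Request(
    "http://localhost:11434/api/embeddings",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    embedding = json.loads(resp.read())["embedding"]
print(f"Got an embedding vector of length {len(embedding)}")
```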

Ahhh, I don’t want my AI to be like her; an all-knowing ASI with that attitude would be fucking terrifying.

jasonthings

Just found out about this project. I don't think there is a project you haven't covered. Maybe you should do a video just about the projects you haven't covered. Might be shorter.

PhilEhI

For some reason, when I try to run the server I get that there's no module named yaml, even though all the requirements are installed.

opaquefilm
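
On the "no module named yaml" error: the yaml module comes from the PyYAML package, and this error usually means the requirements were installed into a different Python environment than the one launching the server. Here is a small, generic check you can run with the exact interpreter you use to start the server; nothing in it is specific to Open-LLM-VTuber:

```python
# Generic check: is PyYAML importable from the interpreter running the server?
import sys
print("Interpreter:", sys.executable)   # which Python is actually being used
try:
    import yaml                          # the module provided by PyYAML
    print("PyYAML version:", yaml.__version__)
except ImportError:
    # If this prints, install PyYAML into *this* interpreter, e.g.:
    #   <path printed above> -m pip install pyyaml
    print("PyYAML is missing from this environment")
```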

The video would've been more useful if you had shown nvidia-smi output while running the app; most of us don't have your 48 GB of VRAM to play with.
Even the GitHub repo doesn't state the minimum VRAM requirements outright.

llucis-v
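
On the VRAM question: one generic way to capture usage while the app is running is to poll nvidia-smi from a second terminal. The sketch below assumes an NVIDIA GPU with nvidia-smi on the PATH and is not specific to Open-LLM-VTuber; actual memory use depends mainly on which LLM, ASR, and TTS models you load, so treat this as a way to measure rather than a stated requirement.

```python
# Generic sketch: log GPU memory usage once per second while the app runs.
# Assumes an NVIDIA GPU with nvidia-smi available on the PATH.
import subprocess
import time

while True:
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())   # e.g. "6144 MiB, 49140 MiB"
    time.sleep(1)
```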