Create a GPT4ALL Voice Assistant in 10 minutes

Use Python to code a local GPT voice assistant. In this video we learn how to run OpenAI Whisper without an internet connection, detect background voice activity in Python, and use the GPT4ALL Python library to access any language model on GPT4ALL from your Python programs.
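The background-listening idea described above boils down to transcribing audio in a loop and only responding once a wake word is heard. A minimal sketch of the wake-word check, with the transcription step stubbed out (in the real project it would come from openai-whisper; the wake word `"jarvis"` here is illustrative, not from the video):

```python
WAKE_WORD = "jarvis"  # hypothetical wake word

def contains_wake_word(transcript: str, wake_word: str = WAKE_WORD) -> bool:
    """Return True if the wake word appears in the transcribed text."""
    # Normalise case and strip punctuation so "Jarvis," still matches.
    words = [w.strip(".,!?") for w in transcript.lower().split()]
    return wake_word in words

# Only the second transcript should trigger the assistant.
print(contains_wake_word("what time is it"))        # False
print(contains_wake_word("Hey Jarvis, lights on"))  # True
```

In the full assistant, the transcript would be produced by Whisper on each captured audio chunk, and the main loop would only forward the request to GPT4All when this check passes.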

Help me make more videos:

#python #ai #programming
Comments
Should I make a tutorial on how I make my videos?

Ai_Austin
Damn, this is really cool. You could easily enable the API on a networked computer (dedicated to AI), take this same concept, and build either a web app or an Android/iOS app, and you'd have a personal assistant like Alexa, Google, or Siri, but better... and way more customizable. AI is so freaking awesome, and we're still in the early days. Amazing, thanks!

michealkinney
Perfect, I've been in dire need of good, simple voice-listening Python code. I can use this with the OpenAI Assistant API to generate data far more easily than typing everything out on a keyboard. I'm working on a dataset for a 1B LLM to run my own smart-home Alexa alternative, and this is perfect.

AlienAnthony
This is great... I'm going to use it as a base template; I want to add chat context management and basic tool calls.
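The "basic tool calls" idea from this comment can be sketched without any model at all: scan the model's reply for a known tool name and dispatch to a plain Python function. The tool names and the dispatch convention here are illustrative, not part of the GPT4All API:

```python
from datetime import datetime

def get_time() -> str:
    """A trivial example tool: return the current wall-clock time."""
    return datetime.now().strftime("%H:%M")

# Registry mapping tool names to plain Python callables.
TOOLS = {"get_time": get_time}

def maybe_call_tool(model_reply: str) -> str:
    """If the reply mentions a registered tool, run it; else return the reply."""
    for name, fn in TOOLS.items():
        if name in model_reply:
            return fn()
    return model_reply

print(maybe_call_tool("plain answer"))    # "plain answer"
print(maybe_call_tool("CALL: get_time"))  # e.g. "14:05"
```

A real version would want the model prompted to emit a structured marker (e.g. JSON) rather than matching substrings, but the dispatch-table shape stays the same.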

saxtant
Install PortAudio before installing PyAudio, not after as in the video; otherwise you get an error. Great video, keep up the good work!
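As this commenter notes, PyAudio's build needs the PortAudio library to be present first. A sketch of the install order on common platforms (package names are the usual ones; check your distro):

```shell
# Install the PortAudio C library first...
sudo apt-get install -y portaudio19-dev   # Debian/Ubuntu
# brew install portaudio                  # macOS (Homebrew)

# ...and only then the Python binding:
pip install pyaudio
```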

TheDvscg
"I don't know why they thought it was a good idea to program this shit into this library, but we will modify the library to hack around this inconvenience" @ 4:35
Caught me off guard, tbh, but this was the best part, lol.

michealkinney
This is what I wanted. Definitely doing this soon, as soon as my broken shoulder heals, because it's pretty rough to use right now.
I kind of want to use it with Tortoise, but I guess I can use any TTS system and just give it an RVC pass.

GraveUypo
Are you still able to load the Whisper model file directly in the terminal? I keep getting "path not found"; I'm trying to find the right cache location or figure out whether it still works.
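For the "path not found" issue above, it's worth checking where openai-whisper caches its downloaded models. A sketch of the default lookup, assuming the library's usual behaviour of honouring `XDG_CACHE_HOME` with `~/.cache` as the fallback (verify against your installed whisper version):

```python
import os

def default_whisper_cache() -> str:
    """Best-guess default folder where openai-whisper stores model files."""
    base = os.getenv("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
    return os.path.join(base, "whisper")

print(default_whisper_cache())  # e.g. /home/you/.cache/whisper
```

`whisper.load_model()` also accepts a `download_root` argument if you'd rather point it at an explicit folder instead of hunting for the cache.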

bosephofficial
Wow, this is great! Would love to see the prompts up close. Please do a tutorial on it, thanks!

freddy
Hi, everything went well except I couldn't retrieve the model. Please help.

ara_vind_.
Wow dude, how hard was it to figure all those steps out? Can you make a video about system prompts and how to use them in the most powerful way, or about getting GPT4All to work only with our local documents?

iOMNetwork
How would you compare this with your Bard voice assistant?
Can this answer based on previous prompts?

notalanjoseph
Bro, in your previous video on the Bard voice assistant I'm getting the error "'NoneType' object is not subscriptable". Help me please 😭

TruthSeeKingg
Can you build a requirements.txt and .md file for your tutorials?

glorified
Can you do a video on animating an AI voice assistant, giving it a body and a face?

GrayOperative
I followed your guide and the code seems fine, but when prompted to say my wake word, it does nothing... My microphone is working; is there a setting in VS Code I need to enable for input/output audio?

NebMediaUK
Great video! Just one question: does the assistant only run while the Python script is open, unlike Siri or Alexa, which can respond even when the PC or phone screen is off?

Samuel_Nicole
Would it be possible to use an RVC-based TTS service for custom voices with this? I cloned the Jarvis voice from the movies and I want to use it as my personal assistant's voice.

jesusjodarpiernas
Wonderful project, thanks a lot. It works so nicely. Now, can I run it on a Raspberry Pi 4B 8GB?

PatnaikUC
Hi Austin, I took your code and went another route: instead of the voice assistant, I'm trying to run a local instance of GPT4All on a local server and have my home devices (ESP32 boards with sensors) communicate with that AI instance, so it can make decisions based on what they report (a sample project: a small AI-driven autonomous greenhouse).
I have the core of the project working, but I'm stuck on keeping context. Could you make a video about this? I can run GPT4All in the CLI and start a conversation with it via Node.js, but the conversation won't keep context. The final idea is an Express.js API so each device can talk to GPT4All.
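The "keeping context" problem above usually comes down to the fact that each CLI invocation starts a fresh conversation. GPT4All's Python bindings offer `model.chat_session()` for this, but when driving the model from another process (as with Node.js here), a common workaround is to keep the transcript yourself and resend it with every request. A minimal sketch, with all names illustrative:

```python
class Conversation:
    """Keep (role, text) turns and rebuild a single prompt per request."""

    def __init__(self, max_turns: int = 10):
        self.turns: list[tuple[str, str]] = []
        self.max_turns = max_turns  # cap so the prompt fits the context window

    def add(self, role: str, text: str) -> None:
        self.turns.append((role, text))
        self.turns = self.turns[-self.max_turns:]  # drop the oldest turns

    def as_prompt(self, new_user_msg: str) -> str:
        lines = [f"{role}: {text}" for role, text in self.turns]
        lines.append(f"user: {new_user_msg}")
        return "\n".join(lines)

conv = Conversation()
conv.add("user", "Greenhouse temp is 31C")
conv.add("assistant", "That is high; consider venting.")
print(conv.as_prompt("And the humidity is 80%"))
```

In the Express.js setup described, each device request would append to a `Conversation` held server-side, so the model always sees the prior exchange.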

PedroSilva-tevl