UNCENSORED GPT4 x Alpaca & Vicuna [Local Install + Tutorial]



Links:

------------------------------------------------------------------------------------------
rem Launcher loop from the video's .bat file: "pause" waits for a keypress,
rem then "goto start" jumps back, keeping the console window open.
:start
pause
goto start

----------------------------------------------------------------------------------------------

#llama #gpt4xalpaca #vicuna
Comments

holy moly! this is much faster than the previous CPU chat bots! awesome! keep going, open source community!

oliverli

Thank you so much for creating such a fantastic video! Your attention to detail and clear explanations made it so easy to follow along and learn. I can tell you put a lot of effort and thought into making this, and it definitely paid off. Keep up the amazing work!

mondainthewizard

Super easy and straightforward tutorial, thank you!

nartrab

Hey thanks for this video, super cool to see that we can utilize open source models. :D

juanmacias

A video on an easy way to connect the GPT4-x-Alpaca to a chatbot assistant would be great as well!

rip-emix

Wow. This thing actually remembers the context of your previous prompts. That is a huge benefit. Also, after using this 13B Vicuña for a bit, its comprehension is pretty incredible. It is bad at math, but it passed the GPT-4 stacked-items test with flying colors. The 30B Alpaca fails that test. Interesting.

jeffwads

Man, the files in those repositories change every hour; tracking down the ones you used was a challenge, but once I did, it worked fine. THANK YOU!

stevebruno

Great video, I'd been looking for this for a long time. I subscribed immediately; please keep going!

ninocrudele

That's a pretty good model. In my opinion, the strongest you can run locally at the moment.

Desaster

That's the best tutorial on this topic, thank you very much.

DrWne-WasteLand

For those who care: the 7B version is uncensored, the 13B one is censored.

Karvan

A very interesting and well-explained video, thank you! To speed up my computer, I set it to 8 threads. I asked for an explanation of the Pythagorean theorem, and after 22 minutes of processing, the answer I got was the "theorem of the bisector of the isosceles triangle"... I'm a bit disappointed. I was hoping for something better, both in performance and in the type of response.

francescabazzi

Best feeling when you download the thing and the contents of the zip look completely different.

Why can't anything related to GitHub ever be easy?

CuteAssTiger

Is there a way to run the Alpaca models on multiple GPUs? Is there a Linux-based OS that can be set up to run models on multiple GPUs, basically something like HiveOS but for running machine-learning models? This could open up a new service/platform that would greatly accelerate machine learning, and it should be open source and open to all.

xlrusa

Can you review some prompts to make it more on the jailbroken side? It seems more censored than I thought it would be. Thank you for your vids!!

frankebert

When I try to run the model I get an error.
Does anyone know how to fix this?

error loading model: failed to open ggml-model-q4_1.bin: No such file or directory
llama_init_from_file: failed to load model
main: error: failed to load model 'ggml-model-q4_1.bin'
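That error means the chat binary can't find the model file relative to the directory it was launched from. A minimal sketch of how to track the file down (the filename comes from the error message above; the commented `./main -m` invocation is a hypothetical llama.cpp-style call, so adjust it to whatever binary the video uses):

```shell
# Search for the model file under the current directory;
# the path printed is what you should pass to the binary.
find . -name 'ggml-model-q4_1.bin' 2>/dev/null

# Then either cd into the directory that contains it, or point the
# binary at the full path (hypothetical llama.cpp-style flags):
# ./main -m /full/path/to/ggml-model-q4_1.bin -p "Hello"
```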

dimasavin

This was a good, easy-to-follow video presentation. I tried the three versions of the Vicuna model, built for AVX, AVX2, and AVX-512, and only the AVX-512 version failed to work. I may have one of the newer CPUs in which Intel removed support for this extension.

When I asked a question, the response was far more detailed than what I received from ChatGPT, and it even cited sources for the research it did. Quite an impressive application.
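On Linux you can check which AVX-family extensions the CPU actually reports before choosing a build, so a failing AVX-512 binary is no surprise. A minimal sketch (flag names are as the kernel exposes them in /proc/cpuinfo, e.g. `avx`, `avx2`, `avx512f`):

```shell
# List the distinct AVX-family flags this CPU advertises.
grep -o 'avx[a-z0-9_]*' /proc/cpuinfo | sort -u
```

If no `avx512*` flags appear, pick the AVX2 (or plain AVX) build instead.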

xzendordigitalartcreations

Awesome video, I'd love to use these unrestricted models for writing film scripts and stories. We need a Mac version though!

xmartyxcorex

Thanks for the video. Is it possible to leverage an RTX 4090 with this model instead of the CPU? If so, how?

cleverestx

How can I force these models to use my GPU instead of the CPU? Also, how do I set the parameters so that it uses all of the CPU's cores?
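The CPU chat binaries from this era ran on the CPU only; GPU offload arrived in later builds. For the core count, llama.cpp-style example binaries typically take a `-t` flag for the number of threads. A minimal sketch, assuming such a CLI:

```shell
# Print the number of logical cores available; passing this value
# to -t lets the binary use all of them.
nproc

# Hypothetical invocation (llama.cpp-style flags; adjust names to
# the binary and model file from the video):
# ./main -m ggml-model-q4_1.bin -t "$(nproc)" -p "your prompt"
```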

LoGoBDA