Is GPT4All your new personal ChatGPT?


In this video, we are looking at GPT4All, an interesting project (though not licensed for commercial use) that takes a LLaMA model and fine-tunes it on many more instruction tasks than Alpaca.
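
As a quick orientation, here is a minimal sketch of querying a GPT4All-style model locally with the `gpt4all` Python package; the model filename is an assumption and depends on which quantized checkpoint you download:

```python
# Minimal local-inference sketch using the gpt4all Python package.
# The model filename below is an assumption -- substitute whichever
# quantized GPT4All checkpoint you have downloaded.
from gpt4all import GPT4All

model = GPT4All("ggml-gpt4all-l13b-snoozy.bin")  # hypothetical filename

with model.chat_session():
    reply = model.generate("Write a limerick about local LLMs.", max_tokens=200)
    print(reply)
```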

For more tutorials on using LLMs and building Agents, check out my Patreon:

My Links:

Github:
Comments

I've played around with this myself, and yeah, out of all the ones in the wild it's for sure one of the best I've seen, but it still has a long way to go to compete with ChatGPT. I feel like in the last few months we've kicked off a new AI race, similar to the space race of the 60s. It will be wild to see where we are when it's all said and done.

krystlih

I have been playing with these for about a week and this is exactly what I needed. I do not want all of the capabilities of ChatGPT. This will work perfectly in DIY home automation environments where I do not necessarily want every trivial request being sent to some company and metered. This is good for privacy and for trimming down billable API requests. Sure, my system will also use ChatGPT when I have more advanced needs. Siri and Alexa have never excited me; I do not use them. This, however, can help take over most of the basic tasks people use those devices for. This is just one element in a chain of many small pieces that can work together! ;)

brian
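
That kind of hybrid setup can be sketched as a simple router. Everything below (the keyword heuristic, the model filename, the routing logic) is a hypothetical illustration of the idea in the comment above, not anything shown in the video:

```python
# Hypothetical request router: trivial home-automation requests stay on a
# local GPT4All model; harder ones fall through to the metered OpenAI API.
from gpt4all import GPT4All
import openai  # the legacy SDK reads OPENAI_API_KEY from the environment

local_model = GPT4All("ggml-gpt4all-l13b-snoozy.bin")  # assumed filename

SIMPLE_KEYWORDS = ("light", "thermostat", "timer", "weather")  # toy heuristic

def answer(request: str) -> str:
    if any(word in request.lower() for word in SIMPLE_KEYWORDS):
        # Trivial request: keep it private and unmetered on the local model.
        return local_model.generate(request, max_tokens=100)
    # Advanced request: fall through to the billable hosted model.
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": request}],
    )
    return resp["choices"][0]["message"]["content"]

print(answer("Turn on the living room light at sunset."))
```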

1:20 "generated with GPT 3.5..."
I don't think OpenAI asked for permission from anyone on the internet to generate their model.

anispinner

I would like to thank you very much for your content. Thank you for your clear, understandable explanations that even a beginner can follow. Great compliments once again, and may many more videos follow. One of your followers :=)

bandui

I asked it to write an 800-word story and it responded that I wasn't compensating it enough to do the work. I got a good laugh out of that. You can tell it was made by a company; there are plenty of annoying "safeguards". I think the silly response to my request was an unintended side effect of those safeguards.

michaelbone

Your content is fantastic! I would be very grateful if you could make a video on fine-tuning the model with custom data. We are looking forward to seeing it!

yossefdawoad

This is so nerdgasmic. Thanks a lot for sharing and keep us updated!

ShotterManable

Having asked both ChatGPT and Bard for a limerick recently, the one generated by GPT4All in this video actually reminds me of what I got out of Bard...

arcum

I'm old enough to remember the only other time I felt like this about the internet: when Napster was wide open and there were no consequences for "Browse User Folder" and sucking down mp3s for hours.

doug

See, this is why I understand that smaller models are for specific things. I'm using their training data, along with a ton of Windows books, to make a model that is for Windows commands. I've made a small dataset of my own with specific use cases, and when I finish I hope to have a Windows-oriented chatbot that can run basic commands.

AlienAnthony
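
As a sketch of what a custom dataset like the one described above might look like, here is one common instruction-tuning layout, JSON Lines with instruction/response fields. The exact schema is an assumption and varies by fine-tuning script, and the example records are hypothetical:

```python
# Hypothetical example of building a small instruction dataset for a
# Windows-commands chatbot, written as JSON Lines (one record per line).
import json

examples = [
    {
        "instruction": "List all files in the current directory, including hidden ones.",
        "response": "dir /a",
    },
    {
        "instruction": "Show the machine's IP configuration.",
        "response": "ipconfig /all",
    },
]

with open("windows_commands.jsonl", "w", encoding="utf-8") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")
```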

Can't imagine OpenAI can actually stop anyone from using output from ChatGPT to train their own models. They'd then have to somehow prove they didn't do the same with ALL the content of the web.

tarwin

It's incredible that I can put it on a USB drive and run it on any computer without an administrator password, because it is portable.

ludovicoprestipino

I think OpenAI's terms of service only restrict people from making models that can compete with ChatGPT, so distilled models should be safe to build.

tunafllsh

I think because of privacy issues, A.I. running locally on our own hardware is when things really get interesting, especially because it will be unfiltered, unlike the online ones which dictate what you can and can't do with them.

So I think the online ones might only have the advantage for a short time, maybe less than a decade, because I suspect most of us are going to want an A.I. that runs locally for security, privacy and control reasons, and that is likely also the case for governments and companies.

Also, imagine this: eventually, A.I. is going to have a longer-term memory where it builds a profile on you. Basically, it gets to know you over weeks, months and years, and because of that it likely becomes a better A.I. assistant for you. That's something that would have to be done locally; there is no way most consumers would want to share that much information with a corporation.

This is why I think that, longer term, the real winner in this A.I. war isn't going to be any of the big companies, but an open approach, likely open source, and one that can run on our own systems.

As for how long that takes, it's hard to say; it could be a few years or a few decades, because A.I. is evolving really fast. But I do know one thing: the more useful A.I. becomes, the more issues the public and governments around the world are going to have with the information being harvested from us by so few companies. For now that isn't a problem, but A.I. is going to become a lot more useful and integrated into a lot of what we do in life, and that is where the big corporations could have a massive problem with the public and governments.

pauluk

I think increases in consumer-grade hardware, and techniques to get models to fit on that hardware, will be what pushes the tech forward in an open sense. I believe Nvidia's next 4090 will have 48GB of VRAM, though at a very high wattage. It seems like they have an interest in keeping lower-watt, multi-GPU-capable, high-RAM cards in the professional tier for as long as they can. But that monopoly won't last long once the market demand really kicks in.

MarkTarsis
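
Until that hardware arrives, the usual workaround mentioned above is quantization. A rough sketch with Hugging Face transformers and bitsandbytes follows; the model ID is an assumption, and 4-bit loading requires recent versions of transformers, accelerate, and bitsandbytes:

```python
# Rough sketch: loading a 13B model in 4-bit so it fits in consumer VRAM.
# The model ID is an assumption; 4-bit loading needs bitsandbytes installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "decapoda-research/llama-13b-hf"  # assumed checkpoint name

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # spread layers across available GPUs/CPU (needs accelerate)
)
```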

Wow! Just saw it on LinkedIn and you've already made a video on that 🎉 !

catyung

I run Wizard-13b 1.2 on my M1 Mac Mini with 16GB of RAM. It's quick, useful, and generates long responses. I have a couple of other models, but Wizard typically turns out the best responses.

monki
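
Running 13B-class models on a 16GB Apple Silicon machine, as in the comment above, typically means a quantized llama.cpp-format build. Here is a minimal sketch with the llama-cpp-python bindings; the model path, context size, and prompt are assumptions:

```python
# Minimal sketch of local CPU/Metal inference with llama-cpp-python.
# The model path is an assumption -- point it at your quantized checkpoint.
from llama_cpp import Llama

llm = Llama(model_path="./models/wizard-13b.q4_0.bin", n_ctx=2048)

output = llm("Q: Explain what GPT4All is.\nA:", max_tokens=256, stop=["Q:"])
print(output["choices"][0]["text"])
```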

How would I train this model? Forgive the noob question, this is the first video of yours I've seen and the back-catalogue is a bit overwhelming^^

NickEnlowe

I noticed in your previous video that the Colab memory you had was ~80GB, but the max I can seem to get is 52GB. How are you getting higher than that? All the documentation says 52GB is the max. Thanks!

matthew_berman

In a "raw" form, not fine-tuned, 30B llama model is orders of magnitude better than 7B or even 13B, but due to costs involved, it might take a while for us to fine-tune it.

gaiusbc