Unleash the power of Local LLMs with Ollama x AnythingLLM

Running local LLMs for inference, character building, private chats, or just chatting with your own documents has been all the rage, but it isn't easy for the layperson.

Today, with only a single laptop, no GPU, and two free applications, you can get a fully private local LLM RAG chatbot running in less than 5 minutes!

This is no joke: Ollama and AnythingLLM are now fully compatible, meaning the sky is the limit. Run models like Llama-2, Mistral, CodeLlama, and more to make your dreams a reality with only a CPU.
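
For the curious, Ollama also exposes a local REST API on port 11434 once the app is running. Below is a minimal sketch, assuming you have already pulled a model (e.g. with "ollama pull llama2"), of sending it a prompt from Python:

# Minimal sketch: prompt a locally running Ollama server.
# Assumes Ollama is running and the "llama2" model has been pulled.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "llama2",            # any model you have pulled
        "prompt": "Why is the sky blue?",
        "stream": False,              # one JSON reply instead of a token stream
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])

This is the same local server AnythingLLM talks to later in the video, so nothing extra needs to be installed.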

And you are off to the races. Have fun!

Chapters
0:00 Introduction to Ollama x AnythingLLM on a laptop
0:36 Introduction to Ollama
1:11 Technical limitations
1:48 Ollama Windows is coming soon!
2:11 Let’s get started already!
2:17 Install Ollama
2:25 Ollama model selection
2:41 Running your first model
3:33 Running the Llama-2 Model by Meta
3:57 Sending our first Local LLM chat!
4:53 Giving Ollama superpowers with AnythingLLM
5:31 Connecting Ollama to AnythingLLM (see the sketch after this chapter list)
6:45 AnythingLLM express setup details
7:28 Create your AnythingLLM workspace
7:45 Embedding custom documents for RAG for Ollama
8:22 Advanced settings for AnythingLLM
8:53 Sending a chat to Ollama with full RAG capabilities
9:30 Closing thoughts and considerations
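
For reference, the connection step at 5:31 simply points AnythingLLM at Ollama's default base URL (http://localhost:11434). A quick sketch to verify the server is reachable and see which models it will offer before you connect:

# Minimal sketch: confirm Ollama is reachable at its default base URL
# and list the models AnythingLLM will be able to select.
import json
import urllib.request

BASE_URL = "http://localhost:11434"  # Ollama's default; AnythingLLM asks for this

with urllib.request.urlopen(BASE_URL + "/api/tags") as resp:
    models = json.loads(resp.read())["models"]
    print("Available models:", [m["name"] for m in models])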

#opensource #llm #privategpt #localagent #chatbot #ragchatbot #rag #openai #gpt #customgpt #localai #ollama #freechatbot #aitools #aitoolsyouneed #aitoolsforbusiness #freeaitool #freeaitools #llama2 #mistral #langchain #tutorial #aitutorial #aitools2024 #aiforbeginners #aiforproductivity
Comments

Thank you for your hard work!
This is really a game changer: now people can build their personal chatbots, with massive databases and knowledge about their favorite topics, without using or paying for online services.
This is my new favorite piece of software, together with LM Studio.
Huge respect, and keep up the good work.

JohnRiley-rj

Amazing insight! I was already using Ollama, and adding AnythingLLM is the icing on the cake 👍. Thanks for the video!

karkonda

Ollama is now officially on Windows!!!!

HistoryIsAbsurd

The only video I found that gave me exactly what I was looking for... THANKS!

LakerTriangle

Great stuff, I'll be putting it on my Windows machine... can't wait till a Linux/Ubuntu AppImage is released.

GregRutkowski

I love AI, and I'm dabbling in all kinds of things, but I could never get LLMs to really work on my local machine. THIS is amazing. I got it working on my gaming laptop.

DragoMir-lccr

This is great, and nice to see Linux support is there now as well.

sebastiaanstoffels

This is mind blowing! Thank you, much appreciated!

NurtureNestTube

What about agents? Do you plan to integrate agents, with CrewAI or AutoGPT? And why only Ollama? You could integrate LM Studio as well! Sounds like a promising project.

unimposings

Perfect video! It has changed a bit since, though: you no longer need to pick LanceDB etc., it just shows it as already chosen.

_TheDudeAbides_

This is exactly what I was looking for! :)

Linguisticsfreak

Speaking of privacy, why did you skip telling people to turn off "Anonymous Telemetry Enabled"??

userrjlyjg

Excellent work 🎉: offline, RAG, open source!!! Very useful, and working well on my PC. At 5:46, when I type and select Ollama, the base URL is detected automatically 👍 Thank you for this masterpiece, AnythingLLM 👏👏

mansurabdulkadar

Great video, thanks for developing this!

rembautimes

Hello, is anything in the pipeline for using the new Ollama embeddings? They are super, super fast :-)
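
For context, Ollama serves embeddings from the same local API. A minimal sketch, assuming an embedding model has been pulled first (the "nomic-embed-text" name below is just an example, e.g. via "ollama pull nomic-embed-text"):

# Minimal sketch: request an embedding from a local Ollama server.
# Assumes an embedding model (here "nomic-embed-text") has been pulled.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/embeddings",
    data=json.dumps({
        "model": "nomic-embed-text",
        "prompt": "Ollama and AnythingLLM make local RAG easy.",
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(len(json.loads(resp.read())["embedding"]), "dimensions")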

fuba

Hi, I have to thank you for the tutorial. It is very useful for quickly testing different LLM models, vector databases, and other components against different data. Only after that do you need to think about which stack is suitable for indexing your own data, and try to deploy it server-side. Thanks!

AlexeyR

I have a spare iMac, so it's nice to see this works on Intel.
I get to use it for something other than YouTube videos on the corner of my desk.

SCHaworth

That was really fantastic. Tomorrow, once at work, I'll go full hog at it and let you know! By the way, do you plan to implement DSPy somehow?

artur

Ollama works very slowly on Windows. I tested using LM Studio and it's working.

amirulbrinto

Next, we just need to serve a webpage with authentication. Complete package.

ManjaroBlack