Stop paying for ChatGPT with these two tools | LMStudio x AnythingLLM

In this video, we install two user-friendly tools that make downloading, running, and managing a powerful local LLM to replace ChatGPT simple. Seriously.

Today, with just a desktop computer, a retail GPU, and two free applications, you will have a fully private local LLM + RAG chatbot running in less than 5 minutes!

This is no joke: the teams at LM Studio and AnythingLLM are now fully integrated for your convenience. Run models like Llama 2, Mistral, CodeLlama, and more to make your dreams a reality without sacrificing privacy.

Chapters:
0:00 Introduction to LMStudio x AnythingLLM
0:57 What is AnythingLLM?
1:20 Installing LMStudio
1:53 Installing AnythingLLM
2:10 LMStudio Basic use tutorial
4:28 Testing out our model
5:32 How to level up your LLM chat abilities
6:00 Connecting LMStudio to AnythingLLM
7:53 Send a basic chat on AnythingLLM to our custom model
8:26 Adding knowledge to our LMStudio model
10:08 What the future of chat with local LLMs is going to look like
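A note on what the "Connecting LMStudio to AnythingLLM" chapter boils down to: LM Studio's local server exposes an OpenAI-compatible chat-completions endpoint, and AnythingLLM is just one possible client for it. Below is a minimal Python sketch, assuming the default port 1234; the placeholder model name and both helper functions are illustrative assumptions, not something shown in the video.

```python
import json
import urllib.request

# Default LM Studio local server address; this is an assumption --
# check the "Local Server" tab in LM Studio if you changed the port.
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"


def build_chat_request(prompt, model="local-model", temperature=0.7):
    """Build an OpenAI-style chat-completion payload for LM Studio."""
    return {
        "model": model,  # LM Studio serves whichever model is currently loaded
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }


def send_chat(prompt):
    """POST the request to the local server (requires LM Studio running)."""
    req = urllib.request.Request(
        LM_STUDIO_URL,
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # The server doesn't validate the key; "lm-studio" is conventional.
            "Authorization": "Bearer lm-studio",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The `Authorization` header mirrors a commenter's note below that "lm-studio" works as the API key when wiring this server into other tools.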
Comments

Please do a dedicated video on training minimal base models for specific purposes. You're a legend. Also a video on commercial use and licensing would be immensely valuable and greatly appreciated.

codygaudet

I’m just about to dive into LM Studio and AnythingLM Desktop, and let me tell you, I’m super pumped! 🚀 The potential when these two join forces is just out of this world!

PCFixRetroZone

I'd love to hear more about your product roadmap - specifically, how it relates to the RAG system you have implemented. I've been experimenting a lot with Flowise, and the new LlamaIndex integration is fantastic - especially the various text summarisation and content refinement methods available with a LlamaIndex-based RAG. Are you planning to enhance the RAG implementation in AnythingLLM?

sitedev

Great stuff: this way you can run a good smaller conversational model, a 13B or even a 7B like Laser Mistral.
The main problem with these smaller LLMs is massive holes in some topics, or in information about events, celebrities, and other things; this way you can build your own database about the stuff you want to chat about.
Amazing.

JohnRiley-rj

This is exactly what I've been looking for. Now, I'm not sure if this is already implemented, but if the chat bot can use EVERYTHING from all previous chats within the workspace for context and reference... My god that will change everything for me.

kylequinn

Thank you, I've been struggling for so long with problematic things like privateGPT etc. which gave me headaches. I love how easy it is to download models and add embeddings! Again thank you.

I'm very eager to learn more about AI, but I'm an absolute beginner. Maybe a video on how you would learn it from the beginning?

bradcasper

Fantastic! I've been waiting for someone to make RAG smooth and easy :) Thank you for the video!

autonomousreviews

The potential of this is near limitless, so congratulations on this app.

VanSocero

Thanks a ton... you are giving us the power to work with our local documents. It's blazingly fast to embed the docs, the responses are super fast, and all in all I am very happy.

vivekkarumudi

Thanks for the tutorial; everything works great and surprisingly fast on an M2 Mac Studio. Cheers!

TazzSmk

Just got this running and it's fantastic. Just a note that LM Studio uses the API key "lm-studio" when connecting using Local AI Chat Settings.

jimg

You deserve a Nobel Peace Prize. Thank you so much for creating Anything LLM.

_lull_

Mm...doesn't seem to work for me. The model (Mistral 7B) loads, and so does the training data, but the chat can't read the documents (PDF or web links) properly. Is that a function of the model being too small, or is there a tiny bug somewhere? [edit: got it working, but it just hallucinates all the time. Pretty useless]

NigelPowell

How well does it perform on large documents? Is it prone to the lost-in-the-middle phenomenon?

Augmented_AI

Bro, this is exactly what I was looking for. Would love to see a video on the cloud option at $50/month.

monbeauparfum

Wow, great information. I have a huge number of documents, and every time I search for something it becomes such a difficult task.

Heliosst

So, in case we need to use this programmatically, does AnythingLLM itself offer a 'run locally on server' option to get an API endpoint that we could call from a local website, for example? i.e. local website -> POST request -> AnythingLLM (local server + PDFs) -> LMStudio (local server, foundation model)

continuouslearner
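On the question above about a programmatic endpoint: AnythingLLM does ship a local developer API that you enable by generating an API key in its settings. The sketch below is hedged (the `/api/v1/workspace/<slug>/chat` path and the payload shape are assumptions; verify them against the API docs bundled with your build):

```python
import json


def build_workspace_chat(base_url, slug, message, api_key):
    """Build a request for AnythingLLM's developer API.

    NOTE: the /api/v1/workspace/<slug>/chat path and the payload shape
    are assumptions here -- confirm them in your build's API docs.
    """
    url = f"{base_url}/api/v1/workspace/{slug}/chat"
    headers = {
        "Content-Type": "application/json",
        # API key generated in AnythingLLM's settings screen
        "Authorization": f"Bearer {api_key}",
    }
    payload = {"message": message, "mode": "chat"}
    return url, headers, json.dumps(payload)
```

A local website would then POST its user messages to this URL; AnythingLLM handles retrieval over the embedded PDFs and forwards the final prompt to the LM Studio server behind it.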

I get this response every time:
"I am unable to access external sources or provide information beyond the context you have provided, so I cannot answer this question".

Mac mini
M2 Pro
Cores: 10 (6 performance and 4 efficiency)
Memory: 16 GB

jakajak

Absolutely stellar video, Tim! 🌌 Your walkthrough on setting up a locally run LLM for free using LM Studio and Anything LLM Desktop was not just informative but truly inspiring. It's incredible to see how accessible and powerful these tools can make LLM chat experiences, all from our own digital space stations. I'm particularly excited about the privacy aspect and the ability to contribute to the open-source community. You've opened up a whole new universe of possibilities for us explorers. Can't wait to give it a try myself and dive into the world of private, powerful LLM interactions. Thank you for sharing this cosmic knowledge! 🚀👩‍🚀

cosmochatterbot

IMO AnythingLLM is much more user-friendly and really has big potential. Thanks Tim!

stanTrX