Use Langchain with a Local LLM



----

This comprehensive video tutorial shows how to use LangChain with a local large language model (LLM), a worthy alternative to commercial chatbots like ChatGPT that offers unique benefits and flexibility. We explore various open-source, locally hosted language models that allow unrestricted, customized language interactions. For a deeper understanding of local LLMs, we point to valuable resources such as the r/LocalLLaMA subreddit.

With LangChain, a versatile framework for applications powered by models like GPT, we walk through the entire installation process on an Apple Silicon M2 Mac. By following along, viewers will learn how to clone and set up the necessary GitHub repos, install the required packages via pip, and choose between CPU-based and GPU-based inference depending on their specific needs.
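The setup steps described above look roughly like the following. This is a sketch, not a transcript of the video: the repo URL, package names, and the Metal build flag are common conventions for llama.cpp-based inference and may differ from what the video uses (the CMake flag name in particular has changed across llama-cpp-python versions).

```shell
# Clone a llama.cpp-based project (repo choice is an assumption).
git clone https://github.com/ggerganov/llama.cpp

# Install LangChain and its community integrations.
pip install langchain langchain-community

# CPU-based inference: the default build of the Python bindings.
pip install llama-cpp-python

# GPU-based inference on Apple Silicon: rebuild the bindings with
# the Metal backend enabled (flag name varies by version).
CMAKE_ARGS="-DGGML_METAL=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
```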

If you're interested in running ChatGPT-like models locally, this tutorial offers an in-depth guide to setting up and testing a local LLM using LangChain. With the goal of making LLMs and AI more accessible, it provides the insights and resources viewers need to navigate the fascinating world of local LLMs.
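In code, the setup-and-test step described above can be sketched with LangChain's community `LlamaCpp` wrapper. The model path, quantization, and parameter values below are illustrative assumptions, not the exact settings from the video:

```python
# Sketch: querying a local GGUF model through LangChain's LlamaCpp wrapper.
# NOTE: model_path and all parameter values are assumptions for illustration.

PROMPT = "Q: {question}\nA:"

def build_prompt(question: str) -> str:
    """Fill in a minimal question/answer prompt."""
    return PROMPT.format(question=question)

def ask_local_llm(question: str) -> str:
    """Run one completion against a locally hosted model."""
    # Imported lazily so the file parses without langchain installed.
    from langchain_community.llms import LlamaCpp

    llm = LlamaCpp(
        model_path="models/llama-2-7b.Q4_K_M.gguf",  # assumed local model file
        n_gpu_layers=1,   # >0 offloads layers to the GPU (Metal on Apple
                          # Silicon); set to 0 for pure CPU inference
        n_ctx=2048,       # context window size
        temperature=0.1,
    )
    return llm.invoke(build_prompt(question))

# After installing langchain-community and llama-cpp-python and downloading
# a model, this would run a real completion:
# print(ask_local_llm("What is the capital of France?"))
```

The `n_gpu_layers` parameter is what switches between the CPU-based and GPU-based inference the video compares.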

Keywords: Local Large Language Model, LangChain, ChatGPT Alternatives, AI Development, Python, Machine Learning, Open-source AI, Tutorial, CPU-based Inference, GPU-based Inference.

Don't forget to like, comment, and subscribe for more in-depth AI development tutorials.

---

#localllm #localAI
Comments

Would love to see a follow-up version for running LangChain with LM Studio

ajacobs

Thanks, this is great. Can you do a tutorial on RAG with LangChain on a local install?

Larimuss

Would be interested to see LangChain working with the OpenAI API emulated via text-generation-webui

marc

For some reason when I try the GPU inference command I just get the error "is not recognized as the name of a cmdlet, function, script file, or operable program."

kingpoki

Alternatively: there's also which only takes a few minutes to convert an 8G model, if you want to convert your GGML models.


Thanks to @XinFeng-jy8ie and Bishwa for their feedback on the fix.

CloudYeti

Hi, I'm trying to make a local PDF chatbot. Is it possible to use LangChain with an open-source embedding model?

skeptomai

I'm looking at using local LLMs with LangChain.js. Do you have an idea if this is possible?

thijssmudde