Using Langchain with Ollama and Python

Ollama is already the easiest way to use Large Language Models on your laptop. Now it integrates with LangChain, which makes many more integrations possible. Check out this tutorial to get started with LangChain and Python using Ollama.

00:00 Introduction
00:49 Start Building
01:30 Load the Document
02:03 Split the Text
02:53 Add to Vector Database
03:39 Build a Chain
04:38 What Else Can You Do
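The chapters above walk through a retrieval pipeline: load a document, split it into chunks, embed the chunks into a vector database, then build a chain over the retriever. As a stdlib-only sketch of what the splitting step does conceptually (the chunk size and overlap values are illustrative assumptions, not the video's exact settings):

```python
def split_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into fixed-size chunks whose edges overlap, so content
    that straddles a chunk boundary stays retrievable from one chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]

chunks = split_text("a" * 1200, chunk_size=500, overlap=100)
print(len(chunks))  # 3 chunks, each at most 500 characters
```

Real text splitters also try to break on sentence or paragraph boundaries, but the chunk-plus-overlap idea is the same.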
Comments

man, you did not talk a single bs word, that's great. Respect for saving the audience's time, much appreciated!

uwstvnz

This is gold! Thanks a lot, very clear and easy to understand.

TK-bwqy

2 months ago? Super impressive! I believe Ollama has its own embeddings now. Thank you so much for the video. Could you do a video about a datastore for persistent memory, or something with AutoGen using multiple Ollama models as agents, if possible?

ilteris

This is great and easy to understand. Thanks very much for sharing. Also, how do you point to a directory? I would love to point it at an Obsidian vault to get a summary of it.

RaspberryPi-gsts

Thanks for your video, I really enjoy your relaxed, straightforward and easy-to-follow style. Since Ollama now supports embeddings and you mentioned the advantages of storing vectors, would you consider making a new video showing, for example, Ollama with LangChain and LanceDB in action? Thank you for your great content, and especially for Ollama, of course!

ChristianMayrde

Do you have any videos about using tools/custom tools with Ollama and LangChain?

TK-bwqy

SentenceTransformerEmbeddings allows encoding with sentence-transformers models.

mikelam

That vector store is created at runtime. What if we want to create a persistent Chroma database? Will the retriever be the same, and how do you think we can utilize that while prompting the LLM?

aimalrehman

Is there any way to keep the model loaded on the GPU so that I don't have to wait a long time for the output? (I am using a notebook.)

biological-machine

Nice ending ;-P But really, thanks for sharing.

kenchang

Any idea why GPT4All would download the .bin file and then say "invalid model type"?

omarei

Why do I get a ModuleNotFoundError with both `import ollama` and `from langchain.llms import Ollama` after I have installed them both in the env? Why are they not there? What am I missing?

makdavian

It would be very nice to have a video with JavaScript. There are plenty using OpenAI, but with Ollama it has to be a bit different. We could learn a lot from it, and thanks if you do it!

gyozolohonyai

So, the problem I find with GPT is that it refuses I/O functionality. I'm looking to be able to say "open the PDF, read some details, and fill out 10 cells in Excel".

chizzlemo

Thanks so much, Matt! Please post your code when you get a chance.

technobabble

Hi Matt, I'm going crazy with this error: "ImportError: cannot import name 'Ollama' from 'langchain.llms'". I swear I installed langchain's latest version. When I look at the package contents, there's nothing called "Ollama" in the llms folder, so I understand there must be some problem there. I already tried pip install 'langchain[all]'.

AlanDaitch

How can I stream the response? I am using Streamlit but unable to stream the response; I would appreciate your help.
```
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains import RetrievalQA
from langchain_community.chat_models import ChatOllama

# The callback handler streams tokens to stdout as they are generated.
chat_model = ChatOllama(
    base_url=ollamaPath,
    model=modelName,
    temperature=temperature,
    verbose=True,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
)

qa = RetrievalQA.from_chain_type(
    chat_model,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
    chain_type_kwargs={"prompt": promptRepository.basicPrompt()},
)

qa({"query": prompt})["result"]
```

aadityamundhalia

Can you please tell me what IDE you're using?

jonin-xikq

The tutorial is great. Just be careful with langchain imports: some of them are deprecated, and apparently that's common.

ailenrgrimaldi

Great video! Please consider revisiting this topic with Ollama embeddings and fulfilling your promise to show this done in JavaScript. 😅

martinisj