Falcon-7B-Instruct LLM with LangChain Tutorial

This tutorial teaches you how to use the Falcon LLM with LangChain to build powerful open-source AI apps.
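The overall workflow the video covers can be sketched as follows. This is a minimal sketch, not the exact notebook from the video: the model ID `tiiuae/falcon-7b-instruct` is real, but the pipeline arguments, prompt text, and the LangChain 0.0.x-era import paths (`HuggingFacePipeline`, `PromptTemplate`, `LLMChain`) are assumptions about the setup shown.

```python
def build_falcon_chain():
    """Sketch: wrap Falcon-7B-Instruct in a transformers pipeline and hand it
    to LangChain. Requires `transformers`, `accelerate`, `torch`, and
    `langchain`; argument values below are illustrative assumptions."""
    import torch
    from transformers import AutoTokenizer, pipeline
    from langchain import HuggingFacePipeline, PromptTemplate, LLMChain

    model_id = "tiiuae/falcon-7b-instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    pipe = pipeline(
        "text-generation",
        model=model_id,
        tokenizer=tokenizer,
        torch_dtype=torch.bfloat16,
        trust_remote_code=True,   # Falcon originally shipped custom modeling code
        device_map="auto",        # let accelerate place layers on available devices
        max_new_tokens=200,
    )
    llm = HuggingFacePipeline(pipeline=pipe)
    template = PromptTemplate(
        input_variables=["question"],
        template="You are a helpful assistant.\nQuestion: {question}\nAnswer:",
    )
    return LLMChain(prompt=template, llm=llm)

# Usage (downloads ~14 GB of weights on first run):
# chain = build_falcon_chain()
# print(chain.run("Explain LangChain in one sentence."))
```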

❤️ If you want to support the channel ❤️
Support here:
Comments

This YouTuber is very nice and outperforms others because he directly provides a Colab version for everyone, especially amateurs.

raydenx

💯 Thanks for a perfect video. The audio is clear, the pace is perfect, the content is timely, accurate, concise and well presented. You, sir, are doing it right.

jaoltr

This guy always has the best practical videos. His explanations are crystal clear and he always shares a notebook! Thanks a lot!!!

biraescudero

Great video, thanks! Waiting for the Q&A application.

danasugu

Your timing on this is impeccable, I really needed this. Also I can't believe how impressive Falcon 7b is, this is crazy!

christopherchilton-smith

*Great tutorial, Thanks 1littlecoder*

QHawk

You're the best!! Thanks a lot, I learned a lot :D

alexandrelucas

This takes a lot of time when I use Falcon with the RetrievalQA chain in LangChain. How do I make inference faster?
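One common way to cut latency and memory pressure for a 7B model on a single GPU is 8-bit quantized loading via `bitsandbytes`; retrieving fewer/shorter chunks in RetrievalQA also helps, since prompt length dominates generation time. This is a hedged sketch, not from the video, and whether it actually speeds things up depends on your hardware:

```python
def load_falcon_8bit():
    """Sketch: load Falcon-7B-Instruct with int8 quantization to shrink the
    memory footprint, which can avoid slow CPU/disk offloading. Requires
    `bitsandbytes` and `accelerate`; all settings here are assumptions."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "tiiuae/falcon-7b-instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        trust_remote_code=True,
        device_map="auto",
        load_in_8bit=True,   # bitsandbytes int8 quantization
    )
    return model, tokenizer
```

On the retrieval side, lowering `search_kwargs={"k": 2}` on the retriever and capping `max_new_tokens` are usually the cheapest wins.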

HimanshuSingh-ovgw

Thank you for the amazing video. Could you make a video on fine-tuning the model with a personal dataset?

KhalidMohamed

How would you pass the results back to LangChain for further refinement?

pleabargain

Thanks for the tutorial. Can you do a tutorial on how to use Falcon for document question answering 🙏

baderotaibi

Great video, thank you for this tutorial. Do you have any tutorial on using this model with Chroma DB and WebBaseLoader to retrieve data from the web and save it in the DB?

odev

Sir, please show how to use Falcon 7B and a vector DB to make a chatPDF-type document app.

GiridharReddy-hbnv

Hello! How could I solve this error? ValueError: Could not load model tiiuae/falcon-7b-instruct with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForCausalLM'>,).
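This error commonly appears when the installed `transformers` version is too old to run Falcon's custom modeling code, or when `trust_remote_code=True` is missing. A hedged sketch of the usual fix (the exact cause on any given machine may differ):

```python
def load_falcon():
    """Sketch: typical fixes for the 'Could not load model ... AutoModelForCausalLM'
    ValueError. First upgrade the stack:
        pip install -U transformers accelerate einops
    then pass trust_remote_code=True, since Falcon originally relied on custom
    modeling code hosted in the model repo."""
    from transformers import AutoModelForCausalLM

    return AutoModelForCausalLM.from_pretrained(
        "tiiuae/falcon-7b-instruct",
        trust_remote_code=True,   # required for Falcon's custom code path
        device_map="auto",
    )
```

If the error persists after upgrading, restarting the Colab runtime so the new packages are actually imported is worth trying.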

matheusandrade

Nice video, sir! What about fine-tuning the model?

mariocuezzo

*I tried the Falcon 180B demo on HF; it's good, though I have to give more specific prompts to achieve what I need.*

QHawk

Please tell us how to get responses using custom data like text files or CSV files. The LangChain documentation gives an example using OpenAI embeddings; is it possible to do the same using Hugging Face embeddings?
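Yes, LangChain's embedding interface is pluggable, so the OpenAI embeddings in the docs can be swapped for a local Hugging Face model. A minimal sketch, assuming `sentence-transformers`, `faiss-cpu`, and a LangChain 0.0.x-era API; the embedding model name `all-MiniLM-L6-v2` and the chunking parameters are assumptions, not from the video:

```python
def build_local_index(text_path):
    """Sketch: index a local text file with Hugging Face embeddings instead of
    OpenAI embeddings. For CSV files, LangChain's CSVLoader can replace
    TextLoader. Returns a FAISS vector store ready for similarity search."""
    from langchain.document_loaders import TextLoader
    from langchain.text_splitter import CharacterTextSplitter
    from langchain.embeddings import HuggingFaceEmbeddings
    from langchain.vectorstores import FAISS

    docs = TextLoader(text_path).load()
    chunks = CharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)
    embeddings = HuggingFaceEmbeddings(
        model_name="sentence-transformers/all-MiniLM-L6-v2"  # small, runs on CPU
    )
    return FAISS.from_documents(chunks, embeddings)

# Usage: db = build_local_index("my_notes.txt")
#        hits = db.similarity_search("What does the document say about X?", k=3)
```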

satyaprasadmohanty

Has anyone compared the performance, results, and runtime between Falcon and the OpenAI Davinci model?

raydenx

Great video, thank you so much! I have a question: using the method in your video, does the model actually run locally? So it does not go to any external API, which in turn would keep your chat history private? Is that correct?

sadigasanov

Thank you so much for your nice explanation. However, when I try running the pipeline from the same notebook, I keep getting this error:
ValueError: The current `device_map` had weights offloaded to the disk. Please provide an `offload_folder` for them. Alternatively, make sure you have `safetensors` installed if the model you are using offers the weights in this format.
Any clue how to solve this?
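This error means `device_map="auto"` ran out of GPU and CPU memory and spilled some weights to disk, and `accelerate` needs to be told where to put them. A hedged sketch of the fix (the directory name is an arbitrary choice; running on a higher-RAM runtime avoids the offload entirely):

```python
def load_falcon_with_offload():
    """Sketch: satisfy the 'provide an offload_folder' ValueError by giving
    from_pretrained an explicit disk-offload directory. Note that disk-offloaded
    inference is very slow; installing `safetensors` or using more RAM/VRAM
    is usually the better fix."""
    from transformers import AutoModelForCausalLM

    return AutoModelForCausalLM.from_pretrained(
        "tiiuae/falcon-7b-instruct",
        trust_remote_code=True,
        device_map="auto",
        offload_folder="offload",   # any writable directory works
    )
```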

dlayel