Faster LLM Inference: Speeding up Falcon 7b For CODE: FalCODER 🦅👩‍💻

Falcon-7B fine-tuned on the CodeAlpaca 20k instruction dataset using QLoRA with the PEFT library.
We will also see:
How can you speed up your LLM inference time?
In this video, we optimize the inference time of our Falcon-7B model fine-tuned with QLoRA and the PEFT library for faster generation.

✍️Learn and write the code along with me.
🙏If you subscribe to the channel and like this video, more tutorial videos are on the way.
👐I look forward to seeing you in future videos.

What do you think of falcoder? Let me know in the comments!

#langchain #autogpt #ai #falcon #tutorial #stepbystep #langflow #falcons
#llm #nlp #GPT4 #GPT3 #ChatGPT #falcoder
Comments

When you say fine-tune the model, do you mean we can fine-tune it on our own text domain and then instruct-tune it?

wilfredomartel
I want to fine-tune (not instruct-tune) on my own data, for example law. Any suggestions on how to achieve that?

wilfredomartel