Function Calling using Open Source LLM (Mistral 7B)

In this tutorial, I delve into function calling using the open-source Large Language Model (LLM) Mistral 7B. Function calling is a powerful technique that can significantly enhance Gen AI applications: it allows you to integrate external web APIs, execute custom SQL queries, and build stable, reliable AI applications. With function calling, we can extract relevant information from diverse data sources, opening up a wealth of possibilities for developers and researchers alike.
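As a rough illustration of the pattern (a minimal sketch, not the exact notebook code; the checkpoint, tool schema, and prompt wording here are assumptions), the core idea is to describe the callable functions to the model inside the prompt and then parse a JSON function call out of its reply:

```python
# Minimal function-calling pattern with Mistral 7B Instruct (illustrative
# sketch, not the exact notebook code; schema and names are assumptions).
import json
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # assumed checkpoint
    device_map="auto",
)

# Describe the callable tools to the model as JSON schemas inside the prompt.
tools = [{
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {"city": {"type": "string"}},
}]

prompt = (
    "[INST] You can call these functions by replying ONLY with JSON like "
    '{"name": ..., "arguments": {...}}.\n'
    f"Functions: {json.dumps(tools)}\n\n"
    "What is the weather in Paris? [/INST]"
)

raw = generator(prompt, max_new_tokens=128, return_full_text=False)[0]["generated_text"]

# Extract the first JSON object from the model's reply and parse it.
start, end = raw.find("{"), raw.rfind("}") + 1
call = json.loads(raw[start:end])
print(call["name"], call["arguments"])
```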

Throughout this video, I demonstrate how to effectively utilize Mistral 7B for function calling. Whether you're looking to integrate external data into your AI project, execute complex queries, or simply explore the potential of open-source LLMs, this tutorial has got you covered.

Your support is invaluable to the continuation and improvement of content like this. If you found this tutorial helpful, please don't forget to like, comment, and subscribe to the channel.

To further support the channel, you can contribute via the following methods:

Bitcoin Address: 32zhmo5T9jvu8gJDGW3LTuKBM1KPMHoCsW

Join this channel to get access to perks:

Your contributions help in sustaining this channel and in the creation of informative and engaging content. Thank you for your support, and I look forward to bringing you more tutorials and insights into the world of Gen AI.

#llm #mistral #ai
Comments

Thanks for being both informative and honest, without cutting the attempts that didn’t work at first. It makes this video feel spontaneous and organic.

hanantabak

Appreciate this, was just starting this journey.

TheFxdstudios

Hey, awesome work! However, I am wondering about this function calling solution: will it be as robust as function calling through the OpenAI API / LangChain / other APIs like the new Mistral one? It seems like all it involves is telling the LLM to output the information in a JSON schema format according to the specified functions, and then playing around with the string output to extract the functions. Let me know if you have any thoughts on whether the approach you used is fully reliable in forcing the model to maintain the correct format. Otherwise, really appreciate the video and tutorial; it was very helpful!

aveek
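The concern above is fair: plain prompting is best-effort, and nothing forces the model to emit valid JSON. One common hardening pattern (a hedged sketch, not from the video; `generate`, the schema, and the retry wording are illustrative assumptions) is to validate the parsed output and re-prompt on failure:

```python
# Hedged sketch of one way to harden prompt-based function calling: validate
# the model's JSON against a Pydantic schema and retry on failure. `generate`
# stands in for whatever LLM call you use; all names here are assumptions.
import json
from typing import Callable
from pydantic import BaseModel, ValidationError

class FunctionCall(BaseModel):
    name: str
    arguments: dict

def parse_call(raw: str) -> FunctionCall:
    # Pull out the first {...} span, then validate its structure.
    payload = raw[raw.find("{"): raw.rfind("}") + 1]
    return FunctionCall.model_validate(json.loads(payload))

def call_with_retries(generate: Callable[[str], str], prompt: str,
                      max_tries: int = 3) -> FunctionCall:
    for _ in range(max_tries):
        try:
            return parse_call(generate(prompt))
        except (ValueError, ValidationError):
            # Feed the failure back so the model can correct itself.
            prompt += "\nYour last reply was not valid JSON. Reply with JSON only."
    raise RuntimeError("model never produced a valid function call")
```

For hard guarantees, grammar-constrained decoding (for example llama.cpp's GBNF grammars, or libraries like Outlines) can make invalid JSON impossible to generate, which prompt-and-retry alone cannot.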

I followed the same code and approach, but I don't see any output other than this:
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.

vedarutvija
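For what it's worth, that line is only a tokenizer warning from transformers, not an error; if nothing else appears, the usual culprit is that the generated tokens are never decoded and printed. A minimal sanity check (an assumed setup, not the notebook's exact code):

```python
# Minimal sanity check, assuming a standard transformers setup (not the
# notebook's exact code): the special-tokens warning is harmless, but you
# must decode and print the generation yourself to see any output.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("[INST] Say hello. [/INST]", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```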

I think we also need to capture the call for the function and execute the function... with Open Interpreter?
Instructor to create templates for an expected output, and the interpreter to execute the detected function...
I see that we can add these functions to the OpenAI wrapper... so if we use Ollama or LM Studio to host, we can add the functions to the OpenAI client...
I noticed the model acted differently when using the Hugging Face weights versus using the GGUF with llama.cpp versus calling it via the API server (llama.cpp / LM Studio)?
But for function calling it worked best on the LM Studio API... with the OpenAI client...
But with the Open Interpreter / Instructor combo you can build your own method...
as Open Interpreter is also a wrapper which intercepts the responses (same as you did) and executes these functions after removing them from the input, passing the message along the wrapper chain... (hence you don't need chains either, as you're creating your own chains)...

xspydazx
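The LM Studio route described above looks roughly like this (a hedged sketch; the URL, model name, and tool schema are assumptions, and whether the `tools` parameter is honored depends on the local server's OpenAI compatibility):

```python
# Hedged sketch of the pattern above: the OpenAI client pointed at a local
# LM Studio / Ollama style server. URL, model name, and schema are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="local-model",  # placeholder; local servers often map or ignore this
    messages=[{"role": "user", "content": "Weather in Paris?"}],
    tools=tools,  # support depends on the local server's OpenAI compatibility
)
print(resp.choices[0].message)
```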

Your content is helpful. Thank you.
Is there an implementation of a DataFrame lookup using LLM function calls?

lalluyoutub
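One minimal shape such an implementation could take (a hedged sketch; the DataFrame contents, function name, and dispatch dict are illustrative assumptions): expose the lookup as a plain Python function and dispatch the model's parsed function call to it.

```python
# Hedged sketch of a pandas DataFrame lookup exposed as a callable "function"
# for an LLM; the data, function name, and call dict are assumptions.
import pandas as pd

df = pd.DataFrame({
    "ticker": ["AAPL", "MSFT", "GOOG"],
    "price": [190.5, 420.1, 175.3],
})

def lookup_price(ticker: str) -> float:
    """Return the price for a ticker; this is what the parsed call executes."""
    row = df.loc[df["ticker"] == ticker]
    if row.empty:
        raise KeyError(f"unknown ticker: {ticker}")
    return float(row["price"].iloc[0])

# After the model replies with e.g. {"name": "lookup_price",
# "arguments": {"ticker": "AAPL"}}, dispatch from a registry of allowed tools:
FUNCTIONS = {"lookup_price": lookup_price}
call = {"name": "lookup_price", "arguments": {"ticker": "AAPL"}}  # stand-in for model output
print(FUNCTIONS[call["name"]](**call["arguments"]))  # 190.5
```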

New to this technology, so I'm trying to wrap my brain around it. How does this apply to building agents in AutoGen or CrewAI? Does this video imply that you can't use locally hosted LLMs for agents? My limited understanding is that agents use skills, and skills are Python functions. Do I need to use the concepts you discussed here in order to use local open-source models for agents, or am I missing something? Thanks!

EmilioGagliardi
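Locally hosted LLMs can back agent frameworks too, as long as they are served behind an OpenAI-compatible endpoint; a hedged AutoGen sketch (the server URL, model name, and agent setup are assumptions):

```python
# Hedged sketch: AutoGen agents backed by a locally hosted OpenAI-compatible
# server (e.g. LM Studio or Ollama). URL and model name are assumptions.
from autogen import AssistantAgent, UserProxyAgent

llm_config = {
    "config_list": [{
        "model": "mistral-7b-instruct",         # whatever the local server exposes
        "base_url": "http://localhost:1234/v1",
        "api_key": "not-needed",                # local servers usually ignore this
    }]
}

assistant = AssistantAgent("assistant", llm_config=llm_config)
user = UserProxyAgent("user", human_input_mode="NEVER", code_execution_config=False)
user.initiate_chat(assistant, message="Say hello in one sentence.")
```

So local models do work for agents; the practical caveat is that the model must be strong enough to follow the tool-use format reliably, which is exactly the robustness question raised earlier in the comments.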

Can you please show a function call example using an API? For example, a weather API.

vedarutvija
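A hedged sketch of what that could look like: once the model's JSON function call is parsed (as in the video's approach), dispatch it to a real HTTP API. The OpenWeatherMap endpoint is real, but the API key and the hard-coded `call` dict below are placeholders:

```python
# Hedged sketch of executing a parsed weather function call against a real
# HTTP API. The API key is a placeholder; `call` stands in for model output.
import requests

def get_weather(city: str) -> dict:
    """Fetch current weather from OpenWeatherMap for the given city."""
    resp = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={"q": city, "appid": "YOUR_API_KEY", "units": "metric"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Dispatch a function call the model produced, e.g. from the JSON-parsing step:
call = {"name": "get_weather", "arguments": {"city": "London"}}
if call["name"] == "get_weather":
    data = get_weather(**call["arguments"])
    print(data["main"]["temp"], "°C")
```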

Thanks for the video. Is it possible to add external APIs, or only to go through libraries?

TheRamseven

Appreciate the content. Can't access the Google Colab notebook. Thanks.

jatinkashyap

Is it possible to do function calling using the Qwen 0.5B model?

muhammadadribmahmud

This was exactly what I was wondering just a few days back.

Thanks man, appreciate it ❤

artsofpixel

Hey, are you still active? Could I get some help?

parkersettle

Holy shit, is it just me or does this guy look like Sundar Pichai???
I thought this was a Google demo or something for a second.

tech-genius

Been waiting for a function calling tutorial from your channel 🦁 Today this lion will just watch the video; tomorrow he'll implement it 😅

VijayDChauhaan

Can you show an example of using agents with LangGraph?

Cam

Hi sir, I am Nikit. I'm having trouble choosing the best RAG pipeline. Can you help me with that?

nikitkashyap