Ollama Tool Call: EASILY Add AI to ANY Application, Here is how

🌟 Welcome to our latest tutorial on Ollama Tool Calling! 🌟

In this video, we'll dive deep into using Ollama's tool calling to enhance your applications with function calling. Follow along as we create a weather application that fetches real-time data and integrates it seamlessly with a large language model (LLM).

What You’ll Learn:

• Basics of Ollama Tool Calling: Understand how function calling works.
• Creating a Weather App: Step-by-step guide to building and integrating the app.
• Integrating LLMs: Connect your app to large language models for real-time responses.
• Practical Examples: See how tool calling enhances AI capabilities.

Why Watch?

• Empower Your Projects: Learn to use powerful tools for better AI integration.
• Hands-On Learning: Detailed steps and code snippets provided.
• Expand Your Skills: Perfect for developers looking to enhance their toolset.

By the end of this video, you'll have a functional weather app integrated with Ollama's tool calling, ready to fetch and display real-time weather data. Let's get started! 🚀
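The core pattern from the video can be sketched roughly like this. This is a minimal sketch assuming the ollama Python client's message shapes: the weather function is a placeholder, and the model's tool-call message is hardcoded here so the snippet runs without a live Ollama server.

```python
# Sketch of the Ollama tool-calling pattern. In a real run, `message`
# would come from ollama.chat(model=..., messages=..., tools=[weather_tool]).
import json

def get_current_weather(city: str) -> str:
    # Placeholder: a real app would call a weather API here.
    return json.dumps({"city": city, "temperature": 36})

# Tool schema describing the function to the model (OpenAI-style).
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# A tool-capable model responds with a message like this instead of text:
message = {
    "role": "assistant",
    "tool_calls": [
        {"function": {"name": "get_current_weather",
                      "arguments": {"city": "Paris"}}}
    ],
}

# Execute each requested call with the model-supplied arguments.
for call in message["tool_calls"]:
    args = call["function"]["arguments"]
    print(get_current_weather(**args))
```

The model never runs the function itself; it only emits the name and arguments, and your application does the actual call.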


💡 Subscribe and hit the bell icon to stay updated with the latest in AI and development tutorials!

Timestamps:
0:00 - Introduction to Ollama Tool Calling
0:05 - Overview of Function Calling
0:18 - Why Tool Calling is Important
1:12 - Creating a Weather Application
2:07 - Installing and Setting Up Ollama
3:00 - Coding the Get Weather Function
3:45 - Integrating Function Calling with Ollama
5:54 - Running and Testing the Application

📢 Don’t forget to like, share, and subscribe for more tutorials and AI updates!
Comments

Would you do a video on the best speech-to-speech user interface for LLMs? I'm trying to mirror the GPT-4o setup but with a smaller model I can run entirely locally. Would be great to be able to access it with iOS for mobile use 🙏

Centaurman

I re-subscribed because you returned to the videos I initially subscribed for.

Please add an example that shows how to use multiple tools and how to handle the no-tool case.

MeinDeutschkurs
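A rough sketch of what the requested example might look like: multiple tools dispatched by name, plus a fallback when the model chooses not to call any tool. The function names and message shapes are illustrative, based on the ollama Python client's response format.

```python
# Hypothetical sketch: multiple tools plus no-tool handling.
def get_current_weather(city):
    return f"36°C in {city}"  # placeholder, not a real API call

def get_time(city):
    return f"12:00 in {city}"  # placeholder

tool_mapping = {"get_current_weather": get_current_weather,
                "get_time": get_time}

def handle(message):
    # No-tool case: the model answered in plain text, so return that.
    tool_calls = message.get("tool_calls")
    if not tool_calls:
        return message.get("content", "")
    outputs = []
    for call in tool_calls:
        fn = tool_mapping.get(call["function"]["name"])
        if fn is None:
            continue  # unknown tool name: skip rather than crash
        outputs.append(fn(**call["function"]["arguments"]))
    return "\n".join(outputs)

# Plain-text reply, no tool requested:
print(handle({"role": "assistant", "content": "Hello!"}))
# Tool requested by the model:
print(handle({"role": "assistant", "tool_calls": [
    {"function": {"name": "get_current_weather",
                  "arguments": {"city": "Paris"}}}]}))
```

Dispatching through a name-to-function dict also covers the point raised below about checking the tool name when several tools are registered.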

Good video!
But if you give multiple tools to Ollama, you also need to check the tool name in the response to make sure you call the right one!
Keep it up!

jacopoguzzo

👍. But it would be better to abstract the function call as you did for the argument. 😉

tool_mapping = {"get_current_weather": get_current_weather}

for tool_call in tool_calls:
    selected_tool = tool_mapping[tool_call["function"]["name"]]
    tool_output = selected_tool(tool_call["function"]["arguments"])
    print(tool_output)

=> The current temperature in {'city': 'Paris'} is: 36°C

fa-ip

Great video!!! Can you make a video where you create custom tools?

maalonszuman

Hi Mervin, can you do a tutorial for the new llama3-Groq-70b-tool model?

Automan-AI

You mentioned that this is possible with "any LLM" at 1:48. That is incorrect. Only models which are specifically trained on tool calling can work with this integrated approach for function calling.

smorty

@MervinPraison How do I add this Ollama tool in CrewAI?

AshokKumar-mqt

@MervinPraison
Hello Mervin, it would be very nice if you covered how to integrate claude-engineer, but with Ollama models instead.
Thanks for everything

benoitcorvol

How does the model know how to extract the city name from your prompt? What if you are asking a generic question about Toronto? Will it know not to call the function? Would be great to show an example application where you have a prompt classifier to determine which tool to call.

jrfcs

I'm trying to add the API call result to the conversation messages, to continue the conversation with llama3.1.
In ChatGPT it's something like this:
{
    "role": "tool",
    "name": "get_current_weather",
    "content": "{\"temperature\": 18}",
    "tool_call_id": "jijjhgg9jimgvlme"
}
But I don't know if "role": "tool" is supported by the Ollama API.
Does anyone know how to do this?

josejimena
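For what it's worth, Ollama's chat API does accept "role": "tool" messages for feeding results back to the model, though its message format differs slightly from OpenAI's (no tool_call_id field). A rough sketch, assuming the ollama Python client; the model name is illustrative, and the actual ollama.chat call is commented out since it needs a running server:

```python
# Sketch: continuing a conversation after a tool call with Ollama.
import json

messages = [
    {"role": "user", "content": "What is the weather in Paris?"},
    # The assistant's tool call, as returned by a previous chat round:
    {"role": "assistant", "tool_calls": [
        {"function": {"name": "get_current_weather",
                      "arguments": {"city": "Paris"}}}]},
    # Feed the tool's result back to the model as a tool message:
    {"role": "tool", "content": json.dumps({"temperature": 18})},
]

# A follow-up call would then produce a natural-language answer:
# import ollama
# response = ollama.chat(model="llama3.1", messages=messages)
# print(response["message"]["content"])
```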

Great info Mervin! Do you need to be exact with the chat question? Or can the LLM figure out you want the weather and interact to give you a result? For example — user to LLM: "I was wondering about the weather." LLM, aware of its tooling: "I can give you the current weather in a city if you provide it for me." User: "Melbourne, Australia." I would love to know about this sort of interaction with LLMs and tooling.

LanceWhite

When I run the script, tool_calls does not appear. Why do you think that would be?

enesgucuk

Sir, please make a video on how to access Ollama via a public IP address using RDP, so we can host Ollama on RDP and access it from anywhere.

If anyone knows how, please help me.

joshimuddin

Not a very good example... you can get the same result with just code, without ever needing an AI. If we have to use AI, then at least use it in a good example that actually does something code alone can't do.

InsightCrypto

Apparently there's code execution as well. Curious if you can do it in a separate Docker container and have it automatically generate the requirements.txt file for libraries.

iukeay