How Good is LLAMA-3 for RAG, Routing, and Function Calling

How good is Llama-3 for RAG, query routing, and function calling? We compare the capabilities of the 8B and 70B models on these tasks, using the Groq API to access them.
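The RAG setup discussed in the video can be sketched roughly as below. This is a hedged illustration, not the video's notebook: the package names (`llama-index`, `llama-index-llms-groq`), the Groq model ids, the `GROQ_API_KEY` environment variable, and the helper names are assumptions, and building the index also requires an embedding model (OpenAI's, by default, in LlamaIndex).

```python
# Sketch: RAG with LlamaIndex backed by Groq-hosted Llama-3.
# Assumes the `llama-index` and `llama-index-llms-groq` packages and a
# GROQ_API_KEY environment variable. Model ids follow Groq's naming.
import os

# Map the two sizes compared in the video to assumed Groq model ids.
GROQ_LLAMA3 = {
    "8b": "llama3-8b-8192",
    "70b": "llama3-70b-8192",
}

def groq_model_id(size: str) -> str:
    """Return the Groq model id for a Llama-3 size ('8b' or '70b')."""
    return GROQ_LLAMA3[size.lower()]

def build_query_engine(docs_dir: str, size: str = "70b"):
    """Build a simple RAG query engine over a directory of documents."""
    # Imports kept local so the id helper above works without llama-index.
    from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
    from llama_index.llms.groq import Groq

    Settings.llm = Groq(model=groq_model_id(size),
                        api_key=os.environ["GROQ_API_KEY"])
    docs = SimpleDirectoryReader(docs_dir).load_data()
    index = VectorStoreIndex.from_documents(docs)
    return index.as_query_engine()
```

Swapping between the 8B and 70B models for comparison then only requires changing the `size` argument.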

Signup for Advanced RAG:

LINKS:
Notebooks

TIMESTAMPS:
[00:00] LLAMA-3 Beyond Benchmarks
[00:35] Setting up RAG with LlamaIndex
[05:15] Query Routing
[07:31] Query Routing
[10:35] Function Calling [Tool Usage] with Llama-3
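The query-routing segments above pick a query engine per question; in LlamaIndex this is done with an LLM-based selector (e.g. a router query engine). The toy sketch below only illustrates the routing decision itself with a keyword heuristic; the function and engine names are illustrative, not from the video.

```python
# Toy illustration of query routing: choose a query engine based on the
# question. A real router would ask the LLM to pick; this keyword
# heuristic just shows the control flow being demonstrated.
def route_query(question: str) -> str:
    """Return which (hypothetical) engine should answer the question."""
    summary_cues = ("summarize", "summary", "overview", "overall")
    if any(cue in question.lower() for cue in summary_cues):
        return "summary_engine"   # whole-document summarization path
    return "vector_engine"        # top-k retrieval path for specifics
```

For example, "Summarize the report" would be sent to the summarization path, while "What year was the company founded?" would go to retrieval.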

All Interesting Videos:

Comments

Thank you, bro. I switched the LLM in my RAG pipeline to Llama-3 8B today, and it is performing really well.

shameekm

I found that llama-3-70b from Groq does not do as well on the test RAG task I ran as a local version does, so they might have quantized it heavily on Groq.

jasonkwan

Would love to see a reliable way to use function calling with a completely local model.

I saw a fine-tuned model on HF designed for function calling, but users said it had issues.

Does anyone know if this has been done locally, relatively reliably?
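Regardless of which local model is used, the client-side half of function calling follows the same pattern: the model emits a structured tool call and the application dispatches it to a real function. A minimal sketch of that dispatch, with hypothetical tool and function names, could look like:

```python
# Sketch of the client-side function-calling loop: the model is assumed
# to emit a JSON tool call like {"name": ..., "arguments": {...}}, and
# the client runs the matching Python function. Tool names are made up.
import json

def get_weather(city: str) -> str:
    """Stub tool standing in for a real API call."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch_tool_call(raw: str) -> str:
    """Parse the model's tool-call JSON and invoke the named tool."""
    call = json.loads(raw)          # model output, assumed valid JSON
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])
```

The hard part with local models is getting them to emit that JSON reliably; the dispatch side stays this simple either way.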

mchl_mchl

No module named 'packaging' - it does not work on Windows or in WSL.

csowmje

Can I run an LLM on a laptop with 4 GB of RAM and a 16-bit processor? Please tell me.

Content_Supermarket