Mastering Llama3: The Definitive Guide to Ollama & Groq Provisioning and Leveraging LlamaIndex

In this video, we cover how to use the open-source Llama3 model with LlamaIndex. We will provision Llama3 using both Ollama and Groq, and then compare execution times to see which one performs better.
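As a quick setup sketch of the Ollama side (the model tag `llama3` comes from Ollama's model library; the REST call uses Ollama's documented local API on port 11434 — adjust if your setup differs):

```shell
# Download the llama3 weights from Ollama's model library (quantized, a few GB)
ollama pull llama3

# Run a one-off prompt in the terminal
ollama run llama3 "Why is LlamaIndex useful?"

# Or call Ollama's local REST API directly (served on port 11434 by default)
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Why is LlamaIndex useful?", "stream": false}'
```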

This video is part of a full-fledged LlamaIndex course published on Udemy.
Please use the link below to go through the LlamaIndex tutorial from scratch:
*********************************************************************

Chapters :
***********************************************
0:00 How to get started with Ollama?
0:58 Ollama's model library
1:32 How to run Ollama's llama3 model locally?
2:18 Sample Demo application using Ollama's llama3
2:45 Build an application using LlamaIndex with Ollama's llama3
8:35 Introduction to Groq
9:25 How to generate an API key in GroqCloud
10:23 Build an application using LlamaIndex with Groq's llama3
12:22 Compare execution time between Ollama's llama3 and Groq's llama3
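The final chapter's execution-time comparison can be sketched as a small timing harness. The two model functions below are hypothetical stand-ins so the sketch runs on its own; to reproduce the video's comparison, swap in real LlamaIndex LLM clients for Ollama's local llama3 and Groq's hosted llama3:

```python
import time

def time_completion(llm_call, prompt):
    """Return (response, elapsed_seconds) for a single completion call."""
    start = time.perf_counter()
    response = llm_call(prompt)
    elapsed = time.perf_counter() - start
    return response, elapsed

# Hypothetical stand-ins (so this runs without any model installed);
# the sleep calls merely simulate inference latency.
def ollama_llama3(prompt):
    time.sleep(0.02)
    return f"[ollama] {prompt}"

def groq_llama3(prompt):
    time.sleep(0.01)
    return f"[groq] {prompt}"

prompt = "What is LlamaIndex?"
_, t_ollama = time_completion(ollama_llama3, prompt)
_, t_groq = time_completion(groq_llama3, prompt)
print(f"Ollama: {t_ollama:.3f}s | Groq: {t_groq:.3f}s")
```

Measuring a single call each way is enough to show the gap, though averaging over several prompts gives a fairer comparison.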

Please refer to the code here :
***************************************************

Please check other videos on Generative AI :
************************************************

Looking for an interview guide on Kafka?
************************************************
If you are preparing for an interview on real-time processing and looking for technical questions on Kafka and Spring Kafka, look no further and grab your copy of "75 Interview Questions on Kafka and Spring Kafka".

Comments :

The real reason we usually choose Llama3 8B instead of 70B is not the storage size. It is the minimum RAM your machine needs.
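The commenter's point can be backed with rough arithmetic: the weights alone must fit in memory, and bytes-per-parameter dominates the estimate. The figures below are approximations (activations and KV cache add more on top), and the ~4-bit case reflects the quantized builds Ollama typically ships by default:

```python
def weight_memory_gb(params_billion, bytes_per_param):
    """Approximate RAM (GiB) needed just to hold the model weights."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

# fp16 (2 bytes/param) vs. ~4-bit quantization (0.5 bytes/param)
for name, params in [("llama3-8b", 8), ("llama3-70b", 70)]:
    fp16 = weight_memory_gb(params, 2)
    q4 = weight_memory_gb(params, 0.5)
    print(f"{name}: ~{fp16:.0f} GiB fp16, ~{q4:.0f} GiB 4-bit")
```

Even 4-bit quantized, the 70B model needs on the order of 30+ GiB for weights alone, while 8B fits comfortably in a typical laptop's RAM.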

mirzaakhena

Lovely explanation. Really clear. Well done.

asgorath