Llama 3.1 + Aider + Ollama: Easily Create Full-Stack Apps without Writing Code | Fully local #ai #llm

In this video, we explore Meta AI's latest open-source model, Llama 3.1, and its integration with Aider to build a fully local full-stack application using Ollama. We'll give an overview of the Llama 3.1 model and Aider, then demonstrate how to create powerful full-stack applications without writing any code, working inside Visual Studio Code.
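
To make the fully local setup concrete, here is a minimal Python sketch that sends a prompt to a Llama 3.1 model served by Ollama on its default local port; the prompt text is only an illustrative placeholder.

import requests

# Query the local Ollama server (default port 11434) with the Llama 3.1 model.
# Assumes the model weights are already available locally, e.g. via: ollama pull llama3.1
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",
        "prompt": "Outline the folder structure for a simple full-stack to-do app.",
        "stream": False,  # return one complete JSON response instead of a stream
    },
    timeout=300,
)
print(resp.json()["response"])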

Llama 3.1 comes in three sizes: 405B, 70B, and 8B parameters. The 405B-parameter model is on par with leading closed-source models, making it ideal for building robust applications.
Aider lets you pair program with LLMs, writing and editing code and building applications seamlessly.
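
As a rough sketch of that pair-programming workflow, the snippet below drives Aider from Python through its scripting interface, pointing it at an Ollama-served Llama 3.1 model. The file name and instruction are hypothetical placeholders, and this assumes Aider's documented scripting API and its "ollama/" model prefix.

from aider.coders import Coder
from aider.models import Model

# Use the local Llama 3.1 model served by Ollama (Aider talks to the Ollama
# API base, which defaults to http://127.0.0.1:11434 when running locally).
model = Model("ollama/llama3.1")

# Files Aider is allowed to create or edit in this session (placeholder name).
fnames = ["app.py"]

coder = Coder.create(main_model=model, fnames=fnames)

# One natural-language instruction; Aider writes and edits the code itself.
coder.run("Create a minimal Flask app with a single /health endpoint.")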

#llama3.1 #opensource #generativeai #metaai #localllm

Links:
Comments

Thanks for the informative video 👍

How much RAM does your graphics card have?

How much RAM does the 8B-parameter Llama model need?

Did you use float16 or float32?

solank

Can you create an app with the DeepSeek Coder API as well?

joshwong

Hi, at 7:26 you said you wanted to pull Llama, but you typed "run llama". Had you already installed llama3.1:8b, or is this the step that installs it? Please clarify this, sir...

KABILAN.V.S

Is it possible to serve the Llama 3.1 model to Ollama from Google Drive storage?

SamirSELLAMI