L2: Ollama | Run LLMs locally

Running large language models (LLMs) on your local machine can be incredibly useful, whether you're experimenting with LLMs or developing more advanced applications. However, setting up the necessary environment and getting LLMs to work locally can be quite challenging.

So, how can you run LLMs locally without the usual complications? Meet Ollama—a platform that simplifies local development with open-source LLMs. Ollama packages everything you need to run an LLM, including model weights and configuration, into a single Modelfile.
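For a sense of what that looks like, here is a minimal Modelfile sketch; the base model, parameter value, and system prompt are illustrative placeholders, not taken from the video:

FROM llama3
PARAMETER temperature 0.7
SYSTEM "You are a concise, helpful assistant."

You would then build and run it with ollama create my-assistant -f Modelfile followed by ollama run my-assistant (my-assistant is a made-up name).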

In this tutorial, we'll explore how to get started with Ollama to run LLMs locally. You can visit the model library to see the list of all supported model families. By default, the model tagged latest is downloaded. Each model's page provides additional information, such as its size and the quantization used.
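For example, assuming the llama3 family is in the library (the tags here are illustrative; check each model's page for the tags that actually exist):

ollama pull llama3        # downloads the variant tagged latest
ollama pull llama3:8b     # requests a specific tag instead
ollama run llama3         # starts an interactive chat, pulling first if needed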

#llms #ollama #generativeai #genai #languagemodels #largelanguagemodels #deeplearning
Comments

Keep sharing such knowledgeable content, ma'am.

arnavthakur

Finally, we got LLM lectures from you. I would like it so much if you started making in-depth LLM lectures just like you did for Computer Vision. I am excited and looking forward to them.

vipulsarode

Great job, Madam, very informative! Ollama simplifies local LLM setup effectively. 👍

GianmarcoGoycocheaCasas

Great video! Please make more videos related to gen AI.

bitcoinboss

Hi, I really appreciate this video and I'm interested. First, can I ask for your laptop/desktop specifications? I'm interested in AI, especially object detection, and I tried to build a program and run it on my laptop, but the results were bad. The training results in Colab are good, but when I plug the trained model into my program the results are bad. Do the specifications of our device really have that much influence on the performance of the programs we create?

nazaruddinnurcharis

Please make a video on all the gen AI architectures, with a dedicated one for each specific model, or on how to build a GPT model from scratch, plus agents and RAG.

me

Amazing. I hope you can also make a video on how to train LLMs on a custom dataset, like custom PDFs, and then build prompts for it.

mohammadyahya

Very useful tutorial, very good. 👍 One query on this: can we integrate and use both Ollama and Streamlit together to get output from other LLMs like Gemma or Phi-3?

puneetsachdeva
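
One way this could be wired together, as a minimal sketch: it assumes the ollama Python client and Streamlit are installed (pip install ollama streamlit) and that a model such as phi3 has already been pulled; the model name and the file name app.py are just examples.

import ollama
import streamlit as st

st.title("Local LLM chat via Ollama")
prompt = st.text_input("Your question:")
if prompt:
    # Send the prompt to a locally running model; any pulled model tag works here.
    response = ollama.chat(
        model="phi3",
        messages=[{"role": "user", "content": prompt}],
    )
    # Display the model's reply in the Streamlit app.
    st.write(response["message"]["content"])

Save it as app.py and launch it with streamlit run app.py while the Ollama server is running.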

I get the following error:
(env_langchain1) run llama3.1:405b
'ollama' is not recognized as an internal or external command,
operable program or batch file.

AlexHsrw
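
For anyone hitting the same message: it usually means Windows cannot find the ollama executable, either because Ollama is not installed or because its folder is not on PATH. After installing Ollama and reopening the terminal, the full command would be:

ollama run llama3.1:405b

Note that the 405B variant is far too large for a typical laptop; a smaller tag is more realistic for local use.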

Dear Madam, I have benefited a lot from your YouTube lectures. My laptop runs Windows 10, and I'm trying to download Ollama but it won't download; I don't know if there is something else I can do. I need your help. It's a really good lecture you gave us. Looking forward to hearing from you.

Kishi

Can we make a chatbot application using Streamlit with Ollama? If possible, please make a video on that part.

SHIVAMKUMAR-lfr

Why would I use this if I can go to Meta or Gemini and run it there?

FirstNameLastName-fveu

I have a doubt, ma'am: when we say "run", we are already using ChatGPT on our devices, and it processes on cloud services. How is that different from running models locally?

me

Can you please share your PC configuration?

amitsingha

Madam, can you implement person re-identification using ResNet and YOLO in Google Colab next?

velugucharan

Please make a video on this, but using Python.

surflaweb