Ollama Course - 3 - How to use the Ollama.com site to Find Models


Key topics covered:
- Understanding model parameters, tags, and layers
- Decoding model information (context length, architecture, etc.; see the sketch after this list)
- Tips for comparing and selecting models
- Insights on model performance and benchmarks
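
For readers following along at a terminal, here is a minimal sketch of how the site's tag, parameter, and context-length details map onto the CLI. The model name and tag are examples, not recommendations from the video:

  # Pull a specific tag rather than the default; check the model's
  # Tags page on ollama.com for the tags that actually exist.
  ollama pull llama3.1:8b

  # Print the model's details: architecture, parameter count,
  # context length, quantization, template, and license.
  ollama show llama3.1:8b

  # The same information is available from the local REST API.
  curl http://localhost:11434/api/show -d '{"model": "llama3.1:8b"}'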

Whether you're new to Ollama or looking to optimize your model selection, this video provides valuable insights to help you become an Ollama pro. Discover how to make informed decisions about which models to use for your projects, considering factors like performance, size, and specific capabilities.

Don't miss this essential guide in your journey to mastering Ollama! Subscribe for more videos in this free course and level up your AI skills.

#Ollama #AIModels #MachineLearning #TutorialSeries

(They have a pretty URL because they pay Discord at least $100 per month. Help get more viewers to this channel and I can afford that too.)

00:00 - Start
01:13 - Experimenting with a slow connection
02:39 - What's in the list
03:19 - Each model's info
04:02 - Parameters
05:44 - More details
06:59 - Ollama Layers vs Docker
07:51 - What's needed to use a model
08:36 - Templates
09:22 - Context Size (see the sketch after this list)
10:21 - Tags
11:26 - How to find the right model
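
As a companion to the Templates and Context Size chapters, here is a minimal sketch of overriding a model's context window with a Modelfile. The model name and values are illustrative:

  # Build a variant with a larger context window; a larger num_ctx
  # makes the loaded model use more memory, so increase it cautiously.
  printf 'FROM llama3.1:8b\nPARAMETER num_ctx 8192\n' > Modelfile
  ollama create llama3.1-8k -f Modelfile
  ollama run llama3.1-8k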
Comments

After watching this video, I can't stop singing "Das Model" from Kraftwerk. Thanks, Matt; this course is awesome.

fabriai

Loved the hints to choose the best model for the problem you want to solve

emen

Thanks for these, Matt. Super useful. I hope you'll continue through to Open WebUI and its more advanced features.

vexy

Hey, Matt. This is a spot-on topic in a highly desirable and necessary course. Thank you. Just one question: you mentioned being careful when setting the context size because you might run out of memory. Is that CPU or GPU memory? And if you have a bit of GPU VRAM, does main memory get used for more than what a program would normally use for program storage and temporary data?

jimlynch
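
A related illustration, not an answer from the video: context size can also be set per request through the local API, and num_ctx is the knob that drives that memory use. The model name and value are examples:

  # Set the context window for one request; num_ctx largely determines
  # how much memory the loaded model and its KV cache need.
  curl http://localhost:11434/api/generate -d '{
    "model": "llama3.1",
    "prompt": "Why is the sky blue?",
    "options": {"num_ctx": 4096}
  }'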

Thanks for taking time to make these videos!

squartochi

Thank you very much Matt, this is really helpful

andrewzhao

TBH I thought it would be a boring, basic subject 😅 boy, was I wrong!

Thanks for the video ❤ keep it up

MoeMan-fw

What would you say is the best model for PDF-to-JSON tasks? :) And is there a way to get the output without line breaks? Greetings

jonasmenter
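
For the line-break half of this question, one possible angle, as a sketch (the model choice and prompt are illustrative, not a recommendation): the Ollama API has a JSON mode that constrains output to valid JSON in a single response object:

  # "format": "json" constrains the model to emit valid JSON;
  # "stream": false returns one response object instead of chunks.
  curl http://localhost:11434/api/generate -d '{
    "model": "llama3.1",
    "prompt": "Extract vendor, date and total from this invoice text as JSON: ...",
    "format": "json",
    "stream": false
  }'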

It's too bad; we used to be able to filter by newest models, including the user-submitted ones. It was fun discovering new user models, but now there's no way to do that.

FlorianCalmer

Great stuff, as usual I'd say. So, other than a 'hit and miss' approach… is there any way you might suggest for hunting down the right model to use with Fabric, for instance?

unokometanti

"If, for example, I have more than one model downloaded, and one is chat, another is multimodal, and another generates images, can I make it so that Ollama chooses which model to use based on a prompt, or does it by default use the one you've chosen with the `ollama run` command?"

spacekill

Matt, thanks for your content. Is there an Ollama model that you can use to check for plagiarism? I am creating short articles using ChatGPT. Another question: is there a command that can interrupt llama3.1 while it's outputting an answer? /bye doesn't work.

CrazyTechy

How can I download a model in .gguf format locally? My reason is that I am transferring the model to a computer used remotely in a health facility with no phone or internet network.

mpesakapoeta
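
For the offline-transfer scenario above, one possible route (the filename is illustrative) is to download a GGUF file on a connected machine, carry it over on removable media, and import it with a Modelfile:

  # On the offline machine, with the .gguf file copied alongside:
  printf 'FROM ./mistral-7b-instruct.Q4_K_M.gguf\n' > Modelfile
  ollama create mistral-offline -f Modelfile
  ollama run mistral-offline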

Hello sir, can you explain how to install CUDA drivers and make Ollama use the GPU for running models?

muraliytm
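
Driver installation is OS-specific, but as a quick sketch (assuming an NVIDIA GPU on a Linux machine where Ollama runs as a systemd service), these checks show whether Ollama is actually using the GPU:

  nvidia-smi                            # is the driver installed and the GPU visible?
  ollama ps                             # the PROCESSOR column shows GPU vs CPU placement
  journalctl -u ollama | grep -i gpu    # server logs list any detected GPUs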