The Ollama Course - Using the CLI

Welcome back to the Ollama course! In this video, we dive deep into the command line interface (CLI) of Ollama, exploring all the powerful options and commands available. Whether you're a beginner or looking to enhance your skills, this free course on YouTube will guide you to become an Ollama Pro.
🔍 What You'll Learn:
1. Overview of Ollama CLI Commands: Learn how to list all commands using ollama -h or ollama --help.
2. Using the "serve" Command: Understand the importance of the "serve" command and how to use it effectively.
3. Creating New Models: Step-by-step guide on creating new models with ollama create, including using different model weights and prompts.
4. Running Models: Detailed instructions on running models with ollama run and exploring additional options like --format json and --verbose.
5. Managing Models: Learn how to pull, list, copy, and remove models using ollama pull, ollama list, ollama cp, and ollama rm (a sample session follows this list).
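For quick reference, here is a rough sample of the commands covered (model names are only examples and may differ from the ones used in the video):

ollama --help                      # list every command and its flags
ollama serve                       # start the Ollama server in the foreground
ollama run llama3.1 --verbose      # chat with a model; print timing stats after each reply
ollama run llama3.1 --format json  # ask the model to respond in JSON
ollama show llama3.1               # print a model's details (parameters, template, license)
ollama pull mistral                # download a model from the registry
ollama list                        # show the models currently on disk
ollama cp llama3.1 my-backup       # copy a model under a new name
ollama rm my-backup                # delete a model
ollama push myuser/my-model        # publish a model to your namespace on ollama.com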
📂 Hands-On Examples:
• Creating a model inspired by the Swedish Chef from the Muppets.
• Using model weights from Hugging Face and converting safetensors to GGUF (see the sketch after this list).
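As a minimal sketch (the Modelfile contents, model names, and file paths here are illustrative assumptions, not necessarily the exact ones from the video):

# Modelfile
FROM llama3.1
SYSTEM You are the Swedish Chef from the Muppets. Answer everything in his voice.
PARAMETER temperature 1

# Build the model and chat with it
ollama create swedishchef -f Modelfile
ollama run swedishchef

# To import Hugging Face weights instead, point FROM at the downloaded safetensors
# directory, or at a .gguf file produced by a converter such as llama.cpp's
# convert_hf_to_gguf.py, then run ollama create again:
# FROM /path/to/downloaded-safetensors-directory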
💡 Pro Tips:
• Experimenting with environment variables.
• Understanding the differences in model management across Windows, Linux, and Mac (see the examples below).
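For example, two environment variables worth experimenting with (the values are placeholders, and the defaults noted reflect my understanding of Ollama's usual behaviour):

OLLAMA_HOST=0.0.0.0:11434 ollama serve         # change the address/port the server listens on (default 127.0.0.1:11434)
OLLAMA_MODELS=/path/to/big-drive ollama serve  # change where model files are stored
# By default, models live under ~/.ollama/models on macOS and Linux (when run as your
# user) and under C:\Users\<you>\.ollama\models on Windows.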
👍 Why Subscribe?
Stay updated with our free Ollama course every Tuesday and more in-depth videos every Thursday. By subscribing, you'll be the first to know about new videos and advanced tips to make AI a part of your life using Ollama.

Links & Resources:

Tags: #Ollama #AI #MachineLearning #CommandLine #Tutorial #FreeCourse #HuggingFace #ModelCreation #AIPro #TechTutorials

(They have a pretty URL because they are paying at least $100 per month for Discord. Help get more viewers to this channel and I can afford that too.)

00:00 - Start
00:47 - The CLI Help
00:58 - Serve
01:47 - Experimenting with Serve
02:08 - Create
02:34 - Swedish Chef
03:35 - Import SafeTensors
05:22 - Show
05:44 - Run
06:12 - Pull
06:42 - List
06:53 - Copy
07:17 - Remove
07:28 - Push
Comments

Love this. I think we’ve all been doing just-in-time learning to run and keep up to date with what’s happening every couple of weeks. Great to tear it back to the foundations, Matt.

shuntera

Thank you for this awesome course, I'm enjoying it!

ABatorfi

I'm really enjoying this series. Thanks.

jimlynch

Hey Matt! Off topic comment but I guess I'm feeding the ol' YouTube algorithm anyway!

I haven't watched your entire backlog so apologies if you've already covered this, but I'd love to see some content / videos on the following topics:

1. How you can use Ollama in a production environment: topics around infrastructure, reasonable techniques (e.g. handing off processing to async jobs when possible), cost, etc. I'm not sure how common this use case is, but I'm evaluating using something like Llama 3.1 to help summarize some potentially very large text files and weighing the cost differences between using something turnkey like OpenAI's APIs vs. figuring out hosting myself (well, my company). There seems to be a lot less out there on production hardening some of these open source models (or I just haven't been paying attention!).

2. A "state of the union" high level overview of the options available to software developer new to using AI. This you have covered in a lot more detail in various forms, but an overview of what tools are actually at a persons disposal in terms of trying to use AI to solve some problem. When I first started looking at this stuff I thought the only options I had were buying a bunch of super computers to train models and learning a lot about doing matrix multiplication. But we have RAG, we have "fine tuning", we have modifying system prompts... a sort of high level overview of what a layperson can do, and perhaps where reasonable off-ramps for more advance use cases are would be super helpful (i.e. when do I need to brunch up on my linear algebra? :))

Thanks for your work!

shoelessone

Wonderful video, Matt. Thanks so much for sharing this.

fabriai

Excellent content Matt! Congrats! Keep on going.

artur

I love your videos! Your explanations are amazing, thank you!

derekf

this is amazing, super clear, thank you!

federicoarg

Man, I love it. I've already subscribed. Something I'd really love to know is how to store my local Ollama models on an external hard drive on a Mac. As you know, Macs don't have much space, so I bought a special hard drive that runs at 40G/sec to hold models and other stuff, and I'd rather have the models there than on my internal drive. Thanks for the great content and explanations.

ISK_VAGR

Ah... ollama serve... LOL, I wasted a week until I realized it was a user issue on Linux. I felt so stupid having duplicate models and things... This is a really good video; anyone new to Ollama should watch it. If I had watched this before, I wouldn't have wasted a week just to realize it was a simple user issue...

NLPprompter

Please share the link to the video about reducing model size for specific tasks, for example only weather, since it wouldn't need the whole context for that.

mpesakapoeta

Hi Matt, thank you for more amazing content.
I'm working with Ollama and other tools available from the community to develop some solutions for my company.
I need some help from a professional consultant for this job.
Could you work with me, or maybe recommend someone who can help me do it?

PBrioschi

I have a noob question: if anybody can upload a model to Ollama, is it possible for a malicious user to upload malware disguised as a model? And are there measures to prevent such a scenario?

pythonantole

I wouldn’t recommend creating models with the legacy Q4_0 quant types; they’re deprecated and are worse quality than K quants (or IQ quants if you’re running with CUDA).

sammcj

Removing models is the most annoying part because you have to name them exactly. I wish they made it easier to just select and delete via a GUI, or list the models and remove one by number.

vulcand

What location do I run that Hugging Face model download command from? And where does it download to? The same location as the others? Where's that?

JNET_Reloaded