Using Ollama for Local Large Language Models


With Ollama, you can run the latest Mistral, Mixtral, Llama, and Codellama models locally, without an H100 cluster.
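Once Ollama is running, it exposes a simple REST API on your machine. Below is a minimal sketch of calling it from Python, assuming Ollama is installed and serving on its default port (11434) and that the mistral model has already been pulled with `ollama pull mistral`; the prompt text is just a placeholder.

```python
import requests

# Minimal sketch: query a locally running Ollama server.
# Assumes Ollama is serving on its default port (11434) and that
# the "mistral" model has already been pulled (`ollama pull mistral`).
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "mistral",  # any locally pulled model tag works here
    "prompt": "Explain deep Q-learning in two sentences.",
    "stream": False,     # return one JSON object instead of a token stream
}

response = requests.post(OLLAMA_URL, json=payload, timeout=120)
response.raise_for_status()

# With stream=False, the reply is a single JSON object whose
# "response" field holds the generated text.
print(response.json()["response"])
```

Leaving "stream" at its default of true instead returns one JSON object per generated chunk, which is what produces the typewriter-style output you see when running a model interactively in the terminal.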

Learn how to turn deep reinforcement learning papers into code:

Get instant access to all my courses, including the new AI Applications course, with my subscription service. For $29 a month you get over 40 hours of instructional content, plus future updates added monthly.

Or pick up my Udemy courses here:

Deep Q Learning:

Actor Critic Methods:

Curiosity Driven Deep Reinforcement Learning:

Natural Language Processing from First Principles:

Just getting started in deep reinforcement learning? Check out my intro-level course through Manning Publications.

Reinforcement Learning Fundamentals:

Here are some books / courses I recommend (affiliate links):

Come hang out on Discord here:

Comments

It has been a while, welcome back! I used to like your content a lot; you used to have the most interesting and advanced teaching.

claudiofernandes

Somewhere out there, an AI that transcribes videos into text is struggling to keep up with Phil's typing prowess!

matthewschneider

You get very good performance, and yet it is not using much of its memory. Does it use the GPU regardless? In general, does the GPU memory size need to match the model's memory consumption? What specs does your server have?

pexx