Local LLMs on Apple Mac - powered by MLX!

In this short video, we walk through how to run large language models directly on your MacBook in 3 lines of code!

Powered by MLX & Hugging Face Hub! 🤗
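A minimal sketch of those three lines, assuming the mlx-lm package (pip install mlx-lm); the model repo below is an illustrative choice, not necessarily the one used in the video:

    from mlx_lm import load, generate

    # First run downloads the weights from the Hugging Face Hub and caches them locally.
    model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")
    print(generate(model, tokenizer, prompt="What is MLX?"))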
Comments

This is fantastic!!! Please keep doing MLX videos!! I'm not a programmer, but I'm trying to learn how to use this stuff, and this is the MOST INFORMATIVE MLX video I have found yet! I would love to see more ways to use MLX (how can I see tokens/s, etc.).
Keep up the great work!

billybob
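On the tokens/s question: assuming the mlx-lm setup from the video, generate() reports prompt and generation speed in tokens-per-second when called with verbose=True:

    from mlx_lm import load, generate

    model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")
    # verbose=True streams the text as it is generated and prints tokens-per-sec at the end.
    generate(model, tokenizer, prompt="What is MLX?", verbose=True)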

Great video! I'm having a hard time finding where the files are stored/cached locally.

bachbouch
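On where the files live: models fetched through the Hugging Face Hub are cached under ~/.cache/huggingface/hub by default (or wherever HF_HOME points). One way to inspect the cache, assuming the huggingface_hub package is installed:

    from huggingface_hub import scan_cache_dir

    # Lists every cached repo with its size on disk.
    for repo in scan_cache_dir().repos:
        print(repo.repo_id, repo.size_on_disk_str)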

I want to know: what are the differences between Ollama and MLX?

alibahrami

Hi, I downloaded the "mistralai/Mistral-7B-v0.1" model and tried generating a response, but it takes about 20 minutes. Any idea why?

andres_junik
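A likely cause (an assumption, since the hardware isn't stated): the unquantized Mistral-7B weights are roughly 14 GB in fp16, so on a Mac with 8 or 16 GB of unified memory macOS swaps to disk and generation crawls. Quantizing to 4 bits shrinks the model to around 4 GB, e.g. with mlx-lm:

    from mlx_lm import convert, load, generate

    # Quantizes to 4-bit and writes the result to ./mlx_model by default.
    convert("mistralai/Mistral-7B-v0.1", quantize=True)
    model, tokenizer = load("mlx_model")
    print(generate(model, tokenizer, prompt="Hello"))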