Run DeepSeek R1 Locally. Easiest Method


Links:

Summary
DeepSeek R1 is a free and open-source LLM that rivals even the best paid AI models like ChatGPT o1. I'll show you how to use it on the web, and how to run it locally to avoid all the privacy concerns. The easiest method is through LM Studio.
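Once a model is downloaded and loaded in LM Studio, you aren't limited to its chat window: LM Studio can also serve the model through an OpenAI-compatible local API (by default at http://localhost:1234/v1). Below is a minimal sketch of querying it from Python; the model identifier and the prompt are placeholders, so swap in whatever R1 distill you actually downloaded.

```python
# Minimal sketch: query a DeepSeek R1 distill served by LM Studio's local server.
# Assumes the server is running on the default port; the model name is a placeholder.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's OpenAI-compatible endpoint
    api_key="lm-studio",                  # any non-empty string works for a local server
)

response = client.chat.completions.create(
    model="deepseek-r1-distill-qwen-7b",  # placeholder; use the identifier LM Studio shows
    messages=[{"role": "user", "content": "Explain, step by step, why the sky is blue."}],
    temperature=0.6,
)

print(response.choices[0].message.content)
```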

Chapters
0:00 Intro
0:23 DeepSeek Website
1:16 Perplexity & Groq
1:33 LM Studio
2:56 Futurepedia
Comments

2:12 Bro really flexed the 5090 in a tutorial 😭🙏

NithinBalakrishnanIsOnline

This is the first "run DeepSeek R1 locally" video I've seen that uses LM Studio, and I don't know why. It looks really easy to set up with minimal steps. I had already downloaded it but never saw anybody use it. Thanks for showing it in use and helping me confirm it won't present any problems.

phosgene

Great tutorial. Got me up and running in 10 mins. Sidenote: crazy how Nvidia sends out 5090s like candy but regular consumers can't get their hands on one.

jojimonty

I got a different computer recently, and I have been looking for your video under a new Gmail account for the last 3 weeks. Because I did not remember the title, I could not find it, but now I have. Thank you for making this video. I truly appreciate it!

no-winteronly_snow

Super interesting. I got it running, and the first question I asked was the common "what is the meaning of life?" To answer it, it didn't even use chain of thought.

I had to ask it to use chain of thought, and then it did. Do you know if there's a setting to make it always use CoT?

petersalazar
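(If you're experimenting through LM Studio's local server rather than the chat UI, one workaround for the question above is a system prompt that asks the model to reason step by step before answering. A rough sketch under the same assumptions as the earlier example; the endpoint, model name, and prompt wording are placeholders, and R1 distills aren't guaranteed to honor it every time.)

```python
# Rough sketch: nudge the model toward always showing chain-of-thought via a
# system prompt (assumes LM Studio's local server; behavior varies by model/quant).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

reply = client.chat.completions.create(
    model="deepseek-r1-distill-qwen-7b",  # placeholder model identifier
    messages=[
        {"role": "system",
         "content": "Reason step by step inside <think>...</think> tags before giving a final answer."},
        {"role": "user", "content": "What is the meaning of life?"},
    ],
)
print(reply.choices[0].message.content)
```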

I did everything to a T and set up the configuration exactly as this guy explained, but it just cannot get that question right. The right answer is 28 days, but for me it just says 2 days or 1 day. I am running a Dell Alienware 17 R4 laptop. It's about 10 years old, with 32 GB of RAM and a GTX 1070 GPU with 8 GB of VRAM. I am running the 8B model, and LM Studio shows it is fully compatible with my GPU.

homaassal

This is a very good one! Thanks for sharing how to deploy it on a Mac. Could you please make a tutorial that demonstrates how to deploy it on an iPad or iPhone?

clarkbai

2:12 Poor you, you look so disappointed to have received that 5090! Thanks for taking the bullet for all of us.

badmovi

Where do I get on the Nvidia list for free 5090s?

ModMediaFactory

I have 24 RTX 3070s connected to my motherboard via USB risers (yes, I mine). Is it possible to set this up using all 24 3070s?

ColinCochranT

Which DeepSeek model is better to download?

aperson

I trust Google, Facebook, and OpenAI much less.

ameliatah

How much energy does the reasoning model consume per query?

yugiohonline

Increase your swap file size to run that 70B model; just don't expect speed.

deadeyeduncan

What's wild to me is that a single question fills nearly 300% of your context window. I'm running the 14B model, and that single question fills about 50% of my context. I thought the context window was the same between models... is it larger for larger models?

mikicerise

I am running it locally just fine with Docker. My question is, today there is an update to the WebUI on GitHub. How do I update that code for the container? I am used to running Docker containers in unRAID, but not on Windows. Any insight would be great.

TAGSlays

Is it deliberate that you didn't show the local "thinking" / processing time? Why didn't you compare that, since you mentioned it for the cloud version? How useful is the smaller model compared to the larger one? Granted, it will be different for everyone's hardware, but it would have been helpful to mention the basic specs of the GPU you're running it on and how long it took.

SonicEcho

Ok, I'm looking to run it locally so I don't send any information, but what does LM Studio do? Is it free and open source?

tumacho-dt

Can I run it using the cloud, like GDrive/OneDrive, so it is not directly on my machine?

kiwiqi

Maybe a strange question, but shouldn't it be possible to extend your local setup with a cloud service that provides more CPU/GPU, so you can run large models like this without a massive computer?

dennisbiesma