Replace GitHub Copilot with a Local LLM

If you're a coder, you may have heard of, or may already be using, GitHub Copilot. Recent advances have made running your own LLM for code completions and chat not only possible, but in many ways superior to paid services. In this video you'll see how easy it is to set up and what it's like to use!

Please note that while I'm incredibly lucky to have a higher-end MacBook and a 4090, you do not need such high-end hardware to use local LLMs. Everything shown in this video is free, so you've got nothing to lose by trying it out yourself!
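
If you want a feel for what running your own LLM looks like in practice, here is a minimal sketch, assuming LM Studio's local server is running with a model loaded and listening on its default OpenAI-compatible endpoint at http://localhost:1234/v1; the model name and prompt are placeholders, not anything specific from the video:

import requests

# Ask the locally hosted model for a small coding task via the
# OpenAI-compatible chat completions endpoint that LM Studio exposes.
LOCAL_ENDPOINT = "http://localhost:1234/v1/chat/completions"  # assumed default port

payload = {
    "model": "local-model",  # LM Studio serves whichever model is currently loaded
    "messages": [
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
    "temperature": 0.2,
    "max_tokens": 256,
}

response = requests.post(LOCAL_ENDPOINT, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])

Because the endpoint mirrors the OpenAI chat format, editor plugins that let you override the API base URL can usually be pointed at the same local server.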

Comments

Might want to clarify to potential buyers of new computers that there is a difference between RAM and VRAM. You need lots of VRAM on your graphics card if you want to use "GPU Offload" in the software, which makes it run significantly faster than using your CPU and system RAM for the same task. Great video though.

toofaeded
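
To put a number on the comment above: whether a model fits entirely in VRAM is what decides if full GPU offload is possible. Here is a rough sketch, assuming an NVIDIA GPU with nvidia-smi on the PATH; the model path is a hypothetical placeholder, and the check is only a coarse heuristic:

import os
import subprocess

# Hypothetical path to a downloaded quantized model file.
MODEL_PATH = "models/example-model.Q4_K_M.gguf"

model_gib = os.path.getsize(MODEL_PATH) / 1024**3

# Total VRAM reported by the first NVIDIA GPU, in MiB.
vram_mib = int(
    subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.total", "--format=csv,noheader,nounits"],
        text=True,
    ).splitlines()[0]
)
vram_gib = vram_mib / 1024

# Leave roughly 20% headroom for the KV cache and other runtime allocations.
if model_gib * 1.2 < vram_gib:
    print(f"~{model_gib:.1f} GiB model should fit in {vram_gib:.1f} GiB of VRAM for full GPU offload.")
else:
    print(f"~{model_gib:.1f} GiB model will likely need partial offload into system RAM.")

The 20% headroom is only a guess; the real requirement depends on context length and the runtime you use.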

A lot of great knowledge compressed in this 5 min video. Thank you!

programming

3:15 dude, you illustrated the 'break tasks into smaller steps and then build up' thing PERFECTLY. Well done! It's surprisingly hard to do this. I think a lot about how to do it programmatically.

phobosmoon

Your channel is going to blow up and you deserve it! Fantastic info, concise, and you even gave me hints about things I may not have known about, like the LM Studio UI hint about Full GPU Offload. Also an interesting take on paying for cloud spellcheck; I'd agree with you!

RShakes

Your last question was amazing. Never thought about it that way.

RobertLugg

Clear, concise explanation of the pros and cons of using a local LLM for code assist.

mikee

Excellent work. I was planning to write my own VS Code extension, but you just saved me a great deal of time. Thank you!

levvayner

Only 538 subscribers?

Such good content. Thank you.

MahendraSingh-kole

So much knowledge compressed in only 5 minutes. Great job!

I will give it a try and see if it's possible to make it faster on Apple silicon laptops using MLX.

Gabriel-iqug
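
For anyone curious about the MLX route mentioned in the comment above, below is a minimal sketch using the mlx-lm package, assuming it is installed (pip install mlx-lm) and that an MLX-converted model is available; the model identifier and prompt are only examples, not anything from the video:

# Minimal sketch of generating text with Apple's MLX on Apple silicon
# via the mlx-lm package. The model repo below is an example of an
# MLX-converted community model.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

prompt = "Write a Python function that reverses a string."
text = generate(model, tokenizer, prompt=prompt, max_tokens=200)
print(text)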

Hey Matthew. First video of yours that YouTube recommended, and I liked and subbed. I tried Ollama with a downloaded model and it ran only on the CPU, so it was staggeringly slow, but I'm very tempted to try this out (lucky enough to have a 4090). I'm also using AWS Code Whisperer as the price is right, so I'm thinking your suggestion of a local LLM + Code Whisperer might be the cheap way to go. Great pacing of the video, great production quality, you have a likeable personality, you're factual, and you didn't waste the viewer's time. Good job. Deserves more subs.

Aegilops

Thanks for sharing. I think this is very important when it comes to data security.

madeniran

This is great. I didn't know I could use LM Studio like this.
Also, FYI, there's a free alternative to Copilot called Codeium.

Franiveliuselmago

Keep up the momentum and you will arguably be among the most well-organized content creators. I really liked your explanation and demonstration process.

square_and_compass

Great video and tutorial! Very well explained.

rosszeiger

Thank you for covering these topics - very informative!

therobpratt

Been using gen.nvim and Ollama for a while on an M1 MacBook. Will try this approach.

TheStickofWar

Great content! Straight to it, and I learnt something.

paulywalnutz

Great video, Matt! Funny that I came across this video. I have been playing around with integrating LLMs with Neovim, so there's some helpful content here! Hope all is well!

bribri

Great video. Just earned my subscription. I'm a heavy Copilot user and have a machine with a great GPU (a little short of the 4090's VRAM though), so I'll be keen to see how your testing of completions goes (I'll have time to play myself when I'm back from a long-awaited vacation).

Gunzy

Hey, this is excellent! This is exactly what I was looking for recently.

mammothcode