Boost Productivity with FREE AI in VSCode (Llama 3 Copilot)

🚀 Dive into the future of coding with our detailed guide on integrating Llama 3 into your Visual Studio Code setup! In this video, we walk you through downloading and setting up Llama 3 locally to create a private co-pilot, enhancing your coding efficiency. Learn how to automate code writing, refactoring, and error fixing to boost productivity and code quality dramatically.
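
For a sense of what happens under the hood: CodeGPT talks to the locally served Llama 3 through Ollama's HTTP API on localhost:11434, and you can call the same endpoint yourself. The snippet below is only a minimal sketch, not code from the video; it assumes Ollama is already running with the default llama3 tag pulled, and the prompt is a placeholder.

    # Minimal sketch: ask a locally served Llama 3 (via Ollama's HTTP API) to refactor a snippet.
    # Assumes `ollama run llama3` has already pulled and started the model; the prompt is a placeholder.
    import json
    import urllib.request

    prompt = "Refactor this Python function and fix any bugs:\n\ndef add(a,b): return a+b"
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": "llama3", "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        print(json.loads(response.read())["response"])  # the model's suggested rewrite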

👉 What you'll learn:
Download and install Llama 3 and CodeGPT on VS Code.
Configure your AI co-pilot for optimal coding support.
Generate and refactor code effortlessly.
Connect your code to a SQL database with just a few commands (see the sketch after this list).
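
On that last point, the video's exact database isn't named in this description, so here is a minimal sketch of the kind of "connect to a SQL database" code the co-pilot can generate, assuming SQLite via Python's built-in sqlite3 module; the file name and table are made-up placeholders.

    # Minimal sketch of a SQL connection, assuming SQLite via Python's built-in sqlite3 module.
    # The database file and table below are hypothetical placeholders.
    import sqlite3

    conn = sqlite3.connect("example.db")
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
    cur.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
    conn.commit()
    print(cur.execute("SELECT id, name FROM users").fetchall())
    conn.close()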

🎯 Why Watch This?
Enhance your programming skills with AI tools.
Speed up your coding projects and reduce errors.
Learn to set up and use one of the most powerful coding tools available.

📌 Don't forget to:
Subscribe for more videos on Artificial Intelligence and coding.
Like this video if you find it helpful, and share it with fellow coders.
Comment below with any questions or what you'd like to see next!

🔗 Resources:

Timestamps:
0:00 - Introduction to Llama 3 and VS Code integration
1:00 - Downloading and setting up Llama 3
2:24 - Configuring AI co-pilot settings
3:22 - Writing and running your first AI script
5:00 - Debugging and documentation tips

#VSCode #Free #Copilot
#VSCodeCopilot #VisualStudioCode #VsCode #GithubCopilot #AI #AICoding #GithubCopilotTutorial #GithubCopilotVSCode #LocalCopilot #PrivateCopilot #CodeCopilot #LlamaCopilot #OllamaCopilot #FreeCopilot #FreeVSCodeCopilot #LocalVSCodeCopilot #PrivateVSCodeCopilot #Llama3Copilot #Llama3Code #CodeLlama3 #Llama3VSCode #VSCodeLlama3 #VSCodeExtension #VSCodeExtensionLlama #VSCodeExtensionLlama3
Comments
Author

Very impressed with the 8B L3 when it comes to coding. Amazing how much progress they have made.

jeffwads
Author

The buttons don't do anything... note I'm working offline. The 4 buttons at the bottom of the add-in's panel just copy the code to the chat window. They don't do anything else, and once clicked, the AI stops responding to questions. When I asked it what was wrong with "explain selected code", the AI responded "nothing, it's only meant to copy the code." Anyone know if this is broken for me, or is it simply an incomplete add-in...?

m
Author

Does CodeGPT require me to be logged in? I'm all set up, but if I ask it to explain something it just says "Something went wrong! Try again." Then I have to either quit and restart VS Code or disable and re-enable the extension...

m
Author

Do we need both llama3:8b and instruct? Can we not work with only instruct? Also, I see your code runs faster - could you specify your PC / system specs and config? It takes a good amount of time on my 2017 iMac.

kannansingaravelu
Author

This was a great quick lesson. One thing I was hoping someone has figured out: I often need to refer to very new documents on APIs etc. Has anyone tied this into a RAG structure, so we are always looking at the latest documents?

prestonmccauley
Author

Any ideas about how this works on large scripts? What's the context length?

Fonzleberry
Author

Very good tutorial. You don't mention the platform; can I assume it will work on both Windows and Linux? Another thing: what's the recommended hardware configuration to install Llama 3 locally on our computers?

joseeduardobolisfortes
Author

Great video! How do I connect this to my own Ollama server running on my local machine?

GTGT
Author

Latency is pretty bad when I'm using llama3:70b in VS Code with CodeGPT. I am on Windows. I guess it's down to the underlying machine. Can anything be done here?

harikantipudi
Author

Thanks, can you make a video on Pythagora using Llama 3?

ah
Author

This is pretty cool as a starting point. I think AI in the future will do many other things to help us achieve more productivity.

carta-viva
Author

Guys, I installed it according to the vid, but I can't run the AI. I saw somewhere that I need to put it in PATH, but I don't know where the files are installed.

ErfanKarimi-epie
Author

How do I use another computer running Ollama on my LAN?

yagoa
Author

It works on a MacBook Air M3 with 16 GB RAM, a little slow but usable. Thank you Mervin.

kesijack
Author

Thank you so much for this video. Is it open source, please? Can we find the weight files and use them?

dorrakallel
Author

Any option to use it with the IntelliJ IDE?

MosheRecanati
Author

Amazing content! Maybe you can create a long video where you use this to create a full-stack application.

MaorAviad
Author

Thanks for sharing.
I host the Ollama server on a remote machine. How do I make it connect to the remote machine instead of localhost?

Author

Does it slow down my laptop if I run it locally? Would I be better off running Haiku in the cloud? What would you recommend? I'm just getting into coding.

alinciocan
Author

I am using this and it is insane 😮 I think full-stack developers will not like their future, holy crap.

martin