META's New Code LLaMA 70b BEATS GPT4 At Coding (Open Source)

Join My Newsletter for Regular AI Updates 👇🏼

Need AI Consulting? ✅

Rent a GPU (MassedCompute) 🚀
USE CODE "MatthewBerman" for 50% discount

My Links 🔗

Media/Sponsorship Inquiries 📈

Links:

Disclosures:
I am an investor in LMStudio
Comments

I'm creating a video testing Code LLaMA 70b in full. What tests should I give it?

matthew_berman

Could you make a video on how to train an LLM on a GitHub repo and then be able to ask questions and instruct it to make code, for example, a plug-in?

bradstudio

Thanks for actually showing the errors you encountered and keeping it as real as possible! Great and enjoyable content❤

lironharel

I asked it to write a program to connect bluetooth 3D glasses to a PC.
it responded:
It's not a good idea, because Bluetooth range is limited to about 10 m. Use Wi-Fi.
I said:
10m is good enough for me, please write this program.
-Ok, I will.
And that was it 😆

DanOneOne

Mixtral 8x7B was able to build a working snake game in Python here...
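For scale, the core game-state update these prompts ask for fits in a few lines. Here is a minimal hand-written sketch (UI omitted; for illustration, not model output):

```python
from collections import deque

class Snake:
    """Minimal grid-based Snake state; the head is body[0]."""

    def __init__(self, start=(5, 5), direction=(1, 0)):
        self.body = deque([start])
        self.direction = direction

    def step(self, food=None):
        """Advance one cell; grow if the new head lands on food.
        Returns False on self-collision, True otherwise."""
        hx, hy = self.body[0]
        dx, dy = self.direction
        new_head = (hx + dx, hy + dy)
        if new_head in self.body:
            return False               # ran into itself: game over
        self.body.appendleft(new_head)
        if new_head != food:
            self.body.pop()            # no food eaten: tail follows head
        return True
```

Wall collisions and rendering (e.g. with pygame or curses) are left out so the state logic stands on its own.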

EdToml

The error comes from libGL failing to load and is clearly NOT in the code that Code Llama wrote. It's a problem with your machine's graphics drivers.

auriocus

0:00 1. Meta's New Code Llama 70B 👾
Introduction to Meta's latest coding model, Code Llama 70B, known for its power and performance.

0:22 2. Testing Code Llama 70B with the Snake Game 🐍
The host plans to test Code Llama 70B's capabilities by having the model build the popular Snake game.

0:25 3. Announcement by AI at Meta 📢
AI at Meta announces the release of Code Llama 70B, a more performant version of their LLM for code generation.

0:56 4. Different Versions of Code Llama 70B 💻
An overview of the three versions of Code Llama 70B: the base model, the Python-specific model, and the Instruct model.

1:21 5. Code Llama 70B License and Commercial Use 💼
Confirmation that the Code Llama 70B models are available for both research and commercial use, under the same license as previous versions.

1:40 6. Mark Zuckerberg's Thoughts on Code Llama 70B 💭
Mark Zuckerberg shares his thoughts on the importance of AI models like Code Llama for writing and editing code.

2:37 7. Outperforming GPT-4 with Code Llama 70B 🎯
A comparison of Code Llama 70B and GPT-4 on SQL code generation, where Code Llama 70B comes out as the clear winner.

3:25 8. Evolution of Code Llama Models ⚡
An overview of the various Code Llama models released, highlighting the capabilities of Code Llama 70B.

4:21 9. Using Ollama with Code Llama 70B 🖥
Integrating Code Llama 70B with Ollama for seamless code generation and execution.

5:18 10. Testing Code Llama 70B with Massive Models 🧪
The host tests the performance of Code Llama 70B using a massive quantized version and shares the requirements for running it.

5:47 11. Selecting GPU Layers
Choosing the appropriate number of GPU layers for better performance.

6:08 12. Testing the Model
Running a test to ensure the model is functioning correctly.

6:43 13. Running the Test
Requesting the model to generate code for a specific task.

7:27 14. Generating Code
Observing the model's output and determining its effectiveness.

8:16 15. Code Cleanup
Removing unnecessary code and preparing the generated code for execution.

8:40 16. Testing the Generated Code
Attempting to run the generated code and troubleshooting any errors.

9:09 17. Further Testing
Continuing to experiment with the generated code to improve its functionality.

9:15 18. Verifying Code Llama 70B's Capabilities
Acknowledging that Code Llama 70B has successfully generated working code.

9:20 19. Conclusion and Call to Action
Encouraging viewers to like, subscribe, and anticipate the next video.

Generated with Tubelator AI Chrome Extension!

TubelatorAI

GPT-4 ranks at 86.6 on HumanEval versus CodeLlama's 67.8. Meta used the zero-shot numbers for GPT-4 in their benchmark comparison, which is pretty dishonest.

emmanuelgoldstein

For LLM and CodeLlama inference, the M3 Max with 64 GB of unified memory (about 50 GB actually usable) seems promising. So it would be interesting to see how Macs perform on quantized 70B-parameter LLMs...
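A back-of-the-envelope check supports this — a rough sketch assuming ~4-bit quantization and ignoring KV cache and runtime overhead:

```python
# Rough weight-memory estimate for a quantized 70B-parameter model.
params = 70e9            # 70 billion parameters
bits_per_param = 4       # e.g. Q4 quantization (assumption)
weight_gb = params * bits_per_param / 8 / 1e9
print(f"~{weight_gb:.0f} GB of weights")  # ~35 GB, within ~50 GB usable
```

At 8-bit the same math gives ~70 GB, which would no longer fit, so the quantization level decides whether the model runs at all.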

technerd

I have codellama 70b working well on Ollama. RTX 4090 / 7950X / 64 GB. The newest version of Ollama uses about 10-20% GPU utilization and offloads the rest to the CPU, using about 55% of the CPU. Overall it runs reasonably well for my use.

pcdowling

Cool, we keep getting better and better open-source models!

marcinkrupinski

It would be great to get a price breakdown of the hardware you need to run these locally, compared against the price ranges of VM hosting options.

BOORCHESS

I appreciate your disclosure. I intend to check this out.

theguildedcage

2:14: Not in the near future. AI still programs worse than a junior programmer. Right now it's almost as good as a code monkey.

brunoais

Can you create a video on how to set up these LLMs in VS Code with extensions like Continue, Twinny, etc.? I have downloaded Ollama and the models I need, but I'm not sure how to configure them to run with the extensions in VS Code.

kevyyar

1. Install Ollama
2. Run 'ollama run codellama:70b-instruct'

No forms or fees. Two or three clicks and you're running.
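To then wire the model into VS Code via the Continue extension (as asked above), a model entry in Continue's config.json along these lines should work — the exact schema varies by Continue version, so treat this as a sketch:

```json
{
  "models": [
    {
      "title": "Code Llama 70B (local)",
      "provider": "ollama",
      "model": "codellama:70b-instruct"
    }
  ]
}
```

Continue talks to the local Ollama server (default http://localhost:11434), so the 'ollama run' step above must already be working.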

K.F-R

I'm running LMStudio on my system running Debian 12 (bookworm) and it's running well. I really want to be able to run models locally on this system to do my work when I'm home. Any ideas about local models etc. would be helpful.

iqlrwcl

AI will replace some junior devs. It will never replace coding entirely, as you suggest.

DoctorMandible

Awesome videos, man, I learn so much from these. I wish there were models tuned for C#, though. Very few of us create large applications with Python.

allenbythesea

Coding battles, LLaMA crushes with its mad skills yet stays so chill.

Noobsitogamer