CODE-LLAMA: Is it Actually Good?

In this video, we will compare the code generated by
Code Llama and ChatGPT (GPT-3.5). The results will surprise you!

#codellama #llama2 #chatgpt
▬▬▬▬▬▬▬▬▬▬▬▬▬▬ CONNECT ▬▬▬▬▬▬▬▬▬▬▬
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
LINKS:
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
All Interesting Videos:

Recommendations
Comments
Author

Want to connect?
🔴 Join Patreon: Patreon.com/PromptEngineering

engineerprompt
Author

🎯 Key Takeaways for quick navigation:

01:24 🧮 Code-llama successfully implements a simple calculator function in Python.
02:49 🐇 Code-llama and ChatGPT both correctly implement a Fibonacci series function in Python.
03:29 📝 Code-llama and ChatGPT provide correct implementations for removing duplicate items from a list in Python.
04:48 🔒 Code-llama and ChatGPT both successfully implement a password validation function in Python.
08:23 🔄 Code-llama and ChatGPT fail to correctly group matching letters in a string, but GPT-4 successfully solves the problem.
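The video doesn't show the exact prompts, so as a reference point, here is a minimal sketch of the kind of password validation function tested at 04:48. The rule set (minimum length 8, at least one uppercase letter, one lowercase letter, one digit, and one special character) is my assumption, not necessarily the rules used in the video:

```python
import re

def is_valid_password(password: str) -> bool:
    """Validate a password against common rules (assumed, not from the video):
    length >= 8, plus at least one uppercase letter, one lowercase letter,
    one digit, and one special character."""
    if len(password) < 8:
        return False
    required_patterns = [
        r"[A-Z]",         # at least one uppercase letter
        r"[a-z]",         # at least one lowercase letter
        r"\d",            # at least one digit
        r"[^A-Za-z0-9]",  # at least one special character
    ]
    return all(re.search(p, password) for p in required_patterns)

print(is_valid_password("Str0ng!pass"))  # True
print(is_valid_password("weakpass"))     # False (no uppercase/digit/special)
```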

Made with HARPA AI

Zale
Author

You know there is already a fine-tuned CodeLlama 34B that is beating GPT-4's HumanEval score! There is also a Python version that scores even higher! Phind/Phind-CodeLlama-34B-v1 and

linuxtechrusgaming
Author

Firstly, thanks for the great work you do sharing the knowledge.
I have been following your videos and was working on a project where I store the embeddings of my PDF data in ChromaDB. The problem I am facing is that I am not able to add the embeddings of a new PDF file to the stored vector database. Also, can I use CSV files along with PDFs to create a vector database, or should it only contain files of the same extension?

Venom-odsl
Author

My main problem with CodeLlama and other models is that you ask it, "Do you know standard X?" It answers yes and explains what it is, but when you then ask it to generate X using the standard (like OpenSLO), it fails miserably.

greenpulp.
Author

I don't get the program generated by the model for remove duplicates. Why not just `return list(set(lst))`? Am I missing something?
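For context: `list(set(lst))` does remove duplicates, but a `set` does not preserve the list's original order, which is the usual reason generated solutions loop instead. An order-preserving one-liner uses `dict.fromkeys` (dicts keep insertion order since Python 3.7):

```python
def remove_duplicates(lst):
    # dict keys are unique and (since Python 3.7) preserve insertion order,
    # so this deduplicates while keeping the first occurrence of each item
    return list(dict.fromkeys(lst))

print(remove_duplicates([3, 1, 3, 2, 1]))  # [3, 1, 2]
# list(set([3, 1, 3, 2, 1])) also deduplicates, but its order is arbitrary
```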

hackybuilds
Author

10:50 That Chuck Norris joke API actually exists, with the joke in the returned JSON object's "value" parameter. It would have worked 😂. Wonder how it learned that.

phizc
Author

How come you only get the Llama models on Perplexity?... I get the Llamas 7 to 40B, CodeLlama, Mistral 7B, and the PPLx (Perplexity) 7B and 70B models.

mickelodiansurname
Author

How do I finetune codellama on my dataset?

kushalavm
Author

I can't wait for self-programming OSes and games.

NotBirds
Author

WizardCoder 34B V1.0 — this fine-tune seems to beat the original Code Llama by a lot.

jmirodg
Author

Great video and informative testing, thanks!

I thought Bing did a nice job of summarizing this important information you may wish to consider on the subject: "According to the official blog post, the differences in the training between the 7B, 13B and 34B versions of Code Llama are as follows:

The 7B and 13B models have been trained with fill-in-the-middle (FIM) capability, allowing them to insert code into existing code, meaning they can support tasks like code completion right out of the box.
The 34B model does not have FIM capability, but it returns the best results and allows for better coding assistance.
The three models address different serving and latency requirements. The 7B model, for example, can be served on a single GPU. The 34B model is slower and more suitable for tasks that do not require low latency, like code generation."
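To make FIM concrete: an infilling prompt is built from the code before and after the gap, wrapped in special sentinel tokens, and the model generates the missing middle. The literal token spellings below are an assumption based on the Code Llama paper's prefix-suffix-middle format — in real code, use the special tokens exposed by your tokenizer:

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    # Prefix-suffix-middle (PSM) layout; the sentinel strings "<PRE>",
    # "<SUF>", "<MID>" are illustrative assumptions -- fetch the actual
    # special tokens from your model's tokenizer in practice.
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

prompt = build_fim_prompt("def add(a, b):\n    return ", "\n\nprint(add(1, 2))")
# The model is expected to generate the missing middle (e.g. "a + b")
# followed by an end-of-infill token.
print(prompt)
```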

ArielTavori
Author

It's good to test, but why does everybody use such trivial tasks when testing coding? Nobody writes a snake game or factorial in real production; maybe find some more realistic tasks?

dohua_ai
Author

Too many benchmarks, but in practice things are different IMO. It's better to make a face-to-face comparison like in your video.

angel_luis