DeepSeek v2.5 (Full Test): Can This Open-Source LLM Beat GPT-4 in Coding?

An open-source model that actually deserves the hype: it's not only great for general use but is currently the best open-source model for coding. It ranks second only to GPT-4 Turbo and is on par with Llama 3.1 70B. So, let's check it out!

My PC Parts 🖥️

00:00 - DeepSeek V2.5
00:52 - EvalPlus Rank
01:23 - DeepSeek V2.5 Benchmarks
02:45 - Test Questions
03:12 - DeepSeek Math Test
04:26 - DeepSeek Logic and Reasoning Test
07:26 - DeepSeek Coding Test
11:34 - DeepSeek V2.5 vs Llama 3.1 70B
12:20 - DeepSeek V2.5 Analysis
13:51 - API Access and Price
14:20 - Does DeepSeek V2.5 Deserve the Hype?

💷 50% Discount Code: A2LH6LZ

#deepseek #llama3 #opensource
Comments

Thanks for the update... I'm checking it out now and it's pretty cool.

AaronBlox-ht

Can it run locally on a Mac Studio with 192 GB of RAM?

เมฆาเตียวัชรานนท์

What hardware do we need to run that DeepSeek v2.5 model?

Maisonier

I think we need to move away from using Snake and Game of Life in benchmarks. Everyone tests with these, and model developers know to train their models for them. We need more advanced coding tests.

mdubbau

The smallest version is about 50 GB. How much RAM does it need?!

QorQar

Long live Egypt, my friend, and greetings to you.
I'll try it in a bit; God willing, it'll be as awesome as in your video.

AbdoZaInsert

Hahaha, "all the beepol." Great vid, and I find your accent soothing, lol, idk why.

garchafpv

What resources would you need to run this locally with Ollama? Like, what RAM/VRAM?

delta-gg

Are you Egyptian? Your accent is so familiar.

ahmedgaber

The Indian accent feels like nails on a chalkboard to me. This is not that at all.

garchafpv