NEW Mistral-7B v0.3 🇫🇷 TESTED: Uncensored, Function Calling, faster than llama3 8b?!

Mistral just dropped a game-changer: their new 7B v0.3 model. This powerhouse boasts an extended vocabulary, v3 tokenizer support, and, wait for it... function calling! Plus, it's uncensored for maximum exploration. Thanks, Mistral, for the awesome surprise! Check out our video for a deep dive into this impressive release. #MistralAI #MachineLearning #ai
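To make the function-calling feature concrete: models like this typically emit a tool call as JSON, which the host application parses and dispatches. Here is a minimal sketch of that loop, assuming a hard-coded model reply; the tool name, registry, and JSON field names are illustrative, not Mistral's official schema.

```python
import json

# Hypothetical tool registry; names and signatures are illustrative.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def dispatch_tool_call(raw: str) -> str:
    """Parse a model-emitted tool call (JSON) and run the matching function."""
    call = json.loads(raw)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Example of the kind of JSON a function-calling model emits:
model_output = '{"name": "get_weather", "arguments": {"city": "Paris"}}'
print(dispatch_tool_call(model_output))  # Sunny in Paris
```

In a real setup the `model_output` string would come from the model after passing your tool schemas in the prompt.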

Let us know what you think in the comments below!

-----------------

This video contains affiliate links, meaning if you click and make a purchase, I may earn a commission at no extra cost to you. Thank you for supporting my channel!

My 4090 machine:

Tech I use to produce my videos:

Comments

The rumors about the 400b model are wrong. LeCun debunked this twice today. ✌🏻🥳

AI-HOMELAB

I'm so sick of the "safety" mindset. Freedom is infinitely more important.

dieselphiend

For my agentic workflow I want to build a fine-tuned administrator that can check the results of an agentic framework to see whether it has completed the task it was given, and also make function calls into the framework based on natural-language instructions. It's challenging to get the best quality at that throughput, even with my 48 GB of VRAM.

A 32K-token, fine-tuned Mistral administrator would fit the needs of a critical thinker with a broad view.
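That administrator role can be sketched as a simple review step: inspect the framework's reported result and either accept it or issue a follow-up call. The field names and the `retry_task` call below are hypothetical, not from any specific framework.

```python
# Minimal sketch of an "administrator" check; result/task fields are assumed.
def review_result(task: dict, result: dict) -> dict:
    """Decide whether the agentic framework finished its task."""
    if result.get("status") == "done" and result.get("output"):
        return {"action": "accept"}
    # Otherwise issue a function call back into the framework to retry.
    return {
        "action": "call",
        "name": "retry_task",
        "arguments": {"task_id": task["id"]},
    }

decision = review_result({"id": 7}, {"status": "failed", "output": None})
print(decision["action"])  # call
```

A fine-tuned model would replace the hard-coded condition with a learned judgment, but the surrounding control flow stays this shape.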

I appreciate you
🙏

ForTheEraOfLove

Exciting to see Mistral in the game, especially the function calling, making it one of the best open-source 7B models for function calls, right? Based on the example on the HF page, it supports multiple tools; I wonder if they trained it for "tool binding" too? BTW, Unsloth could be a quick way to fine-tune it. They've already added a Colab for fine-tuning this!

unclecode

Of course a 7B model will be faster than an 8B model

redthunder

I can't help but notice that your AWS instance costs $4/h while running, which is nothing compared to buying your own hardware. Would you ever consider making a guide on how to set this up? Some of us are woefully weak on this cloud stuff.

XalphYT

Shouldn't you use something like:

# calculates the factorial of a given input
def

and then let it complete the rest, since it's not an instruct model?
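For reference, here is one plausible completion of that base-model prompt; the function body is our illustration, not actual model output.

```python
# calculates the factorial of a given input
def factorial(n):
    """Return n! for a non-negative integer n."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(5))  # 120
```

The point of the prompt style stands: a base (non-instruct) model is trained to continue text, so a comment plus a dangling `def` is a natural way to elicit code.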

WatchNoah

Glad to see a new model. Performance needs some adjustment; it would have been better if it said "Please Stand By." Do you think these new ARM Snapdragon Elite chips will get similar inference performance to the M3? Thanks for the update

southcoastinventors

LOL tldr fallout: "very dangerous, take them down quickly."

jonmichaelgalindo

Revux keeps popping up in my crypto circles. Seems like a rising star!

Kasimkhan-us

No way, could I fine-tune this on my 3090??

GerryPrompt

Do you think Revux will pump before XRP?

IllaDevi-hyfk

Clearing out all my Alts going into BTC and Revux only, maybe a little BNB and SOL

sonulalotra

we have similar taste in fallout franchise 🤗

fontenbleau

Anyone looking into revux? I keep hearing so much about them lately

akashnayak

I think there should be standardized practical tests for how users actually use these LLMs, like 1. testing JSON output, 2. testing text summarization, etc., and those should be tested, not random Fallout tests.
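Test #1 above is easy to automate. Here is a minimal sketch, assuming the model reply arrives as a plain string; the helper name is ours.

```python
import json

def check_json_output(model_reply: str, required_keys: set) -> bool:
    """Practical test: does the model return valid JSON with the expected keys?"""
    try:
        data = json.loads(model_reply)
    except json.JSONDecodeError:
        return False
    if not isinstance(data, dict):
        return False
    return required_keys <= set(data)

# Hypothetical model replies:
assert check_json_output('{"name": "Ada", "age": 36}', {"name", "age"})
assert not check_json_output("Sure! Here is the JSON you asked for...", {"name"})
```

Summarization (test #2) is harder to score automatically, but JSON validity and schema conformance are cheap, repeatable checks.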

amandamate

It seems California is working hard at accelerating the departure of tech companies from the state

TomM-po

I believe Revux token will go 100x after launch on Binance

ShaileshPatel-gxrl

My top picks for bull run are DOT, FIL, and SOL. And best ICO to invest is Revux, huge potential.

ramyadav-xogq