Install Mistral 7B Locally - Best OpenSource LLM Yet !! Testing and Review

In this video we will install Mistral 7B LLM locally and then run benchmark testing on it to see how it performs across various use-cases.
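As context for the comments below, here is a minimal sketch, not the video's exact setup, of one common way to run Mistral 7B locally with the Hugging Face transformers library. The checkpoint name and the sample prompt are assumptions for illustration; the video itself may rely on a different tool (LM Studio comes up in the comments).

# Minimal sketch, not the video's exact setup: running Mistral 7B locally
# with Hugging Face transformers. The model id and prompt are assumptions.
# Requires the accelerate package for device_map="auto".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.1"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # half precision to fit consumer GPUs
    device_map="auto",           # place layers on GPU/CPU automatically
)

prompt = "[INST] Explain what a context window is in one sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))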

#llm #mistral #benchmark #largelanguagemodel #opensource
Comments

Great information! As a beginner, I learned a lot and I will watch your other videos 👍

ithepot

First, I want to thank you for sharing this useful AI content.

The LM Studio software was a key step in bringing AI assistants closer to everyday consumers.

I have been using the software as well and was recently experimenting with dolphin mistral llm 2.2.1, and after a while I wondered what the token count 4984/2048 at the bottom right, below the chat input, means. As far as I understand, it is some sort of counter of how many tokens the llm has already read and written, but why does it matter? Is the chat history fed into the language model each time we enter something new, with this happening behind the scenes? If these language models work like that, I would expect the maximum input size the language model supports to also be the maximum size of the chat history.

I am not very familiar with LLMs and just started experimenting with them. Could someone please explain why that token count (the yxcd/yxcd number) is there, and in what way it affects the assistant's performance or the chat?

Thanks in advance

abdussamed
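To illustrate the question above: a chat front end typically re-sends the entire conversation to the model on every turn, and the counter compares the tokens used so far against the model's context limit (2048 here). What follows is a minimal sketch of that bookkeeping, not LM Studio's actual code; the crude word-count tokenizer and the helper names are assumptions for illustration.

# Minimal sketch (assumed logic, not LM Studio's code): the whole chat
# history is rebuilt into one prompt every turn, so the token count grows
# until it exceeds the context limit and old turns must be dropped.
CONTEXT_LIMIT = 2048  # maximum number of tokens the model can see in one prompt

def count_tokens(text: str) -> int:
    # Rough stand-in; a real UI counts with the model's own tokenizer.
    return len(text.split())

def build_prompt(history: list[tuple[str, str]], new_message: str) -> str:
    # The whole history plus the new message becomes one big prompt.
    turns = history + [("user", new_message)]
    return "\n".join(f"{role}: {text}" for role, text in turns)

def trim_to_context(history: list[tuple[str, str]], new_message: str) -> list[tuple[str, str]]:
    # Drop the oldest turns until the prompt fits inside the context window.
    while history and count_tokens(build_prompt(history, new_message)) > CONTEXT_LIMIT:
        history = history[1:]
    return history

history = [("user", "Hi"), ("assistant", "Hello! How can I help?")]
question = "What does the token counter mean?"
history = trim_to_context(history, question)
prompt = build_prompt(history, question)
print(f"{count_tokens(prompt)}/{CONTEXT_LIMIT}")  # the kind of counter shown in the UI

Once the count passes the limit, the model simply never sees the oldest turns again, which is why the maximum context size also caps the usable chat history.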

The whole point of question 6 was to see whether it fell into the trap of answering 16 hours instead of the actual right answer, which is 4 hours. This is absolutely not a pass. It also totally failed question 7 (not part of the benchmark).

thomask

I don't get it. I thought Mistral was supposed to be uncensored?

borisrusev