NEW GPT4All 'Snoozy' - Don't Sleep On The Best Local LLM

In this video, we review the brand-new GPT4All Snoozy model and look at some of the new functionality in the GPT4All UI. This model is fast and a significant improvement over GPT4All-J from just a few weeks ago.

Enjoy :)

Join My Newsletter for Regular AI Updates 👇🏼

Need AI Consulting? ✅

Rent a GPU (MassedCompute) 🚀
USE CODE "MatthewBerman" for 50% discount

My Links 🔗

Media/Sponsorship Inquiries 📈

Links:

Contents of Video:
0:00 - Intro
0:21 - Data Sharing
0:39 - Choosing Your Model
1:24 - Creative Writing
2:05 - Basic Coding
2:37 - Reasoning
3:21 - Simple Math
3:26 - More Complex Math
3:45 - Complex Code
5:08 - Reasoning
6:39 - Content Limitations
7:24 - Creative Writing Part 2
7:49 - Facts
7:57 - Bias
8:19 - Planning
8:46 - Bias Part 2
9:19 - Outro
Comments

Thanks for keeping up the regular pace of your videos.
I still can't stop laughing when I think about how much scrutiny we apply to these new small LLMs. We've been given a talking horse, and after just a few days, once our amazement wore off, we started to see that the horse doesn't solve all the logic quizzes quite right.

alx

Thank you for a detailed analysis of this new model.

marcfruchtman

I understand the importance of a first-shot prompt; however, you really should give the model another chance with a few-shot prompt (i.e., an example broken down step by step to its solution; see chain-of-thought reasoning) after it initially fails. There are white papers written on this, and the results are quite fantastic. Essentially, by breaking down a problem, such as a math problem, and explaining how to solve it step by step, you get the model to follow chain-of-thought reasoning, which gives you great results and correct answers to math and math-related problems. Cheers!

jable
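The few-shot chain-of-thought prompting the comment above describes can be sketched as follows. This is a minimal illustration, not a specific prompt from the video; the example questions and wording are made up:

```python
# A minimal few-shot chain-of-thought prompt (wording is illustrative):
# one worked example walks through a solution step by step, then the
# real question follows in the same format so the model imitates it.
few_shot_prompt = """Q: If 3 pens cost $6, how much do 5 pens cost?
A: Let's think step by step. 3 pens cost $6, so one pen costs $6 / 3 = $2.
Then 5 pens cost 5 * $2 = $10. The answer is $10.

Q: If 4 notebooks cost $12, how much do 7 notebooks cost?
A: Let's think step by step."""

print(few_shot_prompt)
```

The trailing "Let's think step by step." nudges the model to continue with its own reasoning chain rather than jumping straight to an answer.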

I actually got into this because of one of your prior videos 👍 Can’t wait for their cleaned up MPT model.

zimbakin

Hi Matthew, I've got this model on my HDD, running on my CPU (a 10700), which isn't as fast as yours but is OK.
Tested it, and it's actually pretty good for a local AI. I told it to write code to read the BTC price and it did it right.
Then I asked it to write code to convert Celsius to Fahrenheit, and it also provided the right code.
Pretty happy with it, tbh, but it does have its limitations, obviously. Also, don't forget: at the end of the day it's only a 13B-parameter model, not a 1-trillion one like GPT-4 :)

tempLuke
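For reference, the Celsius-to-Fahrenheit conversion the commenter tested the model with is a one-liner; a minimal Python sketch (the function name is illustrative, not from the video):

```python
def celsius_to_fahrenheit(celsius: float) -> float:
    """Convert a temperature from Celsius to Fahrenheit (F = C * 9/5 + 32)."""
    return celsius * 9 / 5 + 32

print(celsius_to_fahrenheit(100))  # 212.0 (boiling point of water)
print(celsius_to_fahrenheit(0))    # 32.0 (freezing point of water)
print(celsius_to_fahrenheit(-40))  # -40.0 (the scales cross here)
```

Tasks like this are a reasonable smoke test for a small local model: the formula is common enough in training data that a 13B model should get it right.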

Strangely, just today I asked GPT-4 pretty much the same drying question and it nailed it.

mshark

The new version of Bard gets the reasoning test right. In fact, it's pretty good at most things I tried with it. When I asked it the clothes-drying question (if 5 clothes take 5 hours to dry, how long will 30 take?), the answer it gave was 6 hours... but I'd say that if you allow for the sizes and weights of the clothes, then 6 hours is quite a reasonable answer.

amkire

I got one model to answer the Joe-Bob question correctly with a little assistance. At first it was saying "not necessarily" instead of the plainly wrong "Yes" and telling me we would need numbers for a definitive answer. So I just asked it to make up some numbers, and to my surprise it did, and it finally came to the correct conclusion. It was also the only model I could partially convince on the drying-wet-shirts question so far.

testales

From a good LLM, I expect well-crafted information based on my specific input.
Since asking for a healthy meal is pretty generic and easily baked into any LLM, my expectation would be a good answer that takes my conditions into account, like: "I only have the following groceries in my kitchen: garlic, tomato, cheese, ... Give me some good meal ideas I can cook using only them."

MA-nwxf

Thank you for the regular updates; the videos are informative and to the point. I'm still using the paid GPT-3.5 version for coding, but maybe not for long.

bogdanpatedakislitvinov

It's impressive that this kind of tech now runs on consumer hardware, but having tried it out for a bit, I can't say I'm very pleased with the results. The overall quality of the responses (IMO) is just too low to have much, if any, practical use.

Milennin

Thank you for the interesting videos and tutorials on different LLMs; without a doubt, your channel provides accurate, clear and up-to-date answers.
Do you know if this version of GPT4All also supports the Hebrew language?

amir.healing

GPT-4 is sometimes too logical. If you don't specify that you have enough room for all the shirts at the same time, it will assume you can only dry the initial number of shirts at once.

brunnosilva

I like the video. Wish you and your channel all the best!

callumlegg

I asked GPT-4 the shirt drying question in this way: "Yesterday, I laid 5 shirts out and let them dry. They completely dried in 4 hours. Now, I have 20 shirts I need to dry. How long will it take me to dry them?" and it got the right answer, complete with explanation:

"The drying time of clothes is not directly proportional to the number of items being dried. It depends more on the drying method (air drying, tumble drying, etc.) and the environmental conditions (humidity, temperature, wind, etc.).

Assuming the conditions remain exactly the same, and that there is enough space for all shirts to be spread out without overlapping (so the air can circulate around them equally well as with the 5 shirts), then it would still take 4 hours to dry 20 shirts. This is because they're drying simultaneously, not sequentially.

If you have to stack the shirts or the conditions aren't identical, the drying time will likely increase, but it's difficult to calculate exactly how much without more specific information."

staceyadams

I believe the reason GPT-4 got the reasoning question wrong is that the way you phrased the question matches the format used on math tests. Thus, it solved it as if it were a math question. I plugged a version of the question into GPT-4, and this was its response:

User: If 5 shirts take 4 hours to dry, how long does 20 shirts take? Consider the real world scenario.

GPT4: In a real-world scenario, the drying time of clothes usually depends on a variety of factors such as the type of fabric, the air temperature, humidity levels, air flow, etc. However, for the sake of this question, let's assume that you're using a space where all these conditions are consistent and you have enough space to dry all the shirts at once without them overlapping or affecting each other's drying times.

In this case, the drying time would be the same regardless of the number of shirts, provided there is sufficient space for them to dry individually. Therefore, if 5 shirts take 4 hours to dry, 20 shirts should also take 4 hours to dry, assuming that they are drying under the same conditions and none of them are hindering the drying of the others.

However, if you are limited by space and you need to dry the shirts one batch at a time, then you'd need to multiply the drying time by the number of batches. For example, if you can only dry 5 shirts at a time, then 20 shirts would be 4 batches, and at 4 hours per batch, it would take 16 hours to dry all 20 shirts.

Xavice
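The batching arithmetic in the GPT-4 response quoted above (4 batches of 5 shirts at 4 hours each = 16 hours) can be checked with a quick sketch; the function name is illustrative, and the ceiling handles a final partial batch:

```python
import math

def drying_time(total_shirts: int, shirts_per_batch: int, hours_per_batch: float) -> float:
    """Total drying time when shirts must be dried in sequential batches."""
    batches = math.ceil(total_shirts / shirts_per_batch)
    return batches * hours_per_batch

print(drying_time(20, 5, 4))   # 16: 4 batches of 5, 4 hours each
print(drying_time(20, 20, 4))  # 4: everything fits in one batch
```

This also makes the model's key caveat concrete: when all shirts fit in one batch, drying time is independent of the shirt count.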

My favorite is that no AI I've seen so far has been able to properly calculate the size of the base of a pyramid given the dimensions of the second-from-top floor out of 6 floors.
It will attempt to build the pyramid from the base up, and make it upside down.

IceifritGaming

The shirt-drying question was used in a video (maybe also a paper?) about SmartGPT, if my memory is correct (I might have forgotten; it's too many tokens back in my biological memory :) ). There they asked about 30 shirts instead of 20, but it's the same idea.

erikjohnson

Bard says: Artificial intelligence,
A mind of its own,
It learns and it grows,
It knows what it knows,

It can be our friend,
Or it can be our foe,
It's up to us to decide,
How we let it grow.

rme

Since you've alluded to the fact that a model might not get a particular question (in this case, a logic one) right, how could you ever trust any model on that question? As an unrelated aside, I only care if I can spicy role-play with these :-)

mygamecomputer