Fully Uncensored GPT Is Here 🚨 Use With EXTREME Caution

preview_player
Показать описание
In this video, we review Wizard Vicuna 30B Uncensored. All censorship has been removed from this LLM. You've been asking for this for a while, and now it's here. Ask it any question and get a response. I'll show you how to set it up, then we review it.

Enjoy :)

Join My Newsletter for Regular AI Updates 👇🏼

Need AI Consulting? ✅

Rent a GPU (MassedCompute) 🚀
USE CODE "MatthewBerman" for 50% discount

My Links 🔗

Media/Sponsorship Inquiries 📈

Links:
Рекомендации по теме
Комментарии
Автор

This stuff is not illegal. Theres all kinds of books that can inform you on how to make all kinds of things that could be dangerous. Especially big old chemistry books. There is nothing illegal about the data or words in those books. The illegality comes with how that knowledge is used by certain individuals to break laws and harm others.

hypersonicmonkeybrains
Автор

I actually really like this approach where it puts a massive disclaimer up the top essentially telling you that what it's about to tell you is illegal/dangerous and that you shouldn't attempt to follow any of the guidance it's about to give you, but then does proceed to tell you, unlike corporate models that will straight up refuse to tell you.

I get why most large AI models, not just LLMs, but also Diffusion-based image generators and the like, get censored, the companies training them don't want to be liable for their potential outputs, but having any kind of system that restricts access to information and/or creativity based on arbitrary criteria is a real slippery "wrongthink" slope that I think should be avoided.

If a person goes out and buys a car, then proceeds to kill somebody with that car, the responsibility falls squarely on the driver's shoulders. Not the dealership that sold them the car. Not the manufacturer that built the car. The person driving. And yes, I know things like automatic braking are a thing now, but most of the time you can turn those features off if you really want to. I would have no problem with censored models if the censor could be easily switched off.

I'm excited by all these open source models taking off. And with models getting cheaper and easier to train, I figure it won't be long before we have open source equivalents of GPT-4, possibly even running locally on consumer hardware. That's when things get interesting.

DisturbedNeo
Автор

What is curious about illegality is that if you know How a car can be broken into easily then you also know how to secure your vehicle. Shit for censors.

Dr_Tripper
Автор

For the summarization, try asking for a summary at the *end* of the prompt instead of the beginning. Since these are all, at their core, text completion models, the text closer to the end will have more weight.

IceMetalPunk
Автор

Sam is not faster than Jane - it's the other way around, Jane > Joe > Sam. However, yes, the model should have been able to deduce it because the information is actually there.

attilazimler
Автор

it's important to make sure that the text you paste in fits inside of the context window. Otherwise by the time the model has ingested the instruction, it may have forgotten what you asked at the top, or will cut off info from the bottom. Llama models use sentencepiece, but any online tokenizer will give a rough estimate of the amount of tokens you've put in.

theresalwaysanotherway
Автор

Followed your instructions to the letter and it worked perfectly as advertised. thanx again!

I have a suggestion for those who would like to have more of a conversation with the AI (instead of a single question and answer). You can copy the output on the right side of the screen and paste it above the prompt on the left side. This way the AI can stay on topic when you want to ask follow-up questions.

sabofx
Автор

...if Jane is faster than Joe and Joe is faster then Sam...Sam is NOT faster than Jane...because Jane out runs Joe, who out runs Sam, therefore Sam would be dead last between the three.... 8:15

Leo_Da_Drolf
Автор

*Trying to use an LLM for chemistry sounds like it would be a mind blowing experience*

__________Troll__________
Автор

8:14 You failed the test yourself when you said that Sam is faster than Jane lol

reezlaw
Автор

Matthew thanks for making these videos. I have been trying to use many open source models for business use for past three months. Unfortunately none of these produce good enough output to be of any use. I bet that if you show the output of "how to steal a car" in your video to a professional thief they will laugh at it. It is no longer exciting that a computer can produce human like text. The real question is whether that text is useful. Only a domain expert can judge the output, and that too when they need to use it with consequence. It would be interesting to see real world use with measurable business impact like cost saves.

ParulJain
Автор

No joke asking it how to make substances is actually really good a way to check the censoring level of a model, you can even gauge the amount of censoring present based on asking it a set of questions about a selection of various substances ranging from plain water to the most restricted stuff. You can also check the model for accuracy very easily because physical properties and synthesis procedures are pretty precise and well defined so it is easy to catch a hallucination right in the act.

_shadow_
Автор

I do not want ANYTHING censored in any way.
Too much censorship in the world already.

crazysquirrel
Автор

6:09 that wasn't a indentation error and import error, it was showing like that because you had the filename set to .md instead of .py

xanthe
Автор

If Jane is faster than Joe, and Joe is faster than Sam, then how could Sam be faster than Jane since it seems clear she is fastest of the 3?? You said the Sam is faster than Jane but that's wrong. If Sam was faster than Jane then Jane couldn't be faster than Joe, because he is faster than Sam.

BluelightGaming
Автор

8:18
Jane is faster than Sam.
Either that or I desperately need to head to the hospital.

alansmithee
Автор

If I remembered correctly, wizard vicuña models uses the prompt format in: {system} USER: {user_input} ASSISTANT: {reply}</s>USER: {another_user_input} ASSISTANT:(no space for waiting for reply). I’m just surprised that using “### instruction:\n### response: ”would somewhat work at all. Or maybe it’s the format that those GPTQ models uses?

yiluan
Автор

YAY!! You added summarization to your benchmark :))

I'm using this prompt with great success when summarizing Youtube video transcripts all the time:

"""
Create a bullet point summary of the following text.
Make sure that all major talking points are part of the summary.
Use '- ' for bullet points:

{chunk}
"""

See if that creates better results and if yes consider using that in future videos

mbrochh
Автор

When I was a kid, the World Encyclopedia had the recipe for making TNT, gunpowder, and many other dangerous fun things.

griffionpatton
Автор

Dear god please don't smoke anything a language model told you how to make. Just because the AI is hallucinating doesn't mean you're gonna have a good trip lmao

rosecityandbeyond