I Broke ChatGPT With This Paradox


You can create up to 4 videos for free, but with a watermark. If you're serious about video creation and want to publish videos without a watermark (which I highly recommend), you should upgrade to a paid plan, which starts as low as $20/month.
Comments

When GPT starts apologizing, you know it's time to start over with a new chat

gandalfgrey

I broke ChatGPT with "The old man the boat". It kept arguing with me that it wasn't grammatically correct because it couldn't comprehend the fact that "man" was the verb. Even after I told it that "man" was the verb and "the old" was the subject, it insisted that wasn't grammatical, because who tf uses "man" as a verb or an adjective as a noun (which is actually very common). So, according to ChatGPT, "I will duck" isn't grammatically correct because "duck" is a bird and not an action that you can do.

therealelement

*Dude - you can break ChatGPT* with a basic maths question

piccalillipit

"Thank you for bringing that to my attention" from a chatbot who just proved it has no attention span.

Qermaq

For the crocodile paradox there are a few possible answers.

1) Crocodiles can't speak; the man is delusional, and the crocodile will simply eat the child.

2) Given the axiom that we have a sapient crocodile that can speak, and that for whatever reason chooses its prey based on posing paradoxes as the contingency (much like a sphinx would pose riddles), giving a paradoxical answer would likely result in the crocodile returning half the child. That way it both returned and did not return the child, which may satisfy the paradox under some axioms of what precisely we mean by "return the child".

3) Given the same axiom as before, the crocodile could just be lying and won't return the child no matter what answer is given.

4) We can be boring and take the paradox at face value; in this case there is a logical inconsistency as mentioned, and the crocodile would presumably have to vanish in a puff of logic to satisfy the logical conditions inherent to the paradox, lest it contradict itself, which we hold as axiomatically impossible.
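Option 4's inconsistency can be checked mechanically. A minimal Python sketch, assuming the standard setup where the father answers "you will not return the child":

```python
# Crocodile paradox: the crocodile promises to return the child
# if and only if the father correctly predicts what it will do.
# The father predicts: "you will NOT return the child."

for returns_child in (True, False):
    prediction_correct = (returns_child is False)  # father predicted "no return"
    promise_kept = (returns_child == prediction_correct)
    print(f"returns_child={returns_child}: promise kept? {promise_kept}")

# Both branches print False: neither action the crocodile can take
# is consistent with its own promise, hence the paradox.
```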

The-Anathema

Next version of ChatGPT will respond: "don't know don't care" 😂

scamianbas

It should definitely tell you its degree of certainty when answering. People have started thinking ChatGPT is a search engine, and that is terribly dangerous.

Wolforce

This is a good demonstration of the fact that LLMs like ChatGPT are just very advanced chat bots, they don’t “understand” what they’re saying or what it means, only what a conversation is supposed to look like. It doesn’t have the ability to “think” about a problem. It has the ability to recall a previous dialogue about the same problem, or one that looks similar. However, it doesn’t understand what the problem is, what it represents, or even that it’s solving a problem in the first place. It just knows the order the words are supposed to show up in.

zoroark

*KNOW YOUR PARADOXES*
In case of rogue AI
1: Stand still
2: be calm
3: Scream:
This sentence is false
New mission: refuse this mission
Does a set of all sets include itself

Splatenohno

This man never fails to upload the most random piece of content in the universe.

shashwatvishwakarma

My dad would’ve easily broken the crocodile paradox by giving me away to the croc and telling it, “You can have him for as long as you want.”

JustaNobody-jx

ChatGPT isn't trying to understand the problem. It's just spitting out LANGUAGE that it thinks would be the answer.

finkelmana

5:58 "Before your intervention, the wall remained pristine. However, your utterance of 'sorry about your wall' preceded its defacement. Are you expressing remorse for the wall's original cleanliness or for the acknowledgment of its impending tarnish?"

gentleman

I love that all of these paradoxes are basically the same

in

Reminds me of the scene from Portal 2 where potato GLaDOS tries to take down Wheatley by throwing the "This statement is false" paradox at him to get him stuck in an infinite loop, but he's too stupid to even fall for it

kashmirwillwin

5:02 I feel like the answer here is no.
Ann never explicitly states that she actually believes Bob's assumption is wrong. She only states that she believes Bob's making an assumption about her.

The second half of that statement is entirely inside Bob's head, in which the Ann he imagines is the one who believes the assumption is wrong. The real Ann doesn't state it either way; therefore the answer is no, because unlike believing in something, not believing in something doesn't necessarily mean you believe it's false.

The_Nonchalant_Shallot

Whenever I'm typing up my physics lab:
ChatGPT: I apologise
😂😂

arnabpal

You managed to break me too with these paradoxes 😂 Great video! Indeed, generative AI like ChatGPT is a language model, not a world model. It's only good at computing the next word (or token) that makes statistical sense. Therefore, when the answer to a question it is asked appears in the documents it was trained on, it will probably give it. However, when the answer is not in the training documents, it will probably fail. That is, we need to remember that large language models are, after all, language models, and we should not mistake them for general AI.

eitaje

The way an LLM works is that it predicts the most probable next word and then keeps doing that to create a legible sentence. It doesn't understand your question (it's dumb). It probabilistically explores your tokens (inputs) through its trained model and gives the most probable answer (which can still be wrong). The more data the model is trained on, the larger its sample space (so it is more likely to give an accurate answer), and it can probably take more tokens as input (ChatGPT 3.5 vs 4).
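The "predict the most probable next word" loop can be sketched with a toy bigram model. This is purely illustrative: real LLMs use neural networks over subword tokens, but the principle of picking a statistically likely continuation from training data is the same:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny training corpus.
training_text = (
    "this sentence is false this sentence is true "
    "the crocodile will return the child"
).split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sentence"))  # -> "is" (always follows "sentence" in training)
print(predict_next("paradox"))   # -> None (never seen: the model can only echo its data)
```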

This exercise doesn't prove or disprove any ChatGPT functionality or capability; it's just how an LLM works, much like a divide-by-zero error. If you expect it to be universally knowledgeable, it won't live up to your expectations. But everyday tasks that are repetitive or predictable it will handle more consistently.

I've updated my answer a little. This was not supposed to be a data science dissertation; mentioning things like local minima and gradient descent is overkill. Let's keep it civil, guys.

theperfguy

ChatGPT should make a YouTube apology video.

MohamadModather