Can ChatGPT solve the world's hardest puzzles?



Chapters:
0:00 Intro
0:21 Easy riddles
0:58 Jane Street puzzles intro
1:27 Puzzle 1: The Hidden Warning
3:30 Puzzle 2: Robot Tug of War
5:57 Puzzle 3: Single Cross
8:39 Conclusion
Comments

This guy talks to ChatGPT exactly the same way interviewers do during my technical interviews xd

sanjey-wwjn

You didn't even have to go this hard. Ask GPT-4 to solve a simple Caesar cipher. You can even tell it the exact letter shift and it will still fail to apply it.

nurichbinreel
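For context, applying a Caesar shift is purely mechanical once you know the shift, which is what makes the failure the commenter describes notable. A minimal Python sketch of the task:

```python
def caesar_shift(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions (wrapping A-Z / a-z);
    leave punctuation and spaces untouched."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return ''.join(result)

# Encode with shift 3; decode by shifting back.
assert caesar_shift("Hello, World!", 3) == "Khoor, Zruog!"
assert caesar_shift("Khoor, Zruog!", -3) == "Hello, World!"
```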

Thought this was like a 50k subs channel, only 148? Greatly underrated

IHaventDiedYet

It produced text that looks convincingly like an answer to a logic puzzle, which is exactly what it's trained to do, so 10/10.

Viniter

4:09 FYI it didn't bug out, it ran out of tokens for the answer. You can tell it "continue" or "go on", and it will go on with the answer!

FAB

What's funny about all these "Coded an entire website using ChatGPT" claims is that 1. it's not really an entire website, just basic stuff, unless you count a one-page site with some simple functionality and buttons as an "entire website"... And 2. there were plenty of corrections before ending up with whatever was made.

makesnosense

This channel is destined for big stuff

sandoh

ChatGPT can't solve anything because it doesn't understand the meaning of words. All it does is pattern matching and probability models. The answers to simple puzzles probably come out right just because the training input already had them, and the bot correlated those answers with the questions in the input.

Xeverous

It's only good for stuff that doesn't require too much thinking or calculation. When I told it to give me certain logic in code, it was only able to do it for very common things such as a Levenshtein distance algorithm, but not for anything lesser known.

kipchickensout
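The Levenshtein distance mentioned above is indeed one of the most common algorithms in training data: a classic dynamic-programming exercise. A minimal Python sketch:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance: minimum number of single-character insertions,
    deletions, and substitutions needed to turn `a` into `b`."""
    # Row-by-row DP, keeping only the previous row in memory.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete from a
                            curr[j - 1] + 1,             # insert into a
                            prev[j - 1] + (ca != cb)))   # substitute
        prev = curr
    return prev[len(b)]

assert levenshtein("kitten", "sitting") == 3
assert levenshtein("abc", "abc") == 0
```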

ChatGPT is a word predictor with a bias towards attempting to match the current conversation context.

The more you try to correct it in a conversation, the more tied up in the context it gets.

Don't correct the bot with further conversation; edit your statements or restart the conversation entirely.

orterves

Kinda feeling happy for being a subscriber before 1k, you'll do great if you keep at it!

xrayian

A one story house with a basement is still considered one story. Only above grade level counts. Edge cases are everywhere.

Tug of war: You might be able to convince ChatGPT that the correct answer is incorrect.

NuncNuncNuncNunc

I was trying to concentrate on the puzzles, but I kept getting distracted by THE LICC

colouredmirrorball

ChatGPT is a language model based thing. Don't expect it to understand problems that fall outside of the scope of basic logic and language comparison.

mcwolfbeast

If anyone wants some peak humour:
ask ChatGPT to draw ASCII art of Yoshi.

You’ll be surprised at what you find

GeoRoze

That LifeAdviceLamp "Buy Lottery Tickets" tweet is the king of the city of my heart

herzogsbuick

The issue is that you found these tests online. ChatGPT has scanned the internet, so it can get many of the word riddles. Some of them it just doesn't know what you're asking.

ryanm

I created a simple steganography challenge for ChatGPT, which only required 5 steps to uncover my pseudo Google account details. The bot could not solve it. I used no password encryption, only standard ASCII reversal, encoded to binary, then to Base64. I then advanced every 3rd character in the Base64 by 1. I then added the resulting string into the metadata of a standard JPEG file depicting a red rose. It would have been cool if the AI could have uncovered the hidden data. Perhaps we still have far to go before AI can achieve this.

lancemarchetti
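A rough reconstruction of the encoding pipeline described above, in Python. The five steps are the commenter's; the specifics (an 8-bit binary representation, and "every 3rd character" meaning indices 2, 5, 8, ...) are assumptions, and the final JPEG-metadata step is only noted, not implemented:

```python
import base64

def encode(secret: str) -> str:
    # Step 1: reverse the ASCII string.
    reversed_s = secret[::-1]
    # Step 2: encode to binary (assumed: 8-bit codes, concatenated).
    binary = ''.join(f"{ord(c):08b}" for c in reversed_s)
    # Step 3: Base64-encode the binary string.
    b64 = base64.b64encode(binary.encode('ascii')).decode('ascii')
    # Step 4: advance every 3rd character by one code point.
    chars = list(b64)
    for i in range(2, len(chars), 3):
        chars[i] = chr(ord(chars[i]) + 1)
    # Step 5 (not shown): embed the result in the JPEG's metadata.
    return ''.join(chars)

def decode(payload: str) -> str:
    # Undo the steps in reverse order.
    chars = list(payload)
    for i in range(2, len(chars), 3):
        chars[i] = chr(ord(chars[i]) - 1)
    binary = base64.b64decode(''.join(chars)).decode('ascii')
    octets = [binary[i:i + 8] for i in range(0, len(binary), 8)]
    return ''.join(chr(int(o, 2)) for o in octets)[::-1]

# Round trip with a made-up placeholder secret:
assert decode(encode("user@example.com")) == "user@example.com"
```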

I asked if ChatGPT knows the Bulls and Cows game and suggested playing it. The bot thought of a number and I had to guess it. After the third answer, the limitations of a bot that just tries "to continue a sequence of words with the most probable candidate" became very obvious)) Its answers were inconsistent, and when I pointed out an inconsistency it admitted the mistake, but the new answer it gave was as inconsistent as the previous one. To sum up, when ChatGPT has seen something similar to a problem in its training set, as I believe was the case for the single-cross problem, it can produce wonderful results, but do not expect real reasoning from it.

perelmanych
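The consistency the commenter was checking for is easy to state in code. A minimal sketch of the standard Bulls and Cows scoring rule (bulls: right digit in the right place; cows: right digit in the wrong place) that a stateful player must keep honoring across turns:

```python
from collections import Counter

def score(secret: str, guess: str) -> tuple[int, int]:
    """Return (bulls, cows) for a guess against a fixed secret."""
    # Bulls: positions where the digits match exactly.
    bulls = sum(s == g for s, g in zip(secret, guess))
    # Shared digits regardless of position (multiset intersection),
    # minus the bulls, gives the cows.
    common = sum((Counter(secret) & Counter(guess)).values())
    return bulls, common - bulls

assert score("1234", "1243") == (2, 2)   # 1 and 2 placed; 3 and 4 swapped
assert score("1234", "5678") == (0, 0)
```

The inconsistency the commenter hit is exactly a bot returning scores that no single fixed secret could produce across all of its answers.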

Man, I would die on the hill that a seed is an early stage of a plant.

asdfssdfghgdfy