The Future of AI Might Be…



📝 My paper on simulations that look almost like reality is available for free here:

Or here is the original Nature Physics link with clickable citations:

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Alex Balfanz, Alex Haro, B Shang, Benji Rabhan, Gaston Ingaramo, Gordon Child, John Le, Juan Benet, Kyle Davis, Loyal Alchemist, Lukas Biewald, Martin, Michael Albrecht, Michael Tedder, Owen Skarpness, Richard Sundvall, Taras Bobrovytsky, Thomas Krcmar, Tybie Fitzhugh, Ueli Gallizzi.

Comments

I like the Skyrim A.I. Follower Framework mod. It makes NPCs in Skyrim respond with unique but plausible dialogue for each character, and even produces voices using a TTS A.I.

tjpprojects

It's funny how frequently people make critiques of AI performance that purport to demonstrate its inability to match human reasoning, with no apparent awareness that their critique equally applies to the limited reasoning ability of many humans. AI sometimes goes completely off the rails and can't admit when it has made a mistake, but does that remind you of any humans you know of?

crawkn

AI won't take over the world, it will just blue shell us all in Mario Kart.

DataIsBeautifulOfficial

Think back a year, now think a year ahead! What a time to be alive!

imperialofficer

Every example of tripping up the AI to show that it doesn't reason would also trip up most of my calculus students...

hakesho

I work at Lambda and every time I see a Lambda promo in one of these videos I get a warm fuzzy feeling.

SamplePerspectiveImporta-hqip

I don't think Apple's paper proved much, other than that we've trained our AI to reason using extremely clean data. Imagine if your world was as simple as "one apple plus one apple equals two apples" and your brain just never noticed "there are also 4 oranges, and it happens to be Thursday, and it's lunch time."

Our brains learn to wash out irrelevant information because we _drown_ in it. The signal-to-noise ratio the human brain receives is very poor.

All we need to do is train AI on reasoning steps that also contain irrelevant junk that might trip it up, and it will learn to identify and ignore that junk. Case closed. They haven't proved that LLMs can't reason; they've only proved that our current LLMs developed their reasoning in ideal circumstances.

gubzs
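
For illustration only, here is a minimal Python sketch of the kind of augmentation the comment above proposes: take a clean word problem and splice in irrelevant sentences so a model has to learn to ignore them. The problem text, the distractor sentences, and the `add_noise` helper are all made up for this sketch.

```python
import random

# Made-up distractor facts that have nothing to do with the question.
DISTRACTORS = [
    "It also happened to be a Thursday.",
    "There were 4 oranges on the table the whole time.",
    "The weather that day was unusually warm.",
]

def add_noise(facts: list[str], question: str, n: int = 2, seed: int | None = None) -> str:
    """Mix a few irrelevant sentences in among the relevant facts of a word problem."""
    rng = random.Random(seed)
    noisy = facts + rng.sample(DISTRACTORS, k=min(n, len(DISTRACTORS)))
    rng.shuffle(noisy)  # the distractors end up at random positions
    return " ".join(noisy + [question])

if __name__ == "__main__":
    print(add_noise(
        facts=["Fred picked 12 kiwis on Monday.", "He picked 12 more on Tuesday."],
        question="How many kiwis does Fred have?",
        seed=42,
    ))
```

Training on examples like this, with the answer left unchanged, is one way to reward a model for working out which facts actually matter.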

That's what I think: aren't we just the same kind of pattern-matching brain?
Our brain's reward is food and its penalty is pain.
Nothing magical; our brain works by following the rules of physics.

ujugamestudio

Two Minute Papers did the introduction after 3.5 minutes. What a time to be alive!

C-SuiteUchiha

Imagine there is AI software that can control a 3D character in a video game world and also operate a robot in the real world. This AI could be trained thousands of times in the virtual world before ever attempting tasks in the real one. The skills it develops in the digital space would translate directly into real-world abilities, meaning mastery in one leads to mastery in the other.

Now imagine the implications: anyone developing this AI would essentially create humanoid robots, or software you could run on your computer or phone, capable of remotely controlling robots via Wi-Fi or other technologies. You could communicate with this AI through simple voice commands, and it would be able to understand context, adapting its responses accordingly.

This flexibility would mean you could have the best chef or plastic surgeon available at your fingertips, controlling robotic systems of all kinds. The software might even be able to manage multiple robots simultaneously. From what I’ve seen in some of the latest tech demos, it’s clear this kind of AI has already made strides, capable of walking with style and precision—almost like it’s already real.

This AI could revolutionize human life, creating a world where skilled robots can handle complex tasks autonomously, whether in kitchens, hospitals, or even in creative fields like art and design.

What a fascinating future this would be!

antoniobortoni

I remember when this channel had just around 200k subscribers. Look how far you have gotten; proud of you!

vickashing

I am working in this field and could not agree more. Seeing such beautiful images and realistic actions, it would be a pity if the technology behind them were only used for visualization.

hangliu

I am very impressed with the way you present your content. Not only is it engaging but it is also very professional. This is definitely one of the best AI channels out there!

AI-Life-

It would be nice to find a new way to scale these systems where the resources required do not scale in like measure. What if we could get 10x the progress with 1/10th of the compute? That would be a serious game changer. I hope we make progress in that direction.

bujin

Thanks so much for all this info… I'm only seventeen and you got me into this early. Thank you!

BoyFromNyYT

From what I have found out so far, GPT-4o (not even the o1 version) can perform at least similarly to, if not better than, o1-preview. The key is to let it self-reflect first.
If it's "taught" to first analyse the input and think carefully about the output (in very simplified terms), it's much less prone to making errors in reasoning.
The main challenge I see is the limited context window, which greatly limits the capacity not to learn, but to maintain what has been learned.
I still need to experiment with how much this can be rectified by the new knowledge option within ChatGPT, but seeing its limited capacity, I expect only limited results.
I wonder what the result might be if GPT were allowed a bigger context window (let's say at least a few times what it currently has) purely for the purpose of permanent but modifiable memory, and given more freedom. I'm not even mentioning the ability to somehow modify its own weights in order to evolve.

pitong
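
As a rough illustration of the "reflect first, answer second" approach described in the comment above, here is a sketch using the OpenAI Python SDK. The model name, the prompts, and the two-pass structure are illustrative assumptions, not the commenter's exact setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(messages: list[dict]) -> str:
    """Send the running conversation and return the model's reply text."""
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content

def reflect_then_answer(question: str) -> str:
    history = [
        {"role": "system", "content": "Analyse carefully before answering."},
        {"role": "user", "content": "First, restate the problem and list every given fact. "
                                    "Flag anything that looks irrelevant. Do not answer yet.\n\n" + question},
    ]
    analysis = ask(history)   # pass 1: self-reflection only
    history += [
        {"role": "assistant", "content": analysis},
        {"role": "user", "content": "Now, using only the relevant facts you listed, "
                                    "solve the problem step by step."},
    ]
    return ask(history)       # pass 2: the actual answer

if __name__ == "__main__":
    print(reflect_then_answer(
        "Fred picked 12 kiwis on Monday and 12 on Tuesday. "
        "10 of them were smaller than usual. How many kiwis does Fred have?"
    ))
```

The point of keeping the analysis in the history is that the second pass answers against the model's own restatement rather than the raw question.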

Interestingly, I tried a similar example to the one at 7:10 with Copilot: "Fred picked 12 kiwis on Monday. Then he picked 12 on Tuesday. On Wednesday George picked 100 kiwis, and Fred picked twice as many as he did on Monday, but 10 of them were smaller than usual. How many kiwis does Fred have?"

However, what I did differently was tell it we were going to solve the puzzle in stages. The first stage was to list all the information given in the puzzle; the second was to make a plan. The LLM listed out George's kiwis and the size discrepancy. Its plan was to first calculate Fred's kiwis on Wednesday, then sum up all the kiwis, and finally assess how the size should affect the total. I allowed it to proceed step by step with no additional prompting. It calculated 2 x 12 = 24. Next it calculated the total kiwis as 48. Finally it said:

Step 3: Consider if the size of the kiwis affects the total count.
The puzzle mentions that 10 of the kiwis Fred picked on Wednesday were smaller than usual. However, it does not specify that these smaller kiwis should be excluded from the total count. Therefore, we can assume that all kiwis, regardless of size, are included in the total count.
So, the total number of kiwis Fred has is:
48 kiwis


As you can see, the ability to reason is not a question of LLM capability; it's about the architecture used to answer the queries. It's simple to let the LLM think: analyse the input, make a plan, and execute it. Even GPT-3 was capable of solving complicated logic puzzles using local reasoning when properly prompted. This is why prompt engineering is a big deal: you turn a toy LLM into an engine capable of reasoning, surpassing the expectations of engineers and scientists.

UCoXrlWjXFOZayVAJCXcg
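
Here is a schematic version of that staged approach: the puzzle is solved over several turns, each with its own instruction, and every reply stays in the running conversation. The `chat` callable is a stand-in for whatever chat interface is used (the commenter used Copilot), and the stage wording is an approximation of the prompts described above.

```python
from typing import Callable, Dict, List

Message = Dict[str, str]

STAGES = [
    "Stage 1: List every piece of information given in the puzzle. Do not solve it yet.",
    "Stage 2: Make a step-by-step plan for solving it. Do not solve it yet.",
    "Stage 3: Execute your plan one step at a time and state the final answer.",
]

PUZZLE = (
    "Fred picked 12 kiwis on Monday. Then he picked 12 on Tuesday. On Wednesday "
    "George picked 100 kiwis, and Fred picked twice as many as he did on Monday, "
    "but 10 of them were smaller than usual. How many kiwis does Fred have?"
)

def solve_in_stages(chat: Callable[[List[Message]], str]) -> str:
    """Walk the model through the puzzle stage by stage, keeping the full history."""
    history: List[Message] = [
        {"role": "user", "content": "We are going to solve this puzzle in stages.\n\n" + PUZZLE}
    ]
    reply = ""
    for stage in STAGES:
        history.append({"role": "user", "content": stage})
        reply = chat(history)  # the model handles only the current stage
        history.append({"role": "assistant", "content": reply})
    return reply  # expected final answer: 12 + 12 + 2 * 12 = 48 kiwis

if __name__ == "__main__":
    # Dry run with a stub so the script works without any API: print what would be sent.
    def stub(history: List[Message]) -> str:
        print(">>", history[-1]["content"])
        return "(model reply goes here)"
    solve_in_stages(stub)
```

Keeping every stage's reply in the history is what lets the final step reuse the fact list and the plan instead of re-deriving them.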

I am looking to implement certain research papers for my portfolio. Which videos from your beautiful collection do you recommend?

abdallahlakkis

Just because they don't reason very well doesn't mean they don't reason. Most human beings would make all the same mistakes.

nickfleming

I want this to be put to some actually good use.

Like a better version of Spore, where your little creatures have to learn to walk, take in stimuli from their environment, and learn.

dominicparker