New Super Resolution AI - Enhance ~10x Faster!


📝 The paper "Deep Fourier-based Arbitrary-scale Super-resolution for Real-time Rendering" is available here:

📝 My paper on simulations that look almost like reality is available for free here:

Or this is the original Nature Physics link with clickable citations:

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Alex Balfanz, Alex Haro, B Shang, Benji Rabhan, Gaston Ingaramo, Gordon Child, John Le, Juan Benet, Kyle Davis, Loyal Alchemist, Lukas Biewald, Martin, Michael Albrecht, Michael Tedder, Owen Skarpness, Richard Sundvall, Taras Bobrovytsky, Thomas Krcmar, Tybie Fitzhugh, Ueli Gallizzi.

Comments

To explain: unlike current super-resolution solutions like Nvidia DLSS or AMD FSR2, these new techniques (2023's FuseSR, this 2024 paper) do not just scale up the rendered frames a bit, from a medium resolution to a somewhat higher one, like 720p to 1080p. They instead take the full (target) resolution unshaded texture and geometry data from the G-buffer and combine it with a very low-resolution, fully pixel-shaded frame. E.g. 1080p geometry + 1080p textures + 270p fully shaded frame = 1080p fully upscaled frame. So the AI doesn't have to "guess" texture and geometry edges while upscaling, only shading effects, like shadows.

Since pixel shading is a very expensive step, but most high frequency details are in the textures and geometry, I think it makes sense to separate the scaling factors here.

That being said, it seems these techniques are still somewhat slow (more than 10 milliseconds per frame apparently, though I don't know the evaluation conditions), so they probably won't replace DLSS & Co just yet.
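The resolution split this comment describes can be sketched in a few lines of NumPy: keep the albedo from the G-buffer at full resolution, upsample only the low-resolution shading, and recombine them. This is a toy illustration under my own assumptions (the papers use a learned network rather than nearest-neighbour upsampling, and the function and variable names here are made up):

```python
import numpy as np

def upscale_shaded_frame(albedo_hr, shading_lr, scale):
    """Toy stand-in for G-buffer-guided super resolution.

    albedo_hr:  (H, W, 3) full-resolution unshaded texture color
    shading_lr: (H//scale, W//scale, 3) fully shaded low-res frame,
                treated here as lighting to be re-applied
    scale:      integer upscaling factor (e.g. 4 for 270p -> 1080p)
    """
    # Naive nearest-neighbour upsampling of the low-res shading.
    # In the actual papers, a neural network replaces this step.
    shading_hr = shading_lr.repeat(scale, axis=0).repeat(scale, axis=1)
    # High-frequency detail comes from the full-res albedo;
    # low-frequency lighting comes from the upscaled shading.
    return np.clip(albedo_hr * shading_hr, 0.0, 1.0)

albedo = np.random.rand(8, 8, 3)    # pretend 8x8 is the "1080p" target
shading = np.random.rand(2, 2, 3)   # 4x-smaller fully shaded frame
frame = upscale_shaded_frame(albedo, shading, 4)
print(frame.shape)  # (8, 8, 3)
```

The point of the sketch is only that texture edges never pass through the upsampler at all, which is why the AI only has to guess shading, not geometry.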

cubefox

I’m watching on 144p and I’m impressed

Mkaltered

Now even more hardware requirements can be crammed into blurrier and lower resolution images with a ton of motion blur
What a time to be alive!

Napert

No more blurry videos of UFOs and other strange phenomena, finally!

IAm-nd

Looks promising. Thanks Karen, Jonah and Fahir!

Goood

Ah sweet, another excuse for the AAA studios to throw the little remaining optimization out the window

그냥사람-ef

From the looks of the second scene, the technique seems to be creating extra detail based on external data. The warning sign on the second column started appearing (probably due to LOD) in the LR example, while the Ours example had it visible from the get-go. The third column did not have the sign present in the LR at all, yet the Ours version has it. The sign is either repeated from the sign on the first column, or its existence on the farther columns is inferred from external data.
Don't get me wrong, I believe an upscaler that uses the uncompressed game assets to perform more accurate upscaling is a great idea, but it doesn't seem to be a pure upscaler.

Telhias

If I understood this correctly, this is not going to work on 2D images, but only on something that is rendered on a GPU, because it needs the G-buffer?

RiPvI

Finally, the phrase “enhance the image” in movies will make sense.

meemdizer

Wow, it even knows where to place warning signs on the wall at 6:29 :)

MrAndidorn

0:15 wow :) I need it in the form of a box with an HDMI port for my console with 20,000 retro games... :) Where to buy it? I'm joking of course, but maybe next year? Who knows?

tomaszwaszka

I am going to point out that at 1:29 it is hallucinating the shape of the grass: it was short, and it changed it to be long.

Also, the examples presented were all pretty flat scenes (the eastern village scene was the best, as you said, but was also very boring texture-wise; it just contained some flat colors with no other real effects going on). This is to be expected, because if you had actually complicated materials with intricate textures, you would be asking the AI to figure out information it has no access to.

Don't get me wrong, the tech is really cool, but I don't really get it. This can only reliably work without artifacts for somewhat simple scenes where the AI can do less guesswork (if you are missing the pixels from a matte gray wall, you can figure out that they are also just gray). The issue is that we can already render such scenes with great ease. This would be most useful for really complex scenes, I would imagine, but such scenes will probably always be artifact galore, because you are asking the AI to basically guess; even if it is great at guessing, it will make mistakes, because multiple options might be reasonable.

τεσττεστ-υθ

Train a model on video game eye tracking -> have it prioritize the focus region -> super super resolution. I thought I remembered something like this on this channel before?

ibgib

Even less optimization for game performance due to lazy development options, and blurry, smeared games!
What a time to be alive!!

Impressive tech, but not getting the desired result.

dlaiy

There could be different upscalers per game, each trained specifically on only that game.

For example, GTA 6 could train its model on paired 8K and 720p versions and map that data together. On consumer GPUs the game would only need to run at 720p, but the upscaler would upscale to 8K with perfect clarity and correctness without losing any details. This way GTA 6 could achieve 8K resolution on just a 1060 Ti GPU (which would mostly be used for physics calculations).
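The per-game idea boils down to building a supervised dataset of paired renders of the same frames at two resolutions. A minimal sketch of constructing such pairs by box-filter downsampling, with made-up sizes and names purely for illustration:

```python
import numpy as np

def make_training_pairs(hr_frames, scale):
    """Build (low-res, high-res) pairs for a game-specific upscaler.

    hr_frames: list of (H, W, 3) arrays rendered at the target resolution
    scale:     integer downsampling factor (8K -> 720p would be 6;
               tiny arrays and scale 4 are used here for illustration)
    """
    pairs = []
    for hr in hr_frames:
        h, w, _ = hr.shape
        # Box-filter downsample: average each scale x scale pixel block.
        lr = hr.reshape(h // scale, scale, w // scale, scale, 3).mean(axis=(1, 3))
        pairs.append((lr, hr))
    return pairs

frames = [np.random.rand(16, 16, 3) for _ in range(3)]  # stand-in renders
pairs = make_training_pairs(frames, 4)
print(pairs[0][0].shape)  # (4, 4, 3)
```

In practice a game studio would render both resolutions natively rather than downsample, since downsampled frames miss resolution-dependent shading effects; this sketch only shows the shape of the dataset a per-game model would be trained on.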

emircanerkul

Thanks again, man!
I would like to see more examples of enhancing and upscaling video footage.

peffken

enhance resolution: CSI is no longer a joke

brololler

You should make a video on the new physics simulator "Genesis". I think it's really impressive.

someoneidk

Wow. Hats off to the team at Nanjing University!

selohcin

Bro, AI is going way too fast, it's actually insane. Kinda glad I'm alive to see this, but boy, the future is very unpredictable now.

uletea