Creating Photographs Using Deep Learning | Two Minute Papers #13

Machine learning techniques such as deep artificial neural networks have proven to be extremely useful for a variety of tasks that were previously deemed very difficult, or even impossible, to solve. In this work, a deep learning technique is used to learn how different light source positions affect a scene, and to create ("guess") new photographs with unknown light source positions. The results are absolutely stunning. The promised links for artificial neural networks follow below.
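The core idea can be illustrated with a toy sketch: treat a pixel as a small regression problem, train a tiny network to map light source position to pixel intensity from a handful of sample "photographs", then query it at a light position it never saw. Everything below — the Lambertian-style shading function, the network size, and the training setup — is an illustrative assumption for this sketch, not the actual method, architecture, or data of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ground truth: one pixel whose brightness depends on the 2D light
# position via a simple diffuse (Lambertian-style) model. Purely illustrative.
def pixel_intensity(light_xy):
    nx, ny, nz = 0.3, 0.2, 0.93            # fixed surface normal at this pixel
    lx, ly = light_xy
    l = np.array([lx, ly, 1.5])            # light direction toward the pixel
    l = l / np.linalg.norm(l)
    return max(0.0, nx * l[0] + ny * l[1] + nz * l[2])

# A sparse grid of light positions plays the role of the input photographs.
train_xy = np.array([(x, y) for x in np.linspace(-1, 1, 5)
                            for y in np.linspace(-1, 1, 5)])
train_i = np.array([pixel_intensity(p) for p in train_xy])

# A small one-hidden-layer network, trained by plain gradient descent.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

lr = 0.05
for _ in range(5000):
    h, pred = forward(train_xy)
    err = pred - train_i                           # squared-error gradient
    gW2 = h.T @ err[:, None] / len(err)
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] * W2.T) * (1 - h**2)        # backprop through tanh
    gW1 = train_xy.T @ dh / len(err)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# "Guess" the pixel under a light position that was never photographed.
_, guess = forward(np.array([[0.37, -0.58]]))
true_val = pixel_intensity((0.37, -0.58))
print(f"guess={guess[0]:.3f} true={true_val:.3f}")
```

Scaled up to every pixel and real photographs, this is the flavor of "guessing" new images the video describes: the network interpolates the scene's light transport from examples rather than simulating it.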

__________________________________

The paper "Image Based Relighting Using Neural Networks" is available here:
Disclaimer: I was not part of this research project, I am merely providing commentary on this work.

Recommended for you:

Music: "The Place Inside" by Silent Partner

The thumbnail background was taken from the paper "Image Based Relighting Using Neural Networks".

Károly Zsolnai-Fehér's links:
Comments

I consider ML today's equivalent of sorcery. This is absolutely astonishing. I love how the "failure case" yields convincing caustics.

thomassynths

So well explained! It's a pleasure to listen to your Two Minute Papers. Please continue this great work.

rodesicom

I love computers in general, but I hate how many fields of study there are within them. I love computers, neural networks, algorithms, the science of computer hardware, and anything that has to do with programming or computers. I just don't know what I want to stick with, and it makes me sad that I can only live long enough to learn a limited number of things :(

elmonster

That's very good. The failure on the drinking glasses was probably a hidden variable: the curvature of the dimples in the glass, which likely only shows up when the light is at certain angles.

pinchopaxtonsgreatestminds

Such an awesome analogy when you mentioned "asking many doctors to diagnose a problem".

wongkohping

I wonder how fast the algorithm is able to render a frame. It did a great job rendering the caustics, and if it is fast enough, it could be a great tool for rendering CG lights and shadows.

mbunds

Very neat stuff! I wonder if we'll ever get NNs fast enough to run in real time. E.g. it would be interesting to have an entire game level precomputed from a small sample of viewing angles and lighting conditions, where the NN computes an almost exact result practically instantly, as if it had been ray-traced until fully cleared up.

Kram

What is most amazing about this work is that it doesn't rely at all on understanding anything about the reflectance characteristics of any of the materials in the scene. For example, it seems to somehow infer the correct way to relight surfaces that exhibit various types of anisotropy.

It also, astonishingly, is able to correctly relight transparent materials, without knowing the geometry of the objects. How the F??

How on earth is it doing that without characterizing the actual surfaces of the objects in the scene? It would seem that simply having example shots would not allow this to be done accurately, yet as they moved the light source around those metal gears, the anisotropic specular reflectance moved about in a realistic fashion.

That is the part that had my mind completely blown.

Somehow the neural network is inherently learning to characterize material properties like anisotropy from photographs.

Thinking about it, it would make sense that having example photos of light shining on a surface would allow inference of anisotropic surface characteristics, but my brain is finding it hard to wrap itself around how that would work.

This is really incredible. I wonder if this technique could be harnessed to do with space what is now being done with light. Here, a fixed image has its light dynamically shifted and the lighting of objects interpolated from the neural net's guesses; couldn't the same thing be done through a volume, allowing full spatial navigation of a scene given a few lighting examples and starting geometry?

So just as you use the NN to interpolate light reflectance and transmittance from a few exemplars, you let it also learn to interpolate geometry in the scene, so that it could change viewing position and lighting without explicitly calculating surface reflectance functions.

One could imagine this being a super efficient means of enabling global illumination for rendered scenes, by leveraging a network to interpolate how objects would look from new positions AND under different light sources... the costly computational process involved in rendering today, where surface BRDFs and volume BSSRDFs need to be computed, would go away.

Assuming I am understanding what is going on here... I downloaded the paper and will be reading it. I am shocked I didn't come across this paper two years ago when it was released!

DavidSaintloth

In future videos, could you stress the distinction between the training set and the test set? I struggle to understand how likely it is that the network is overfitting in this experiment...

nivwusquorum

Imagine using this kind of tech for relighting a scene in a photo. This would give photographers a completely new way of making eye candy!

chocoprata

That plant is going "Why is there a water bottle next to me when I'm so thirsty?!" But the machine learning is amazing.

mutatron

Wow 2000 subscribers is quite a lot. Congrats!

Tymon

Very cool. Please keep up the great work. It'll be interesting to see what happens when we begin using these techniques to model what we cannot feasibly accomplish in the real world.

apoc

Nice. Is there or will there be an online demo?

PeterSwinkels

Also, did you try the same approach with the camera moving around?

lostmsu

Can one apply this to real-time lighting in games?

lostmsu

The network is shallow, with 2 or 3 hidden layers, as described in the paper! I would rather call this work "Creating Photographs Using Neural Networks".

jeongjoonpark

OK: "This is what it looks like."
"What" is the object of the verb looks like (to look like = to resemble).

OK: "This is how it looks."
"How" is an adverb modifying the verb looks (to look = to appear in this context).

NOT OK: "This is how it looks like."
This is not grammatically correct.

It can look like a thing,
or,
it can look a certain way.

gruppler