Understanding Implicit Neural Representations with Itzik Ben-Shabat

In this episode of Computer Vision Decoded, we are going to dive into implicit neural representations.

We are joined by Itzik Ben-Shabat, a Visiting Research Fellow at the Australian National University (ANU) and the Technion – Israel Institute of Technology, as well as the host of the Talking Papers Podcast.

You will gain a core understanding of implicit neural representations: key concepts and terminology, how they are being used in applications today, and Itzik's research into improving output with limited input data.

Episode timeline:

00:00 Intro
01:23 Overview of what implicit neural representations are
04:08 How INRs compare and contrast with NeRFs
08:17 Why Itzik pursued this line of research
10:56 What is normalization and what are normals
13:13 Past research people should read to learn about the basics of INR
16:10 What is an implicit representation (without the neural network)
24:27 What is DiGS and what problem with INR does it solve?
35:54 What is OG-INR and what problem with INR does it solve?
40:43 What software can researchers use to understand INR?
49:15 What should non-scientists focus on to learn about INR?

To understand a Signed Distance Function (SDF), imagine a circle with radius R centered at [0, 0]. The equation of its boundary is x^2 + y^2 = R^2, which rearranges to x^2 + y^2 − R^2 = 0. This is an implicit representation: f(x, y) = 0 defines the boundary, f(x, y) < 0 describes points inside the circle, and f(x, y) > 0 describes points outside. Strictly, the signed distance itself is f(x, y) = sqrt(x^2 + y^2) − R, which has the same sign everywhere but also tells you exactly how far a point is from the boundary. You can write a similar closed-form equation for a sphere, but SDF techniques let you represent any arbitrary geometry.
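The circle example above can be sketched in a few lines of Python. The helper name `circle_sdf` is hypothetical, not from the episode; it uses the sqrt form so the returned value is a true distance, with sign indicating inside versus outside:

```python
import math

def circle_sdf(x, y, R=1.0):
    # Signed distance from point (x, y) to a circle of radius R
    # centered at the origin: negative inside, zero on the
    # boundary, positive outside.
    return math.hypot(x, y) - R

print(circle_sdf(0.0, 0.0))  # -1.0: the center lies distance R inside
print(circle_sdf(1.0, 0.0))  #  0.0: exactly on the boundary
print(circle_sdf(3.0, 4.0))  #  4.0: point is 5 from center, 4 outside
```

An implicit neural representation replaces this analytic formula with a small neural network f(x, y) trained so its zero level set traces the target shape.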

vaidyt