Categorical Deep Learning


It presents how Categorical Deep Learning generalises GDL using the constructs introduced previously, and discusses ways in which its contributions can extend from semantics into syntax. It assumes prior knowledge of the basics of geometric deep learning, as well as of the previous lecture (Into the Realm Categorical).
Comments

I would really love to hear about a proper result from category theory that makes a phenomenon in machine learning much easier to understand, or better yet, proves some result that was not known before. So far, most of what I have seen is that category theory just gives a more abstract way (with lots of notation and names to remember) of describing these objects. This is fine, and in a sense it is the basis of most of what we do in algebra, but usually this abstraction produces much more, both by "denoising" problems and reducing them to their cores, and by connecting problems which seem different at first glance but become similar once you abstract them.
For example, you can use complex conjugation to find the real and imaginary parts of a complex number. Once you understand that the abstract group of order 2 is in the background, you suddenly see that it appears in many other places, and the same process of finding "real" and "imaginary" parts works there too. For instance, with symmetric/skew-symmetric matrices and the transpose map, or with even/odd functions and the map f -> f(-x). Generalizing this to cyclic groups gives you the discrete Fourier transform (not to mention group representations, which are used everywhere). Another example is even an elementary result like Lagrange's theorem, that the size of a subgroup must divide the size of the full group, which is very powerful when studying symmetries of objects.
Using category theory in machine learning is a new area, so I don't expect it to have results as strong as those in group theory. However, I would like to see a couple of strong examples showing how powerful this connection can become, rather than mostly saying that we can rewrite group actions in a more abstract manner that might also cover list generation.
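[The decomposition described in the comment above can be made concrete in a few lines. This is a minimal numerical sketch, not from the lecture itself; the function names `split_by_involution` and `cyclic_projections` are illustrative choices.]

```python
import numpy as np

def split_by_involution(x, sigma):
    """Project x onto the +1 and -1 eigenspaces of an involution sigma,
    i.e. the order-2 group action the comment describes."""
    plus = (x + sigma(x)) / 2   # fixed part: "real" / symmetric / even
    minus = (x - sigma(x)) / 2  # anti-fixed part: "imaginary" / skew / odd
    return plus, minus

# Symmetric / skew-symmetric split of a matrix via the transpose map:
A = np.array([[1., 2.], [3., 4.]])
S, K = split_by_involution(A, lambda M: M.T)
assert np.allclose(S, S.T) and np.allclose(K, -K.T)
assert np.allclose(S + K, A)

# Even / odd split of a function sampled on a grid symmetric about 0,
# where the involution f -> f(-x) becomes reversal of the sample array:
f = np.array([1., 2., 3., 2., 5.])
even, odd = split_by_involution(f, lambda v: v[::-1])
assert np.allclose(even, even[::-1]) and np.allclose(odd, -odd[::-1])

def cyclic_projections(x):
    """Generalize Z/2 to Z/n: average the cyclic-shift action against each
    character of Z/n. Row k is the projection of x onto the k-th character's
    eigenspace; summing the rows recovers x. This is the DFT in disguise."""
    n = len(x)
    shifts = np.stack([np.roll(x, -k) for k in range(n)])          # group action
    chars = np.exp(2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
    return chars @ shifts / n

x = np.array([1., 2., 3., 4.])
assert np.allclose(cyclic_projections(x).sum(axis=0), x)
```

The two-eigenspace split and the n-character split are the same averaging trick at different group sizes, which is the unification the comment is pointing at.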

eofirdavid

This is really interesting, but let me ask you a general question. Is this really helpful for discovering new architectures, or is it good for explaining and unifying architectures once we have already discovered them? I had this tension when I was doing physics: to what extent should I take the math seriously?

vahidhosseinzadeh

Impressive work. I was wondering whether dropout or infinite-width neural networks could be obtained from the categorical framework.

marekglowacki

Is there any Python code that explains this research paper?

hellomyfriend_S