MIT Deep Learning Genomics - Lecture 3 - Convolutional Neural Networks (CNNs) (Spring 2020)

MIT 6.874 Lecture 3. Spring 2020
Slides credit: 6.S191 (Alexander Amini, Ava Soleimany), Dana Erlich, Param Vir Singh, David Gifford, Manolis Kellis

1. Scene understanding and object recognition for machines (and humans)
– Scene/object recognition challenge. Illusions reveal primitives, conflicting info
– Human neurons/circuits. Visual cortex layers = levels of abstraction. General cognition
2. Classical machine vision foundations: features, scenes, filters, convolution
– Spatial structure primitives: edge detectors & other filters, feature recognition
– Convolution: basics, padding, stride, object recognition, architectures (see the convolution sketch after this outline)
3. CNN foundations: LeNet, de novo feature learning, parameter sharing
– Key ideas: learn features, hierarchy, re-use parameters, back-prop filter learning
– CNN formalization: representation = (Conv+ReLU+Pool)×N layers + fully-connected head (see the CNN sketch after this outline)
4. Modern CNN architectures: millions of parameters, dozens of layers
– Feature invariance is hard: apply perturbations, learn for each variation
– ImageNet progression of best performers
– AlexNet: first top-performing CNN, 60M parameters (up from 60k in LeNet-5), ReLU
– VGGNet: simpler but deeper (8 to 19 layers), 140M parameters, ensembles
– GoogLeNet: new primitive = Inception module, 5M params, no FC layers, efficiency
– ResNet: 152 layers; counters vanishing gradients by fitting residuals to enable learning (see the residual-block sketch after this outline)
5. Countless applications: General architecture, enormous power
– Semantic segmentation, facial detection/recognition, self-driving, image colorization, optimizing pictures/scenes, up-scaling, medicine, biology, genomics
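
The convolution bullet above references this sketch: a minimal 2D convolution (strictly, cross-correlation, as CNN libraries implement it) with zero-padding and stride, in plain NumPy. The function name conv2d and the toy image are illustrative assumptions, not from the lecture.

import numpy as np

def conv2d(image, kernel, stride=1, padding=0):
    # Slide the kernel over the image, summing elementwise products at each step.
    if padding > 0:
        image = np.pad(image, padding)              # zero-pad all borders
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1                    # output height
    ow = (iw - kw) // stride + 1                    # output width
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)
    return out

# Usage: a classic vertical-edge detector (Sobel filter) on a toy image.
img = np.zeros((6, 6)); img[:, 3:] = 1.0            # dark left half, bright right half
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
print(conv2d(img, sobel_x, stride=1, padding=1))    # strong response along the edge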
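
The CNN-formalization bullet references this sketch: a minimal (Conv+ReLU+Pool)×2 + fully-connected network, roughly LeNet-5 shaped, assuming PyTorch; the 28x28 grayscale input and layer sizes are illustrative assumptions.

import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),   # 1x28x28 -> 6x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 6x14x14
            nn.Conv2d(6, 16, kernel_size=5),             # -> 16x10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 16x5x5
        )
        self.classifier = nn.Sequential(                 # fully-connected head
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes),                  # class scores
        )

    def forward(self, x):
        return self.classifier(self.features(x))

x = torch.randn(1, 1, 28, 28)            # one grayscale image
print(SmallCNN()(x).shape)               # torch.Size([1, 10])

Parameter sharing is what keeps this small: each 5x5 filter is reused at every spatial position, which is why LeNet-5 needs only about 60k parameters.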
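
The ResNet bullet references this sketch: a minimal residual block, assuming PyTorch. The stacked layers fit a residual F(x) and the skip connection adds the input back, so the block outputs F(x) + x; this identity path keeps gradients flowing through very deep stacks.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        fx = self.relu(self.bn1(self.conv1(x)))     # first conv of the residual branch
        fx = self.bn2(self.conv2(fx))               # second conv, no activation yet
        return self.relu(fx + x)                    # skip connection: F(x) + x

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock(64)(x).shape)                   # shape preserved: [1, 64, 32, 32]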
Comments

Thank you Manolis for putting your lectures online for us to hear; much appreciated. I would never get into a place like MIT, so I'm happy I can hear you on YouTube.

xenajade

Thank you, super helpful. But how does this relate to genomics?

abcd