Training Spiking Neural Networks Using Lessons From Deep Learning


Jason Eshraghian is a post-doctoral researcher with the Department of Electrical Engineering and Computer Science at the University of Michigan. He received the Bachelor of Engineering and the Bachelor of Laws degrees from The University of Western Australia, where he also obtained his Ph.D. degree. He currently serves as the secretary-elect of the IEEE Neural Systems and Applications Technical Committee and is a consultant to several medical-tech startups. He was awarded the 2019 IEEE VLSI Systems Best Paper Award, the 2019 IEEE AICAS Best Paper Award, and the Best Live Demonstration Award at the 2020 IEEE International Conference on Electronics, Circuits, and Systems. He is a recipient of the Fulbright, Endeavour, and Forrest Research Fellowships. His current research interests include neuromorphic computing and spiking neural networks.
Comments

Great video! I'm currently in an internship on SNNs and never studied them before. This really helped me understand all the things I've read and put them together. It's great for understanding how to train an SNN in practice. Thanks mate!

Mark-slbv

Thank you so very much for uploading this!

syvisaur

@JasoNeuro Very nice presentation, liked it a lot

TileBitan

Hi, great presentation! Is there a way I can get the presentation slides?

notsobad

Hey, got any tutorials for learning snnTorch?

maliks.a.

Normally neurons are not at rest; they are actively using ATP to maintain membrane potentials.

samarthjain
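(Editor's note: a minimal, hypothetical sketch, not from the talk, of the discrete-time leaky integrate-and-fire neuron the video discusses. The "resting" level here is an idealized baseline of zero; as the comment above notes, real neurons spend ATP to hold their resting potential. The parameter values are illustrative assumptions.)

```python
def lif_step(mem, input_current, beta=0.9, threshold=1.0):
    """One timestep of a leaky integrate-and-fire neuron.

    beta is an assumed membrane decay factor; threshold is an
    assumed firing threshold. Returns (spike, new_membrane).
    """
    mem = beta * mem + input_current  # leaky integration of input
    spike = mem >= threshold          # fire if threshold is crossed
    if spike:
        mem -= threshold              # soft reset by subtraction
    return spike, mem

# Drive the neuron with a constant input and record its spikes.
mem, spikes = 0.0, []
for t in range(10):
    spk, mem = lif_step(mem, 0.3)
    spikes.append(spk)
print(spikes)
```

With a constant sub-threshold input, the membrane charges up over several steps, fires, resets, and repeats, which is the basic spiking dynamic the talk builds on.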

GPT-3 is developed by OpenAI, not DeepMind.

shadysaeed

Why do Americans mispronounce "temporal"? :(

deliciouspops

Wrong. We should also care about the timing. Otherwise, things like coincidence detection and Hebbian learning wouldn't work.

I also care about feedback loops, which are not possible without integration.

The neuron model you showed is also just a motor neuron out of hundreds of other types; there are many kinds of neurons with different timings and spike widths. Do you actually think this is for nothing? Obviously not. Allowing some neurons to interact only through specific waves and others through "spikier" waves might introduce interesting features in the network, like neurons for specialized purposes.

I cannot make any claims, but if you're going to ignore all this stuff, why talk about biological brains in the first place? Brains don't even use backpropagation, which is obviously one of the reasons why they're more efficient.

nullbeyondo
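(Editor's note: a hypothetical sketch, not from the talk, of why spike timing matters for Hebbian-style learning, as the comment above argues. This is pair-based spike-timing-dependent plasticity (STDP): the sign and size of the weight change depend on the relative timing of pre- and postsynaptic spikes. The learning rates and time constant are illustrative assumptions.)

```python
import math

A_PLUS, A_MINUS = 0.01, 0.012  # assumed potentiation/depression rates
TAU = 20.0                     # assumed time constant in milliseconds

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post: causal pairing, potentiate
        return A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:  # post fires before pre: anti-causal pairing, depress
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0

# Causal pairing strengthens the synapse; anti-causal weakens it.
print(stdp_dw(10.0, 15.0) > 0)  # potentiation
print(stdp_dw(15.0, 10.0) < 0)  # depression
```

A purely rate-based model would assign the same update to both pairings, which is exactly the timing information the comment says should not be thrown away.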