ACACES 2023: Neuromorphic computing: from theory to applications, Lecture 1 – Yulia Sandamirskaya

Join Yulia Sandamirskaya, head of the Cognitive Computing in Life Sciences research centre at Zurich University of Applied Sciences (ZHAW) and senior researcher in Intel's Neuromorphic Computing Lab, for a journey into the theory behind neuromorphic computing. This is the first lecture from Yulia's course at ACACES, the HiPEAC summer school.

Further information:
Comments

Both analog encoding of signals (the classic continuous ANNs, the perceptron and its offspring) and the spiking style have their merits. To use an analogy from electronics, spiking has a strong element of the "Schmitt trigger" to it: threshold detection and suppression of "noise" (insignificant fluctuations in the analog input). In a way, spiking distills important events out of a sea of noisy continuous signals and helps prevent the noise from accumulating along the signal path. It is a filtering mechanism. The flip side of the same coin is that you lose "resolution" in the transferred signal and introduce a processing delay. Different applications may favour one optimization or the other.
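
To make the Schmitt trigger analogy concrete, here is a minimal sketch of the idea (my own illustration, not anything shown in the lecture; the thresholds 0.7/0.3 and the noise level are arbitrary assumptions). The hysteresis band is what keeps insignificant fluctuations from becoming a burst of spurious events:

```python
import random

def schmitt_spikes(signal, high=0.7, low=0.3):
    """Return indices where `signal` crosses `high` upward, re-arming
    only after it falls back below `low` (hysteresis = noise rejection)."""
    armed, spikes = True, []
    for t, x in enumerate(signal):
        if armed and x >= high:
            spikes.append(t)   # a significant event: threshold crossed
            armed = False      # ignore jitter around the threshold
        elif not armed and x <= low:
            armed = True       # input dropped far enough; re-arm
    return spikes

# A slow ramp plus jitter: a bare threshold would fire repeatedly while the
# noisy input hovers around 0.7; with hysteresis there is a single event.
random.seed(0)
noisy = [0.001 * t + random.gauss(0, 0.05) for t in range(1000)]
print(schmitt_spikes(noisy))
```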

Knowing a bit of control systems theory, the debate about "how to encode a continuous scale using spiking signals (pulses) and make it quick" has some obvious answers.

If you are limited by the maximum firing rate of a spiking neuron, one option is simply to use a continuous analog value. That is "instantaneous". Exactly what response time "instantaneous" means is subject to further analysis of your transmitter and receiver (two neurons?): what the propagation time is from some "inner current state" of the TX party to its physical output voltage (down to some precision spec), and what the propagation time is in the RX party from the input voltage to its "relevant inner current state". Other circumstances being equal, it should be easier/faster/cheaper to work with the continuous analog value directly than to use pulses (spikes) and either wait for a spike before you act, or count/integrate them over time.
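
A back-of-the-envelope check on that delay (all numbers are assumptions for illustration): if the receiver decodes a rate code by counting spikes, then resolving K distinct levels from a neuron capped at f_max spikes per second needs a counting window of roughly T ≈ K / f_max, while the analog line only needs its settling time:

```python
def rate_code_window(levels: int, f_max_hz: float) -> float:
    """Shortest counting window (seconds) that can distinguish `levels`
    different firing rates from a neuron with maximum rate `f_max_hz`."""
    return levels / f_max_hz

# Assumed numbers: 8-bit-equivalent resolution, a generous 1 kHz max rate.
print(rate_code_window(256, 1000.0))  # 0.256 s just to read out one value
print(rate_code_window(16, 1000.0))   # a coarser code reads out in 16 ms
```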

Dr. Sandamirskaya actually mentioned earlier in her lecture that spikes are neatly transferred as digital events. If the spikes are relatively rare and you have a digital communication network of some sort, with nearly infinite bandwidth compared to the spike sequences, it's a no-brainer to transport spikes as digital messages. It can be a very flexible arrangement of topology, minimizing fixed physical wiring. But if you start to struggle with the spike rates, i.e. your communication bandwidth becomes clogged and message-forwarding capacity becomes a bottleneck, there are savings available from reducing the number of individual events to transport, such as transporting fast continuous signals as streams of analog values rather than as rate-encoded pulse trains. Each analog byte can carry information equivalent to a couple dozen spikes (or time slots in precise "time to pulse" coding). That is still bandwidth-intensive, and not so easy to route across an arbitrary topology... Luckily, fast closed-loop control tends to be characteristic of relatively simple and bounded tasks (motor functions / motion control), so the fast signal paths potentially need not reach very far or be overly flexible in topology. See the example of the single "looming detector neuron" in the grasshopper brain.
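
A rough bandwidth comparison along those lines (every number is an assumption chosen for illustration: one 32-bit message per spike, 8-bit samples, a 1 kHz update rate):

```python
SAMPLE_RATE_HZ = 1000   # how often the receiver needs a fresh value
BITS_PER_EVENT = 32     # assumed size of one digital spike message
LEVELS = 256            # 8-bit-equivalent resolution

# Worst case for a rate code: up to LEVELS - 1 spikes per sample period.
event_bits_per_s = SAMPLE_RATE_HZ * (LEVELS - 1) * BITS_PER_EVENT
analog_bits_per_s = SAMPLE_RATE_HZ * 8  # one byte per sample

print(f"spike events : {event_bits_per_s / 1e6:.1f} Mbit/s worst case")
print(f"analog stream: {analog_bits_per_s / 1e3:.1f} kbit/s")
```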

To me it all really boils down to this: we need more research into the "architecture" of our ANNs. A biological brain consists of various specialized "centres" / function blocks / layers. One size of neuron definitely does not fit all purposes and tasks. And, from a more macro perspective, the block schematic of an "autonomous brain" should resemble something like a car engine or a modern computer (as a very superficial analogy) rather than a flat babble predictor/generator.

Those analog schematics are beautiful. If anyone is interested in getting some insight into what these circuits do, check out the free PDF book by the late Hans Camenzind called Designing Analog Chips.

xrysf

Awesome lecture(r)! Would have loved to see the rest.

johannesdeboeck

At 52'57" there is some discussion of attractor dynamics... this is pure gold. Thank you for mentioning this. I'm not an expert on neurons, but I know a bit about attractors... a very interesting subject! Great discussion :)

pygmalionsrobot

Great session! Looking forward to the second lecture

junaidrahman

Fantastic lecture. I would pay to access the entire course.

bxcebem

1:13:04, so is the sense of causality indicated in the brain by synapse magnitude? And is that then how the brain interprets entropy, and therefore time?

I think the percentages are wrong. The brain, compared to the whole body, consumes about 20% of the power while weighing only around 2%.
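(The commonly cited figures behind those percentages: roughly 20 W of brain power out of a ~100 W resting metabolic rate, and roughly 1.4 kg of brain in a ~70 kg body.)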

delgaldo

Third comment.
40:07 How can you model a neuron when you don't know what it is doing? It is like modelling a car when you don't know what a motor is.
A typical example of this Western civilization controlled by Americans. 😁

Dr.Z.Moravcik-inventor-of-AGI