Nature vs Nurture: Algorithms vs Neural Networks
How much of what we learn is based on nature (i.e. structure, pre-defined instructions), and how much of it is actually nurtured?
Algorithms are typically fast and reliable, but not very robust to task variation. Neural networks are slow to train, but versatile in how they map inputs to outputs. We take some inspiration from Petar Veličković's work on getting neural networks to mimic algorithms, and investigate whether it is possible to use a versatile structure to encode a fixed algorithm.
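To make that concrete, here is a minimal sketch of the encode-process-decode idea, assuming PyTorch; the module names, sizes, and training setup are my own illustrative choices, not taken from the paper. A small learned processor is trained to imitate one parallel Bellman-Ford relaxation step on random weighted graphs.

# Minimal encode-process-decode sketch (PyTorch assumed; sizes and names are
# illustrative, not from the paper). The processor is trained to imitate one
# parallel Bellman-Ford relaxation step on small random weighted graphs.
import torch
import torch.nn as nn

class EncodeProcessDecode(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.encode = nn.Linear(1, hidden)                 # node distance -> latent
        self.message = nn.Sequential(                      # message from sender/receiver latents + edge weight
            nn.Linear(2 * hidden + 1, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.decode = nn.Linear(hidden, 1)                 # latent -> updated distance

    def forward(self, dist, adj, w):
        n = dist.shape[0]
        h = self.encode(dist.unsqueeze(-1))                # (n, hidden)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),   # sender i
                           h.unsqueeze(0).expand(n, n, -1),   # receiver j
                           w.unsqueeze(-1)], dim=-1)
        msgs = self.message(pairs)
        msgs = msgs.masked_fill(~adj.bool().unsqueeze(-1), 1e9)
        agg = torch.minimum(msgs.min(dim=0).values, h)     # min-aggregation mirrors the relaxation
        return self.decode(agg).squeeze(-1)

def bellman_ford_step(dist, adj, w):
    # the fixed algorithm acting as teacher: one parallel relaxation step
    cand = (dist.unsqueeze(1) + w).masked_fill(~adj.bool(), 1e9)
    return torch.minimum(dist, cand.min(dim=0).values)

model = EncodeProcessDecode()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):                                    # toy supervised imitation loop
    n = 8
    adj = (torch.rand(n, n) < 0.4).float()
    w = torch.rand(n, n)
    dist = torch.full((n,), 10.0); dist[0] = 0.0           # node 0 is the source
    loss = nn.functional.mse_loss(model(dist, adj, w), bellman_ford_step(dist, adj, w))
    opt.zero_grad(); loss.backward(); opt.step()

The min-aggregation in the processor is deliberately aligned with the relaxation step of the target algorithm; that kind of structural alignment is exactly the "versatile structure encoding a fixed algorithm" question above.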
Perhaps, in the end, we are asking the wrong question: algorithms are fixed and well suited to processing data, so is there really a need to generalize them into something resembling a neural network? An alternative is a hybrid pipeline that incorporates both algorithms and neural networks.
I posit that we have algorithmic processing of data (e.g. the cochlea's frequency mapping, the retina's cones and rods), followed by neural-network-style associative learning with a fixed structure imbued in the network. This fixed structure should be related to position and movement, since we likely have cells similar to the place cells, grid cells, and head direction cells found in rats, which facilitate navigating the world.
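A minimal sketch of that hybrid pipeline, under my own assumptions: an FFT stands in for the cochlea's frequency mapping as a fixed algorithmic front-end, and a tiny placeholder network handles the learnable associative stage.

# Minimal sketch of the hybrid pipeline described above: a fixed algorithmic
# front-end (an FFT standing in for the cochlea's frequency mapping) feeds a
# small learnable network that handles the associative part. The network,
# bin count, and task are illustrative placeholders, not a claimed model.
import numpy as np
import torch
import torch.nn as nn

def algorithmic_frontend(waveform, n_bins=64):
    # fixed, non-learned transform: coarsely binned magnitude spectrum
    spectrum = np.abs(np.fft.rfft(waveform))
    binned = np.array([chunk.mean() for chunk in np.array_split(spectrum, n_bins)])
    return torch.tensor(binned, dtype=torch.float32)

learned_backend = nn.Sequential(                       # learnable associative stage
    nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

t = np.linspace(0, 1, 8000)
tone = np.sin(2 * np.pi * 440 * t)                     # a 440 Hz tone
logits = learned_backend(algorithmic_frontend(tone))   # algorithm first, learning second

Only the backend's weights would ever be trained; the front-end stays fixed, in the same way the cochlea's mapping does.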
The future of finding algorithms may be to replicate natural selection: a multi-agent, bottom-up approach (instead of top-down) that combines performant sub-modules in a "soup" to create something performant. Memory will also play a huge role in facilitating processing via algorithms, and I believe a neural network's weights alone might not be sufficient.
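A speculative sketch of the "soup" idea: candidate pipelines are random compositions of small sub-modules, the best-scoring ones survive, and mutations recombine them. The primitives, target task, and scoring below are invented purely for illustration.

# Speculative sketch of the bottom-up "soup": random compositions of small
# sub-modules compete, the most performant survive, mutations recombine them.
import random

PRIMITIVES = {
    "idn": lambda x: x,
    "inc": lambda x: x + 1,
    "dbl": lambda x: x * 2,
    "sq":  lambda x: x * x,
    "neg": lambda x: -x,
}

def run(pipeline, x):
    for name in pipeline:
        x = PRIMITIVES[name](x)
    return x

def fitness(pipeline, target=lambda x: 2 * x + 2):
    # lower is better: squared error of the composed pipeline vs. a target function
    return sum((run(pipeline, x) - target(x)) ** 2 for x in range(-5, 6))

def mutate(pipeline):
    child = list(pipeline)
    child[random.randrange(len(child))] = random.choice(list(PRIMITIVES))
    return child

population = [[random.choice(list(PRIMITIVES)) for _ in range(3)] for _ in range(50)]
for generation in range(30):
    population.sort(key=fitness)                       # keep the most performant combinations
    population = population[:10] + [mutate(random.choice(population[:10])) for _ in range(40)]

population.sort(key=fitness)
print(population[0], fitness(population[0]))           # best surviving pipeline, e.g. ['inc', 'dbl', 'idn']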
________________________________________________________________
Papers discussed:
How to transfer algorithmic reasoning knowledge to
________________________________________________________________
AI and ML enthusiast. I like to think about the essence behind AI breakthroughs and explain it in a simple and relatable way. I am also an avid game creator.
0:00 Introduction
2:00 Are humans really flexible in our learning…?
5:30 Structural Invariances
13:25 Algorithms provide even stronger bias than structure
15:54 Biological structure mimicking algorithm (Cochlea and Retina)
18:47 Computer Science Algorithm Example: Dijkstra Algorithm
22:08 Neural Networks vs Algorithms
26:07 My thoughts on the learning process
27:40 CLRS Algorithmic Reasoning Benchmark
29:37 Graph Neural Networks (GNN) overview
34:50 Encode-Process-Decode
38:25 In-distribution Results
39:40 Out-of-distribution Results
47:20 Can we transfer learn algorithms?
52:55 My opinion of learning systems
59:28 Future directions for algorithmic reasoning
1:03:05 Conclusion: Learning involves both fixed and learnable components