Christopher D Manning: A Neural Network Model That Can Reason (ICLR 2018 invited talk)
Abstract: Deep learning has had enormous success on perceptual tasks but still struggles to provide a model for inference. To address this gap, we have been developing Memory-Attention-Composition networks (MACnets). The MACnet design provides a strong prior for explicitly iterative reasoning, enabling it to learn explainable, structured reasoning and to generalize well from a modest amount of data. The model builds on the great success of existing recurrent cells such as LSTMs: a MACnet is a sequence of applications of a single recurrent Memory, Attention, and Composition (MAC) cell. However, its design imposes structural constraints on the operation of each cell and on the interactions between cells, incorporating explicit control and soft attention mechanisms. We demonstrate the model's strength and robustness on the challenging CLEVR dataset for visual reasoning (Johnson et al. 2016), achieving a new state-of-the-art 98.9% accuracy and halving the error rate of the previous best model. More importantly, we show that the new model is more data-efficient, achieving good results even from a modest amount of training data. Joint work with Drew Hudson.
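To make the iterative design concrete, below is a minimal sketch of a single MAC cell step in PyTorch. All module names, dimensions, and the simplified attention formulas here are illustrative assumptions, not the published implementation; the actual cell (Hudson and Manning, 2018) adds gating, separate projections per step, and other refinements omitted for brevity.

```python
# Illustrative sketch of one Memory-Attention-Composition (MAC) cell step.
# Names, shapes, and the simplified attention arithmetic are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MACCell(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # Control unit: attends over question words to pick this step's operation.
        self.ctrl_query = nn.Linear(2 * dim, dim)
        self.ctrl_attn = nn.Linear(dim, 1)
        # Read unit: attends over knowledge-base elements, guided by the control.
        self.read_mem = nn.Linear(dim, dim)
        self.read_kb = nn.Linear(dim, dim)
        self.read_attn = nn.Linear(dim, 1)
        # Write unit: composes retrieved information into the new memory.
        self.write = nn.Linear(2 * dim, dim)

    def forward(self, control, memory, question, context_words, knowledge):
        # control, memory, question: (B, dim)
        # context_words: (B, n_words, dim); knowledge: (B, n_kb, dim)

        # Control: soft attention over the question words.
        cq = self.ctrl_query(torch.cat([control, question], dim=1))
        logits = self.ctrl_attn(cq.unsqueeze(1) * context_words).squeeze(-1)
        cv = F.softmax(logits, dim=1)
        new_control = (cv.unsqueeze(-1) * context_words).sum(dim=1)

        # Read: control-guided soft attention over the knowledge base.
        interaction = self.read_mem(memory).unsqueeze(1) * self.read_kb(knowledge)
        logits = self.read_attn(new_control.unsqueeze(1) * interaction).squeeze(-1)
        rv = F.softmax(logits, dim=1)
        retrieved = (rv.unsqueeze(-1) * knowledge).sum(dim=1)

        # Write: integrate the retrieved information into the recurrent memory.
        new_memory = self.write(torch.cat([memory, retrieved], dim=1))
        return new_control, new_memory
```

A MACnet would apply this one cell for a fixed number of steps over an LSTM-encoded question and CNN-derived image features, then feed the final memory to an output classifier; the per-step attention maps are what make the reasoning chain explainable.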
From Symbolic AI to Deep Learning in NLP - Chris Manning Stanford CoreNLP
49 A Neural Network Model That Can Reason Prof Christopher Manning
A Neural Network Model That Can Reason Prof Christopher Manning
What advice do you have for someone breaking in to AI? - Chris Manning & Andrew Ng
Andrew Ng and Chris Manning Discuss Natural Language Processing
Better Architectures for Neural Networks - Chris Manning vs Yann LeCun
Christopher Manning - 'Building Neural Network Models That Can Reason' (TCSDLS 2017-2018)
Yann LeCun, Christopher Manning on Innate Priors in Deep Learning Systems at Stanford AI
Chomsky vs Shannon approaches to NLP and AI - Chris Manning Stanford OpenNLP creator
Deep Learning for Machine Translation - Chris Manning Stanford CoreNLP
Deep Learning for Question Answering - Chris Manning Stanford CoreNLP
Large Language Models and BERT - Chris Manning Stanford CoreNLP
Emergent linguistic structure in deep contextual neural word representations - Chris Manning
Chris Manning - Meaning and Intelligence in Language Models (COLM 2024)
[SAIF2020] Day2: Natural Language Processing - Christopher Manning | Samsung
Christopher Manning: How do we get computers to understand human language?
Talk 5 – Christopher Manning
On Large Language Models for Understanding Human Language Christopher Manning
Deep Learning for Speech Recognition & Speech Synthesis - Chris Manning Stanford CoreNLP creator
Knowledge is embedded in language neural networks but can they reason?
Lecture 1 | Natural Language Processing with Deep Learning
What does the future hold for Natural Language Processing? - Andrew Ng & Chris Manning
Deep Learning for NLP Composing Concepts - Chris Manning Stanford CoreNLP