Jeremy Reffin: Some Theoretical Underpinnings for Language Processing

The research field of distributional semantics predates the era of Deep Learning in natural language processing, but its ideas provide some intuition as to how and why the simple structures of neural networks are able to develop and demonstrate aspects of language competence. I will outline what those ideas are and illustrate how they tie back to theoretically coherent models of language developed by Wittgenstein and Ferdinand de Saussure around 100 years ago. Taking these old ideas seriously gives coherent theoretical underpinnings to current work and also offers interesting implications for how to take language processing forwards, which I will discuss. Looking ahead, I think it provides an optimistic view of the prospects for developing more general language competence using quite simple underlying architectures.
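
To make the distributional-semantics intuition concrete, here is a minimal sketch in Python: it builds sparse co-occurrence vectors over a tiny hand-written corpus and compares words by cosine similarity, so that words used in similar contexts end up with similar vectors. The corpus, stopword list, and window size are illustrative assumptions of the sketch, not material from the talk.

```python
from collections import Counter, defaultdict
import math

# Toy corpus and stopword list: illustrative assumptions only.
CORPUS = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "a dog barked at the cat",
]
STOPWORDS = {"the", "a", "on", "at"}  # crude filter; real systems reweight counts instead
WINDOW = 2  # symmetric context window size (an assumed hyperparameter)

# For each target word, count how often each non-stopword context word appears nearby.
cooc = defaultdict(Counter)
for sentence in CORPUS:
    tokens = sentence.split()
    for i, target in enumerate(tokens):
        for j in range(max(0, i - WINDOW), min(len(tokens), i + WINDOW + 1)):
            if j != i and tokens[j] not in STOPWORDS:
                cooc[target][tokens[j]] += 1

def cosine(u: Counter, v: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u.keys() & v.keys())
    norm = math.sqrt(sum(c * c for c in u.values())) * math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

# "cat" and "dog" share contexts ("sat", "chased"), so their vectors align;
# "cat" and "sat" occur near each other but in different contexts.
print(f"cat~dog: {cosine(cooc['cat'], cooc['dog']):.2f}")  # high
print(f"cat~sat: {cosine(cooc['cat'], cooc['sat']):.2f}")  # low
```

Raw counts are dominated by frequent function words, which is why this sketch filters them; practical distributional models instead reweight counts (e.g. with PPMI) or learn the vectors directly, as neural word embeddings do.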

Bio: Following undergraduate studies in Natural Sciences at the University of Cambridge, Jeremy completed a DPhil in Biomedical Engineering at the University of Sussex. He subsequently enjoyed a 20-year business career as a consultant, a venture capitalist, and a private equity partner before returning to the academic world in 2009. Since 2010, he has co-founded two AI research laboratories at the University of Sussex, the Centre for Analysis of Social Media at the think-tank Demos, and an R&D-focused consulting firm, CASM Consulting LLP.

*Sponsors*
Man AHL: At Man AHL, we mix machine learning, computer science and engineering with terabytes of data to invest billions of dollars every day.
