All publications
Tomasz Steifer: Ehrenfeucht-Haussler rank and chain of thought (0:39:57)
Kartik Ahuja: On Provable Length and Compositional Generalization (0:52:06)
Lekai Chen: LLMs as Probabilistic Minimally Adequate Teachers for DFA Learning (0:50:22)
Robert Csordas (0:32:05)
Eran Malach: Universal Length Generalization with Turing Programs (0:45:42)
Dan Friedman: Representing Rule-based Chatbots with Transformers (0:57:57)
Keyon Vafa: Evaluating the World Model Implicit in a Generative Model (0:48:38)
Chris Köcher: Hard Attention Transformers on Data Sequences: A Formal Language Theoretic Perspective (0:54:15)
Anej Svete: Transformers Can Represent n-gram Language Models (0:47:09)
Yash Sarrof: The Expressive Capacity of State Space Models: A Formal Language Perspective (0:48:38)
Anton Xue: Logicbreaks: A Framework for Understanding Subversion of Rule-based Inference (0:38:22)
Zhiyuan Li: Chain of Thought Empowers Transformers to Solve Inherently Serial Problems (0:55:19)
Yingshan Chang: Language Models Need Inductive Biases to Count Inductively (0:23:24)
Alessandro Ronca: On the Expressivity of Recurrent Neural Cascades (0:51:24)
Martin Berger: Fast grammar inference on GPUs (1:05:49)
Daniel Hsu: Transformers, parallel computation and logarithmic depth (0:41:58)
Michaël Rizvi: Simulating Weighted Automata over Sequences and Trees with Transformers (0:22:42)
Mark Rofin: Why are Sensitive Functions Hard for Transformers? (0:36:11)
Brian DuSell: Stack Attention (0:36:08)
Will Merrill: The Illusion of State in State-Space Models (0:45:43)
Nur Lan: Bridging the Empirical-Theoretical Gap in Neural Network Formal Language Learning (0:38:21)
Dylan Zhang: Transformer-Based Models Are Not Yet Perfect At Learning to Emulate Structural Recursion (0:51:07)
Giuseppe De Giacomo (0:56:32)
Alexander Kozachinskiy: Logical Languages Accepted by Transformer Encoders with Hard Attention (0:54:56)