Latent Compositional Representations Improve Systematic Generalization in Grounded Question Answering
Blog Article
Abstract: Answering questions that involve multi-step reasoning requires decomposing them and using the answers of intermediate steps to reach the final answer. However, state-of-the-art models in grounded question answering often do not explicitly perform decomposition, leading to difficulties in generalization to out-of-distribution examples. In this work, we propose a model that computes a representation and denotation for all question spans in a bottom-up, compositional manner using a CKY-style parser. Our model induces latent trees, driven by end-to-end (the answer) supervision only. We show that this inductive bias towards tree structures dramatically improves systematic generalization to out-of-distribution examples, compared to strong baselines on an arithmetic expressions benchmark as well as on CLOSURE, a dataset that focuses on systematic generalization for grounded question answering.
On this challenging dataset, our model reaches an accuracy of 96.1%, significantly higher than prior models that almost perfectly solve the task on a random, in-distribution split.
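The core idea of computing a representation for every question span bottom-up can be sketched with a CKY-style chart: length-1 spans start from token vectors, and each longer span aggregates over its possible split points. The sketch below is illustrative only; the `compose` and `score` functions here (averaging and a sum-based score) are hypothetical stand-ins for the learned modules in the paper, not the paper's actual architecture.

```python
import numpy as np

def cky_span_representations(token_vecs, compose, score):
    """CKY-style chart filling: a vector for every span (i, j), built bottom-up.

    token_vecs: list of 1-D numpy arrays, one per token.
    compose: fn(left_vec, right_vec) -> span_vec (hypothetical learned composition).
    score: fn(span_vec) -> float, used to soft-weight split points (hypothetical).
    """
    n = len(token_vecs)
    chart = {}
    # Base case: length-1 spans are just the token vectors.
    for i in range(n):
        chart[(i, i + 1)] = token_vecs[i]
    # Fill the chart for spans of increasing length.
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            # One candidate representation per split point k.
            cands = [compose(chart[(i, k)], chart[(k, j)])
                     for k in range(i + 1, j)]
            # Soft-weight candidates by score (a softmax keeps it differentiable,
            # which is how end-to-end supervision can drive latent tree induction).
            w = np.exp(np.array([score(c) for c in cands]))
            w = w / w.sum()
            chart[(i, j)] = sum(wi * c for wi, c in zip(w, cands))
    return chart

# Toy usage with placeholder compose/score functions.
tokens = [np.ones(4) * i for i in range(3)]
chart = cky_span_representations(
    tokens,
    compose=lambda a, b: (a + b) / 2.0,
    score=lambda v: float(v.sum()),
)
```

For a 3-token input this produces the 6 possible spans; the full chart for n tokens holds n(n+1)/2 entries, and the softmax over split points is what lets the tree structure stay latent rather than supervised.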