Break It Down: A Question Understanding Benchmark

Abstract
Understanding natural language questions entails the ability to break down a question into the requisite steps for computing its answer. In this work, we introduce a Question Decomposition Meaning Representation (QDMR) for questions. QDMR constitutes the ordered list of steps, expressed through natural language, that are necessary for answering a question. We develop a crowdsourcing pipeline, showing that quality QDMRs can be annotated at scale, and release the Break dataset, containing over 83K pairs of questions and their QDMRs. We demonstrate the utility of QDMR by showing that (a) it can be used to improve open-domain question answering on the HotpotQA dataset, and (b) it can be deterministically converted to a pseudo-SQL formal language, which can alleviate annotation in semantic parsing applications. Last, we use Break to train a sequence-to-sequence model with copying that parses questions into QDMR structures, and show that it substantially outperforms several natural baselines.

Cite (ACL): Tomer Wolfson, Mor Geva, Ankit Gupta, Matt Gardner, Yoav Goldberg, Daniel Deutch, and Jonathan Berant. 2020. Break It Down: A Question Understanding Benchmark. Transactions of the Association for Computational Linguistics, 8:183–198.

Anthology ID: 2020.tacl-1.13
Volume: Transactions of the Association for Computational Linguistics, Volume 8
Year: 2020
Address: Cambridge, MA
Venue: TACL
Publisher: MIT Press
Pages: 183–198
DOI: 10.1162/tacl_a_00309
Bibkey: wolfson-etal-2020-break
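To make the abstract's description concrete, below is a minimal illustrative sketch in Python of a QDMR as an ordered list of natural-language steps, where "#k" refers back to the result of step k. The example question, its steps, the `QDMR` class, and the toy `to_pseudo_sql` rendering are our own illustrations in the style the abstract describes; they are not code or data from the paper, and the paper's deterministic QDMR-to-pseudo-SQL conversion is substantially richer than this one-line mapping.

```python
from dataclasses import dataclass


@dataclass
class QDMR:
    """A question paired with its ordered list of decomposition steps.

    Later steps refer to the results of earlier steps via "#k" (e.g. "#1").
    """
    question: str
    steps: list


# Hypothetical example written in the style the abstract describes;
# it is not taken from the Break dataset.
example = QDMR(
    question="How many flights from Denver arrive in Philadelphia?",
    steps=[
        "return flights",                         # select a set of entities
        "return #1 from Denver",                  # filter the result of step 1
        "return #2 that arrive in Philadelphia",  # filter again
        "return number of #3",                    # aggregate (count)
    ],
)


def to_pseudo_sql(qdmr):
    """Toy, lossy rendering of steps as pseudo-SQL-like assignments.

    Only meant to hint at why the step/back-reference structure admits a
    deterministic translation into a formal language, as the abstract claims.
    """
    rendered = []
    for i, step in enumerate(qdmr.steps, start=1):
        body = step[len("return "):] if step.startswith("return ") else step
        rendered.append(f"#{i} := SELECT {body}")
    return rendered


if __name__ == "__main__":
    for i, step in enumerate(example.steps, start=1):
        print(f"step {i}: {step}")
    print()
    for line in to_pseudo_sql(example):
        print(line)
```

The "#k" back-references are what make the representation compositional: each step operates only on the results of earlier steps, which is also what makes a mechanical translation into a formal query language plausible.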