【Conference】Annual Meeting of the Society for Computation in Linguistics

【Title】Jabberwocky Parsing: Dependency Parsing with Lexical Noise

【Abstract】Parsing models have long benefited from the use of lexical information, and indeed current state-of-the-art neural network models for dependency parsing achieve substantial improvements by benefiting from distributed representations of lexical information. At the same time, humans can easily parse sentences with unknown or even novel words, as in Lewis Carroll's poem Jabberwocky. In this paper, we carry out jabberwocky parsing experiments, exploring how robust a state-of-the-art neural network parser is to the absence of lexical information. We find that current parsing models, at least under usual training regimens, are in fact overly dependent on lexical information, and perform badly in the jabberwocky context. We also demonstrate that the technique of word dropout drastically improves parsing robustness in this setting, and also leads to significant improvements in out-of-domain parsing.
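The word dropout mentioned in the abstract is commonly implemented by stochastically replacing training tokens with an unknown-word symbol, with rarer words dropped more often. The sketch below is a minimal illustration of the frequency-based scheme widely used in neural dependency parsing (e.g. the α/(α + freq) formulation of Kiperwasser and Goldberg, 2016); it is not the paper's exact setup, and the corpus, `alpha` value, and `<unk>` symbol here are illustrative assumptions.

```python
import random
from collections import Counter

def word_dropout(tokens, freqs, alpha=0.25, unk="<unk>", rng=random):
    """Replace each token with `unk` with probability alpha / (alpha + freq(token)).

    Rare words (low freq) are dropped often, forcing the model to rely on
    context rather than memorized lexical identity; frequent words are
    mostly kept. An unseen word (freq 0) is always replaced.
    """
    out = []
    for tok in tokens:
        p = alpha / (alpha + freqs.get(tok, 0))
        out.append(unk if rng.random() < p else tok)
    return out

# Toy corpus (an assumption for illustration) to build frequency counts.
corpus = "the cat sat on the mat the dog sat".split()
freqs = Counter(corpus)
noisy = word_dropout(corpus, freqs, alpha=1.0)
```

At training time this is applied independently each epoch, so the parser sees many partially delexicalized variants of each sentence, which is what makes it more robust when lexical information is absent at test time.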

【Authors】Jungo Kasai; Robert Frank

【Affiliations】University of Washington; Yale University

【Year】2019

【Pages】113-123

【Page count】11

【Format】PDF

【Language】English
