# Question Answering

## Knowledge Base Question Answering

## Answer Selection
Note: for answer selection, the maximum sequence length is an important hyperparameter to tune.
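The note above usually comes down to a preprocessing choice: question and candidate-answer token sequences are truncated or padded to a fixed length before being scored. Below is a minimal sketch in plain Python (no specific framework or paper assumed; `max_len`, `pad_id`, and the toy token ids are illustrative) of that step.

```python
# Minimal sketch: pad/truncate token-id sequences to a fixed length.
# `max_len` is the hyperparameter the note refers to: too small clips
# informative answer text, too large adds padding noise and memory cost.
# All names and values here are illustrative, not from any paper above.

def pad_or_truncate(token_ids, max_len, pad_id=0):
    """Return a list of exactly `max_len` token ids."""
    if len(token_ids) >= max_len:
        return token_ids[:max_len]                              # truncate long sequences
    return token_ids + [pad_id] * (max_len - len(token_ids))    # pad short ones

# Example: one question and two candidate answers, already mapped to token ids.
question = [12, 7, 431, 9]
candidates = [[88, 13, 5, 2201, 7, 19], [301, 4]]

q = pad_or_truncate(question, max_len=10)
cands = [pad_or_truncate(c, max_len=40) for c in candidates]
print(len(q), [len(c) for c in cands])  # -> 10 [40, 40]
```

Questions and answers are often given different maximum lengths, since answer sentences tend to be longer; both limits are typically tuned on the development set.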
paper | authors | year | TrecQA (TRAIN-ALL): MAP | TrecQA (TRAIN-ALL): MRR | TrecQA (clean): MAP | TrecQA (clean): MRR | TrecQA (TRAIN): MAP | TrecQA (TRAIN): MRR | WikiQA: MAP | WikiQA: MRR | InsuranceQA-test1: P@1 | InsuranceQA-test2: P@1 | SemEval-cQA: MAP | SemEval-cQA: MRR | code |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
A Long Short-Term Memory Model for Answer Sentence Selection in Question Answering | Di Wang and Eric Nyberg | 2015 | 0.7134 | 0.7913 | |||||||||||
Learning to rank short text pairs with convolutional deep neural networks | Aliaksei Severyn | 2015 | 0.7459 | 0.8078 | 0.7329 | 0.7962 | |||||||||
WikiQA: A challenge dataset for open-domain question answering | Yi Yang | 2015 | 0.652 | 0.6652 | |||||||||||
ABCNN: Attention-based convolutional neural network for modeling sentence pairs | Wenpeng Yin, Hinrich Schütze | 2016 | 0.6921 | 0.7108 | https://github.com/yinwenpeng/Answer_Selection | ||||||||||
aNMM: Ranking Short Answer Texts with Attention-Based Neural Matching Model | Liu Yang, Qingyao Ai, Jiafeng Guo, W. Bruce Croft | 2016 | 0.7495 | 0.8109 | 0.7417 | 0.8102 | |||||||||
Attentive pooling networks | Cicero dos Santos | 2016 | 0.753 | 0.8511 | 0.6886 | 0.6957 | 0.717 | 0.664 | |||||||
Convolutional Neural Networks vs. Convolution Kernels: Feature Engineering for Answer Sentence Reranking | Kateryna Tymoshenko, Daniele Bonadiman, Alessandro Moschitti | 2016 | 0.7518 | 0.8553 | 0.7417 | 0.7588 | |||||||||
Employing External Rich Knowledge for Machine Comprehension | Bingning Wang, Shangmin Guo, Kang Liu, Shizhu He, Jun Zhao | 2016 | 0.6936 | 0.7094 | |||||||||||
Improved Representation Learning for Question Answer Matching | Ming Tan, Cicero dos Santos, Bing Xiang & Bowen Zhou | 2016 | 0.753 | 0.83 | 0.69 | 0.648 | |||||||||
Inner attention based recurrent neural networks for answer selection | Bingning Wang, Kang Liu, Jun Zhao | 2016 | 0.7369 | 0.8208 | 0.7341 | 0.7418 | 0.7011 | 0.6514 | |||||||
LSTM-based Deep Learning Models for non-factoid answer selection | Ming Tan, Cicero dos Santos, Bing Xiang & Bowen Zhou | 2016 | 0.7279 | 0.824 | 0.681 | 0.633 | |||||||||
Modeling relational information in question-answer pairs with convolutional neural networks | Aliaksei Severyn | 2016 | 0.7654 | 0.8186 | 0.7325 | 0.8018 | 0.6951 | 0.7107 | |||||||
Neural variational inference for text processing | Yishu Miao | 2016 | 0.6886 | 0.7069 | |||||||||||
Noise-contrastive estimation for answer selection with deep neural networks | Jinfeng Rao, Hua He, and Jimmy Lin | 2016 | 0.78 | 0.834 | 0.801 | 0.877 | 0.709 | 0.723 | https://github.com/Jeffyrao/pairwise-neural-network | ||||||
Pairwise word interaction modeling with deep neural networks for semantic similarity measurement | Hua He and Jimmy Lin | 2016 | 0.7588 | 0.8219 | 0.709 | 0.7234 | |||||||||
Sentence similarity learning by lexical decomposition and composition | Zhiguo Wang, Haitao Mi, Abraham Ittycheriah | 2016 | 0.7058 | 0.7226 | |||||||||||
A compare-aggregate model for matching text sequences | Shuohang Wang | 2017 | 0.7433 | 0.7545 | 0.756 | 0.734 | https://github.com/shuohangwang/SeqMatchSeq | ||||||||
A compare-aggregate model with dynamic-clip attention for answer selection | Weijie Bian, Si Li, Zhao Yang, Guang Chen, Zhiqing Lin | 2017 | 0.821 | 0.899 | 0.754 | 0.764 | https://github.com/wjbianjason/Dynamic-Clip-Attention | ||||||||
A Hybrid Framework for Text Modeling with Convolutional RNN | Chenglong Wang, Feijun Jiang, Hongxia Yang | 2017 | 0.7427 | 0.7504 | 0.714 | 0.683 | |||||||||
Bilateral multi-perspective matching for natural language sentences | Zhiguo Wang, Wael Hamza, Radu Florian | 2017 | 0.802 | 0.875 | 0.718 | 0.731 | https://github.com/zhiguowang/BiMPM | ||||||||
Enhancing Recurrent Neural Networks with Positional Attention for Question Answering | Qin Chen, Qinmin Hu, Jimmy Xiangji Huang, Liang He and Weijie An | 2017 | 0.7814 | 0.8513 | 0.7212 | 0.7312 | |||||||||
Inter-weighted alignment network for sentence pair modeling | Gehui Shen, Yunlun Yang, Zhi-Hong Deng | 2017 | 0.822 | 0.889 | 0.733 | 0.75 | |||||||||
Learning to Rank Question Answer Pairs with Holographic Dual LSTM Architecture | Yi Tay, Luu Anh Tuan, and Siu Cheung Hui | 2017 | 0.7499 | 0.8153 | 0.752 | 0.8146 | |||||||||
On the Benefit of Incorporating External Features in a Neural Architecture for Answer Sentence Selection | Ruey-Cheng Chen, Evi Yulianti, Mark Sanderson, W. Bruce Croft | 2017 | 0.782 | 0.837 | 0.701 | 0.718 | |||||||||
Ranking Kernels for Structures and Embeddings: A Hybrid Preference and Classification Model | Kateryna Tymoshenko, Daniele Bonadiman, Alessandro Moschitti | 2017 | 0.7219 | 0.7408 | 0.771 | 0.8345 | https://github.com/iKernels/RelTextRank | ||||||||
A Multi-View Fusion Neural Network for Answer Selection | Lei Sha, Xiaodong Zhang, Feng Qian, Baobao Chang, Zhifang Sui | 2018 | 0.7462 | 0.7576 | 0.8005 | 0.8718 | |||||||||
CA-RNN: Using Context-Aligned Recurrent Neural Networks for Modeling Sentence Similarity | Qin Chen, Qinmin Hu, Jimmy Xiangji Huang, Liang He | 2018 | 0.8227 | 0.8886 | 0.7358 | 0.745 | |||||||||
CAN: Enhancing Sentence Similarity Modeling with Collaborative and Adversarial Network | Qin Chen, Qinmin Hu, Jimmy Xiangji Huang and Liang He | 2018 | 0.841 | 0.9168 | 0.7303 | 0.7431 | |||||||||
Co-Stack Residual Affinity Networks with Multi-level Attention Refinement for Matching Text Sequences | Yi Tay, Luu Anh Tuan, and Siu Cheung Hui | 2018 | 0.854 | 0.935 | |||||||||||
Context-Aware Answer Sentence Selection With Hierarchical Gated Recurrent Neural Networks | Chuanqi Tan, Furu Wei, Qingyu Zhou, Nan Yang, Bowen Du, Weifeng Lv, and Ming Zhou | 2018 | 0.7638 | 0.7825 | |||||||||||
Cross Temporal Recurrent Networks for Ranking Question Answer Pairs | Yi Tay, Luu Anh Tuan, and Siu Cheung Hui | 2018 | 0.7712 | 0.8384 | 0.7582 | 0.8233 | |||||||||
End-to-End Quantum-like Language Models with Application to Question Answering | Peng Zhang, Jiabin Niu, Zhan Su, Benyou Wang, Liqun Ma, Dawei Song | 2018 | 0.7589 | 0.8254 | 0.6496 | 0.6594 | |||||||||
Hermitian Co-Attention Networks for Text Matching in Asymmetrical Domains | Yi Tay, Anh Tuan Luu, Siu Cheung Hui | 2018 | 0.784 | 0.895 | 0.743 | 0.756 | |||||||||
Hyperbolic representation learning for fast and efficient neural question answering | Yi Tay, Luu Anh Tuan, and Siu Cheung Hui | 2018 | 0.77 | 0.825 | 0.784 | 0.865 | 0.712 | 0.727 | 0.795 | null | https://github.com/vanzytay/WSDM2018_HyperQA | ||||
Knowledge as A Bridge: Improving Cross-domain Answer Selection with External Knowledge | Yang Deng, Ying Shen, Min Yang, Yaliang Li, Nan Du, Wei Fan, Kai Lei | 2018 | 0.797 | 0.85 | |||||||||||
Knowledge-aware Attentive Neural Network for Ranking Question Answer Pairs | Ying Shen, Yang Deng, Min Yang, Yaliang Li, Nan Du, Wei Fan, Kai Lei | 2018 | 0.7921 | 0.8444 | 0.8038 | 0.8846 | 0.7323 | 0.7494 | |||||||
Multihop Attention Networks for Question Answer Matching | Nam Khanh Tran, Claudia Niederée | 2018 | 0.813 | 0.893 | 0.722 | 0.738 | 0.705 | 0.669 | https://github.com/namkhanhtran/nn4nqa | ||||||
Recurrently Controlled Recurrent Networks | Yi Tay, Luu Anh Tuan, and Siu Cheung Hui | 2018 | 0.779 | 0.882 | 0.724 | 0.737 | https://github.com/vanzytay/NIPS2018_RCRN | ||||||||
Self-Training for Jointly Learning to Ask and Answer Questions | Mrinmaya Sachan | 2018 | 0.798 | 0.854 | 0.754 | 0.753 | |||||||||
Semantic Linking in Convolutional Neural Networks for Answer Sentence Selection | Massimo Nicosia and Alessandro Moschitti | 2018 | 0.7793 | 0.8489 | 0.7224 | 0.7391
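For reference, the MAP, MRR, and P@1 columns in the table are the standard ranking metrics for answer selection: MAP averages precision over the positions of the correct answers, MRR uses the rank of the first correct answer, and P@1 checks whether the top-ranked candidate is correct. The sketch below (plain Python, toy data, all names illustrative) shows how they are computed from each question's ranked candidate labels; it assumes every correct answer appears somewhere in the ranked candidate list, as in the usual TrecQA/WikiQA evaluation setup.

```python
# Minimal sketch of the metrics reported above (MAP, MRR, P@1).
# Input: for each question, the list of candidate relevance labels
# (1 = correct answer, 0 = incorrect) sorted by model score, best first.

def average_precision(labels):
    """AP for one ranked candidate list (all correct answers assumed present)."""
    hits, precisions = 0, []
    for rank, rel in enumerate(labels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

def reciprocal_rank(labels):
    """1 / rank of the first correct answer, 0 if none is retrieved."""
    for rank, rel in enumerate(labels, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def evaluate(ranked_labels_per_question):
    n = len(ranked_labels_per_question)
    map_score = sum(average_precision(labels) for labels in ranked_labels_per_question) / n
    mrr_score = sum(reciprocal_rank(labels) for labels in ranked_labels_per_question) / n
    p_at_1 = sum(labels[0] for labels in ranked_labels_per_question) / n
    return map_score, mrr_score, p_at_1

# Two toy questions: the first has correct answers at ranks 1 and 3,
# the second at rank 2.
print(evaluate([[1, 0, 1, 0], [0, 1, 0]]))  # -> (0.666..., 0.75, 0.5)
```

Published numbers for TrecQA are usually produced with the official `trec_eval` tool; this sketch only illustrates what the columns measure, and questions with no correct candidate are typically removed before scoring (the "clean" setting).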