Pay More Attention to Relation Exploration for Knowledge Base Question Answering

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Pay More Attention to Relation Exploration for Knowledge Base Question Answering. / Cao, Yong; Li, Xianzhi; Liu, Huiwen; Dai, Wen; Chen, Shuai; Wang, Bin; Chen, Min; Hershcovich, Daniel.

Findings of the Association for Computational Linguistics, ACL 2023. Association for Computational Linguistics (ACL), 2023. p. 2119-2136.

Harvard

Cao, Y, Li, X, Liu, H, Dai, W, Chen, S, Wang, B, Chen, M & Hershcovich, D 2023, Pay More Attention to Relation Exploration for Knowledge Base Question Answering. in Findings of the Association for Computational Linguistics, ACL 2023. Association for Computational Linguistics (ACL), pp. 2119-2136, 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023, Toronto, Canada, 09/07/2023. https://doi.org/10.18653/v1/2023.findings-acl.133

APA

Cao, Y., Li, X., Liu, H., Dai, W., Chen, S., Wang, B., Chen, M., & Hershcovich, D. (2023). Pay More Attention to Relation Exploration for Knowledge Base Question Answering. In Findings of the Association for Computational Linguistics, ACL 2023 (pp. 2119-2136). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-acl.133

Vancouver

Cao Y, Li X, Liu H, Dai W, Chen S, Wang B et al. Pay More Attention to Relation Exploration for Knowledge Base Question Answering. In Findings of the Association for Computational Linguistics, ACL 2023. Association for Computational Linguistics (ACL). 2023. p. 2119-2136. https://doi.org/10.18653/v1/2023.findings-acl.133

Author

Cao, Yong ; Li, Xianzhi ; Liu, Huiwen ; Dai, Wen ; Chen, Shuai ; Wang, Bin ; Chen, Min ; Hershcovich, Daniel. / Pay More Attention to Relation Exploration for Knowledge Base Question Answering. Findings of the Association for Computational Linguistics, ACL 2023. Association for Computational Linguistics (ACL), 2023. pp. 2119-2136

Bibtex

@inproceedings{8a4f5d0bcc6d4aee9b5a1c6fdcd848ac,
title = "Pay More Attention to Relation Exploration for Knowledge Base Question Answering",
abstract = "Knowledge base question answering (KBQA) is a challenging task that aims to retrieve correct answers from large-scale knowledge bases. Existing attempts primarily focus on entity representation and final answer reasoning, which results in limited supervision for this task. Moreover, the relations, which empirically determine the reasoning path selection, are not fully considered in recent advancements. In this study, we propose a novel framework, RE-KBQA, that utilizes relations in the knowledge base to enhance entity representation and introduce additional supervision. We explore guidance from relations in three aspects, including (1) distinguishing similar entities by employing a variational graph auto-encoder to learn relation importance; (2) exploring extra supervision by predicting relation distributions as soft labels with a multi-task scheme; (3) designing a relation-guided re-ranking algorithm for post-processing. Experimental results on two benchmark datasets demonstrate the effectiveness and superiority of our framework, improving the F1 score by 5.8% from 40.5 to 46.3 on CWQ and 5.7% from 62.8 to 68.5 on WebQSP, better or on par with state-of-the-art methods.",
author = "Yong Cao and Xianzhi Li and Huiwen Liu and Wen Dai and Shuai Chen and Bin Wang and Min Chen and Daniel Hershcovich",
note = "Publisher Copyright: {\textcopyright} 2023 Association for Computational Linguistics.; 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023 ; Conference date: 09-07-2023 Through 14-07-2023",
year = "2023",
doi = "10.18653/v1/2023.findings-acl.133",
language = "English",
pages = "2119--2136",
booktitle = "Findings of the Association for Computational Linguistics, ACL 2023",
publisher = "Association for Computational Linguistics (ACL)",
address = "United States",
}

RIS

TY - GEN

T1 - Pay More Attention to Relation Exploration for Knowledge Base Question Answering

AU - Cao, Yong

AU - Li, Xianzhi

AU - Liu, Huiwen

AU - Dai, Wen

AU - Chen, Shuai

AU - Wang, Bin

AU - Chen, Min

AU - Hershcovich, Daniel

N1 - Publisher Copyright: © 2023 Association for Computational Linguistics.

PY - 2023

Y1 - 2023

N2 - Knowledge base question answering (KBQA) is a challenging task that aims to retrieve correct answers from large-scale knowledge bases. Existing attempts primarily focus on entity representation and final answer reasoning, which results in limited supervision for this task. Moreover, the relations, which empirically determine the reasoning path selection, are not fully considered in recent advancements. In this study, we propose a novel framework, RE-KBQA, that utilizes relations in the knowledge base to enhance entity representation and introduce additional supervision. We explore guidance from relations in three aspects, including (1) distinguishing similar entities by employing a variational graph auto-encoder to learn relation importance; (2) exploring extra supervision by predicting relation distributions as soft labels with a multi-task scheme; (3) designing a relation-guided re-ranking algorithm for post-processing. Experimental results on two benchmark datasets demonstrate the effectiveness and superiority of our framework, improving the F1 score by 5.8% from 40.5 to 46.3 on CWQ and 5.7% from 62.8 to 68.5 on WebQSP, better or on par with state-of-the-art methods.

AB - Knowledge base question answering (KBQA) is a challenging task that aims to retrieve correct answers from large-scale knowledge bases. Existing attempts primarily focus on entity representation and final answer reasoning, which results in limited supervision for this task. Moreover, the relations, which empirically determine the reasoning path selection, are not fully considered in recent advancements. In this study, we propose a novel framework, RE-KBQA, that utilizes relations in the knowledge base to enhance entity representation and introduce additional supervision. We explore guidance from relations in three aspects, including (1) distinguishing similar entities by employing a variational graph auto-encoder to learn relation importance; (2) exploring extra supervision by predicting relation distributions as soft labels with a multi-task scheme; (3) designing a relation-guided re-ranking algorithm for post-processing. Experimental results on two benchmark datasets demonstrate the effectiveness and superiority of our framework, improving the F1 score by 5.8% from 40.5 to 46.3 on CWQ and 5.7% from 62.8 to 68.5 on WebQSP, better or on par with state-of-the-art methods.

UR - http://www.scopus.com/inward/record.url?scp=85175000091&partnerID=8YFLogxK

U2 - 10.18653/v1/2023.findings-acl.133

DO - 10.18653/v1/2023.findings-acl.133

M3 - Article in proceedings

AN - SCOPUS:85175000091

SP - 2119

EP - 2136

BT - Findings of the Association for Computational Linguistics, ACL 2023

PB - Association for Computational Linguistics (ACL)

T2 - 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023

Y2 - 9 July 2023 through 14 July 2023

ER -
