TY - JOUR
T1 - KERMIT: Knowledge graph completion of enhanced relation modeling with inverse transformation
AU - Li, Haotian
AU - Yu, Bin
AU - Wei, Yuliang
AU - Wang, Kai
AU - Xu, Richard Yi Da
AU - Wang, Bailing
N1 - This work was supported by the Shandong Key Research and Development Program, China (No. 2023CXPT065), the National Natural Science Foundation of China (No. 62272129), and the Taishan Scholars Program (No. tsqn202408112).
Publisher Copyright:
© 2025 Published by Elsevier B.V.
PY - 2025/5/22
Y1 - 2025/5/22
N2 - Knowledge graph completion (KGC) aims to populate missing triples in a knowledge graph using available information. Text-based methods, which depend on the textual descriptions of triples, often encounter difficulties when these descriptions lack sufficient information for accurate prediction, an issue inherent to the datasets and not easily resolved through modeling alone. To address this and ensure data consistency, we first use large language models (LLMs) to generate coherent descriptions that bridge the semantic gap between queries and answers. Second, we utilize inverse relations to create a symmetric graph, thereby providing augmented training samples for KGC. Finally, we employ the label information inherent in knowledge graphs (KGs) to make the existing contrastive framework fully supervised. These efforts lead to significant performance improvements on the WN18RR, FB15k-237 and UMLS datasets. Under standard evaluation metrics, our approach achieves a 3.0% improvement in Hit@1 on WN18RR and a 12.1% improvement in Hit@3 on UMLS, demonstrating superior performance.
AB - Knowledge graph completion (KGC) aims to populate missing triples in a knowledge graph using available information. Text-based methods, which depend on the textual descriptions of triples, often encounter difficulties when these descriptions lack sufficient information for accurate prediction, an issue inherent to the datasets and not easily resolved through modeling alone. To address this and ensure data consistency, we first use large language models (LLMs) to generate coherent descriptions that bridge the semantic gap between queries and answers. Second, we utilize inverse relations to create a symmetric graph, thereby providing augmented training samples for KGC. Finally, we employ the label information inherent in knowledge graphs (KGs) to make the existing contrastive framework fully supervised. These efforts lead to significant performance improvements on the WN18RR, FB15k-237 and UMLS datasets. Under standard evaluation metrics, our approach achieves a 3.0% improvement in Hit@1 on WN18RR and a 12.1% improvement in Hit@3 on UMLS, demonstrating superior performance.
KW - Knowledge graph completion (KGC)
KW - Large language models (LLMs)
KW - Supervised contrastive learning
UR - http://www.scopus.com/inward/record.url?scp=105007004476&partnerID=8YFLogxK
U2 - 10.1016/j.knosys.2025.113500
DO - 10.1016/j.knosys.2025.113500
M3 - Journal article
AN - SCOPUS:105007004476
SN - 0950-7051
VL - 324
JO - Knowledge-Based Systems
JF - Knowledge-Based Systems
M1 - 113500
ER -