## About the Project
This leaderboard is the culmination of the capstone project 'Knowledge Graph Based Question Answering Leaderboard' by Kim Songi (19085436D). It ranks knowledge-graph-based simple question answering models across multiple datasets.
The final rankings are derived using the rank-aggregation formula established in the Fair Benchmark for Unsupervised Node Representation Learning (2022). This approach provides a rigorous, standardized evaluation of model performance across datasets.
## KGQA Models Leaderboard
| Rank | Model | Author | Year |
|---|---|---|---|
| 1 | KEQA | Huang et al. | 2019 |
| 2 | BertQA | Han et al. | 2021 |
| 3 | BuboQA | Mohammed et al. | 2018 |
## Calculation Methodology
### Original Performance Metrics
Accuracy of each model on the SimQ, WebQSP, and FBQ test sets (higher is better):

| Model | SimQ | WebQSP | FBQ |
|---|---|---|---|
| KEQA | 0.754 | 0.651 | 0.273 |
| BertQA | 0.744 | 0.637 | 0.429 |
| BuboQA | 0.745 | 0.622 | 0.373 |
### Ranked Performance
Each model's rank on each dataset (1 = highest accuracy):

| Model | SimQ | WebQSP | FBQ |
|---|---|---|---|
| KEQA | 1 | 1 | 3 |
| BertQA | 3 | 2 | 1 |
| BuboQA | 2 | 3 | 2 |
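The per-dataset ranks above follow directly from the accuracy table. A minimal sketch (not the project's actual code) of that step:

```python
# Accuracy values from the table above; higher is better.
accuracies = {
    "KEQA":   {"SimQ": 0.754, "WebQSP": 0.651, "FBQ": 0.273},
    "BertQA": {"SimQ": 0.744, "WebQSP": 0.637, "FBQ": 0.429},
    "BuboQA": {"SimQ": 0.745, "WebQSP": 0.622, "FBQ": 0.373},
}

ranks = {model: {} for model in accuracies}
for dataset in ["SimQ", "WebQSP", "FBQ"]:
    # Sort models by accuracy on this dataset; rank 1 = most accurate.
    ordered = sorted(accuracies, key=lambda m: accuracies[m][dataset], reverse=True)
    for rank, model in enumerate(ordered, start=1):
        ranks[model][dataset] = rank

print(ranks["KEQA"])  # {'SimQ': 1, 'WebQSP': 1, 'FBQ': 3}
```

Note that on SimQ the top three accuracies differ by only a thousandth of a point, yet the rank-based aggregation treats those gaps the same as FBQ's much larger ones.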
### Overall Rank Calculation
The overall rank is determined by the sum of the base-10 logarithms of a model's per-dataset ranks. A lower score indicates better overall performance.
- KEQA: log(1) + log(1) + log(3) = 0.477 → Rank 1
- BertQA: log(3) + log(2) + log(1) = 0.778 → Rank 2
- BuboQA: log(2) + log(3) + log(2) = 1.079 → Rank 3
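The aggregation above can be reproduced in a few lines. A sketch, assuming base-10 logarithms as in the worked sums:

```python
import math

# Per-dataset ranks (SimQ, WebQSP, FBQ) from the Ranked Performance table.
ranks = {
    "KEQA":   [1, 1, 3],
    "BertQA": [3, 2, 1],
    "BuboQA": [2, 3, 2],
}

# Overall score = sum of log10 of the ranks; lower is better.
scores = {model: sum(math.log10(r) for r in rs) for model, rs in ranks.items()}

for model, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{model}: {score:.3f}")
# KEQA: 0.477
# BertQA: 0.778
# BuboQA: 1.079
```

Summing logarithms is equivalent to ranking by the product of ranks (the geometric mean up to a monotone transform), so a single rank-1 result can offset a poor rank elsewhere less than it would under a plain rank sum.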
## References
1. X. Huang, J. Zhang, D. Li, and P. Li, "Knowledge Graph Embedding Based Question Answering," in Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining (WSDM), pp. 105–113, 2019.
2. N. Han, H. Noji, K. Hayashi, H. Takamura, and Y. Miyao, "Probing Simple Factoid Question Answering Based on Linguistic Knowledge," Journal of Natural Language Processing, vol. 28, no. 4, pp. 938–964, 2021.
3. S. Mohammed, P. Shi, and J. Lin, "Strong Baselines for Simple Question Answering over Knowledge Graphs with and without Neural Networks," in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2018.
4. Z. Guo et al., "Fair Benchmark for Unsupervised Node Representation Learning," Algorithms, vol. 15, no. 10, p. 379, 2022.
5. A. Bordes, N. Usunier, S. Chopra, and J. Weston, "Large-scale Simple Question Answering with Memory Networks," arXiv preprint, 2015.
6. J. Berant, A. Chou, R. Frostig, and P. Liang, "Semantic Parsing on Freebase from Question-Answer Pairs," in Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1533–1544, 2013.
7. K. Jiang, D. Wu, and H. Jiang, "FreebaseQA: A New Factoid QA Data Set Matching Trivia-Style Question-Answer Pairs with Freebase," in NAACL-HLT (1), pp. 318–323, 2019.
8. N. Han, G. Topić, H. Noji, H. Takamura, and Y. Miyao, "An Empirical Analysis of Existing Systems and Datasets toward General Simple Question Answering," in Proceedings of the 28th International Conference on Computational Linguistics (COLING), pp. 5321–5334, 2020.