Evaluation of Semantic Answer Similarity Metrics

  Authors

Farida Mustafazade¹ and Peter F. Ebbinghaus², ¹GAM Systematic, ²Teufel Audio

  Abstract

There are several issues with the existing general machine translation and natural language generation evaluation metrics, and question-answering (QA) systems are no exception in that context. To build robust QA systems, we need equally robust evaluation systems to verify whether model predictions to questions are similar to ground-truth annotations. The ability to compare similarity based on semantics, as opposed to pure string overlap, is important for comparing models fairly and for indicating more realistic acceptance criteria in real-life applications. We build upon the first paper, to our knowledge, to use transformer-based model metrics to assess semantic answer similarity, and we achieve higher correlations with human judgement in the case of no lexical overlap. We propose cross-encoder augmented bi-encoder and BERTScore models for semantic answer similarity, trained on a new dataset consisting of name pairs of US-American public figures. To the best of our knowledge, we provide the first dataset of co-referent name string pairs along with their similarities, which can be used for training.
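To make the contrast between exact match and semantic similarity scoring concrete, below is a minimal sketch using the sentence-transformers library. It is an illustration of the general bi-encoder and cross-encoder approaches named in the abstract, not the paper's exact implementation; the checkpoint names are common public models assumed for demonstration.

```python
# Sketch: scoring a co-referent answer pair with exact match,
# a bi-encoder, and a cross-encoder. Checkpoints are illustrative.
from sentence_transformers import SentenceTransformer, CrossEncoder, util

prediction = "Barack Obama"
ground_truth = "President Obama"

# Exact match scores 0 even though the two names refer to the same person.
exact_match = int(prediction == ground_truth)

# Bi-encoder: embed each answer independently, then compare embeddings
# with cosine similarity. Fast, since embeddings can be precomputed.
bi_encoder = SentenceTransformer("all-MiniLM-L6-v2")
emb_pred, emb_gold = bi_encoder.encode(
    [prediction, ground_truth], convert_to_tensor=True
)
cosine_sim = util.cos_sim(emb_pred, emb_gold).item()

# Cross-encoder: feed the pair through the model jointly to get a single
# similarity score. Typically more accurate, but slower than a bi-encoder.
cross_encoder = CrossEncoder("cross-encoder/stsb-roberta-base")
cross_score = cross_encoder.predict([(prediction, ground_truth)])[0]

print(f"exact match:        {exact_match}")
print(f"bi-encoder cosine:  {cosine_sim:.3f}")
print(f"cross-encoder:      {cross_score:.3f}")
```

On a name pair like the one above, exact match returns 0 while both semantic scores are high, which is the motivation for semantics-based evaluation of QA systems.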

  Keywords

Question-answering, semantic answer similarity, exact match, pre-trained language models, cross-encoder, bi-encoder, semantic textual similarity, automated data labelling.