Volume 9, Number 3
Alaidine Ben Ayed1, Ismaïl Biskri1,2 and Jean-Guy Meunier1, 1Université du Québec à Montréal (UQAM), Canada,
2Université du Québec à Trois-Rivières (UQTR), Canada
Evaluating automatically generated summaries is not an effortless task. Despite significant advances made in this context during the last two decades, it remains a challenging research problem. In this paper, we present VSMbM, a new metric for the evaluation of automatically generated text summaries. VSMbM is based on vector space modelling. It provides insight into the extent to which retention and fidelity are achieved in the generated summaries. Three variants of the proposed metric, namely PCA-VSMbM, ISOMAP-VSMbM and tSNE-VSMbM, are tested and compared to Recall-Oriented Understudy for Gisting Evaluation (ROUGE), a standard metric used to evaluate automatically generated summaries. Experiments conducted on the Timeline17 dataset show that VSMbM scores are highly correlated with the state-of-the-art ROUGE ones.
Automatic Text Summarization, Automatic Summary Evaluation, Vector Space Modelling.
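To illustrate the vector-space idea the abstract describes, the sketch below is a minimal, hypothetical VSMbM-style score, not the authors' implementation: document and summary sentences are encoded as term-frequency vectors over a shared vocabulary, projected into a low-dimensional space with a plain SVD (an LSA-style stand-in for the PCA, ISOMAP and t-SNE variants named above), and the two centroids are compared by cosine similarity. All function names and parameters here are illustrative assumptions.

```python
# Hypothetical sketch of a VSMbM-style score (not the paper's code):
# term-frequency vectors -> low-dimensional projection -> cosine score.
import numpy as np

def term_frequency_matrix(texts):
    """Rows are texts, columns are vocabulary terms, cells are counts."""
    vocab = sorted({w for t in texts for w in t.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    M = np.zeros((len(texts), len(vocab)))
    for row, text in enumerate(texts):
        for word in text.lower().split():
            M[row, index[word]] += 1.0
    return M

def svd_project(M, k):
    """Project the rows of M onto the top-k right singular vectors
    (an LSA-style stand-in for the PCA/ISOMAP/t-SNE reductions)."""
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    return M @ Vt[:k].T

def vsm_score(doc_sentences, summary_sentences, k=2):
    """Cosine similarity of the document and summary centroids in the
    reduced space; higher values suggest better content retention."""
    M = term_frequency_matrix(doc_sentences + summary_sentences)
    P = svd_project(M, k)
    d = P[:len(doc_sentences)].mean(axis=0)
    s = P[len(doc_sentences):].mean(axis=0)
    denom = np.linalg.norm(d) * np.linalg.norm(s)
    return float(d @ s / denom) if denom else 0.0
```

A summary that reuses the document's vocabulary scores higher than an unrelated one, which is the retention behaviour the metric is meant to capture.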