Volume 12, Number 22, December 2022
Word Embedding Interpretation using Co-Clustering
Authors
Zainab Albujasim1, Diana Inkpen2 and Yuhong Guo2, 1Carleton University, Canada, 2University of Ottawa, Canada
Abstract
Word embedding is the foundation of modern natural language processing (NLP). In the last few decades, word representation has evolved remarkably, resulting in impressive performance on downstream NLP applications. Yet the interpretability of word embeddings remains a challenge. In this paper, we propose a simple technique to interpret word embeddings. Our method is based on a post-processing technique that improves the quality of word embeddings and reveals their hidden structure. We deploy a co-clustering method to reveal this hidden structure and detect sub-matrices that link word meanings to specific dimensions. Empirical evaluation on several benchmarks shows that our method achieves competitive results compared to the original word embeddings.
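The abstract does not specify the exact co-clustering algorithm; as a rough illustration of the idea of simultaneously grouping words (rows) and embedding dimensions (columns) into sub-matrices, the sketch below uses scikit-learn's `SpectralCoclustering` on a toy random matrix standing in for a pretrained embedding matrix. The non-negative shift is an illustrative assumption, since spectral co-clustering expects non-negative entries.

```python
# Illustrative sketch only: co-clustering an embedding matrix to expose
# word-group / dimension-group sub-structure. The paper's actual
# pipeline is not given here; SpectralCoclustering is one standard
# co-clustering algorithm, used purely for demonstration.
import numpy as np
from sklearn.cluster import SpectralCoclustering

rng = np.random.default_rng(0)
# Toy stand-in for a pretrained embedding matrix: 100 words x 50 dims.
embeddings = rng.normal(size=(100, 50))

# Spectral co-clustering expects non-negative entries, so shift the
# matrix (a simple, assumed choice of post-processing, not the paper's).
shifted = embeddings - embeddings.min()

model = SpectralCoclustering(n_clusters=5, random_state=0)
model.fit(shifted)

# row_labels_ assigns each word to a cluster; column_labels_ assigns
# each embedding dimension to a cluster. Matching row/column clusters
# define the sub-matrices linking word groups to specific dimensions.
print(model.row_labels_.shape)     # one label per word
print(model.column_labels_.shape)  # one label per dimension
```

Inspecting which dimensions co-cluster with a semantically coherent word group is one way to attach an interpretation to those dimensions.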
Keywords
Word Embedding, Interpretation, Quantization, Post-processing.