Volume 12, Number 06, March 2022
An Evaluation Dataset for Legal Word Embedding: A Case Study on Chinese Codex
Authors
Chun-Hsien Lin and Pu-Jen Cheng, National Taiwan University, Taiwan
Abstract
Word embedding is a modern distributed word representation approach widely used in many natural language processing tasks. Converting the vocabulary of legal documents into a word embedding model makes it possible to apply machine learning, deep learning, and other algorithms to legal text and to perform downstream natural language processing tasks such as document classification, contract review, and machine translation. The most common and practical way to evaluate the accuracy of a word embedding model is to use a benchmark set built from linguistic rules or relationships between words to perform analogy reasoning via algebraic calculation. This paper proposes a Legal Analogical Reasoning Questions Set (LARQS) of 1,134 questions, constructed from a corpus of 2,388 Chinese codices using five kinds of legal relations, which is then used to evaluate the accuracy of Chinese word embedding models. Moreover, we discovered that legal relations might be ubiquitous in word embedding models.
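For readers unfamiliar with analogy-based evaluation, the sketch below illustrates the kind of algebraic calculation the abstract refers to: for a question "a is to b as c is to ?", the predicted answer is the vocabulary word whose vector is closest to vec(b) - vec(a) + vec(c), and benchmark accuracy is the fraction of questions answered correctly. This is a minimal illustration with toy, hypothetical vectors and words, not the authors' evaluation code or the LARQS data.

```python
# Minimal sketch of analogy-based evaluation of a word embedding model
# (3CosAdd): for "a : b :: c : ?", predict the vocabulary word whose vector
# is closest to vec(b) - vec(a) + vec(c).
# The tiny vocabulary and vectors below are hypothetical, for illustration only.
import numpy as np

embeddings = {
    "law":      np.array([0.9, 0.1, 0.2]),
    "court":    np.array([0.8, 0.3, 0.1]),
    "contract": np.array([0.2, 0.9, 0.3]),
    "party":    np.array([0.1, 0.8, 0.4]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def answer_analogy(a, b, c, vocab):
    """Return the word d maximizing cos(vec(d), vec(b) - vec(a) + vec(c))."""
    target = vocab[b] - vocab[a] + vocab[c]
    candidates = [w for w in vocab if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(vocab[w], target))

def accuracy(questions, vocab):
    """Fraction of 4-word analogy questions (a, b, c, d) answered correctly."""
    correct = sum(answer_analogy(a, b, c, vocab) == d for a, b, c, d in questions)
    return correct / len(questions)

# A benchmark like LARQS would contain many such 4-word questions.
questions = [("law", "court", "contract", "party")]
print(accuracy(questions, embeddings))
```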
Keywords
Legal Word Embedding, Chinese Word Embedding, Word Embedding Benchmark, Legal Term Categories.