Word Distinctivity - quantifying improvement of topic modeling results from n-gramming

Authors
C. P. Chai

DOI:

https://doi.org/10.57805/revstat.v20i2.370

Keywords:

Latent Dirichlet allocation, text mining, topic modeling, n-gramming, data cleaning, quantification

Abstract

Text data cleaning is an important but often overlooked step in text mining because its contribution is difficult to quantify. Therefore, we propose the word distinctivity to measure the improvement of topic modeling results from n-gramming, which preserves special phrases in a corpus. The word distinctivity evaluates the signal strength of a word's topic assignments: a high distinctivity means a high posterior probability that the word comes from a certain topic. We implemented latent Dirichlet allocation for topic modeling and discovered that some special phrases show an increase in word distinctivity, reducing uncertainty in topic identification.
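The idea behind the measure can be sketched in a few lines of code. This is not the authors' exact formula (which is defined in the paper itself), but a minimal illustration of the underlying quantity: for each word, the maximum posterior probability P(topic | word) over topics, obtained by Bayes' rule from topic-word probabilities and topic priors. All distributions below are hypothetical toy values chosen for illustration.

```python
def distinctivity(word, phi, priors):
    """Max over topics of P(topic | word), where
    P(topic | word) is proportional to P(word | topic) * P(topic)."""
    joint = [phi[t].get(word, 0.0) * priors[t] for t in range(len(phi))]
    total = sum(joint)
    if total == 0.0:
        return 0.0  # word unseen in every topic
    return max(j / total for j in joint)

# Hypothetical topic-word distributions: the unigram "data" is spread
# across both topics, while a preserved bigram such as "data_cleaning"
# is concentrated in one topic.
phi = [
    {"data": 0.10, "data_cleaning": 0.08, "topic": 0.02},
    {"data": 0.09, "data_cleaning": 0.001, "model": 0.05},
]
priors = [0.5, 0.5]

print(round(distinctivity("data", phi, priors), 3))
print(round(distinctivity("data_cleaning", phi, priors), 3))
```

In this toy setup the ambiguous unigram has a distinctivity near 0.5 (it could come from either topic), while the bigram's distinctivity is close to 1, mirroring the paper's observation that n-gramming can reduce uncertainty in topic identification.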

Published

2022-05-03

How to Cite

Chai, C. P. (2022). Word Distinctivity - quantifying improvement of topic modeling results from n-gramming. REVSTAT-Statistical Journal, 20(2), 199–220. https://doi.org/10.57805/revstat.v20i2.370