Word Distinctivity - quantifying improvement of topic modeling results from n-gramming
DOI:
https://doi.org/10.57805/revstat.v20i2.370

Keywords:
Latent Dirichlet allocation, text mining, topic modeling, n-gramming, data cleaning, quantification

Abstract
Text data cleaning is an important but often overlooked step in text mining because its contribution is difficult to quantify. We therefore propose word distinctivity to measure the improvement in topic modeling results from n-gramming, which preserves special phrases in a corpus. Word distinctivity evaluates the signal strength of a word's topic assignments: a high distinctivity means a high posterior probability that the word comes from a particular topic. We implemented latent Dirichlet allocation for topic modeling and found that some special phrases show an increase in word distinctivity, reducing uncertainty in topic identification.
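The abstract describes distinctivity in terms of a word's posterior probability of belonging to a particular topic. As a rough sketch of that idea (not the paper's exact formula, which is not given here), one can treat a word's distinctivity as the maximum posterior p(topic | word) obtained via Bayes' rule from topic-word probabilities; all numbers below are made-up toy values:

```python
# Illustrative sketch only: "distinctivity" here is taken to be the
# maximum posterior probability p(topic | word). The paper's precise
# definition may differ; the topic-word probabilities are invented.

def distinctivity(word_given_topic, topic_prior):
    """Return max_k p(k | w) for one word.

    word_given_topic: list of p(w | k) for each topic k.
    topic_prior: list of p(k) for each topic k.
    """
    joint = [p_w * p_k for p_w, p_k in zip(word_given_topic, topic_prior)]
    total = sum(joint)
    posterior = [j / total for j in joint]
    return max(posterior)

# Two topics with equal prior weight.
prior = [0.5, 0.5]

# A generic unigram spread evenly across topics is ambiguous ...
print(round(distinctivity([0.02, 0.018], prior), 3))  # near 0.5

# ... while a preserved n-gram concentrated in one topic is distinctive.
print(round(distinctivity([0.03, 0.001], prior), 3))  # near 1.0
```

Under this reading, n-gramming helps because a multi-word phrase kept as a single token tends to concentrate in one topic, pushing its posterior, and hence its distinctivity, toward 1.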
License
Copyright (c) 2020 REVSTAT-Statistical Journal
This work is licensed under a Creative Commons Attribution 4.0 International License.