Abstract
This paper proposes a novel language model (LM) adaptation method based on minimum discrimination information (MDI). Because only a limited amount of in-domain text is available for adapting a natural-language-based intelligent personal assistant system, the background LM is viewed as a discrete distribution and the adapted LM is built to be as close as possible to the background LM while satisfying a unigram constraint. Two unigram constraint estimation methods are proposed: one based on word frequencies in the domain corpus, and one based on word similarities estimated from WordNet. In terms of the adapted LM's perplexity, using word frequencies from tiny domain corpora (30~120 seconds in length) yields relative improvements of 13.9%~16.6%. A further relative improvement of 1.5%~2.4% is observed when WordNet is used to generate word similarities. These results demonstrate an efficient way of rescaling and normalizing the conditional distribution of an interpolation-based LM.
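The adaptation described in the abstract amounts to rescaling each background conditional distribution by a per-word factor derived from the unigram constraint and then renormalizing. The sketch below is a minimal illustration of this general MDI-style scheme, not the paper's exact formulation: the dictionary-based distributions, the function name `mdi_adapt`, and the exponent `beta` are all assumptions introduced here for clarity.

```python
def mdi_adapt(background_cond, background_unigram, domain_unigram, beta=0.5):
    """Illustrative MDI-style adaptation (not the paper's exact method).

    Each background conditional P_bg(w|h) is rescaled by
    (P_dom(w) / P_bg(w)) ** beta and renormalized, so the adapted LM
    stays close to the background LM while moving its unigram marginal
    toward the domain estimate.
    """
    adapted = {}
    for h, dist in background_cond.items():
        # Scale each word's probability; fall back to the background
        # unigram when the domain corpus gives no estimate for w.
        scaled = {
            w: p * (domain_unigram.get(w, background_unigram[w])
                    / background_unigram[w]) ** beta
            for w, p in dist.items()
        }
        # Renormalize so the adapted conditional sums to one.
        z = sum(scaled.values())
        adapted[h] = {w: p / z for w, p in scaled.items()}
    return adapted


# Toy usage: the domain corpus favors "cat", so P(cat | the) increases.
bg_cond = {"the": {"cat": 0.5, "dog": 0.5}}
bg_unigram = {"cat": 0.5, "dog": 0.5}
dom_unigram = {"cat": 0.8, "dog": 0.2}
adapted = mdi_adapt(bg_cond, bg_unigram, dom_unigram)
```

With these toy numbers the adapted conditional shifts mass toward "cat" while remaining a valid probability distribution.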
Original language | English |
---|---|
Article number | 6415007 |
Pages (from-to) | 1359-1365 |
Number of pages | 7 |
Journal | IEEE Transactions on Consumer Electronics |
Volume | 58 |
Issue number | 4 |
DOIs | |
State | Published - 2012 |
Keywords
- Constraint estimation
- Language model adaptation
- Minimum discrimination information
- Tiny domain corpus