Minimum discrimination information-based language model adaptation using tiny domain corpora for intelligent personal assistants

Gil Jin Jang, Saejoon Kim, Ji Hwan Kim

Research output: Contribution to journal › Article › peer-review


Abstract

This paper proposes a novel Language Model (LM) adaptation method based on Minimum Discrimination Information (MDI). In the proposed method, a background LM is viewed as a discrete distribution, and an adapted LM is built to be as close as possible to the background LM while satisfying unigram constraints. This design reflects the limited amount of domain corpus available for adapting a natural-language-based intelligent personal assistant system. Two unigram constraint estimation methods are proposed: one based on word frequency in the domain corpus, and one based on word similarity estimated from WordNet. In terms of the adapted LM's perplexity, using word frequency in tiny domain corpora (ranging from 30~120 seconds in length) yields relative performance improvements of 13.9%~16.6%. Further relative improvements of 1.5%~2.4% are observed when WordNet is used to generate word similarities. These results demonstrate an efficient way of re-scaling and normalizing the conditional distribution of an interpolation-based LM.
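A common way to realize MDI adaptation under unigram constraints is unigram rescaling: each background conditional probability is multiplied by a per-word scaling factor derived from the ratio of the domain unigram to the background unigram, and the result is renormalized per history. The sketch below illustrates this general technique in Python; the function name, the toy probabilities, and the exponent `beta` are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of MDI-style LM adaptation via unigram rescaling.
# All names and numbers are illustrative; beta=1 enforces the unigram
# constraint fully, smaller beta interpolates toward the background LM.

def mdi_adapt(background, bg_unigram, domain_unigram, beta=1.0):
    """Rescale a background conditional LM toward domain unigram
    estimates, then renormalize each conditional distribution.

    background: {history: {word: P_bg(word | history)}}
    bg_unigram, domain_unigram: {word: P(word)}
    """
    adapted = {}
    for hist, dist in background.items():
        # Scale each word by (P_dom(w) / P_bg(w)) ** beta.
        scaled = {
            w: p * (domain_unigram[w] / bg_unigram[w]) ** beta
            for w, p in dist.items()
        }
        z = sum(scaled.values())  # per-history normalization constant
        adapted[hist] = {w: p / z for w, p in scaled.items()}
    return adapted

# Toy example: a tiny domain corpus that favors "phone".
bg = {"the": {"cat": 0.5, "phone": 0.3, "dog": 0.2}}
bg_uni = {"cat": 0.4, "phone": 0.2, "dog": 0.4}
dom_uni = {"cat": 0.1, "phone": 0.7, "dog": 0.2}
adapted = mdi_adapt(bg, bg_uni, dom_uni)
```

After adaptation, the conditional mass shifts toward domain-favored words (here "phone") while the distribution still sums to one for each history.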

Original language: English
Article number: 6415007
Pages (from-to): 1359-1365
Number of pages: 7
Journal: IEEE Transactions on Consumer Electronics
Volume: 58
Issue number: 4
DOIs
State: Published - 2012

Keywords

  • Constraint estimation
  • Language model adaptation
  • Minimum discrimination information
  • Tiny domain corpus

