Detecting Conditional Dependence Using Flexible Bayesian Latent Class Analysis

Jaehoon Lee, Kwanghee Jung, Jungkyu Park

Research output: Contribution to journal › Article › peer-review

Abstract

A fundamental assumption underlying latent class analysis (LCA) is that class indicators are conditionally independent of each other, given latent class membership. Bayesian LCA enables researchers to detect and accommodate violations of this assumption by estimating any number of correlations among indicators with proper prior distributions. However, little is known about how the choice of prior may affect the performance of Bayesian LCA. This article presents a Monte Carlo simulation study that investigates (1) the utility of priors across a range of prior variances (i.e., from strongly non-informative to strongly informative priors) in terms of Type I error and power for detecting conditional dependence and (2) the influence of imposing approximate independence on the model fit of Bayesian LCA. Simulation results favored the use of a weakly informative prior with large variance: model fit (posterior predictive p-value) was satisfactory regardless of whether the class indicators were independent or dependent. Based on the current findings and the existing literature, this article offers methodological guidelines and suggestions for applied researchers.
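
To make the conditional independence assumption concrete, the following minimal Python sketch (not drawn from the article's simulation design; the class proportions, item-response probabilities, and dependence mechanism are illustrative assumptions) generates two-class binary indicator data, induces a within-class association between two indicators, and compares within-class and marginal log odds ratios. Under conditional independence the within-class association should be near zero; the induced dependence keeps it well above zero even after conditioning on class.

```python
# Illustrative sketch only: simulates two-class binary indicator data and
# shows how a within-class association between two indicators violates the
# conditional independence assumption of LCA. Class proportions, item
# probabilities, and the dependence mechanism are assumed for illustration.
import numpy as np

rng = np.random.default_rng(2020)
n, n_items = 5000, 4

# Latent class membership: two classes with assumed proportions 0.6 / 0.4.
cls = rng.choice(2, size=n, p=[0.6, 0.4])

# Assumed item endorsement probabilities (rows: class, columns: item).
p = np.array([[0.8, 0.8, 0.7, 0.2],
              [0.2, 0.2, 0.3, 0.8]])

# Generate binary responses under conditional independence.
y = (rng.random((n, n_items)) < p[cls]).astype(int)

# Induce conditional dependence: for a random half of the sample,
# force item 2 to copy item 1, creating a within-class association.
dep = rng.random(n) < 0.5
y[dep, 1] = y[dep, 0]

def log_odds_ratio(a, b):
    """Sample log odds ratio between two binary vectors (0.5 continuity correction)."""
    tab = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            tab[i, j] = np.sum((a == i) & (b == j))
    tab += 0.5
    return np.log(tab[0, 0] * tab[1, 1] / (tab[0, 1] * tab[1, 0]))

# Under conditional independence, the within-class log odds ratio would be ~0.
for c in range(2):
    mask = cls == c
    print(f"class {c}: within-class log OR(item1, item2) = "
          f"{log_odds_ratio(y[mask, 0], y[mask, 1]):.2f}")
print(f"marginal log OR(item1, item2) = {log_odds_ratio(y[:, 0], y[:, 1]):.2f}")
```

In a Bayesian LCA, such residual within-class associations are what the correlation parameters (with their prior variances) are meant to absorb; a very small prior variance shrinks them toward zero and thereby imposes approximate independence.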

Original language: English
Article number: 1987
Journal: Frontiers in Psychology
Volume: 11
DOIs
State: Published - 6 Aug 2020

Keywords

  • approximate independence
  • Bayesian latent class analysis
  • conditional dependence
  • model fit
  • prior variance
