Spatially coherent latent topic model for concurrent segmentation and classification of objects and scenes

Oct 14, 2007
[Work published prior to Yahoo]
Abstract

We present a novel generative model for simultaneously recognizing and segmenting object and scene classes. Our model is inspired by the traditional bag-of-words representation of texts and images, as well as a number of related generative models, including probabilistic latent semantic analysis (pLSA) and latent Dirichlet allocation (LDA). A major drawback of the pLSA and LDA models is the assumption that each patch in the image is generated independently given its corresponding latent topic. While such a representation is computationally efficient, it lacks the power to describe visually coherent images and scenes. We instead propose a spatially coherent latent topic model (spatial-LTM). Spatial-LTM represents an image hierarchically, by over-segmented image regions of homogeneous appearance and the salient image patches within those regions. A single latent topic is assigned to all the image patches within a region, enforcing the spatial coherency of the model. This design gives rise to the following merits of spatial-LTM: (1) spatial-LTM provides a unified representation for spatially coherent bag-of-words topic models; (2) spatial-LTM can simultaneously segment and classify objects, even in the case of occlusion and multiple instances; and (3) spatial-LTM can be trained in an unsupervised or supervised manner, as well as when only partial object labels are provided. We verify the success of our model in a number of segmentation and classification experiments.
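To make the spatial-coherence constraint concrete, here is a minimal sketch of the region-level generative process the abstract describes; the notation (θ, α, β, z_r, w_i) is our own illustration and is not taken from the paper:

```latex
% Illustrative sketch of the generative process described above.
% Notation (theta, alpha, beta, z_r, w_i) is ours, not the paper's.
\begin{align*}
  \theta &\sim \mathrm{Dirichlet}(\alpha)
    && \text{per-image topic proportions (LDA-style prior)} \\
  z_r \mid \theta &\sim \mathrm{Multinomial}(\theta)
    && \text{one latent topic per over-segmented region } r \\
  w_i \mid z_r &\sim \mathrm{Multinomial}(\beta_{z_r})
    && \text{every patch } i \text{ in region } r \text{ shares topic } z_r
\end{align*}
```

Under pLSA or LDA, each patch w_i would instead draw its own topic z_i, which is exactly the patch-independence assumption the abstract argues against.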

  • IEEE International Conference on Computer Vision (ICCV 2007)
  • Conference/Workshop Paper
