Learning Tractable Probabilistic Models

Beijing, China; June 26, 2014; a one-day workshop co-located with ICML 2014.

News
  • (6/12/2014) Workshop program is available
  • (6/10/2014) List of accepted papers is available
  • (5/15/2014) The deadline for camera-ready papers is June 9th
  • (3/15/2014) The submission deadline has been extended
  • (2/25/2014) The submission site is open
  • (2/13/2014) We are happy to announce some exciting invited talks
  • (2/12/2014) The call for papers is available

Probabilistic models have had broad impact in machine learning, both in research and industry. Unfortunately, inference in unrestricted probabilistic models is often intractable. Motivated by the importance of efficient inference for large-scale applications, a substantial amount of work has been devoted to learning probabilistic models for which inference is guaranteed to be tractable. Much of this work has focused on restricted model classes, such as graphical models with low treewidth, mixtures thereof, and probabilistic context-free grammars (PCFGs).

More recently, a series of exciting developments has shown that tractable models can in fact be much more expressive than previously thought. As a result, the research focus has shifted to very large probabilistic models that are tractable despite having high treewidth. Examples of these model classes include sum-product networks (SPNs), hinge-loss Markov random fields (HL-MRFs), and tractable higher-order potentials (HOPs). This broad family also includes regularization approaches that ensure that the learned graphical models have a polynomial circuit representation, approaches that exploit structural properties other than treewidth, such as perfectness of the graphical representation, and approaches that bring tractable continuous statistical processes such as determinantal point processes (DPPs) to the discrete world.
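To make this notion of tractability concrete, the following minimal sketch evaluates a tiny sum-product network over two binary variables. The node classes and the two-component structure are illustrative assumptions, not the API of any particular SPN library; the point is only that a bottom-up pass answers joint and marginal queries exactly, in time linear in the size of the network, regardless of the treewidth of the corresponding graphical model.

    # Minimal sum-product network (SPN) sketch over two binary variables X1, X2.
    # Hypothetical classes for illustration only, not a library API.

    class Leaf:
        """Bernoulli leaf over one variable; p = P(var = 1)."""
        def __init__(self, var, p):
            self.var, self.p = var, p

        def value(self, evidence):
            x = evidence.get(self.var)      # None means "marginalized out"
            if x is None:
                return 1.0                  # sum over both states of the Bernoulli
            return self.p if x == 1 else 1.0 - self.p

    class Product:
        def __init__(self, children):
            self.children = children

        def value(self, evidence):
            v = 1.0
            for c in self.children:
                v *= c.value(evidence)
            return v

    class Sum:
        def __init__(self, weighted_children):  # list of (weight, child); weights sum to 1
            self.weighted_children = weighted_children

        def value(self, evidence):
            return sum(w * c.value(evidence) for w, c in self.weighted_children)

    # A two-component mixture: each component is a product of independent Bernoullis.
    spn = Sum([
        (0.6, Product([Leaf("X1", 0.9), Leaf("X2", 0.2)])),
        (0.4, Product([Leaf("X1", 0.1), Leaf("X2", 0.7)])),
    ])

    print(spn.value({"X1": 1, "X2": 0}))  # joint P(X1 = 1, X2 = 0)
    print(spn.value({"X1": 1}))           # marginal P(X1 = 1), with X2 summed out

The same bottom-up evaluation handles any marginal or conditional query; only the evidence dictionary changes.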

Tractable inference is important both in its own right and as an enabler of tractable learning. Approaches to the latter include models in which tractable inference leads to efficient likelihood maximization, as well as learning methods that exploit other properties to achieve computational efficiency. Examples of recent innovations include using tractable MAP inference for efficient learning, exact likelihood maximization in Markov networks with bounded degree, maximizing a tractable surrogate of the likelihood function, and learning with lifted inference.
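As one illustration of a tractable surrogate of the likelihood, the sketch below computes the pseudo-log-likelihood of a pairwise binary Markov network. The Ising-style parameterization and the helper function are assumptions made for this example rather than code from any of the works alluded to above; the relevant property is that each conditional depends only on a variable's Markov blanket, so the surrogate avoids the intractable partition function of the full likelihood.

    # Sketch: pseudo-log-likelihood of a pairwise binary Markov network (x_i in {0, 1}).
    # Illustrative example only; parameterization and function names are assumptions.

    import math

    def pseudo_log_likelihood(x, unary, pairwise):
        """
        x        : dict {node: 0 or 1}, one fully observed configuration
        unary    : dict {node: theta_i}
        pairwise : dict {(i, j): theta_ij} over undirected edges
        """
        # Collect each node's neighbors from the edge list.
        neighbors = {i: [] for i in x}
        for (i, j), w in pairwise.items():
            neighbors[i].append((j, w))
            neighbors[j].append((i, w))

        pll = 0.0
        for i, xi in x.items():
            # Local field: only x_i's Markov blanket contributes.
            field = unary[i] + sum(w * x[j] for j, w in neighbors[i])
            p_one = 1.0 / (1.0 + math.exp(-field))   # P(x_i = 1 | x_-i)
            pll += math.log(p_one if xi == 1 else 1.0 - p_one)
        return pll

    # Toy example: a 3-node chain A - B - C.
    theta_unary = {"A": 0.5, "B": -0.2, "C": 0.1}
    theta_pair = {("A", "B"): 1.0, ("B", "C"): -0.5}
    print(pseudo_log_likelihood({"A": 1, "B": 1, "C": 0}, theta_unary, theta_pair))

Maximizing this quantity over a dataset gives a consistent, computationally cheap alternative to exact likelihood maximization in such models.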

While research on the aforementioned topics has seen considerable activity in recent years, an inclusive meeting has been missing. This workshop will provide a unique opportunity to exchange ideas and insights, leading to a coherent picture of the state of the art, a set of important directions and open problems, and closer collaborations among the different research groups working in this area.