Invited Speakers

Tony Jebara, Columbia University

Tractable Graphical Modeling and Inference

Graphical models and Markov random fields are fundamental tools in machine learning with broad application in areas such as computer vision, speech recognition, and computational biology. We can cast social networks like Facebook and Epinions as large graphical models where nodes are individuals connected by friendship edges. We can also cast power networks as large graphical models where edges indicate electrical cables between transformer nodes. There are two canonical forms of inference in graphical modeling: maximum a posteriori (MAP) inference, where the most likely configuration is returned; and marginal inference, where a probability distribution over a subset of variables is returned. Both problems are NP-hard when the graphical models have cycles and/or large tree-width. Since learning often reduces to MAP or marginal inference, it is also NP-hard to learn general graphical models. How can we tractably solve these problems in practice? Heuristics like sum-product and max-product propagation often work reasonably well, but formal guarantees have so far been elusive. I will provide formal tractability guarantees by compiling MAP estimation and Bethe marginal inference into a classical problem in combinatorics: finding the maximum weight stable set (MWSS) in a graph. Whenever a graph is perfect, the MWSS problem is solvable in polynomial time (e.g., via linear programming, max-flow, or semidefinite programming). Example problems such as matchings, b-matchings, and submodular models are shown to yield perfect MWSS instances. By making connections to perfect graph theory, new graphical models are shown to compile into perfect graphs and therefore admit tractable inference. Applications include friendship link recommendation, influence estimation in social networks, and failure prioritization of transformers in power networks.

Joint work with Adrian Weller, Nick Ruozzi and Kui Tang.
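
To make the reduction concrete, the sketch below compiles MAP inference on a toy chain MRF into an MWSS instance. The potentials, node encoding, and solver choice are illustrative assumptions, not the talk's exact construction; NetworkX's exact clique solver stands in for the polynomial-time LP/SDP algorithms that apply when the compiled graph is perfect.

    import itertools
    import networkx as nx

    # Toy chain MRF over binary variables 0-1-2 (illustrative potentials).
    unary = {0: [1.0, 2.0], 1: [1.5, 0.5], 2: [0.2, 1.8]}
    pairwise = {(0, 1): [[2.0, 0.1], [0.1, 2.0]],
                (1, 2): [[2.0, 0.1], [0.1, 2.0]]}

    # Conflict graph: stable sets correspond to consistent MRF configurations.
    G = nx.Graph()

    # One node per (variable, state); the two states of a variable conflict.
    for i, theta in unary.items():
        for s, w in enumerate(theta):
            G.add_node(('v', i, s), weight=int(round(10 * w)))  # integer weights for the solver
        G.add_edge(('v', i, 0), ('v', i, 1))

    # One node per (edge, joint state), conflicting with disagreeing state nodes.
    for (i, j), table in pairwise.items():
        for si, sj in itertools.product((0, 1), repeat=2):
            n = ('e', (i, j), (si, sj))
            G.add_node(n, weight=int(round(10 * table[si][sj])))
            G.add_edge(n, ('v', i, 1 - si))
            G.add_edge(n, ('v', j, 1 - sj))

    # Edge-configuration nodes that disagree on a shared variable also conflict.
    enodes = [n for n in G if n[0] == 'e']
    for a, b in itertools.combinations(enodes, 2):
        assign_a = dict(zip(a[1], a[2]))  # e.g. {i: si, j: sj}
        assign_b = dict(zip(b[1], b[2]))
        if any(assign_a[v] != assign_b[v] for v in assign_a if v in assign_b):
            G.add_edge(a, b)

    # MWSS in G equals the maximum-weight clique in G's complement.
    H = nx.complement(G)          # complement() drops node data,
    for n in H:                   # so copy the integer weights back
        H.nodes[n]['weight'] = G.nodes[n]['weight']
    stable_set, value = nx.max_weight_clique(H, weight='weight')

    map_assignment = {i: s for kind, i, s in stable_set if kind == 'v'}
    print('MAP assignment:', map_assignment, ' objective x10:', value)

Because all node weights are strictly positive, a maximum-weight stable set is maximal and therefore selects exactly one state per variable together with the consistent edge configurations, recovering the MAP assignment. Tree-structured models such as this chain are among the tractable cases the perfect-graph framework covers.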




Anima Anandkumar, UC Irvine

Integrated Structure and Parameter Learning in Latent Tree Graphical Models

Latent tree graphical models have widespread application due to their tractability in performing inference. We present an integrated approach to learning the structure and parameters of latent tree models. Our approach follows a “divide-and-conquer” strategy, learning models over small groups of variables (where the grouping is obtained through preprocessing) and then obtaining a global solution through simple merge steps. Our structure learning procedure involves simple combinatorial operations such as minimum spanning tree construction and local recursive grouping, and our parameter learning is based on the method of moments and involves tensor decompositions. Our method is guaranteed to correctly recover the unknown tree structure and the model parameters with low sample complexity for the class of linear multivariate latent tree models, which includes discrete distributions, Gaussian distributions, and Gaussian mixtures. Our method can be implemented in parallel, with computational complexity per worker scaling only logarithmically in the number of observed variables and linearly in the dimensionality of each variable.
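
Two of the ingredients can be sketched in isolation. First, the structure step: for linear/Gaussian latent tree models, the information distance d_ij = -log |rho_ij| between observed variables is additive along tree paths, and a minimum spanning tree over these distances gives the scaffold that local recursive grouping refines. The star-shaped toy data and all constants below are my own illustrative assumptions, not the paper's experiments.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)

    # Toy data: one hidden variable h with four noisy observed children
    # (a star-shaped linear latent tree, for illustration only).
    n = 5000
    h = rng.normal(size=n)
    X = np.stack([h + 0.5 * rng.normal(size=n) for _ in range(4)], axis=1)

    # Information distances: -log |correlation| is additive along tree paths
    # for linear/Gaussian latent tree models.
    rho = np.corrcoef(X, rowvar=False)
    d = -np.log(np.abs(rho) + 1e-12)

    # Minimum spanning tree over the observed variables; recursive grouping
    # then operates locally within each node's MST neighborhood.
    G = nx.Graph()
    p = X.shape[1]
    for i in range(p):
        for j in range(i + 1, p):
            G.add_edge(i, j, weight=d[i, j])
    mst = nx.minimum_spanning_tree(G)
    print(sorted(mst.edges(data='weight')))

Second, the parameter step: the method of moments reduces parameter estimation to decomposing a low-order moment tensor, whose components can be recovered by tensor power iteration. A minimal sketch on a synthetic, already-orthogonalized tensor, omitting the whitening and merging pipeline:

    import numpy as np

    rng = np.random.default_rng(1)
    k, d = 3, 5
    A, _ = np.linalg.qr(rng.normal(size=(d, k)))   # orthonormal components a_1..a_k
    w = np.array([2.0, 1.5, 1.0])

    # T = sum_k w_k * a_k (x) a_k (x) a_k
    T = np.einsum('k,ik,jk,lk->ijl', w, A, A, A)

    v = rng.normal(size=d)
    v /= np.linalg.norm(v)
    for _ in range(100):                           # power iteration: v <- T(I, v, v)
        v = np.einsum('ijl,j,l->i', T, v, v)
        v /= np.linalg.norm(v)
    lam = np.einsum('ijl,i,j,l->', T, v, v, v)     # eigenvalue estimate, approx. some w_k
    print(lam, np.abs(A.T @ v).round(3))           # v aligns with one column of A

Both steps touch only small groups of variables at a time, which is what makes the parallel, divide-and-conquer implementation with low per-worker complexity possible.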