  • Monday, February 5, 2018
  • 11:00 - 11:30

Obozinski (École des Ponts - ParisTech): An SDCA-Powered Inexact Dual Augmented Lagrangian Method for Fast CRF Learning

I'll present an efficient dual augmented Lagrangian formulation for learning conditional random field (CRF) models. The algorithm, which can be interpreted as an inexact gradient method on the multipliers, does not require exact inference at each iteration; instead, a fixed number of stochastic clique-wise updates per epoch suffices to obtain a sufficiently accurate estimate of the gradient with respect to the Lagrange multipliers. We prove that the proposed algorithm enjoys global linear convergence for both the primal and the dual objectives. Our experiments show that it outperforms state-of-the-art baselines in terms of speed of convergence. (Joint work with Shell Xu Hu)
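The overall structure described above — an inner loop of a fixed number of stochastic coordinate updates on an augmented Lagrangian, followed by an inexact gradient step on the multipliers — can be sketched on a toy problem. This is not the CRF learning algorithm from the talk; it is a minimal illustration of the inexact augmented-Lagrangian pattern on a small equality-constrained quadratic, where stochastic coordinate updates play the role of the clique-wise SDCA updates. All names and the problem instance are hypothetical.

```python
import numpy as np

def inexact_aug_lagrangian(A, b, rho=1.0, epochs=200, inner_steps=20, seed=0):
    """Toy inexact augmented-Lagrangian loop (hypothetical example):
    minimize 0.5 * ||x||^2  subject to  A x = b.

    Inner loop: a FIXED number of stochastic coordinate updates on the
    augmented Lagrangian (standing in for the clique-wise SDCA updates),
    rather than an exact inner solve.
    Outer loop: a gradient ascent step on the multiplier lam, computed
    from the inexact inner solution."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)      # primal variable
    lam = np.zeros(m)    # Lagrange multiplier
    for _ in range(epochs):
        # Augmented Lagrangian:
        #   L(x, lam) = 0.5||x||^2 + lam^T (Ax - b) + (rho/2)||Ax - b||^2
        for _ in range(inner_steps):
            j = rng.integers(n)              # pick a random coordinate
            r = A @ x - b                    # current constraint residual
            # Exact minimization of L in coordinate j (quadratic in x[j]):
            grad_j = x[j] + A[:, j] @ (lam + rho * r)
            hess_j = 1.0 + rho * (A[:, j] @ A[:, j])
            x[j] -= grad_j / hess_j
        # Inexact dual gradient step on the multiplier
        lam += rho * (A @ x - b)
    return x, lam
```

For example, with `A = [[1, 1]]` and `b = [1]`, the minimizer of `0.5*||x||^2` on the constraint set is `x = (0.5, 0.5)`, and the loop converges to it without ever solving the inner problem exactly. The key design point mirrored from the abstract is that the inner budget (`inner_steps`) is fixed per epoch, so each outer multiplier step uses only an approximate gradient.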