A dynamical systems perspective on expectation-maximization-like algorithms
The Expectation-Maximization (EM) algorithm is one of the most popular methods used to solve the problem of distribution-based clustering in unsupervised learning. In this talk, we propose a dynamical systems perspective of the EM algorithm. More precisely, we analyze the EM algorithm as a nonlinear state-space dynamical system. This algorithm belongs to a large class of iterative algorithms known as proximal point methods. In particular, we re-interpret limit points of the EM algorithm, and other local maximizers of the likelihood function it seeks to optimize, as equilibria in its dynamical system representation. Furthermore, we propose to assess its convergence as asymptotic stability in the sense of Lyapunov. We then leverage recent results from discrete-time Lyapunov stability theory to establish asymptotic stability (and thus convergence) in the dynamical system representation of the EM algorithm.
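To make the dynamical systems reading concrete, here is a minimal sketch (not the talk's analysis) of the EM update for a toy two-component Gaussian mixture, with unit variances and equal weights assumed for simplicity, written as a discrete-time map x_{k+1} = F(x_k) on the state x = (mu1, mu2). Limit points of EM then appear as equilibria F(x*) = x*:

```python
import numpy as np

# Synthetic data from two well-separated Gaussian components.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 1.0, 200),
                       rng.normal(3.0, 1.0, 200)])

def em_map(x, y=data):
    """One EM iteration F: state (mu1, mu2) -> next state (mu1, mu2).

    Assumes known unit variances and equal mixture weights, so the
    state of the dynamical system is just the pair of means.
    """
    mu1, mu2 = x
    # E-step: posterior responsibilities of component 1 for each point.
    r1 = np.exp(-0.5 * (y - mu1) ** 2)
    r2 = np.exp(-0.5 * (y - mu2) ** 2)
    g = r1 / (r1 + r2)
    # M-step: responsibility-weighted means.
    return np.array([np.sum(g * y) / np.sum(g),
                     np.sum((1 - g) * y) / np.sum(1 - g)])

# Iterate the map from an arbitrary initial state; convergence of the
# sequence x_k to a fixed point is exactly what the Lyapunov analysis
# certifies as asymptotic stability of the corresponding equilibrium.
x = np.array([-1.0, 1.0])
for _ in range(100):
    x = em_map(x)

residual = np.linalg.norm(em_map(x) - x)  # near zero at an equilibrium
```

Here the equilibrium the iteration settles on is a local maximizer of the mixture likelihood, matching the re-interpretation of EM limit points described above.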
Next, we show how to leverage tools from robust control theory, particularly integral quadratic constraints (IQCs), to interpret EM-like algorithms as linear time-invariant (LTI) systems with a feedback nonlinearity. This analysis allows us to design a novel EM-like algorithm for Gaussian mixture models (GMMs). Furthermore, it allows us to establish bounds on the convergence rates of the studied algorithms. In particular, the derived bounds for our proposed EM-like algorithm generalize bounds found in the literature for the EM algorithm on GMMs.
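As an illustrative analogue of this LTI-plus-feedback-nonlinearity decomposition (a standard example from the IQC analysis literature, not the talk's EM-like algorithm), gradient descent on a smooth strongly convex function can be written as an LTI system x_{k+1} = A x_k + B u_k in feedback with the nonlinearity u_k = grad_f(x_k), and its observed per-step contraction compared against the classical worst-case rate bound; all parameter values below are assumptions chosen for the sketch:

```python
import numpy as np

# Lure-type decomposition of gradient descent:
#   x_{k+1} = A x_k + B u_k,  u_k = grad_f(x_k),
# with A = I and B = -alpha * I; the nonlinearity grad_f is the
# feedback block constrained by strong convexity and smoothness.
alpha, m, L = 0.1, 1.0, 10.0  # step size; f is m-strongly convex, L-smooth

def grad_f(y):
    # Quadratic f whose Hessian eigenvalues are exactly m and L,
    # so the sector condition on the feedback block is tight.
    return np.array([m * y[0], L * y[1]])

x = np.array([1.0, 1.0])
rates = []
for _ in range(50):
    u = grad_f(x)            # feedback nonlinearity
    x_next = x - alpha * u   # LTI update: A x + B u
    rates.append(np.linalg.norm(x_next) / np.linalg.norm(x))
    x = x_next

# Classical worst-case linear rate for gradient descent on this class:
rho_bound = max(abs(1 - alpha * m), abs(1 - alpha * L))
worst_observed = max(rates)  # never exceeds rho_bound
```

IQC-based certificates generalize this kind of rate bound from quadratics to whole function classes, which is the flavor of guarantee referred to above for the EM-like algorithms.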