Interest in Multi-Layer Perceptrons (MLPs) has surged in recent years due to their central role in deep learning. An MLP can be seen as a trainable multi-input, multi-output function built by composing linear and non-linear functions organized into layers of nodes. A lesser-known fact is that sigmoid-based MLPs also admit a probabilistic interpretation. Under this interpretation, the MLP forward pass approximates a marginalization over the hidden node activations at each layer under the mean-field assumption. In this talk we explore this probabilistic view of MLPs and propose a closed-form approximation that goes beyond mean field. The new approximation takes into account the uncertainty of inference at each layer and is thus closer to the true marginalization. We also show that this approximation improves performance in Automatic Speech Recognition experiments when used for feature extraction.
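To make the contrast concrete, the sketch below compares a standard mean-field forward pass through one sigmoid layer with an uncertainty-propagating alternative. As a stand-in for the closed-form approximation discussed in the talk (whose exact form the abstract does not give), it uses MacKay's well-known probit-based approximation of the Gaussian expectation of a sigmoid, and the Bernoulli variance p(1-p) implied by the probabilistic interpretation of sigmoid activations; all function names here are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field_layer(mu_in, W, b):
    """Standard MLP forward pass: propagate only the mean activation,
    i.e. the mean-field approximation of the layer marginalization."""
    return sigmoid(W @ mu_in + b)

def uncertainty_layer(mu_in, var_in, W, b):
    """Propagate mean AND variance through one sigmoid layer.

    E[sigmoid(a)] under a Gaussian a is approximated with MacKay's
    probit-based formula sigmoid(kappa * mu_a). This is an illustrative
    stand-in, not necessarily the approximation proposed in the talk.
    """
    mu_a = W @ mu_in + b
    var_a = (W ** 2) @ var_in            # variance after the linear map
    kappa = 1.0 / np.sqrt(1.0 + np.pi * var_a / 8.0)
    mu_out = sigmoid(kappa * mu_a)       # approx. E[sigmoid(a)]
    # Treating each hidden node as Bernoulli with success prob. mu_out:
    var_out = mu_out * (1.0 - mu_out)
    return mu_out, var_out
```

With zero input variance the two passes coincide; with nonzero variance, kappa shrinks the activations toward zero, pulling the sigmoid outputs toward 0.5 and reflecting the increased uncertainty at that layer.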
Beyond the Mean Field Approximation for Inference in Multi-Layer Perceptrons
June 17, 2014
1:00 pm
Ramon Astudillo
Ramón F. Astudillo obtained his industrial engineering degree, with a specialization in electronics and automatic control, from the Escuela Politecnica Superior de Ingenieria de Gijon (Spain) in 2005, completing the last two years of the degree on an Erasmus scholarship at the Technische Universität Berlin. In 2006 he worked as an intern at Peiker Acustic, researching model-based speech enhancement. That same year he was awarded La Caixa and German Academic Exchange Service (DAAD) scholarships for research towards the Ph.D. degree. He obtained the degree with distinction from the Technische Universität Berlin in 2010 with the thesis "Integration of Short-Time Fourier Domain Speech Enhancement and Observation Uncertainty Techniques for Robust Automatic Speech Recognition". He is currently a postdoctoral researcher at INESC-ID/L2F, working on robust speech recognition and robust natural language processing for speech applications in a Bayesian setting.
INESC, IST