Artificial neural networks, one of the most successful approaches to supervised learning, were originally inspired by their biological counterparts. However, the most successful learning algorithm for artificial neural networks, backpropagation, is considered biologically implausible. Many believe that the next generation of artificial neural networks should be built upon a better understanding of biological learning. For decades, therefore, the neuroscience and machine learning communities have been working to bridge the gap between biological and artificial learning, taking advantage of the ever-growing amount of data on the brain and its activity. We contribute to the topic of biologically plausible neuronal learning by building upon and extending the equilibrium propagation learning framework, which has previously been proposed as a more biologically plausible alternative to backpropagation. Specifically, we introduce: a new neuronal dynamics and learning rule for arbitrary network architectures; a sparsity-inducing method able to prune irrelevant connections; and a dynamical-systems characterization of the models based on Lyapunov theory.