

Poster

A fast algorithm to simulate nonlinear resistive networks

Benjamin Scellier


Abstract:

In the pursuit of energy-efficient AI, nonlinear resistive networks are an appealing alternative to GPU-based neural networks. These networks perform computations using the laws of electrical circuits and can be trained with local learning rules for each weight (conductance), while preserving two important computational properties of neural networks: they are universal function approximators, and the weight updates perform gradient descent on a cost function. While hardware implementations of such resistive networks are still in their infancy, efficient simulations of these networks are of utmost importance for assessing their potential and scalability as computational models. However, existing simulations either assume linear networks or rely on SPICE, a general-purpose and comparatively slow circuit simulator. In this work, we formulate the problem of simulating nonlinear resistive networks as a quadratic programming problem with linear inequality constraints, and we introduce a novel, very fast, exact coordinate descent algorithm to solve it. Compared to prior SPICE simulations, we train networks that are up to 325 times larger while keeping the duration of a single training epoch 150 times shorter, resulting in a 50,000-fold improvement in the ratio of network size to epoch duration. We contend that our significant algorithmic speedup can foster more rapid progress in nonlinear resistive network research.
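To illustrate the kind of solver the abstract describes, here is a minimal sketch of exact coordinate descent for a convex quadratic program. It assumes the inequality constraints reduce to per-variable (box) bounds, a special case of the linear inequality constraints mentioned above; the function name and setup are hypothetical and not taken from the paper. Each coordinate update has a closed form: minimize the quadratic along one coordinate, then clip to the feasible interval.

```python
import numpy as np

def coordinate_descent_qp(A, b, lo, hi, n_sweeps=100):
    """Exact coordinate descent for: min 0.5 x^T A x - b^T x,  lo <= x <= hi.

    A is assumed symmetric positive definite. Illustrative sketch only,
    not the paper's actual algorithm or code.
    """
    n = len(b)
    x = np.clip(np.zeros(n), lo, hi)  # feasible starting point
    for _ in range(n_sweeps):
        for i in range(n):
            # Residual excluding coordinate i's own contribution:
            # the unconstrained minimizer along coordinate i is r / A[i, i].
            r = b[i] - A[i] @ x + A[i, i] * x[i]
            # Exact step: closed-form minimizer, projected onto the box.
            x[i] = np.clip(r / A[i, i], lo[i], hi[i])
    return x
```

In this sketch each update is "exact" in the sense that the one-dimensional subproblem is solved in closed form rather than by a gradient step, which is what makes coordinate descent attractive for this class of problems.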
