Poster

On the Hardness of Probabilistic Neurosymbolic Learning

Jaron Maene · Vincent Derkinderen · Luc De Raedt


Abstract:

The limitations of purely neural approaches have sparked an interest in neurosymbolic artificial intelligence. Many popular methods in this domain essentially combine neural networks with probabilistic reasoning. As these neurosymbolic models are trained with gradient descent, we study the complexity of differentiating probabilistic reasoning. We prove that although approximating the gradients is intractable when the neural networks are randomly initialized, it can become tractable during training. Furthermore, we introduce WeightME, a novel unbiased gradient estimator. Under mild assumptions, WeightME approximates the gradient with probabilistic guarantees using a logarithmic number of calls to a SAT solver. Finally, we evaluate whether these guarantees on the gradient are actually necessary. Our experiments indicate that on challenging reasoning benchmarks, the existing biased approximations indeed have trouble optimizing even when exact solving is still feasible.
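To make the object of study concrete, here is a minimal sketch (not the paper's implementation) of the quantity an estimator like WeightME targets: the gradient of a weighted model count (WMC) with respect to the input probabilities. For a formula phi over independent Bernoulli variables with probabilities p, a standard identity gives d log WMC / d p_i = E_m[m_i/p_i - (1-m_i)/(1-p_i)], where m is a model of phi drawn proportionally to its weight, so weighted model samples yield an unbiased gradient estimate. The toy formula and the brute-force sampler below are illustrative assumptions; in practice the sampler is where the SAT solver calls would go.

```python
# Hedged sketch: unbiased gradient estimation for weighted model counting
# via weighted model sampling. Toy formula and brute-force sampler are
# stand-ins for a SAT-based weighted sampler; not the paper's algorithm.
import itertools
import random

def phi(m):
    # Example formula (an assumption): (x0 or x1) and (not x1 or x2)
    return (m[0] or m[1]) and ((not m[1]) or m[2])

def weight(m, p):
    # Weight of a total assignment m under independent Bernoulli probabilities p
    w = 1.0
    for mi, pi in zip(m, p):
        w *= pi if mi else (1.0 - pi)
    return w

def wmc(p):
    # Exact weighted model count by enumeration (exponential; toy-sized only)
    return sum(weight(m, p)
               for m in itertools.product([0, 1], repeat=len(p))
               if phi(m))

def exact_grad_log_wmc(p, eps=1e-6):
    # Finite-difference gradient of log WMC, used only to check the estimator
    base = wmc(p)
    grads = []
    for i in range(len(p)):
        q = list(p)
        q[i] += eps
        grads.append((wmc(q) - base) / (eps * base))
    return grads

def sample_model(p, rng):
    # Brute-force weighted model sampler; a real implementation would invoke
    # a SAT-solver-based sampler here instead of enumerating all models
    models = [m for m in itertools.product([0, 1], repeat=len(p)) if phi(m)]
    weights = [weight(m, p) for m in models]
    return rng.choices(models, weights=weights, k=1)[0]

def estimate_grad_log_wmc(p, n_samples, rng):
    # Unbiased Monte Carlo estimate of d log WMC / d p_i from model samples:
    # E_m[m_i/p_i - (1-m_i)/(1-p_i)] over models m drawn proportionally to weight
    grads = [0.0] * len(p)
    for _ in range(n_samples):
        m = sample_model(p, rng)
        for i, mi in enumerate(m):
            grads[i] += (mi / p[i] - (1 - mi) / (1 - p[i])) / n_samples
    return grads

if __name__ == "__main__":
    rng = random.Random(0)
    p = [0.3, 0.6, 0.8]
    print("exact:    ", [round(g, 3) for g in exact_grad_log_wmc(p)])
    print("estimated:", [round(g, 3) for g in estimate_grad_log_wmc(p, 20000, rng)])
```

The sampler dominates the cost: the estimator itself is cheap and unbiased, so the hardness the abstract refers to lives in drawing weighted models efficiently.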
