

Poster

Neuro-Symbolic Temporal Point Processes

Yang Yang · Chao Yang · Boyang Li · Yinghao Fu · Shuang Li


Abstract:

Our goal is to efficiently discover a compact set of temporal logic rules that explain irregular events of interest. We introduce a neural-symbolic rule induction framework within the temporal point process model. Learning is guided by the negative log-likelihood loss, with the explanatory logic rules and their weights learned end-to-end in a differentiable manner. Specifically, predicates and logic rules are represented as vector embeddings: the predicate embeddings are fixed, while the rule embeddings are trained via gradient descent to obtain the most appropriate compositional representations of the predicate embeddings. To make rule learning more efficient and flexible, we adopt a sequential covering algorithm, which progressively adds rules to the model and removes the event sequences they explain until all event sequences are covered. All discovered rules are then fed back into the model for a final refinement of the rule embeddings and weights. Our approach demonstrates notable efficiency and accuracy across synthetic and real datasets, surpassing state-of-the-art baselines by a wide margin in efficiency.
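To make the learning objective concrete, here is a minimal sketch (not the authors' code) of a temporal point process whose intensity is a weighted combination of rule feature functions, trained by the negative log-likelihood. The softplus link, the trapezoid quadrature, and the toy feature `decay_feat` are all illustrative assumptions; in the paper, rule features arise from learned rule embeddings composed of fixed predicate embeddings.

```python
import torch
import torch.nn.functional as F

def rule_intensity(t, history, rule_feats, weights, bias):
    # lambda(t) = softplus(bias + sum_k w_k * f_k(t, history))
    feats = torch.stack([f(t, history) for f in rule_feats])  # (K,) rule features
    return F.softplus(bias + weights @ feats)

def neg_log_likelihood(event_times, rule_feats, weights, bias, T, n_grid=200):
    # NLL = integral_0^T lambda(t) dt  -  sum_i log lambda(t_i)
    log_term = sum(
        torch.log(rule_intensity(t, event_times[event_times < t],
                                 rule_feats, weights, bias))
        for t in event_times
    )
    grid = torch.linspace(0.0, float(T), n_grid)
    lam = torch.stack([
        rule_intensity(t, event_times[event_times < t],
                       rule_feats, weights, bias)
        for t in grid
    ])
    return torch.trapezoid(lam, grid) - log_term

# Toy usage: one hypothetical "rule" feature, an exponentially decayed
# count of past events, standing in for a learned temporal logic rule.
events = torch.tensor([0.5, 1.2, 2.0, 3.1])
decay_feat = lambda t, h: torch.exp(-(t - h)).sum() if h.numel() > 0 else torch.tensor(0.0)
w = torch.tensor([0.8], requires_grad=True)   # rule weight
b = torch.tensor(0.1, requires_grad=True)     # base intensity bias
loss = neg_log_likelihood(events, [decay_feat], w, b, T=4.0)
loss.backward()  # gradients reach the rule weight, so rules are learned end-to-end
```

This sketch covers only the differentiable likelihood; the sequential covering loop described in the abstract would sit outside it, adding candidate rules and removing the event sequences they explain.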
