

Poster

Neural Tangent Kernels for Axis-Aligned Tree Ensembles

Ryuichi Kanoh · Mahito Sugiyama


Abstract:

While axis-aligned rules are known to induce an important inductive bias in machine learning models such as typical hard decision tree ensembles, a theoretical understanding of their learning behavior remains limited due to the discrete nature of the rules. To address this issue, we impose the axis-aligned constraint on soft trees, which relax the splitting process of decision trees and are trained with a gradient method, and derive their Neural Tangent Kernel (NTK), which enables us to analytically describe the training behavior. We study two cases: imposing the axis-aligned constraint throughout the entire training process, or only at the initial state. Moreover, we extend the NTK framework to handle various tree architectures simultaneously, and prove that any axis-aligned non-oblivious tree ensemble can be transformed into an axis-aligned oblivious tree ensemble with the same NTK. Suitable tree architectures can then be searched for via Multiple Kernel Learning (MKL), and our numerical experiments show that a variety of features are selected depending on the type of constraint, which supports not only the theoretical but also the practical impact of the axis-aligned constraint in tree ensemble learning.
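To make the setup concrete, the following is a minimal sketch (not the authors' code) of a depth-1 soft tree ensemble whose splits are constrained to single features via fixed one-hot masks, together with the empirical NTK computed as the Gram matrix of parameter gradients. All names (soft_stump_ensemble, empirical_ntk) and hyperparameters are illustrative assumptions, not the paper's exact parameterization.

```python
import jax
import jax.numpy as jnp

def soft_stump_ensemble(params, x):
    # params["w"]:    (M,)   split thresholds, one per tree
    # params["leaf"]: (M, 2) left/right leaf values of each tree
    # params["feat"]: (M, d) fixed one-hot masks enforcing the axis-aligned constraint
    # x: (d,) a single input
    w, leaf, feat = params["w"], params["leaf"], params["feat"]
    s = jax.nn.sigmoid(feat @ x - w)           # (M,) soft routing probability to the right leaf
    out = s * leaf[:, 1] + (1.0 - s) * leaf[:, 0]
    return out.sum() / jnp.sqrt(len(w))        # 1/sqrt(M) scaling, as in NTK-style analyses

def empirical_ntk(params, X):
    # Empirical NTK: inner products of gradients w.r.t. the trainable parameters.
    def grads(x):
        g = jax.grad(soft_stump_ensemble)(params, x)
        return jnp.concatenate([g["w"].ravel(), g["leaf"].ravel()])
    G = jax.vmap(grads)(X)                     # (n, num_params)
    return G @ G.T                             # (n, n) kernel matrix

key = jax.random.PRNGKey(0)
d, M, n = 5, 64, 8
params = {
    "w": jax.random.normal(key, (M,)),
    "leaf": jax.random.normal(jax.random.fold_in(key, 1), (M, 2)),
    # each tree splits on one randomly chosen feature: axis-aligned at initialization
    "feat": jax.nn.one_hot(
        jax.random.randint(jax.random.fold_in(key, 2), (M,), 0, d), d
    ),
}
X = jax.random.normal(jax.random.fold_in(key, 3), (n, d))
print(empirical_ntk(params, X).shape)          # (n, n)
```

Keeping the one-hot masks fixed corresponds to imposing the axis-aligned constraint throughout training, whereas making them trainable after such an initialization corresponds to imposing it only at the initial state; combining kernels from ensembles of different architectures is where MKL enters.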
