Published: Jan. 15, 2019

"Don't go with the flow – A new tensor algebra for Neural Networks"

Multi-dimensional information often involves multi-dimensional correlations that may remain latent under traditional matrix-based learning algorithms. In this study, we propose a tensor neural network framework that offers an exciting new paradigm for supervised machine learning. The tensor neural network structure is based upon the t-product (Kilmer and Martin, 2011), an algebraic formulation for multiplying tensors via circulant convolution that inherits mimetic matrix properties. We demonstrate that our tensor neural network architecture is a natural high-dimensional extension of conventional neural networks. We then build upon Haber and Ruthotto's (2017) interpretation of deep neural networks as discretizations of nonlinear differential equations to construct intrinsically stable tensor neural network architectures. We illustrate the advantages of stability and demonstrate the potential of tensor neural networks with numerical experiments on the MNIST dataset.
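The t-product mentioned above is commonly computed by taking a discrete Fourier transform along the third (tube) dimension, multiplying the resulting frontal slices as ordinary matrices, and transforming back; this is equivalent to the block-circulant formulation of Kilmer and Martin (2011). The sketch below is an illustrative NumPy implementation of that standard recipe, not code from the talk:

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x n2 x n3) with B (n2 x n4 x n3).

    Computed in the Fourier domain: circulant convolution along the
    third dimension becomes independent matrix products per frequency.
    """
    n1, n2, n3 = A.shape
    assert B.shape[0] == n2 and B.shape[2] == n3
    Ah = np.fft.fft(A, axis=2)   # transform tubes to the frequency domain
    Bh = np.fft.fft(B, axis=2)
    Ch = np.empty((n1, B.shape[1], n3), dtype=complex)
    for k in range(n3):
        # facewise matrix multiply, one frontal slice per frequency
        Ch[:, :, k] = Ah[:, :, k] @ Bh[:, :, k]
    # result of real inputs is real up to floating-point error
    return np.real(np.fft.ifft(Ch, axis=2))
```

A quick sanity check: the identity tensor (identity matrix in the first frontal slice, zeros elsewhere) acts as a multiplicative identity, and with a single frontal slice (n3 = 1) the t-product reduces to ordinary matrix multiplication.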


Bio: Lior Horesh is the Manager of the Mathematics of AI group at the IBM TJ Watson Research Center as well as an IBM Master Inventor. Dr. Horesh also holds an Adjunct Associate Professor position in the Computer Science Department of Columbia University, teaching graduate-level courses on Advanced Machine Learning and on Quantum Computing Theory and Practice. His expertise lies in large-scale modeling, inverse problems, tensor algebra, experimental design, and quantum computing. His recent research focuses on the interplay between first-principles and data-driven methods.