Machine Learning problems are, at their core, mathematical optimization problems. This computational nature is what links Machine Learning to computation and Artificial Intelligence. It is worth remembering, if for no other reason, that the room for improvement in mathematical optimization is effectively unbounded.
The video I encountered today, from a talk delivered by Navin Goyal of Microsoft Research, reminded me of this, and I decided it would be a good fit to close the week here at The Intelligence of Information. It is a presentation of the Polytope Learning Problem, quite rightly a mathematical optimization problem. In the video we learn that learning convex bodies is a hard problem: the number of samples required scales exponentially with the dimension. The speaker then points to some possible ways forward, starting with giving the algorithm stronger access through a membership oracle queried at random points (an instance of computational complexity theory entering a learning problem).
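To make the membership-oracle access model concrete, here is a minimal sketch of my own (not code from the talk): the learner can only ask, point by point, whether a sample lies in an unknown polytope P = {x : Ax ≤ b}. The particular polytope (a unit square) and the Monte-Carlo volume estimate are hypothetical illustrations.

```python
import numpy as np

# Hypothetical hidden polytope P = {x : A x <= b} -- here the square [-1, 1]^2.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.ones(4)

def membership_oracle(x):
    """Answer the only question the learner may ask: is x in P?"""
    return bool(np.all(A @ x <= b + 1e-12))

# With oracle access at random points, even a naive learner can estimate
# the volume fraction of P inside a bounding box [-2, 2]^2.
rng = np.random.default_rng(0)
points = rng.uniform(-2.0, 2.0, size=(50_000, 2))
hits = np.mean([membership_oracle(p) for p in points])
# True fraction is area(P) / area(box) = 4 / 16 = 0.25
print(hits)
```

The point of the sketch is the access model, not the estimator: the learner never sees A or b, only yes/no answers at points it chooses at random.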
The introduction of computational learning theory frameworks like PAC learning (probably approximately correct learning) into Machine Learning research has been a success story so far. But many of the details are still mysterious: for instance, it is still not known how to efficiently PAC-learn the intersection of two half-spaces.
The next part of the talk listed some noteworthy points. One is that neural networks, at least with current architectures, are not suited to solving the polytope learning problem. This was followed by a nice introduction to Independent Component Analysis (ICA), which is applicable in signal processing and other areas.
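For readers unfamiliar with ICA, here is a toy sketch of the idea (my own illustration, not from the talk, and not the algorithm the speaker uses): after whitening a two-dimensional mixture, separating the sources reduces to finding a single rotation, which we locate by grid-searching for maximal non-Gaussianity via a kurtosis contrast.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Two independent non-Gaussian sources: uniform (sub-Gaussian) and
# Laplacian (super-Gaussian), linearly mixed by an unknown matrix.
s = np.vstack([rng.uniform(-1.0, 1.0, n), rng.laplace(0.0, 1.0, n)])
A = np.array([[2.0, 1.0], [1.0, 1.5]])
x = A @ s

# Whiten the mixtures: zero mean, identity covariance.
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
white = E @ np.diag(d ** -0.5) @ E.T @ x

def kurt(y):
    """Excess kurtosis, a simple measure of non-Gaussianity."""
    return np.mean(y ** 4) / np.mean(y ** 2) ** 2 - 3.0

def rotate(t):
    y1 = np.cos(t) * white[0] + np.sin(t) * white[1]
    y2 = -np.sin(t) * white[0] + np.cos(t) * white[1]
    return y1, y2

# After whitening, 2-D ICA reduces to one rotation angle; pick the
# angle whose components are maximally non-Gaussian.
t_best = max(np.linspace(0.0, np.pi / 2, 500),
             key=lambda t: sum(kurt(y) ** 2 for y in rotate(t)))
y1, y2 = rotate(t_best)
```

Up to sign and permutation, `y1` and `y2` recover the original sources; this "rotation after whitening" picture is what full algorithms like FastICA generalize to higher dimensions.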
A final word went to tensor decomposition methods, which have recently become popular for fitting latent variable models. For example, learning simplices turns out to be a special case of Latent Dirichlet Allocation (LDA), which can be learned by tensor decomposition.
What I found worthwhile to share from this talk is how several seemingly disparate fields of study come together to tackle one difficult computational problem: mathematical optimization, machine/statistical learning, information theory, signal processing (the Radon transform) and geometry all seem to offer some insight on the way to solving the polytope learning problem.
featured image: ANALYSIS OF CONVEX RELAXATIONS FOR MAP ESTIMATION