Today I continue the series on interesting papers from NIPS 2016, but this time the post will be different and shorter. NIPS isn't just a venue where the latest research on a topic is briefly presented without further detail: the researchers also record a video presentation, with slides, summarizing the paper's key concepts and insights.
Today I share one of those presentations, taken from the long YouTube playlist of all the talks featured during the conference. The link to the paper and its abstract can be found below the video.
One-shot learning is usually tackled by using generative models or discriminative embeddings. Discriminative methods based on deep learning, which are very effective in other learning scenarios, are ill-suited for one-shot learning as they need large amounts of training data. In this paper, we propose a method to learn the parameters of a deep model in one shot. We construct the learner as a second deep network, called a learnet, which predicts the parameters of a pupil network from a single exemplar. In this manner we obtain an efficient feed-forward one-shot learner, trained end-to-end by minimizing a one-shot classification objective in a learning to learn formulation. In order to make the construction feasible, we propose a number of factorizations of the parameters of the pupil network. We demonstrate encouraging results by learning characters from single exemplars in Omniglot, and by tracking visual objects from a single initial exemplar in the Visual Object Tracking benchmark.
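The core idea in the abstract, a learnet that predicts the parameters of a pupil network from a single exemplar, with a factorization to keep the number of predicted parameters manageable, can be sketched in a few lines of NumPy. Everything below (dimensions, variable names, the tanh nonlinearities) is an illustrative assumption of mine, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
D = 16   # input/exemplar dimension
H = 8    # pupil hidden size

# Predicting a full H x D weight matrix directly would require the learnet
# to output H*D numbers; a factorization w(z) = M2 diag(d(z)) M1 means it
# only predicts the H-dimensional diagonal d(z), in the spirit of the
# factorizations the paper proposes.
M1 = rng.standard_normal((H, D)) * 0.1   # fixed projection (learned offline)
M2 = rng.standard_normal((H, H)) * 0.1   # fixed projection (learned offline)
A  = rng.standard_normal((H, D)) * 0.1   # learnet: exemplar -> diagonal

def pupil_forward(x, z):
    """One-shot pupil: its weights are predicted from the single exemplar z."""
    d = np.tanh(A @ z)                   # exemplar-conditioned diagonal
    W = M2 @ np.diag(d) @ M1             # factorized predicted weights (H x D)
    return np.tanh(W @ x)                # pupil embedding of the query x

z = rng.standard_normal(D)   # single exemplar
x = rng.standard_normal(D)   # query input
emb = pupil_forward(x, z)
print(emb.shape)             # (8,)
```

In the paper the learnet and the fixed factors are trained end-to-end on a one-shot classification objective; here they are just random, since the point is only to show where the exemplar enters the forward pass.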
This paper (and the references therein) deserves a closer look and a critical review; that will probably be the task of a later post.