Stochastic gradient descent activity noted in Xi'an's Og

Readers of this blog are by now acquainted with some of the blogs that I follow and sometimes repost here. One of these is written by a French statistician and data science guru who posts regularly on the unusually named blog Xi'an's Og. Here is a brief biographical description of the author of that blog, which sports a somewhat mysterious style of identity, something that pays tribute to creative ways of presenting oneself on an Internet of witnessed and uncertain perils.

Anyway, the latest post from Xi'an's Og was about the flurry of activity around published papers on the important machine learning technique of stochastic gradient descent (SGD differs from standard gradient descent in that it economizes by subsampling within a very large training dataset, as the sketch below illustrates). This activity confirms a belief about optimization and sampling techniques in machine learning being the main mathematical and statistical support for its development, and about practitioners and researchers continuing to commit to these techniques even when the results do not appear to lead to much better outcomes or discoveries. These algorithms still have convergence and other efficiency issues in implementation, yet everyone keeps coming back to them for further insight. The brief comments that the author posted prompted this repost; the highly technical content should be researched further by the interested reader.
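
To make that parenthetical remark concrete, here is a minimal sketch, not taken from the original post, of full gradient descent versus minibatch SGD on a least-squares problem; the data sizes, learning rate, and batch size are illustrative assumptions.

    import numpy as np

    # Illustrative synthetic least-squares problem (sizes are assumptions).
    rng = np.random.default_rng(0)
    n, d = 10_000, 5
    X = rng.normal(size=(n, d))
    true_w = rng.normal(size=d)
    y = X @ true_w + 0.1 * rng.normal(size=n)

    w_gd, w_sgd = np.zeros(d), np.zeros(d)
    lr, batch = 0.01, 32

    for step in range(1000):
        # Full gradient descent: every update touches all n training rows.
        grad_full = X.T @ (X @ w_gd - y) / n
        w_gd -= lr * grad_full

        # Stochastic gradient descent: a small random minibatch gives an
        # unbiased but noisy, and much cheaper, estimate of the same gradient.
        idx = rng.choice(n, size=batch, replace=False)
        grad_mini = X[idx].T @ (X[idx] @ w_sgd - y[idx]) / batch
        w_sgd -= lr * grad_mini

    print(np.linalg.norm(w_gd - true_w), np.linalg.norm(w_sgd - true_w))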

image: Gradient Descent – Sachin Joglekar's blog

the invasion of the stochastic gradients

 

1. Stochastic Gradient Descent as Approximate Bayesian Inference, by Mandt, Hoffman, and Blei, where this technique is used as a type of variational Bayes method for which the minimum Kullback-Leibler distance to the true posterior can be achieved, rephrasing the [scalable] MCMC algorithm of Welling and Teh (2011) as such an approximation (a sketch of that algorithm appears after this list).

 

2. Further and stronger analogy between sampling and optimization: Langevin Monte Carlo and gradient descent, by Arnak Dalalyan, which establishes convergence of the uncorrected Langevin algorithm to the right target distribution in the sense of the Wasserstein distance. (Uncorrected in the sense that there is no Metropolis step, meaning this is an Euler approximation.) With an extension to the noisy version, when the gradient is approximated, e.g. by subsampling. The connection with stochastic gradient descent is thus tenuous, but Arnak explains the somewhat disappointing rate of convergence as being in agreement with optimisation rates.

3. Stein variational adaptive importance sampling, by Jun Han and Qiang Liu, which relates to our population Monte Carlo algorithm, but as a non-parametric version, using an RKHS to represent the transforms of the particles at each iteration (the underlying Stein variational update is sketched below). The sampling method follows two threads of particles, one used to estimate the transform by a stochastic gradient update, and another used for estimation purposes as in a regular population Monte Carlo approach. Decoupling the two threads gives a conditional independence that makes convergence easier to establish. (A problem we also hit when working on the AMIS algorithm.)
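
For items 1 and 2, the common reference point is the stochastic gradient Langevin dynamics of Welling and Teh (2011): an SGD step on a subsampled log-posterior with Gaussian noise injected, i.e. an Euler discretisation of the Langevin diffusion without a Metropolis correction. The following is a minimal sketch, not taken from any of the papers; the Gaussian-mean model, step size, and batch size are illustrative assumptions.

    import numpy as np

    # Sketch of stochastic gradient Langevin dynamics (Welling & Teh, 2011)
    # on an assumed toy model: unknown mean of a unit-variance Gaussian,
    # standard normal prior. Step size and batch size are illustrative.
    rng = np.random.default_rng(1)
    n = 5_000
    data = rng.normal(loc=2.0, scale=1.0, size=n)

    def grad_log_post(theta, minibatch):
        grad_prior = -theta                                          # d/dtheta log N(theta; 0, 1)
        grad_lik = (n / len(minibatch)) * np.sum(minibatch - theta)  # rescaled subsample
        return grad_prior + grad_lik

    theta, eps, batch = 0.0, 1e-4, 50
    samples = []
    for t in range(5_000):
        idx = rng.choice(n, size=batch, replace=False)
        noise = rng.normal(scale=np.sqrt(eps))
        # Gradient step on the subsampled log-posterior plus injected noise:
        theta += 0.5 * eps * grad_log_post(theta, data[idx]) + noise
        samples.append(theta)

    print(np.mean(samples[1_000:]))   # should sit near the posterior mean (about 2)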
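
Item 3 builds on the Stein variational (SVGD) transform, in which a set of particles is pushed toward the target through a kernelised update; the two-thread importance-sampling construction of Han and Liu is not reproduced here, but a minimal sketch of the underlying update, with an assumed RBF kernel, fixed bandwidth, and step size, looks roughly as follows.

    import numpy as np

    def svgd_step(particles, grad_log_p, stepsize=0.05, bandwidth=1.0):
        # One SVGD update with an RBF kernel (bandwidth fixed for simplicity).
        n, d = particles.shape
        diffs = particles[:, None, :] - particles[None, :, :]   # (n, n, d): x_j - x_i
        K = np.exp(-np.sum(diffs ** 2, axis=-1) / (2 * bandwidth ** 2))
        grad_K = -diffs / bandwidth ** 2 * K[:, :, None]        # grad_{x_j} k(x_j, x_i)
        scores = grad_log_p(particles)                           # (n, d)
        # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
        phi = (K @ scores + grad_K.sum(axis=0)) / n
        return particles + stepsize * phi

    # Usage: transport particles toward a standard bivariate Gaussian target.
    rng = np.random.default_rng(2)
    particles = rng.normal(loc=5.0, size=(100, 2))
    for _ in range(500):
        particles = svgd_step(particles, lambda x: -x)   # grad log N(0, I) is -x
    print(particles.mean(axis=0))                        # should drift toward [0, 0]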

The comments by the author of Xi'an's Og matter for the rigorous way the topics are framed. When we read many articles about a particular subject, especially a highly technical one, we may lose the context or the perspective of what it is about, or of the complexity of the subject. Qualified, knowledgeable minds help us get back to the proper frame of mind.
featured image: Deeplearning4j Updaters Explained