Fine-tuning parameters or hyperparameters is one of the most important and difficult tasks in getting machine/deep learning algorithms to perform well, and one of the usual pain points for practitioners in those fields. One technique, among others, widely used to tackle this issue is Bayesian optimization, an automatic algorithm-configuration approach that reduces the element of human error within an extremely complex algorithmic environment. In this we see that automation helps us humans get around otherwise insurmountable complexity, rather than replacing or destroying human effort. When the level of complexity is beyond some threshold, machines are welcome aids, not a threat to our esteemed efforts and jobs.
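To make the idea concrete, here is a minimal sketch of a Bayesian optimization loop — my own toy illustration, not anyone's production code. A Gaussian-process surrogate is fit to the evaluations made so far, and an acquisition function (expected improvement here) picks the next point to try. The one-dimensional `objective` is a stand-in for, say, validation loss as a function of a single hyperparameter:

```python
import numpy as np
from math import erf

# Hypothetical 1-D objective standing in for a validation loss as a
# function of one hyperparameter (e.g. a log learning rate).
def objective(x):
    return np.sin(3 * x) + 0.6 * x ** 2

def rbf_kernel(a, b, length=0.5):
    # Squared-exponential kernel with unit amplitude.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    # Gaussian-process posterior mean and std at candidate points Xs.
    K = rbf_kernel(X, X) + jitter * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)   # RBF prior variance is 1
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI for minimization: expected amount by which we beat the best so far.
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return (best - mu) * cdf + sigma * pdf

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=3)           # a few random initial trials
y = objective(X)
grid = np.linspace(-2, 2, 401)           # candidate hyperparameter values

for _ in range(10):                      # the optimization loop itself
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X, y = np.append(X, x_next), np.append(y, objective(x_next))

print(f"best x = {X[np.argmin(y)]:.2f}, best value = {y.min():.3f}")
```

The key point is that each expensive evaluation is chosen deliberately, by trading off the surrogate's predicted mean against its uncertainty, instead of by a human twiddling knobs or an exhaustive grid search.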
Recently some researchers published a paper on applying Bayesian optimization techniques to the goal of efficiently automating the fine-tuning of parameters in complex deep learning settings, aptly titled Taking the Human Out of the Loop: A Review of Bayesian Optimization. It is an interesting read about the need for automated processes in machine/deep learning software development, especially when dealing with very large datasets. The paper is from the beginning of 2016, but it remains a required read.
From the RE.WORK deep learning summit conference in 2015, some months before the publication of the aforementioned paper, there is a talk by one of its authors, Ryan Adams, a deep learning researcher at Harvard University. Ryan also holds a PhD in physics and is a prominent practitioner in the field, especially involved with Bayesian optimization. The YouTube video of the talk is available below, and after watching it we will comment further:
In his first phrases Ryan warns us about how the increasing complexity and expert knowledge required in machine/deep learning is putting a lot of talented people off, as the skill set demanded has become steep. This goes some way toward explaining the dearth of talent the field is still experiencing — a strange anxiety loop of talent shortage amid increasing demand. In any case, appropriate training of already knowledgeable personnel should mitigate this.
The middle of the talk explains why Bayesian optimization was not used very often by earlier practitioners of deep learning, despite its long history. Dr. Ryan Adams thinks that the need to carefully choose point estimates for accurate modeling in Bayesian optimization — a cumbersome, hard-coded process — was a deterrent: it rendered the technique too complicated, fragile, and unsuited to rapid, robust decision-making. That is precisely what his own research efforts, along with others', address: by using Markov chain Monte Carlo (MCMC) for kernel parameterization, the kernel parameters are integrated out automatically rather than fixed by hand, an improvement considered important in making Bayesian optimization a viable computational technique for machine/deep learning pipelines. From this, the parallelization of Bayesian optimization and multi-task Bayesian optimization are described as the way forward, and they are our realities today.
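The gist of "integrating out" kernel parameters can be sketched in toy code — again my own illustration, not the authors' implementation (which uses more sophisticated samplers such as slice sampling inside a full optimization stack). Instead of plugging one hand-tuned length-scale into the Gaussian-process surrogate, we put a prior on it, draw posterior samples with a simple Metropolis MCMC over the marginal likelihood, and average predictions over those samples:

```python
import numpy as np

def log_post(X, y, log_length):
    # Log posterior of the GP kernel length-scale: marginal likelihood
    # of the data plus a weak log-normal prior on the length-scale.
    length = np.exp(log_length)
    d = X[:, None] - X[None, :]
    K = np.exp(-0.5 * (d / length) ** 2) + 1e-6 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    log_lik = -0.5 * y @ alpha - np.log(np.diag(L)).sum()
    return log_lik - 0.5 * log_length ** 2

rng = np.random.default_rng(1)
X = np.array([-1.5, -0.5, 0.3, 1.2])     # toy observed inputs
y = np.sin(2 * X)                        # toy observed values

# Metropolis random walk over the log length-scale.
samples, cur = [], 0.0
cur_lp = log_post(X, y, cur)
for _ in range(2000):
    prop = cur + 0.3 * rng.standard_normal()
    prop_lp = log_post(X, y, prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:
        cur, cur_lp = prop, prop_lp
    samples.append(cur)

lengths = np.exp(samples[500::30])       # burn-in, then thinned draws

def gp_mean(length, xs=0.0):
    # GP posterior mean at a test point xs for a given length-scale.
    K = np.exp(-0.5 * ((X[:, None] - X[None, :]) / length) ** 2) \
        + 1e-6 * np.eye(len(X))
    k = np.exp(-0.5 * ((X - xs) / length) ** 2)
    return k @ np.linalg.solve(K, y)

# The marginalized prediction: average over posterior samples instead
# of committing to a single point estimate of the length-scale.
pred = np.mean([gp_mean(l) for l in lengths])
print(f"marginalized prediction at x=0: {pred:.3f}")
```

The practical payoff Adams describes is exactly this: nobody has to hand-pick the kernel parameters, and the surrogate's predictions honestly reflect uncertainty about them.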
As a final word, Dr. Ryan Adams's presentation ends with some interactive slides showing animations of Bayesian optimization applied to robotics, in the context of simulating animal (cheetah) motor movements (mechanical engineering). He also discloses that he is involved in a start-up (WhetLab.com) that intends to leverage these technologies for a plethora of business applications, hinting at the productivity gains achieved by applying them: five lines of Python code can do a lot of magic (quite a feat for such a hard and difficult-to-understand computational framework).