Computerphile – two videos about deep learning

For the last day of this first week of February 2017, I’ve chosen to share two nice intuitive presentations about deep learning. This choice follows a period of reflection about the way the blog has been presenting the subject. Deep learning is a relatively new field of research and study that, in spite of having a broad and solid background in cutting-edge areas of Computer Science and Machine Learning, may seem – and actually be – a bit esoteric and hard to swallow for many untrained minds.

The two videos below come from Computerphile, a YouTube channel about computer science topics, and they offer nice intuitive views on two different deep learning subjects. The videos feature an expert who can convey to newcomers to the field, or to someone not from a quantitative (scientific) or technological background, a nice overview of what deep learning is all about.

The presenter is Brais Martinez, a Spanish PhD researcher (his accent in English identifies him from the outset…), and he is doing good research work in Computer Vision with deep learning techniques.

The first video

The first video is a nice overview of the simplest possible deep neural network configured to perform prediction on a dataset. It roughly explains how the features in a neural network relate to the dataset and are then used to fit a predictive model, while doing so in a way that avoids over-fitting and other shortcomings. Notice the distinction between deep learning and earlier Machine Learning methods, which used shallow neural networks, in contrast with the densely layered structure of deep neural networks, with connected layers stacked deeper and deeper.

Also of note is the combinatorial nature of the layered structure of a deep neural network, which Brais Martinez explains well. This is typical supervised learning introductory material.
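As a concrete sketch of the kind of layered, supervised setup the video describes (this is not the video's own code, just a minimal illustration using NumPy), here is a tiny two-layer network trained by gradient descent on the XOR problem – a problem that no single linear layer can fit, which is exactly why the hidden layer's re-combination of features matters:

```python
import numpy as np

# Toy dataset: XOR. No single linear layer can fit it - a hidden
# layer has to re-combine the inputs into new features first.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses, lr = [], 1.0
for _ in range(5000):
    # Forward pass: each layer combines the previous layer's features.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: gradient descent on mean squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print("final loss:", losses[-1], "predictions:", preds.ravel())
```

In practice one would of course use a deep learning library rather than hand-written gradients, but the forward/backward structure – and the risk of over-fitting when the network has far more parameters than the data warrants – is the same.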

The second video

The second video, by the same researcher, conveys another intuitive explanation, this time of DeepMind’s achievements with AlphaGo. Here we are introduced to the reinforcement learning framework and the heuristic search algorithms (Monte Carlo tree search being the precise technique) behind the success of AlphaGo in mastering the game of Go.
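To make the tree-search idea concrete, here is a minimal Monte Carlo tree search (UCB1 selection, random rollouts) applied to a toy Nim game instead of Go – a sketch of the general technique only, not AlphaGo's actual implementation, which combines MCTS with deep neural networks:

```python
import math
import random

random.seed(0)  # reproducible rollouts

# Toy game (standing in for Go): Nim. Players alternately remove
# 1-3 stones; whoever takes the last stone wins.
def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones      # stones left after `move` was played
        self.parent = parent
        self.move = move          # move that produced this node
        self.children = []
        self.untried = legal_moves(stones)
        self.wins = 0.0           # wins for the player who made `move`
        self.visits = 0

    def ucb_child(self, c=1.4):
        # UCB1: exploit high win rates, explore rarely visited children.
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts_best_move(stones, iterations=2000):
    root = Node(stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down the tree with UCB1.
        while not node.untried and node.children:
            node = node.ucb_child()
        # 2. Expansion: try one unexplored move.
        if node.untried:
            m = node.untried.pop()
            node.children.append(Node(node.stones - m, parent=node, move=m))
            node = node.children[-1]
        # 3. Simulation: random playout; result is 1 if the player
        #    who moved into `node` ends up winning.
        if node.stones == 0:
            result = 1            # that move took the last stone
        else:
            left, turn = node.stones, 0   # turn 0 = player to move here
            while left > 0:
                left -= random.choice(legal_moves(left))
                if left == 0:
                    result = 0 if turn == 0 else 1
                turn ^= 1
        # 4. Backpropagation: flip the perspective at every level.
        while node is not None:
            node.visits += 1
            node.wins += result
            result = 1 - result
            node = node.parent
    # The most visited move is the recommendation.
    return max(root.children, key=lambda ch: ch.visits).move

print(mcts_best_move(5))  # optimal play leaves a multiple of 4
```

The four steps – selection, expansion, simulation, backpropagation – are the same in AlphaGo; the difference is that AlphaGo replaces the random rollouts and uniform move choices with evaluations learned by deep neural networks.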

Interesting and important are the points Dr. Martinez makes about what machines and deep learning algorithms can accomplish at this moment in time compared with a human expert: machines are becoming better at very specific computational tasks, including ones that go beyond brute-force calculation and arithmetic combination. That is, machines are beginning to combine information in ways more similar to proper thinking (whether we can call it thinking and understanding is still open for debate and not yet settled). From this we can imagine a flourishing of the theory of computation and of the nature of artificial algorithms, with further discussions and debates going forward.

featured image: Autonomous University of Barcelona

