It has been a while since I reviewed my last paper here for The Intelligence of Information. I’ve been posting YouTube videos with my own comments and interpretations of what I see and hear, and as a matter of fact it has been rewarding and a somewhat more interactive way to present the topics this blog cares about. Not to say that it is better than the more conceptually demanding and more involved way of reviewing written papers; but it is a lighter approach, suited to a broader audience. I must also acknowledge that I am not currently in a 100% confident position to review the state of the art in cutting-edge topics in deep learning or machine learning/data science. Some of the papers I reviewed for this blog in the past were about topics that I only later managed to fully comprehend, or at least gained some deeper insight into what was actually involved. But I am making some progress…
To continue this trend I would like to share a video again today. This time it is from another well-qualified computer scientist from Princeton University, Dr. Sanjeev Arora. Dr. Arora gave this talk about Generalization and Equilibrium in Generative Adversarial Nets (GANs) at the Simons Institute. This is another cutting-edge topic on current deep neural networks research agendas, and it obviously fits well with this blog’s core themes.
This is an interesting video from an interpretation (Representation Learning) point of view about generative adversarial nets. The initial remarks about what Dr. Sanjeev Arora termed Geoff’s Neural Net Hypothesis and its inconsistency with the curse of dimensionality are worth the 43 minutes of our/your time watching it. They come in the first 5 minutes. One corollary of Dr. Arora’s thinking is that real-life data distributions are special in some way and that deep neural networks are tuned in some way to these distributions, even if we do not really know how, as of yet. We might be witnessing something deep and fundamental about the nature of networks in complex high-dimensional systems displaying similar properties and… attracting each other.
The issues addressed in the rest of the talk were:
- Generalization: is the real discriminator roughly equal to the synthetic discriminator (once it has lost to the generator)?
- Equilibrium: a game-theoretic concept arising from a 2-player game… and it may not exist
- Bounded capacity discriminators are weak: this introduces limitations for the generalization capacity of the whole generator/discriminator set up, as it is properly explained in the video.
- … but there is some partial good news: when the number of samples exceeds roughly n log n / ε², generalization does happen with respect to the neural net distance
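The sample bound in that last point is easy to play with numerically. Here is a minimal sketch, under my own assumptions from the talk: I take n to be the discriminator's capacity (roughly its parameter count) and ε the target accuracy for the neural net distance; the function name and exact form of the bound are mine, not Dr. Arora's.

```python
import math

def sample_bound(n_params: int, eps: float) -> float:
    """Rough sample-complexity sketch: samples ~ n log n / eps^2.

    n_params : assumed to stand for the discriminator's capacity
               (e.g. number of parameters) -- my reading of the talk.
    eps      : target accuracy in the neural net distance.
    """
    return n_params * math.log(n_params) / eps ** 2

# A discriminator with 1,000 parameters and eps = 0.1 already asks
# for several hundred thousand samples under this crude estimate.
print(sample_bound(1000, 0.1))
```

The point the bound makes is qualitative: the requirement grows only slightly faster than linearly in capacity, but quadratically as you tighten ε.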
I was a bit surprised by the connection between generative adversarial nets and advanced research in Game Theory, which is a branch of Economics. But this shouldn’t be so surprising, given the current highly quantitative character of cutting-edge research in Economics. The points Dr. Sanjeev Arora makes about equilibrium conditions for deep neural nets reminded me of the difficulty in understanding and getting a solid grasp of the notion of equilibrium, both from a macroeconomic perspective and from a thermodynamics perspective.
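The talk's point that an equilibrium "may not exist" has a classic textbook illustration I find helpful: matching pennies, a 2-player zero-sum game with no pure-strategy Nash equilibrium (only a mixed one). This is a sketch of that illustration, not anything from the talk itself; the payoff table and helper function are standard game-theory material.

```python
# Matching pennies: the row player wins (+1) when the coins match,
# the column player wins (row gets -1) when they differ.
# Zero-sum: the column player's payoff is minus the row player's.
payoff = {
    ("H", "H"): 1, ("H", "T"): -1,
    ("T", "H"): -1, ("T", "T"): 1,
}
moves = ["H", "T"]

def is_pure_equilibrium(r: str, c: str) -> bool:
    """True if neither player can gain by deviating unilaterally."""
    row_ok = all(payoff[(r, c)] >= payoff[(alt, c)] for alt in moves)
    col_ok = all(-payoff[(r, c)] >= -payoff[(r, alt)] for alt in moves)
    return row_ok and col_ok

equilibria = [(r, c) for r in moves for c in moves
              if is_pure_equilibrium(r, c)]
print(equilibria)  # -> [] : no pure-strategy equilibrium exists
```

The analogy to GANs, as I understand the talk, is that the generator/discriminator game can similarly fail to have a (pure) equilibrium unless one allows some form of mixing over networks.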
That is it for this last post of the week. I will see if I can get back to reviewing good papers, even if the pace of production and my own progress in this endeavour have been put through a bit of a reality check lately… 😕 😊