Recommender systems with TensorFlow

Guillaume Allain gave an interesting talk at the recent PyData London 2017 event about the implementation of recommender systems using TensorFlow. The talk is shared in the YouTube video below. I recommend that the reader also fork the GitHub repository Tensorflow-based Recommendation systems, where a detailed description of this development is available as well as all …
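To give a flavor of the kind of model such talks usually build, here is a minimal matrix-factorization sketch in plain NumPy rather than TensorFlow; the toy rating matrix, factor size, and learning rate are all illustrative assumptions, not taken from the talk or the repository:

```python
import numpy as np

# Toy user-item rating matrix; 0.0 marks an unobserved rating (assumed data).
R = np.array([
    [5.0, 3.0, 0.0, 1.0],
    [4.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 5.0],
    [0.0, 1.0, 5.0, 4.0],
])

rng = np.random.default_rng(0)
n_users, n_items, k = R.shape[0], R.shape[1], 2
U = rng.normal(scale=0.1, size=(n_users, k))   # latent user factors
V = rng.normal(scale=0.1, size=(n_items, k))   # latent item factors

lr, reg = 0.01, 0.02  # learning rate and L2 regularization strength
observed = [(i, j) for i in range(n_users) for j in range(n_items) if R[i, j] > 0]

# SGD on the observed entries only: minimize (R_ij - u_i . v_j)^2 plus L2 penalty.
for epoch in range(2000):
    for i, j in observed:
        err = R[i, j] - U[i] @ V[j]
        U[i] += lr * (err * V[j] - reg * U[i])
        V[j] += lr * (err * U[i] - reg * V[j])

# The dense product fills in predictions for the unobserved cells.
pred = U @ V.T
```

A TensorFlow version would express the same objective with embedding variables and let an optimizer handle the gradient updates; the loop above just makes the mechanics explicit.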

Generalization and Equilibrium in Generative Adversarial Nets (GANs) – a talk by Sanjeev Arora

It has been a while since I last reviewed a paper here at The Intelligence of Information. I've been posting YouTube videos with my own comments and interpretations of what I see and hear, and it has been a rewarding and somewhat more interactive way to present the topics this …

Two videos about Google's new tensor processing unit (TPU) chips

Google's new hardware venture is a somewhat surprising but important and logical business move. It is surprising given the mostly software-centric business model Google has maintained over the years. But it is logical from a strategic and tactical perspective to have the capacity to design and control your own hardware, saving costs and building the …

TensorFlow Dev Summit 2017: Integrating Keras and TensorFlow

I am briefly sharing a video from the last TensorFlow Dev Summit, held in February 2017. I chose a presentation by François Chollet on the deep learning library Keras and its integration with TensorFlow. As Chollet explains, Keras integrated with TensorFlow promises to streamline deep learning frameworks in ways that will …
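To illustrate the layer-stacking style of API that Keras brings to TensorFlow, here is a toy NumPy sketch; these `Dense` and `Sequential` classes are simplified stand-ins of my own, not the real Keras classes:

```python
import numpy as np

class Dense:
    """Toy fully-connected layer (illustrative, not the Keras implementation)."""
    def __init__(self, n_in, n_out, activation=None):
        rng = np.random.default_rng(0)
        self.W = rng.normal(scale=0.1, size=(n_in, n_out))
        self.b = np.zeros(n_out)
        self.activation = activation

    def __call__(self, x):
        z = x @ self.W + self.b
        return np.maximum(z, 0.0) if self.activation == "relu" else z

class Sequential:
    """Toy container that chains layers into a single forward pass."""
    def __init__(self, layers):
        self.layers = layers

    def predict(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

# Stack two layers and run a forward pass over a small batch of 2 examples.
model = Sequential([Dense(4, 8, activation="relu"), Dense(8, 1)])
out = model.predict(np.ones((2, 4)))
```

The point of the real integration is that this same high-level layer API dispatches to TensorFlow graphs, sessions, and optimizers underneath, so users get TensorFlow's machinery without writing it by hand.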

Visualizing recurrent neural networks: a funny talk at RE.WORK 2016 San Francisco

Andrej Karpathy is a PhD student (perhaps by now a graduate) at Stanford University, working on deep neural networks, specifically convolutional neural networks and recurrent neural networks. He also joined the OpenAI research initiative as one of its prominent researchers. He delivered a nice, relaxed and funny talk at last year's RE.WORK …