Paper with Code Series: Adversarial Latent Autoencoders

Generative Adversarial Networks continue to be one of the main deep learning techniques in current computer vision and machine learning developments. But they have been shown to have some issues regarding the quality of the images a generator outputs from its mapping of the input space. This may have some causal explanation in the way GANs process of … Continue reading Paper with Code Series: Adversarial Latent Autoencoders
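As a point of reference for that mapping from latent input space to image space, here is a minimal sketch of a DCGAN-style generator in PyTorch. It is my own illustration of the general technique, not the architecture used in the paper; channel and kernel sizes are placeholder choices.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Minimal DCGAN-style generator: maps a latent vector z to a 64x64 RGB image."""
    def __init__(self, latent_dim=100, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            # project z (latent_dim x 1 x 1) up to a 4x4 feature map, then upsample to 64x64
            nn.ConvTranspose2d(latent_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

z = torch.randn(8, 100, 1, 1)      # a batch of latent codes
fake_images = Generator()(z)       # -> shape (8, 3, 64, 64)
print(fake_images.shape)
```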

Quantum Reasoning, human cognition and Artificial Intelligence

Quantum reasoning and the formalism of Lie Algebras are fascinating topics in Quantum Mechanics. By quantum reasoning we are referring to the way the human brain constructs its thoughts and cognitions in an ordered fashion, that is, to the way mathematical psychology has researched the implications of quantum reality for the brain's cognitive processes. Quantum … Continue reading Quantum Reasoning, human cognition and Artificial Intelligence
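To make the Lie-algebra formalism mentioned above slightly more concrete, here is a tiny NumPy check (my own illustration, not from the post) of the su(2) commutation relation satisfied by the Pauli matrices, the Lie-bracket structure behind spin-1/2 systems in quantum mechanics:

```python
import numpy as np

# Pauli matrices: generators of su(2), the Lie algebra of spin-1/2 observables
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def commutator(a, b):
    """Lie bracket [A, B] = AB - BA."""
    return a @ b - b @ a

# The defining relation [sigma_x, sigma_y] = 2i * sigma_z
print(np.allclose(commutator(sx, sy), 2j * sz))  # True
```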

Inverse planning and Theory of Mind: understanding behavior in groups of agents

Human social intelligence is one of the hallmarks by which we judge general intelligence. It can be quite hard to grasp and appreciate fully. Even harder is knowing where, how and why it matters for so many of the social outcomes that affect us. Underlying the basis of human social intelligence is what standard developmental … Continue reading Inverse planning and Theory of Mind: understanding behavior in groups of agents
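The inverse-planning idea behind this line of work can be sketched compactly: assume an observer who watches an agent's actions and inverts a simple rational-action model with Bayes' rule to infer the agent's goal. The toy model below is my own simplified illustration of that idea, not the authors' implementation; the line-world setup, the softmax rationality parameter and the candidate goals are all placeholder assumptions.

```python
import numpy as np

# Toy inverse planning: infer which goal an agent is pursuing from observed moves.
# States are points on a line; the agent noisily moves toward its goal (softmax-rational).
goals = [0, 9]     # two candidate goal locations
beta = 2.0         # rationality: higher = more deterministic agent

def action_probs(state, goal):
    """P(action | state, goal) for actions -1/+1, softmax over negative distance-to-goal."""
    utilities = np.array([-abs((state + a) - goal) for a in (-1, +1)])
    e = np.exp(beta * utilities)
    return e / e.sum()

def infer_goal(trajectory):
    """P(goal | observed actions) via Bayes' rule with a uniform prior over goals."""
    log_post = np.zeros(len(goals))
    for state, action in trajectory:          # action is -1 or +1
        for g_idx, g in enumerate(goals):
            log_post[g_idx] += np.log(action_probs(state, g)[(action + 1) // 2])
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# Agent observed at states 4, 5, 6 always stepping +1: strong evidence for goal 9
print(infer_goal([(4, +1), (5, +1), (6, +1)]))
```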

State-of-the-art in self-driving cars and autonomous vehicles: a list of videos from MIT

Lex Fridman's MIT list of videos about self-driving cars has been published since the beginning of this year. I have been trying to get my schedule right in this blog to start posting on this list, and now the time has come. The videos follow and complement the page of the MIT course MIT 6.S094: Deep Learning … Continue reading State-of-the-art in self-driving cars and autonomous vehicles: a list of videos from MIT

Paper with Code Series: Semantic Image Synthesis with Spatially-Adaptive Normalization

Recently I have found some interesting papers and analyses about the issue of semantic synthesis and segmentation, used both in natural language processing and in advanced computer vision imaging. I tended to be skeptical of this use of the word 'semantic', but then I realized that within the field of computer science the term is … Continue reading Paper with Code Series: Semantic Image Synthesis with Spatially-Adaptive Normalization
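Since the paper's key ingredient is the spatially-adaptive normalization (SPADE) layer, here is a minimal PyTorch sketch of the idea as I understand it: normalize the activations without affine parameters, then modulate them with per-pixel scale and bias maps predicted from the semantic segmentation mask. It is a simplified illustration, not the authors' code, and the channel counts, hidden width and kernel sizes are my own placeholder choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    """Spatially-adaptive normalization: gamma and beta are maps predicted from the segmentation mask."""
    def __init__(self, feat_channels, label_channels, hidden=128):
        super().__init__()
        self.norm = nn.BatchNorm2d(feat_channels, affine=False)  # parameter-free normalization
        self.shared = nn.Sequential(nn.Conv2d(label_channels, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, feat_channels, 3, padding=1)
        self.beta = nn.Conv2d(hidden, feat_channels, 3, padding=1)

    def forward(self, x, segmap):
        # resize the segmentation mask to the feature resolution, then modulate per pixel
        segmap = F.interpolate(segmap, size=x.shape[2:], mode='nearest')
        h = self.shared(segmap)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)

x = torch.randn(2, 64, 32, 32)         # intermediate generator features
segmap = torch.randn(2, 10, 256, 256)  # 10-class segmentation map (one-hot in practice)
print(SPADE(64, 10)(x, segmap).shape)  # -> torch.Size([2, 64, 32, 32])
```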

From the Import AI blog: Vision-Based High Speed Driving with a Deep Dynamic Observer, or how self-driving cars will drive off-road

This is a re-share from the excellent weekly newsletter I receive from the Import AI blog, written by Jack Clark. There are several other re-posts like this one in this blog, and I usually decide to do one when I see and feel that the choice of papers, relevant articles, resources and … Continue reading From the Import AI blog: Vision-Based High Speed Driving with a Deep Dynamic Observer, or how self-driving cars will drive off-road

Papers with Code series: GAN dissection or visualizing and understanding Generative Adversarial Networks

Generative adversarial networks (GANs) are one of the most important milestones in the field of artificial neural networks. The technique emerged from efforts to improve the training and efficiency of deep convolutional neural networks on challenging computer vision tasks, and it has since become state of the art for image generation. But there are still … Continue reading Papers with Code series: GAN dissection or visualizing and understanding Generative Adversarial Networks
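The core measurement in GAN dissection is simple enough to sketch: threshold one generator unit's upsampled activation map and score its spatial agreement with a semantic class mask using intersection-over-union. The toy version below is my own simplified illustration of that step, not the authors' code; the threshold and the example masks are placeholders.

```python
import torch
import torch.nn.functional as F

def unit_class_iou(activation, class_mask, threshold):
    """IoU between a thresholded, upsampled unit activation map and a binary class mask.

    activation: (H_feat, W_feat) activations of one generator unit for one image
    class_mask: (H_img, W_img) binary mask for one semantic class (e.g. 'sky')
    threshold:  scalar activation threshold (the paper derives it from activation quantiles)
    """
    upsampled = F.interpolate(activation[None, None], size=class_mask.shape,
                              mode='bilinear', align_corners=False)[0, 0]
    unit_region = upsampled > threshold
    inter = (unit_region & class_mask.bool()).sum().float()
    union = (unit_region | class_mask.bool()).sum().float()
    return (inter / union).item() if union > 0 else 0.0

# Toy check: a unit that fires in the top half of the image vs. a 'sky' mask in the top half
act = torch.zeros(8, 8); act[:4] = 1.0
sky = torch.zeros(64, 64); sky[:32] = 1.0
print(unit_class_iou(act, sky, threshold=0.5))  # close to 1.0
```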

Learning to learn: meta-learning a way to reinforce efficiency of multi-tasks for robots

As the title of this post suggests, learning to learn is the idea behind the concept of meta-learning. An influential formulation was introduced in the paper Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, co-authored by Chelsea Finn, Pieter Abbeel and Sergey Levine at the University of California, Berkeley. In the paper it is … Continue reading Learning to learn: meta-learning a way to reinforce efficiency of multi-tasks for robots
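A compact way to see what MAML does is to write out the inner and outer loops on a toy problem: adapt to each task with one gradient step, then update the shared initialization so that this one-step adaptation works well across tasks. The sketch below is my own simplified illustration on toy linear-regression tasks, not the paper's experiments; the learning rates, task distribution and tiny two-parameter model are placeholder assumptions.

```python
import torch

# Toy MAML: learn an initialization theta such that one inner gradient step on a few
# examples from a new task already fits that task well.
torch.manual_seed(0)
theta = torch.zeros(2, requires_grad=True)   # model: y_hat = theta[0] * x + theta[1]
inner_lr, outer_lr = 0.05, 0.01
opt = torch.optim.SGD([theta], lr=outer_lr)

def model(params, x):
    return params[0] * x + params[1]

def sample_task():
    slope = torch.randn(1)                   # each task is a random linear function
    x = torch.randn(10, 1)
    return x, slope * x

for step in range(1000):
    opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):                       # a small meta-batch of tasks
        x, y = sample_task()
        # inner loop: one adaptation step on the task's support set
        support_loss = ((model(theta, x[:5]) - y[:5]) ** 2).mean()
        grad = torch.autograd.grad(support_loss, theta, create_graph=True)[0]
        theta_adapted = theta - inner_lr * grad
        # outer loop: evaluate the adapted parameters on the task's query set
        meta_loss = meta_loss + ((model(theta_adapted, x[5:]) - y[5:]) ** 2).mean()
    meta_loss.backward()                     # backprop through the inner adaptation step
    opt.step()

print(theta.detach())                        # the learned initialization
```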

Deep Reinforcement Learning class at Berkeley by Sergey Levine – Lecture 16 Bootstrap DQN and Transfer Learning

Last summer I joyfully started to watch and absorb as much as possible of the lectures on Deep Reinforcement Learning delivered by Dr. Sergey Levine at the University of California, Berkeley. The list of lectures is available on YouTube, though it is from the fall of last year, 2017. The lectures usually feature some … Continue reading Deep Reinforcement Learning class at Berkeley by Sergey Levine – Lecture 16 Bootstrap DQN and Transfer Learning
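For the bootstrapped DQN part of the lecture, the architectural idea is small enough to sketch: K Q-value heads sharing one feature torso, with a single head sampled at the start of each episode to drive approximately Thompson-style exploration. The snippet below is my own simplified illustration of that structure, not the lecture's code; the observation size, hidden width and number of heads are placeholder choices.

```python
import random
import torch
import torch.nn as nn

class BootstrappedDQN(nn.Module):
    """Shared torso with K independent Q-value heads (the Bootstrapped DQN idea)."""
    def __init__(self, obs_dim, n_actions, n_heads=10, hidden=128):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, n_actions) for _ in range(n_heads))

    def forward(self, obs, head_idx):
        return self.heads[head_idx](self.torso(obs))  # Q-values from one head

net = BootstrappedDQN(obs_dim=4, n_actions=2)
head = random.randrange(len(net.heads))               # sample one head per episode
obs = torch.randn(1, 4)
action = net(obs, head).argmax(dim=1).item()          # act greedily w.r.t. that head's Q-values
print(head, action)
```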

A conversation on AI from MIT Artificial General Intelligence Lectures

The Massachusetts Institute of Technology (MIT) has hosted a series of lectures titled MIT 6.S099: Artificial General Intelligence. It is part of the syllabus of a course on Artificial General Intelligence and Deep Learning delivered by Lex Fridman. It features a series of conversations with some prominent researchers in the fields of machine learning, … Continue reading A conversation on AI from MIT Artificial General Intelligence Lectures