From the Import AI blog: Vision-Based High-Speed Driving with a Deep Dynamic Observer, or how self-driving cars will drive off-road

  This is a re-share from the excellent weekly newsletter I receive from the Import AI blog, written by Jack Clark. There are several other re-posts like this one in this blog, and I usually opt for one when I feel that the choice of papers, relevant articles, resources and … Continue reading From the Import AI blog: Vision-Based High-Speed Driving with a Deep Dynamic Observer, or how self-driving cars will drive off-road

Papers with Code series: GAN dissection or visualizing and understanding Generative Adversarial Networks

Generative adversarial networks (GANs) are one of the most important milestones in the field of artificial neural networks. The technique emerged from efforts to improve the training and efficiency of deep convolutional neural networks on challenging computer vision tasks, and has since become state-of-the-art for neural networks in general. But there are still … Continue reading Papers with Code series: GAN dissection or visualizing and understanding Generative Adversarial Networks

Learning to learn: meta-learning as a way to improve multi-task efficiency for robots

As the title of this post suggests, learning to learn is the idea behind the concept of meta-learning. This concept was originally introduced in the paper Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, co-authored by Chelsea Finn, Pieter Abbeel and Sergey Levine at the University of California, Berkeley. In the paper it is … Continue reading Learning to learn: meta-learning as a way to improve multi-task efficiency for robots

Deep Reinforcement Learning class at Berkeley by Sergey Levine – Lecture 16 Bootstrap DQN and Transfer Learning

  This last summer I joyfully started watching and absorbing as much as possible of the lectures on Deep Reinforcement Learning delivered by Dr. Sergey Levine at the University of California, Berkeley. The list of lectures is available on YouTube, but dates from the fall of 2017. The lectures usually feature some … Continue reading Deep Reinforcement Learning class at Berkeley by Sergey Levine – Lecture 16 Bootstrap DQN and Transfer Learning

A conversation on AI from MIT Artificial General Intelligence Lectures

The Massachusetts Institute of Technology (MIT) has hosted a series of lectures titled MIT 6.S099: Artificial General Intelligence, part of the syllabus of a course on Artificial General Intelligence and Deep Learning delivered by Lex Fridman. It features a series of conversations with some prominent researchers in the fields of machine learning, … Continue reading A conversation on AI from MIT Artificial General Intelligence Lectures

Papers with Code series: Reinforcement Learning Decoders for Fault-Tolerant Quantum Computation

  Machine Learning and Quantum Computing are two of the most important fields in computer science today. A new field of study is emerging at their intersection, appropriately named Quantum Machine Learning. The important sub-field of Reinforcement Learning is also being used by researchers in Quantum Computing, and today's paper … Continue reading Papers with Code series: Reinforcement Learning Decoders for Fault-Tolerant Quantum Computation

Brain-to-Brain online communication: a reality soon…?

  Two Minute Papers is a YouTube and Patreon channel and website that serves as a good repository of some of the latest developments in artificial intelligence and machine/deep learning. It is hosted by a researcher in the field, and given his background, most of the content is about computer vision, computer graphics and the applications of these … Continue reading Brain-to-Brain online communication: a reality soon…?

Papers with Code Series: Self-Attention Generative Adversarial Networks

Hello. Today I am starting a new series of posts here at The Intelligence of Information. I know there has been a hiatus of several months without posting on this blog. I may have already explained the reasons for this, so I will skip ahead. Just a reminder: this is still a work-in-progress blog, … Continue reading Papers with Code Series: Self-Attention Generative Adversarial Networks

A Re-post with courtesy from Quantum Bayesian Networks: IBM and Google Caught off Guard by Rigetti Spaghetti — Quantum Bayesian Networks

Recently, Rigetti, the quantum computer company located in Berkeley, CA, made some bold promises that probably caught IBM and Google off guard, as in the following gif. Last month (on Aug 8), Rigetti promised a 128-qubit gate-model chip “over the next 12 months”. [comment: Quite ambitious. It may turn out that Rigetti cannot […] … Continue reading A Re-post with courtesy from Quantum Bayesian Networks: IBM and Google Caught off Guard by Rigetti Spaghetti — Quantum Bayesian Networks

Required share from The Morning Paper: Snorkel: rapid training data creation with weak supervision — the morning paper

Snorkel: rapid training data creation with weak supervision, Ratner et al., VLDB’18. Earlier this week we looked at Sparser, which comes from the Stanford Dawn project, “a five-year research project to democratize AI by making it dramatically easier to build AI-powered applications.” Today’s paper choice, Snorkel, is from the same stable. It tackles one of […] … Continue reading Required share from The Morning Paper: Snorkel: rapid training data creation with weak supervision — the morning paper