edgartechtumbsup
Edgar Tech TBup
156 posts
edgartechtumbsup · 5 years ago
Text
Paper with Code Series: Adversarial Latent Autoencoders
Generative Adversarial Networks continue to be one of the main deep learning techniques in current computer vision and machine learning developments. But they have been shown to have some issues with the quality of the images a generator outputs from its mapped input space. This may be causally explained by the way GANs encode and decode from a known probability distribution…
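The encoding–decoding from a known probability distribution mentioned above can be sketched with the standard GAN minimax objective. Below is a toy numpy illustration with hypothetical one-layer linear "networks" and made-up data; it is a sketch of the adversarial losses, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical one-layer "networks": the generator maps latent noise z
# (drawn from a known distribution, here standard normal) into data
# space, and the discriminator maps samples to a probability of "real".
W_g = rng.normal(size=(2, 2))   # generator weights (illustrative)
w_d = rng.normal(size=2)        # discriminator weights (illustrative)

def generator(z):
    return z @ W_g              # decode latent z into data space

def discriminator(x):
    return sigmoid(x @ w_d)     # P(sample is real)

real = rng.normal(loc=3.0, size=(64, 2))  # toy "real" data
z = rng.normal(size=(64, 2))              # latent input space
fake = generator(z)

# The two adversarial objectives (binary cross-entropy form):
# the discriminator maximizes log D(real) + log(1 - D(fake)),
# the generator maximizes log D(fake).
d_loss = -np.mean(np.log(discriminator(real) + 1e-8)
                  + np.log(1.0 - discriminator(fake) + 1e-8))
g_loss = -np.mean(np.log(discriminator(fake) + 1e-8))

print(float(d_loss), float(g_loss))
```

In a real training loop these two losses would be minimized alternately with gradient descent on the two weight sets; the sketch only evaluates them once.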
View On WordPress
0 notes
edgartechtumbsup · 5 years ago
Text
Quantum Reasoning, human cognition and Artificial Intelligence
Quantum reasoning and the formalism of Lie algebras are fascinating topics in Quantum Mechanics. By quantum reasoning we are referring to the way the human brain constructs its thoughts and cognitions in an ordered fashion, that is, to the way mathematical psychology has researched the implications of quantum reality for the brain's cognitive processes. Quantum Physics is a field of physical sciences…
View On WordPress
0 notes
edgartechtumbsup · 6 years ago
Text
Inverse planning and Theory of Mind: understanding behavior in groups of agents
Human social intelligence is one of the hallmarks of how we judge general intelligence. It can be quite hard to grasp and appreciate fully. It is even harder to know where, how and why it matters for so many social outcomes. Underlying human social intelligence is what standard developmental psychology terms Theory of Mind. This concept is at the heart of social…
View On WordPress
0 notes
edgartechtumbsup · 6 years ago
Text
State-of-the-art in self-driving cars and autonomous vehicles: a list of videos from MIT
Lex Fridman's MIT list of videos about self-driving cars has been published since the beginning of this year. I was trying to get my schedule right on this blog to start posting on this list, and now the time has come. The videos follow and complement the page for the MIT course MIT 6.S094: Deep Learning for Self-Driving Cars, which is essentially the same as last year's and by the same author…
View On WordPress
0 notes
edgartechtumbsup · 6 years ago
Text
Paper with Code Series: Semantic Image Synthesis with Spatially-Adaptive Normalization
Recently I have found some interesting papers and analyses on the issue of semantic synthesis and segmentation, used both in natural language processing and in advanced computer vision imaging. I tended to be skeptical of this use of the word 'semantic', but then I realized that within the field of computer science the term relates to the formal treatment of how programming languages…
View On WordPress
0 notes
edgartechtumbsup · 7 years ago
Text
From the Import AI blog: Vision-Based High-Speed Driving with a Deep Dynamic Observer, or how self-driving cars will drive off-road
This is a re-share from the excellent weekly newsletter I receive from the Import AI blog, written by Jack Clark. There are several other re-posts like this one on this blog; I usually opt for one when I feel that its choice of papers, relevant articles and other resources across the extensive research agenda of deep learning and artificial intelligence is an appropriate one…
View On WordPress
0 notes
edgartechtumbsup · 7 years ago
Text
Papers with Code series: GAN dissection or visualizing and understanding Generative Adversarial Networks
Generative adversarial networks (GANs) are one of the most important milestones in the field of artificial neural networks. The technique emerged from efforts to improve the training and efficiency of deep convolutional neural networks on some challenging computer vision tasks, and it has become state-of-the-art for neural networks in general. But there are still some difficulties with…
View On WordPress
0 notes
edgartechtumbsup · 7 years ago
Text
Learning to learn: meta-learning as a way to reinforce the efficiency of multi-task learning for robots
As the title of this post suggests, learning to learn is the defining idea of meta-learning. The concept was popularized by the paper Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, co-authored by Chelsea Finn, Pieter Abbeel and Sergey Levine at the University of California, Berkeley. In the paper it is claimed that it is possible to design a meta-learning…
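The meta-learning idea can be sketched in the simplest possible setting. The snippet below is a toy numpy version of the MAML inner/outer loop on hypothetical 1-D linear regression tasks (y = a·x); the task distribution, learning rates and sizes are all illustrative, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(theta, x, y):
    # gradient of the mean squared error of the model y_hat = theta * x
    return np.mean(2 * (theta * x - y) * x)

theta = 0.0               # meta-parameters (a single scalar here)
alpha, beta = 0.1, 0.01   # inner / outer learning rates (illustrative)

for _ in range(200):
    meta_grad = 0.0
    tasks = rng.uniform(0.5, 2.0, size=4)       # sample a batch of tasks
    for a in tasks:
        x = rng.normal(size=16)
        y = a * x
        # inner loop: one gradient step adapts theta to this task
        theta_i = theta - alpha * grad(theta, x, y)
        # outer loop: gradient of the post-adaptation loss w.r.t. theta;
        # for this linear model, d(theta_i)/d(theta) = 1 - alpha * mean(2 x^2)
        dtheta_i = 1 - alpha * np.mean(2 * x ** 2)
        meta_grad += grad(theta_i, x, y) * dtheta_i
    theta -= beta * meta_grad / len(tasks)

print(round(theta, 2))
```

The meta-update trains the initialization theta so that a single inner gradient step already fits whichever task is sampled, which is the core of the "model-agnostic" claim.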
View On WordPress
0 notes
edgartechtumbsup · 7 years ago
Video
Deep Reinforcement Learning class at Berkeley by Sergey Levine – Lecture 16: Bootstrap DQN and Transfer Learning. Last summer I joyfully started to watch and absorb as much as possible of the lectures on Deep Reinforcement Learning delivered by…
0 notes
edgartechtumbsup · 7 years ago
Video
A conversation on AI from the MIT Artificial General Intelligence lectures. The Massachusetts Institute of Technology (MIT) has been giving a series of lectures titled MIT 6.S099: Artificial General Intelligence…
0 notes
edgartechtumbsup · 7 years ago
Text
Paper with Code series: Reinforcement Learning Decoders for Fault-Tolerant Quantum Computation
The fields of Machine Learning and Quantum Computing are among the most important in today's computer science. A new field of study is actually emerging, with the appropriate name of Quantum Machine Learning. The important sub-field of Reinforcement Learning is also being used by researchers in Quantum Computing, and today's paper choice in the Paper with Code series is all about…
View On WordPress
0 notes
edgartechtumbsup · 7 years ago
Text
Brain-to-Brain online communication: a reality soon…?
Two Minute Papers is a YouTube and Patreon channel and website, a good repository of some of the latest developments in artificial intelligence and machine/deep learning. It is hosted by a researcher in the field, and given his background most of the content is about computer vision, computer graphics and the applications of these technological developments to very human activities like art,…
View On WordPress
0 notes
edgartechtumbsup · 7 years ago
Text
Papers with Code Series: Self-Attention Generative Adversarial Networks
Hello. Today I am starting a new series of posts here on The Intelligence of Information. I know there has been a hiatus of several months without posting on this blog. I may already have given the reasons for this, so I will skip ahead. Just a reminder: this is still a work-in-progress blog, always open to anyone who would like to suggest improvements, collaborate or post, alone or with me a…
View On WordPress
0 notes
edgartechtumbsup · 7 years ago
Photo
A re-post, courtesy of Quantum Bayesian Networks: IBM and Google Caught off Guard by Rigetti Spaghetti — Quantum Bayesian Networks. Recently, Rigetti, the quantum computer company located in Berkeley, CA, made some bold promises that probably caught IBM and Google off guard, as in the following gif. Last month (on Aug 8), Rigetti promised a 128 qubit gate model chip "over the next 12 months". via IBM and Google Caught off Guard by Rigetti Spaghetti — Quantum Bayesian Networks
0 notes
edgartechtumbsup · 7 years ago
Photo
A required share from The Morning Paper: Snorkel: rapid training data creation with weak supervision — the morning paper. Snorkel: rapid training data creation with weak supervision, Ratner et al., VLDB'18. Earlier this week we looked at Sparser, which comes from the Stanford Dawn project, "a five-year research project to democratize AI by making it dramatically easier to build AI-powered applications." Today's paper choice, Snorkel, is from the same stable. It tackles one of […] via Snorkel: rapid training data creation with weak supervision — the morning paper
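The core weak-supervision idea behind Snorkel — several noisy "labeling functions" vote on each unlabeled example, and their combined vote becomes a training label — can be sketched in a few lines. The toy spam rules and the plain majority vote below are illustrative simplifications (Snorkel itself learns the accuracy of each function with a generative model, and functions may abstain):

```python
# Three hypothetical labeling functions for a toy spam task
# (1 = spam, 0 = not spam); each is a cheap, noisy heuristic.
def lf_contains_spam_word(text):
    t = text.lower()
    return 1 if "free" in t or "winner" in t else 0

def lf_many_exclamations(text):
    return 1 if text.count("!") >= 3 else 0

def lf_short_message(text):
    # very short messages are weakly suspicious in this toy setup
    return 1 if len(text.split()) <= 5 else 0

LFS = [lf_contains_spam_word, lf_many_exclamations, lf_short_message]

def weak_label(text):
    # unweighted majority vote over the labeling functions
    votes = [lf(text) for lf in LFS]
    return 1 if sum(votes) > len(votes) / 2 else 0

print(weak_label("FREE prize winner!!!"))             # 1 (spam)
print(weak_label("See you at the meeting tomorrow"))  # 0
```

The resulting noisy labels are then used to train an ordinary discriminative model, which is what makes the approach "rapid training data creation".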
0 notes
edgartechtumbsup · 7 years ago
Text
Import AI 106: Tencent breaks ImageNet training record with 1000+ GPUs; augmenting the Oxford RobotCar dataset; and PAI adds more members — Import AI
What takes 2048 GPUs, takes 4 minutes to train, and can identify a seatbelt with 75% accuracy? Tencent's new deep learning model: …Ultrafast training thanks to LARS, massive batch sizes, and a field of GPUs… As supervised learning techniques become more economically valuable, researchers are trying to reduce the time it takes to train deep […]
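The LARS technique mentioned above (layer-wise adaptive rate scaling) assigns each layer a learning rate proportional to the ratio of its weight norm to its gradient norm, which keeps updates stable at the very large batch sizes this kind of training requires. A minimal numpy sketch of the update for one layer, with illustrative coefficients rather than Tencent's actual configuration:

```python
import numpy as np

def lars_update(w, g, base_lr=0.1, trust=0.001, eps=1e-9):
    """One LARS step for a single layer's weights w with gradient g.

    The local learning rate scales with ||w|| / ||g||, so layers whose
    gradients are disproportionately large take proportionally smaller
    steps. Weight decay and momentum (used in practice) are omitted.
    """
    local_lr = trust * np.linalg.norm(w) / (np.linalg.norm(g) + eps)
    return w - base_lr * local_lr * g

w = np.ones(4)            # toy layer weights, ||w|| = 2
g = np.full(4, 10.0)      # a large gradient, as with huge batches
w_new = lars_update(w, g)
print(w_new)              # each entry nudged slightly below 1.0
```

Note how the step size shrinks automatically as the gradient norm grows, which is what lets the batch size (and hence the GPU count) scale without the training diverging.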
via Import AI 106: Tencent breaks ImageNet training…
View On WordPress
0 notes
edgartechtumbsup · 7 years ago
Text
Latest DeepMind research on computer vision and scene rendering
The latest DeepMind research paper on computer vision [1] and neural scene rendering appears to be groundbreaking and a milestone for the field of computer vision. Anyone already acquainted with the application of deep neural networks to computer vision will know that the training process of those networks requires the input features of an image to be labeled by human work. In…
View On WordPress
0 notes