Hi! New AI Weekly is here! Enjoy your weekend reading AI news and don’t forget to share it with your friends 😉
Chinese AI beats 15 doctors in tumor diagnosis competition – An AI system has wiped the floor with some of China’s top doctors when it comes to diagnosing brain tumors and predicting hematoma expansion. As reported by Xinhua, the system defeated a team of 15 of China’s top doctors by a margin of two to one. The AI, BioMind, was developed by the Artificial Intelligence Research Centre for Neurological Disorders at Beijing Tiantan Hospital, and is the latest in a long line of systems applying the technology to medical image analysis.
No, you don’t need ML/AI. You need SQL
NVidia TensorRT: high-performance deep learning inference accelerator – Chris Gottbrath from NVidia and X.Q. from the Google Brain team talk about NVidia TensorRT, a high-performance, programmable inference accelerator that delivers low latency and high throughput for deep learning applications. Developers can use it to deploy their trained neural networks in production, on servers or on devices, with the full performance that GPUs can offer.
An Introduction to Biomedical Image Analysis with TensorFlow and DLTK – DLTK, the Deep Learning Toolkit for Medical Imaging extends TensorFlow to enable deep learning on biomedical images. It provides specialty ops and functions, implementations of models, tutorials and code examples for typical applications. This blog post serves as a quick introduction to deep learning with biomedical images, where we will demonstrate a few issues and solutions to current engineering problems and show you how to get up and running with a prototype for your problem.
A New Angle on L2 Regularization
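For context, the baseline the article revisits is the standard L2 penalty, which adds λ‖w‖² to the loss and shows up in the gradient as weight decay. A minimal numpy illustration of that baseline (not the article's new angle; the function names are illustrative):

```python
import numpy as np

def l2_regularized_loss(w, base_loss, lam=0.01):
    """Standard L2 regularization: add lam * ||w||^2 to the base loss."""
    return base_loss + lam * np.sum(w ** 2)

def l2_grad_term(w, lam=0.01):
    """Gradient contribution of the penalty: 2 * lam * w.
    Each gradient step shrinks weights toward zero ("weight decay")."""
    return 2 * lam * w
```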
Learning Montezuma’s Revenge from a Single Demonstration – OpenAI trained an agent to achieve a high score of 74,500 on Montezuma’s Revenge from a single human demonstration, better than any previously published result. Their algorithm is simple: the agent plays a sequence of games starting from carefully chosen states from the demonstration, and learns from them by optimizing the game score using PPO, the same reinforcement learning algorithm that underpins OpenAI Five.
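The optimization step above uses PPO's clipped surrogate objective, which caps how far the new policy can move from the old one on any single update. A minimal numpy sketch of that objective (this is the textbook PPO formula, not OpenAI's actual training code; names are illustrative):

```python
import numpy as np

def ppo_clip_objective(ratios, advantages, eps=0.2):
    """Clipped surrogate objective from the PPO paper.

    ratios:     pi_new(a|s) / pi_old(a|s) for each sampled action
    advantages: advantage estimates for the same actions
    eps:        clipping range (0.2 in the original paper)
    """
    unclipped = ratios * advantages
    # Clip the probability ratio to [1 - eps, 1 + eps] before weighting.
    clipped = np.clip(ratios, 1 - eps, 1 + eps) * advantages
    # Elementwise minimum removes the incentive to move the policy
    # beyond the clipping range; average over the batch.
    return np.minimum(unclipped, clipped).mean()
```

With `eps=0.2`, a ratio of 2.0 on a positive-advantage action contributes only as if the ratio were 1.2, which is what keeps each update conservative.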
Nvidia Opens GPUs for AI Work with Containers, Kubernetes – the rise of machine learning and artificial intelligence has put Nvidia on a roll. With GPUs becoming more important than ever, the chip maker is firing on all cylinders. Academic institutions, large cloud providers and enterprises all rely on Nvidia’s GPUs for running ML and HPC workloads. Despite the popularity of and demand for GPUs, installing, configuring, and integrating an end-to-end Nvidia GPU stack is not easy. It all starts with the installation of CUDA and cuDNN; a minor version difference in any of these layers can break the configuration. Add the version incompatibilities and dependencies of deep learning frameworks to the mix, and it becomes a mess.
To ease this process, Nvidia has turned to containers. It has integrated a runtime with Docker that’s specific to GPUs.
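With that runtime installed (the nvidia-docker2 package, plus a host GPU driver), exposing the GPU to a container reduces to a single flag. A typical smoke test, assuming a CUDA base image:

```shell
# Run nvidia-smi inside a CUDA container via the NVIDIA runtime;
# the container needs no GPU driver of its own.
docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi
```

If the host driver and runtime are set up correctly, `nvidia-smi` inside the container lists the host's GPUs.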
Capture the Flag: the emergence of complex cooperative agents – Mastering the strategy, tactical understanding, and team play involved in multiplayer video games represents a critical challenge for AI research. Now, through new developments in reinforcement learning, DeepMind agents have achieved human-level performance in Quake III Arena Capture the Flag, a complex multi-agent environment and one of the canonical 3D first-person multiplayer games. These agents demonstrate the ability to team up with both artificial agents and human players.
Ranked Reward: Enabling Self-Play Reinforcement Learning for Combinatorial Optimization – the authors present the Ranked Reward (R2) algorithm, which enables self-play-style learning on single-player problems by ranking the rewards obtained by a single agent over multiple games to create a relative performance metric. Results from applying R2 to instances of a two-dimensional bin packing problem show that it outperforms generic Monte Carlo tree search, heuristic algorithms and reinforcement learning algorithms not using ranked rewards.
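The core idea is to turn a raw score into a binary reward by comparing it against a percentile of the agent's own recent scores, so the agent is always "playing against" its past self. A minimal sketch of that reranking step (a simplified reading of the idea; the percentile α, buffer management, and tie-breaking are all simplified):

```python
import numpy as np

def ranked_reward(score, recent_scores, alpha=75):
    """Ranked Reward (R2) sketch: compare one game's score against
    the alpha-th percentile of the agent's own recent scores.

    Returns +1 if the agent beat its recent performance, -1 otherwise,
    turning an absolute score into a relative, self-play-style signal.
    """
    threshold = np.percentile(recent_scores, alpha)
    return 1 if score > threshold else -1
```

As the agent improves, the threshold rises with it, so the reward stays informative at every skill level.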
Neural Processes – a neural network (NN) is a parameterised function that can be tuned via gradient descent to approximate a labelled collection of data with high precision. A Gaussian process (GP), on the other hand, is a probabilistic model that defines a distribution over possible functions, and is updated in light of data via the rules of probabilistic inference. GPs are probabilistic, data-efficient and flexible; however, they are also computationally intensive and thus limited in their applicability. The authors introduce a class of neural latent variable models which they call Neural Processes (NPs), combining the best of both worlds. Like GPs, NPs define distributions over functions, are capable of rapid adaptation to new observations, and can estimate the uncertainty in their predictions. Like NNs, NPs are computationally efficient during training and evaluation but also learn to adapt their priors to data. They demonstrate the performance of NPs on a range of learning tasks, including regression and optimisation, and compare and contrast with related models in the literature.
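Structurally, an NP follows an encode–aggregate–decode pattern: each context pair (x, y) is encoded, the encodings are averaged into an order-invariant representation, and a decoder conditioned on that representation predicts y at new inputs. A shapes-only numpy sketch of that data flow (random untrained weights and illustrative dimensions; the real model is trained end-to-end and adds a stochastic latent variable, which this omits):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, y, W):
    # Encode each (x, y) context pair into a representation r_i.
    return np.tanh(np.concatenate([x, y], axis=-1) @ W)

# Illustrative dimensions: 1-D inputs and outputs, 8-D representation.
W_enc = rng.normal(size=(2, 8))   # encoder weights: (x, y) -> r_i
W_dec = rng.normal(size=(9, 1))   # decoder weights: (r, x_target) -> y

# Context set: 5 observed (x, y) pairs from some unknown function.
x_ctx = rng.normal(size=(5, 1))
y_ctx = np.sin(x_ctx)

r_i = encode(x_ctx, y_ctx, W_enc)    # (5, 8) per-pair encodings
r = r_i.mean(axis=0)                 # (8,)  mean-aggregated, order-invariant

# Decode: condition every target input on the same aggregated r.
x_tgt = rng.normal(size=(3, 1))
inp = np.concatenate([np.tile(r, (3, 1)), x_tgt], axis=-1)  # (3, 9)
y_pred = inp @ W_dec                 # (3, 1) predicted means at x_tgt
```

Because the aggregation is a mean, the prediction is invariant to the ordering of the context pairs, which is what lets NPs condition on arbitrary-sized context sets.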
Amazon sales rank data for print and Kindle books – 61,000 unique ASINs and 200,000,000 sales-rank data points (JSON / CSV)