AI Weekly 19 May 2018


Hi! The new AI Weekly is here! Enjoy your weekend reading AI news, and don’t forget to share it with your friends 😉

GENERAL

Why Is the Human Brain So Efficient? – How massive parallelism lifts the brain’s performance above that of AI.
http://bit.ly/2k7cWVg

AI and Compute – OpenAI released an analysis showing that since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.5-month doubling time (by comparison, Moore’s Law had an 18-month doubling period). Since 2012, this metric has grown by more than 300,000x (an 18-month doubling period would have yielded only a 12x increase). Improvements in compute have been a key component of AI progress, so as long as this trend continues, it’s worth preparing for the implications of systems far outside today’s capabilities.
http://bit.ly/2Izo3B1
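The arithmetic behind those figures can be sketched in a few lines of Python. The 3.5-month doubling time and the >300,000x total are from the OpenAI post; the ~64-month window (roughly late 2012 to the time of the analysis) is an assumption chosen to make the numbers line up, not a figure stated in the summary above.

```python
def growth(months: float, doubling_time_months: float) -> float:
    """Total multiplicative growth after `months`, given a doubling time."""
    return 2 ** (months / doubling_time_months)

MONTHS = 64  # assumed window: roughly late 2012 to the analysis date

ai_growth = growth(MONTHS, 3.5)    # 3.5-month doubling, per the OpenAI post
moore_growth = growth(MONTHS, 18)  # 18-month doubling, i.e. Moore's Law

print(f"AI compute growth:  {ai_growth:,.0f}x")   # on the order of 300,000x
print(f"Moore's-Law growth: {moore_growth:.0f}x")  # roughly 12x
```

Under those assumptions the two doubling times reproduce the post’s comparison: about 320,000x versus about 12x over the same period.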

How artificial intelligence is changing science – Artificial intelligence is now part of our daily lives, whether in voice recognition systems or route finding apps. But scientists are increasingly drawing on artificial intelligence to understand society, design new materials and even improve our health.
https://stanford.io/2IxdYso

How the Enlightenment Ends – Philosophically, intellectually—in every way—human society is unprepared for the rise of artificial intelligence.
https://theatln.tc/2KzrdVO

Finland offers free online Artificial Intelligence course to anyone, anywhere – The University of Helsinki hopes that one percent of the Finnish population – some 54,000 people – will take the online course this year. So far 24,000 have signed up.
http://bit.ly/2x1yUCC

LEARNING

Smart Compose: Using Neural Networks to Help Write Emails – Last week Google introduced Smart Compose, a new feature in Gmail that uses machine learning to interactively offer sentence completion suggestions as you type, allowing you to draft emails faster. Building upon technology developed for Smart Reply, Smart Compose offers a new way to help you compose messages — whether you are responding to an incoming email or drafting a new one from scratch.
http://bit.ly/2LbfVZ0

Advances in Semantic Textual Similarity – The recent rapid progress of neural network-based natural language understanding research, especially on learning semantic text representations, can enable truly novel products such as Smart Compose and Talk to Books. It can also help improve performance on a variety of natural language tasks that have limited amounts of training data, such as building strong text classifiers from as few as 100 labeled examples. In this post the authors discuss two papers reporting recent progress on semantic representation research at Google, as well as two new models available for download on TensorFlow Hub that they hope developers will use to build new applications.
http://bit.ly/2k7WRi9

VIDEOS

Deep Learning with Ensembles of Neocortical Microcircuits – Dr. Blake Richards
http://bit.ly/2wVdqa7

PROGRAMMING

TensorFlow Implementation of Generative Adversarial Networks for Extreme Learned Image Compression
http://bit.ly/2rRtfJk

TensorFlow workshop notebooks from Google
http://bit.ly/2IBIxcb

PAPERS

Introducing state of the art text classification with universal language models – This post is a layperson’s introduction to a new paper, which shows how to classify documents automatically with both higher accuracy and lower data requirements than previous approaches. The authors explain in simple terms: natural language processing; text classification; transfer learning; language modeling; and how their approach brings these ideas together.
http://bit.ly/2wTJivX
