Hi! New AI Weekly is here! Enjoy your weekend reading AI news and don’t forget to share it with your friends 😉
Why Is the Human Brain So Efficient? – How massive parallelism lifts the brain’s performance above that of AI.
AI and Compute – OpenAI released an analysis showing that since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.5-month doubling time (by comparison, Moore's Law had an 18-month doubling period). Since 2012, this metric has grown by more than 300,000x (an 18-month doubling period would yield only a 12x increase). Improvements in compute have been a key component of AI progress, so as long as this trend continues, it's worth preparing for the implications of systems far outside today's capabilities.
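The numbers quoted above hang together, and you can check them with a few lines of arithmetic (a quick sketch; the 300,000x figure and both doubling periods are taken straight from the post):

```python
import math

COMPUTE_GROWTH = 300_000      # growth in largest-run training compute since 2012
AI_DOUBLING_MONTHS = 3.5      # doubling time reported by OpenAI
MOORE_DOUBLING_MONTHS = 18    # classic Moore's Law doubling period

# Number of doublings implied by 300,000x growth.
doublings = math.log2(COMPUTE_GROWTH)          # ~18.2 doublings

# Time span those doublings cover at a 3.5-month cadence.
span_months = doublings * AI_DOUBLING_MONTHS   # ~64 months, i.e. roughly 2012-2018

# What an 18-month (Moore's Law) doubling period would yield over the same span.
moore_factor = 2 ** (span_months / MOORE_DOUBLING_MONTHS)

print(f"{doublings:.1f} doublings over {span_months / 12:.1f} years")
print(f"Moore's Law over the same span: ~{moore_factor:.0f}x")
```

Running this recovers the post's comparison: an 18-month doubling cadence over the same window gives only about a 12x increase, against 300,000x observed.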
How artificial intelligence is changing science – Artificial intelligence is now part of our daily lives, whether in voice-recognition systems or route-finding apps. But scientists are increasingly drawing on artificial intelligence to understand society, design new materials and even improve our health.
How the Enlightenment Ends – Philosophically, intellectually—in every way—human society is unprepared for the rise of artificial intelligence.
Finland offers free online Artificial Intelligence course to anyone, anywhere – The University of Helsinki hopes that one percent of the Finnish population – some 54,000 people – will take the online course this year. So far 24,000 have signed up.
Smart Compose: Using Neural Networks to Help Write Emails – Last week Google introduced Smart Compose, a new feature in Gmail that uses machine learning to interactively offer sentence completion suggestions as you type, allowing you to draft emails faster. Building upon technology developed for Smart Reply, Smart Compose offers a new way to help you compose messages — whether you are responding to an incoming email or drafting a new one from scratch.
Advances in Semantic Textual Similarity – The recent rapid progress of neural network-based natural language understanding research, especially on learning semantic text representations, can enable truly novel products such as Smart Compose and Talk to Books. It can also help improve performance on a variety of natural language tasks that have limited amounts of training data, such as building strong text classifiers from as few as 100 labeled examples. In this post, the authors discuss two papers reporting recent progress on semantic representation research at Google, as well as two new models available for download on TensorFlow Hub that they hope developers will use to build new applications.
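Once you have sentence embeddings from a model like those on TensorFlow Hub, semantic similarity is typically scored as the cosine of the angle between the two vectors. A minimal sketch of that scoring step (the 4-dimensional vectors below are made-up stand-ins for real sentence embeddings, which are much higher-dimensional):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings standing in for encodings of three sentences:
# two about cats (semantically close) and one about the stock market.
emb_cat   = np.array([0.9, 0.1, 0.3, 0.2])
emb_kitty = np.array([0.8, 0.2, 0.4, 0.1])
emb_stock = np.array([0.1, 0.9, 0.0, 0.7])

print(cosine_similarity(emb_cat, emb_kitty))  # high: related sentences
print(cosine_similarity(emb_cat, emb_stock))  # low: unrelated sentences
```

The embedding model does the heavy lifting; the similarity score itself is just this one-line geometric comparison, which is also what makes the few-shot text classifiers mentioned above cheap to build on top of pretrained representations.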
Deep Learning with Ensembles of Neocortical Microcircuits – Dr. Blake Richards
TensorFlow Implementation of Generative Adversarial Networks for Extreme Learned Image Compression
TensorFlow workshop notebooks from Google
Introducing state of the art text classification with universal language models – This post is a lay-person's introduction to a new paper, which shows how to classify documents automatically with both higher accuracy and lower data requirements than previous approaches. The authors explain in simple terms: natural language processing; text classification; transfer learning; language modeling; and how their approach brings these ideas together.