Simon praying to the God of Math

Activation functions used in machine learning:

DSC_2981

DSC_2982

DSC_2983
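For readers who want to try these out, here is a minimal NumPy sketch of three of the most common activation functions (sigmoid, tanh and ReLU); this is just an illustration of the textbook definitions, not Simon's notes:

```python
import numpy as np

def sigmoid(x):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # squashes any real number into the range (-1, 1)
    return np.tanh(x)

def relu(x):
    # passes positive values through, zeroes out negative ones
    return np.maximum(0.0, x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(sigmoid(x), tanh(x), relu(x))
```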


DAE neural net and Keras

Simon told me today he was ready to start building his own DAE (Denoising Auto Encoder) neural network. He said he would be using a documentation page about the machine learning library Keras at blog.keras.io/building-autoencoders-in-keras.html. He found this documentation page completely on his own, by searching the web and digging into Python forums. I just watch him google something like “How can I install Keras 1.0?” and find GitHub discussions on the subject that guide him along. Or I see him type “How to install Python on Windows?” and follow the instructions at How-To Geek. Eventually, he came up with a list of steps he needed to complete in order to install Keras: installing Python 3 (after looking up why it should be 3 and not 2), installing pip, installing TensorFlow, and so on. It didn’t work on a Mac laptop, so he tried everything on a PC and it worked! Two of the steps required using the terminal. It was amazing to see him plan ahead, search, and implement the (notoriously difficult) steps completely independently.

He started working on the DAE tonight. He is also working on a Node package that generates `manifest.json` files (for Chrome extensions) at the same time, so I’m not sure he’ll finish soon. “Mom, I’ve got so many things to do! There are so many thoughts in my head!”
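For anyone curious about what a denoising autoencoder actually does: it is trained to take a corrupted input and reconstruct the clean original. A rough Keras sketch of that idea, with dense layers only and toy data, might look like the snippet below. It loosely follows the structure of the Keras autoencoder tutorial Simon found, but it is written against the newer Keras API (not the Keras 1.0 he installed) and is not his actual code:

```python
import numpy as np
from keras.models import Model
from keras.layers import Input, Dense

# toy data: 1000 samples of 784 "pixel" values in [0, 1]
x_clean = np.random.rand(1000, 784)
# corrupt the inputs with Gaussian noise, then clip back into [0, 1]
x_noisy = np.clip(x_clean + 0.3 * np.random.normal(size=x_clean.shape), 0.0, 1.0)

inputs = Input(shape=(784,))
encoded = Dense(32, activation='relu')(inputs)        # compress to 32 features
decoded = Dense(784, activation='sigmoid')(encoded)   # reconstruct the 784 values

autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

# the "denoising" part: map noisy inputs back to the clean originals
autoencoder.fit(x_noisy, x_clean, epochs=5, batch_size=64)
```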

DSC_2911

Simon working on a neural networks paper

Simon was working on a neural networks paper in Jupyter Notebook on Friday evening, but didn’t finish it because the Coding Train live stream started. He says he can no longer continue it without too much copy-pasting from this version into a new one, as his in-the-browser session expired, so I’m posting some screenshots of the unfinished paper below. This is the way Simon teaches himself: he follows lectures and tutorials online and then goes on to write his own “textbook” or record his own “lecture”. Much of the knowledge he acquires on neural networks these days comes from Siraj Raval’s YouTube series “The Math of Intelligence”.

 

Neural Networks Paper Jupyter 2017-11-20 1

Neural Networks Paper Jupyter 2017-11-20 2

Neural Networks Paper Jupyter 2017-11-20 3

Neural Networks Paper Jupyter 2017-11-20 4

Simon building a Perceptron in Processing

Simon has already built a Perceptron before, several months ago, while following along with Daniel Shiffman’s Coding Train channel. This time around, he is writing his own code and doing all the matrix calculations himself. He hasn’t finished programming this network yet, but it’s a good start:

 

DSC_2874
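Simon’s version is written in Processing, but the idea behind a single perceptron fits in a few lines. Here is a rough Python/NumPy sketch of the general technique, with the matrix calculations written out explicitly (my own illustration, not Simon’s code):

```python
import numpy as np

# toy problem: learn the logical OR of two inputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

weights = np.random.randn(2) * 0.1  # one weight per input
bias = 0.0
learning_rate = 0.1

def activate(z):
    # step activation: output 1 if the weighted sum is positive
    return (z > 0).astype(float)

for epoch in range(20):
    # weighted sum for all four samples at once (matrix-vector product)
    guesses = activate(X @ weights + bias)
    errors = y - guesses
    # perceptron learning rule: nudge weights toward reducing the error
    weights += learning_rate * (X.T @ errors)
    bias += learning_rate * errors.sum()

print("learned weights:", weights, "bias:", bias)
print("predictions:", activate(X @ weights + bias))
```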

Doing Matrices in Khan Academy’s Precalculus course:

Schermafbeelding 2017-11-16 om 12.34.36
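The matrix practice feeds directly into the neural network work: a layer’s forward pass is just a matrix product plus a bias vector. A tiny NumPy illustration of that (my own example, not taken from the Khan Academy course):

```python
import numpy as np

# weights of a layer with 3 inputs and 2 outputs
W = np.array([[0.2, -0.5],
              [0.1,  0.4],
              [-0.3, 0.8]])
x = np.array([1.0, 2.0, 3.0])   # one input vector
b = np.array([0.05, -0.1])      # bias for each output

# forward pass of a linear layer: output = x @ W + b
print(x @ W + b)
```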

Simon’s bedtime lectures on neural networks

 

 

There’s a part 3 coming!

DSC_2862

DSC_2863

“Mom, my ClickCharts trial period expired, so I found this Virtual Paradigm Enterprise!” (Simon independently searches for free options to make beautiful diagrams online).

Here is a diagram of an LSTM neural network:

LSTM Cell in Virtual Paradigm Enterprise 19 Nov 2017 2

And an RNN:

RNN Cell in Virtual Paradigm Enterprise 19 Nov 2017
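To connect the diagrams to code: one time step of a plain (vanilla) RNN cell is a single line of math, and the LSTM cell in the first diagram adds input, forget and output gates on top of the same pattern. Below is a small NumPy sketch of the vanilla RNN step, using the standard textbook formulation rather than anything taken from Simon’s diagrams:

```python
import numpy as np

hidden_size, input_size = 4, 3

# randomly initialised weights of a simple (vanilla) RNN cell
W_xh = np.random.randn(hidden_size, input_size) * 0.1   # input -> hidden
W_hh = np.random.randn(hidden_size, hidden_size) * 0.1  # hidden -> hidden
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    # the new hidden state mixes the current input with the previous state
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

h = np.zeros(hidden_size)
for x_t in np.random.randn(5, input_size):  # a sequence of 5 inputs
    h = rnn_step(x_t, h)
print("final hidden state:", h)
```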

Just another day in graphs

Simon loves looking at things geometrically. Even when solving word problems, he tends to see them as a graph. And naturally, since he started doing more math related to machine learning, graphs have occupied an even larger portion of his brain! Below are his notes in Microsoft Paint today (from memory):

Slope of Line:

Slope of Line 15 November 2017

Steepness of Curve:

Steepness of Curve 15 November 2017
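The two notes describe the same idea at two scales: the slope of a straight line is rise over run, and the steepness of a curve at a point is the slope of the tangent there, which can be approximated numerically. A small Python illustration (my own, not copied from Simon’s Paint notes):

```python
# slope of the line through two points (rise over run)
def slope(x1, y1, x2, y2):
    return (y2 - y1) / (x2 - x1)

# steepness of a curve at a point: approximate the derivative
# with a very small step h (central difference)
def steepness(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

print(slope(0, 1, 2, 5))             # 2.0
print(steepness(lambda x: x**2, 3))  # approximately 6
```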

An awesome calculator Simon discovered online at desmos.com/calculator that allows you to make moving and static graphs:

Desmos.com Polynomial 15 Nov 2017

Desmos.com Polynomial 15 Nov 2017 1

Yesterday’s notes on the chi function (something he learned through 3Blue1Brown’s videos on Taylor polynomials):

DSC_2858

Simon following The Math of Intelligence course by Siraj Raval:

DSC_2843

DSC_2840

Simon explains the I, Robot project. How do synthetic literature neural nets work?

 

Today is a big day as – for the first time in human history – a short story has been published that was written by a robot together with a human. And the bot (called AsiBot, because it writes in the style of Isaac Asimov’s I, Robot) was developed in Dutch (!) in Amsterdam (at the Meertens Institute) and in Antwerp (at the Antwerp Centre for Digital Humanities and Literary Criticism), Simon’s two home cities.

The story written by AsiBot and Dutch bestselling author Ronald Giphart forms a new, 10th chapter in Isaac Asimov’s classic I, Robot (which originally contained only 9 chapters). AsiBot was fed 10,000 books in Dutch to master the literary language and can already produce a couple of paragraphs on its own, but a longer coherent story remains out of reach. This is where a human writer, Ronald Giphart, stepped in. It was he who decided which of the sentences written by AsiBot stayed and which were thrown out. The reader doesn’t know which sentences were written (or edited) by the human writer and which are pure robot literature. Starting from November 6, anyone (speaking Dutch) can try writing with AsiBot at www.asibot.nl.

Simon was very excited about this news and recorded a short video where he explains how such “synthetic literature” neural nets work (based on what he learned from Siraj Raval’s awesome YouTube classes):

My phone froze so we had to make the second part as a separate video:
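For readers who want a feel for what is happening under the hood of such a bot: text-generating networks of this kind are typically recurrent networks trained to predict the next character (or word), and new text is produced by feeding those predictions back in one step at a time. The Keras sketch below shows that loop on a toy corpus; it is only an illustration of the general technique, not AsiBot’s actual architecture or code:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# toy corpus; a real "synthetic literature" model would be trained
# on thousands of books, like AsiBot was
text = "to be or not to be that is the question "
chars = sorted(set(text))
char_to_i = {c: i for i, c in enumerate(chars)}

seq_len = 10
# build (input sequence, next character) training pairs, one-hot encoded
X = np.zeros((len(text) - seq_len, seq_len, len(chars)))
y = np.zeros((len(text) - seq_len, len(chars)))
for i in range(len(text) - seq_len):
    for t, c in enumerate(text[i:i + seq_len]):
        X[i, t, char_to_i[c]] = 1
    y[i, char_to_i[text[i + seq_len]]] = 1

model = Sequential([
    LSTM(64, input_shape=(seq_len, len(chars))),  # reads the character sequence
    Dense(len(chars), activation='softmax')       # probability of the next character
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit(X, y, epochs=20, verbose=0)

# generate: repeatedly predict the next character and append it
seed = text[:seq_len]
for _ in range(40):
    x = np.zeros((1, seq_len, len(chars)))
    for t, c in enumerate(seed[-seq_len:]):
        x[0, t, char_to_i[c]] = 1
    seed += chars[np.argmax(model.predict(x, verbose=0))]
print(seed)
```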

Introducing Siraj Raval

Simon has been watching a lot of Siraj Raval’s videos on neural networks lately, brushing up on his Python syntax and his derivatives. He has even been trying the great Jupyter editor at https://try.jupyter.org/, where one can build one’s own neural network and install libraries with pretrained networks.

Just like with Daniel Shiffman’s videos, the remarkable thing about Siraj’s (very challenging) courses is that they also touch upon so many subjects outside programming (like art, music and the stock exchange) and are put together with a sublime sense of humour.

dsc_2125964025814.jpg

 

Simon’s own little neural network

Connected Perceptrons in Processing 26 Jul 2017

This is one of Simon’s most enchanting and challenging projects so far: working on his own little AIs. As I’ve mentioned before, when it comes to discussing AI, Simon is both mesmerized and frightened. He watches Daniel Shiffman’s neural networks tutorials twenty times in a row and practices his understanding of the mathematical concepts underlying the code (linear regression and gradient descent) for hours. Last week, Simon built a perceptron of his own. It was based on Daniel Shiffman’s code, but Simon added his own colors and physics, and played around with the numbers and the bias. You can see Simon working on this project step by step in the six videos below.

His original plan was to build two neural networks that would be connected to each other and communicate, but he has only built one perceptron so far.
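The linear regression and gradient descent Simon practices underneath the perceptron code boil down to a short loop: make a guess with a line, measure the error, and nudge the slope and intercept downhill. A generic NumPy sketch of that loop (not Daniel Shiffman’s or Simon’s code):

```python
import numpy as np

# noisy data scattered around the line y = 2x + 1
x = np.random.rand(100)
y = 2 * x + 1 + 0.1 * np.random.randn(100)

m, b = 0.0, 0.0          # slope and intercept, both start at zero
learning_rate = 0.1

for step in range(1000):
    guess = m * x + b
    error = guess - y
    # gradients of the mean squared error with respect to m and b
    m -= learning_rate * (2 * (error * x).mean())
    b -= learning_rate * (2 * error.mean())

print("learned slope:", m, "intercept:", b)  # should approach 2 and 1
```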

 

 

 

 

 

 

Simon testing Codota

In the videos below, Simon is building a Codota demo in Java. Codota is an AI programming assistant that looks for solutions on GitHub and other public code resources and suggests them in real time based on your code. At the moment, it’s only available for Java and only for three editors (here, Eclipse), so its use is very limited, but their website says that other languages will follow soon. Since Simon normally uses Processing for Java, he can’t really use Codota for most of his projects. It has been an interesting exercise though (and I was surprised at how skillful he is at writing Java in Eclipse, which is quite different from Processing), and a glimpse into the future. There’s no doubt assistants such as Codota will very soon become a common companion. Simon had Codota resolve one error for him and was very happy about that. He said Codota was his friend. He was reluctant to turn its speech functions on, however. Simon has a slight fear of full-blown AI and, at the same time, a fascination with it and a desire to learn how it works.