Simon’s Decision Tree Library

Simon has just created a decision tree library, called “Decision”, that is helpful in building decision trees and forests (machine learning). He has also tried writing unit tests for the first time, and several of them already pass! Once Simon’s library is on GitHub, he also plans to link it to the continuous-integration service CircleCI, so that no merge can happen unless the tests pass. In this video, Simon explains what a decision tree is and shows his library and his test decision trees.
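For readers new to the idea: a decision tree classifies a data point by asking a chain of simple questions about its features, routing the point down the tree until a leaf gives the answer. Here is a minimal Python sketch of that idea (an illustration only; the class names and structure here are made up and are not the API of Simon’s “Decision” library):

```python
# A tiny hand-built decision tree -- an illustration of the concept,
# not Simon's "Decision" library (its actual API may differ).

class Leaf:
    def __init__(self, label):
        self.label = label  # the final answer stored at this leaf

class Node:
    def __init__(self, feature, threshold, left, right):
        self.feature = feature      # which feature this node tests
        self.threshold = threshold  # the split point for that feature
        self.left = left            # subtree taken when feature < threshold
        self.right = right          # subtree taken otherwise

def classify(tree, point):
    """Walk from the root to a leaf, answering one question per node."""
    while isinstance(tree, Node):
        tree = tree.left if point[tree.feature] < tree.threshold else tree.right
    return tree.label

# "Is it lighter than 3 kg? If so, is it shorter than 50 cm?"
tree = Node("weight", 3,
            left=Node("length", 50, left=Leaf("cat"), right=Leaf("snake")),
            right=Leaf("dog"))

print(classify(tree, {"weight": 2, "length": 80}))  # -> snake
```

A decision forest is then just a collection of such trees whose answers are combined by voting.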

Simon’s library on GitHub (with a huge Readme that Simon wrote himself): https://github.com/simon-tiger/decision

Simon’s library on CircleCI: https://circleci.com/gh/simon-tiger/decision/3

Simon’s unit tests:

[Screenshot: Simon’s unit tests, 20 February 2018]

Magic around New Year’s Eve

This magical time of the year, Simon’s craziest, most daring dreams come true! First, his guru from New York University, Daniel Shiffman, sends Simon his book, and the words he writes in it are the most beautiful words anyone has ever addressed to him. Then, on the last day of the awesome year 2017, Simon’s other hero, the glamorous knight of AI Siraj Raval, materialises in our living room, straight from YouTube! Happy New Year full of miracles and discoveries, everyone!

Daniel Shiffman’s book “The Nature of Code”, which Simon had already largely read online and now also reads before bed. It also comforted him recently when he was in pain: he cuddled up on the sofa with this big friendly tome on his lap.

Daniel Shiffman signed the book for Simon:

Siraj Raval stepped out of the YouTube screen straight into our Antwerp apartment on December 31. Simon has been following Siraj’s channel for months, learning about the types of neural networks and the math behind machine learning. It is thanks to Siraj’s explanations that Simon has been able to build his first neural nets:

The Neural Nets are here!

Simon has started building neural networks in Python! For the moment, he has succeeded in making two working neural nets (a perceptron and a feed-forward neural net). He used the sigmoid activation function for both. The code is partially derived from Siraj Raval’s “The Math of Intelligence” tutorials.

[Screenshot: ML Perceptron, 10 December 2017]
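For the curious, here is a minimal sketch of a sigmoid perceptron in Python — my illustration of the general technique, not Simon’s actual code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Perceptron:
    """A single sigmoid neuron: weighted sum of inputs, squashed to (0, 1)."""
    def __init__(self, n_inputs, lr=0.1):
        self.w = np.random.randn(n_inputs)  # random starting weights
        self.b = 0.0                        # bias term
        self.lr = lr                        # learning rate

    def predict(self, x):
        return sigmoid(np.dot(self.w, x) + self.b)

    def train(self, x, target):
        out = self.predict(x)
        # gradient of the squared error, passed back through the sigmoid
        grad = (out - target) * out * (1.0 - out)
        self.w -= self.lr * grad * x
        self.b -= self.lr * grad

# teach it the OR function
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
p = Perceptron(2)
for _ in range(5000):
    for x, t in data:
        p.train(np.array(x, dtype=float), t)

for x, _ in data:
    print(x, round(float(p.predict(np.array(x, dtype=float))), 2))
```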

The feed-forward net was tougher to build:
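Here is a comparable sketch of a small feed-forward net with one hidden layer, trained with backpropagation — again my own illustrative Python (with XOR as an assumed toy task), not Simon’s code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One hidden layer, trained with backpropagation on XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)  # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)  # hidden -> output
lr = 0.5

for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (squared-error loss through the sigmoids)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```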

Simon’s nets run locally (on our home PC), but he will need more computational power for the more complex future projects, so he signed up for a wonderful online resource called FloydHub! FloydHub is a sort of Heroku for deep learning: a Platform-as-a-Service for training and deploying deep learning models in the cloud. It runs on Amazon’s infrastructure, which Simon could have used directly, too, but that would have been a lot more expensive and more tedious to set up.

Simon’s next step will be another supervised learning project, a Recurrent Neural Net that will generate text. He has already started building it and fed it one book to read! In this video he explains how character-based text generators work:
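The core loop of character-based generation is: learn how likely each character is to follow the last few characters, then repeatedly sample a next character and slide the context window forward. The sketch below shows that loop with a simple count-based model standing in for the neural network (Simon’s actual project uses an RNN to learn the distribution; “book.txt” is a placeholder for whichever book he fed it):

```python
import random
from collections import Counter, defaultdict

# A char-RNN learns P(next char | recent chars) with a neural network;
# here a simple count-based model stands in, so the generation loop
# itself stays visible.

def train(text, k=4):
    model = defaultdict(Counter)
    for i in range(len(text) - k):
        model[text[i:i + k]][text[i + k]] += 1
    return model

def generate(model, seed, k=4, length=300):
    out = seed
    for _ in range(length):
        counts = model.get(out[-k:])
        if not counts:
            break  # context never seen during training
        chars, weights = zip(*counts.items())
        out += random.choices(chars, weights=weights)[0]
    return out

text = open("book.txt").read()   # placeholder filename
model = train(text, k=4)
print(generate(model, seed=text[:4], k=4))
```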

Simon explains K-Means Clustering

Simon has prepared this implementation of “K-Means Clustering” in Processing as a gift for Daniel Shiffman, who is planning to talk about this machine learning model in one of his upcoming live sessions on the Coding Train channel.

Simon writes: K-Means Clustering is a type of Machine Learning Model. It’s for “Unsupervised Learning” (meaning you have data with no labels).
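Simon’s implementation is in Processing (linked below); here is a minimal Python sketch of the same algorithm, which alternates between assigning each point to the nearest centroid and moving each centroid to the mean of its cluster:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(cluster):
    """Component-wise mean of a list of points."""
    return tuple(sum(xs) / len(xs) for xs in zip(*cluster))

def k_means(points, k, iters=100):
    centroids = random.sample(points, k)  # start from k random points
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid's cluster
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: dist2(p, centroids[j]))
            clusters[nearest].append(p)
        # update step: each centroid moves to the mean of its cluster
        new = [mean(c) if c else centroids[j] for j, c in enumerate(clusters)]
        if new == centroids:
            break  # converged: assignments no longer change
        centroids = new
    return centroids, clusters

points = [(1, 1), (1.5, 2), (0.5, 1.5), (8, 8), (9, 9), (8.5, 9.5)]
centroids, clusters = k_means(points, k=2)
print(centroids)  # one center near (1, 1.5), one near (8.5, 8.8)
```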

Link to Simon’s code on GitHub: https://github.com/simon-tiger/k-means-clustering

Link to pseudocode by Siraj Raval: https://www.youtube.com/watch?v=9991JlKnFmk&spfreload=1

Simon working on a neural networks paper

Simon was working on a neural networks paper in Jupyter Notebook on Friday evening, but didn’t finish it because the Coding Train live stream started. He says he can no longer continue without too much copy-pasting from this version into a new one, as his in-browser session expired, so I’m posting some screenshots of the unfinished paper below. This is the way Simon teaches himself: he follows lectures and tutorials online and then goes on to write his own “textbook” or record his own “lecture”. Much of the knowledge he is acquiring about neural networks these days comes from Siraj Raval’s YouTube series “The Math of Intelligence”.

[Screenshots: four pages of Simon’s neural networks paper in Jupyter Notebook, 20 November 2017]

Simon’s bedtime lectures on neural networks

There’s a part 3 coming!

“Mom, my ClickCharts trial period expired, so I found this Visual Paradigm Enterprise!” (Simon independently searches for free options to make beautiful diagrams online.)

Here is a diagram of an LSTM neural network:

[Diagram: LSTM cell, made in Visual Paradigm Enterprise, 19 November 2017]
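For reference, these are the standard LSTM cell equations such a diagram depicts (the usual textbook notation, not transcribed from Simon’s diagram): the forget, input, and output gates $f_t$, $i_t$, $o_t$ decide what to erase from, write to, and read from the cell state $c_t$.

$$
\begin{aligned}
f_t &= \sigma(W_f\,[h_{t-1}, x_t] + b_f) \\
i_t &= \sigma(W_i\,[h_{t-1}, x_t] + b_i) \\
\tilde{c}_t &= \tanh(W_c\,[h_{t-1}, x_t] + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
o_t &= \sigma(W_o\,[h_{t-1}, x_t] + b_o) \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
$$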

And an RNN:

[Diagram: RNN cell, made in Visual Paradigm Enterprise, 19 November 2017]
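A plain (vanilla) RNN cell, by contrast, has no gates at all; its entire state update is (standard notation again, not taken from the diagram itself):

$$
h_t = \tanh(W_h\,h_{t-1} + W_x\,x_t + b)
$$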

Simon’s own little neural network

[Screenshot: Connected Perceptrons in Processing, 26 July 2017]

This is one of Simon’s most enchanting and challenging projects so far: working on his own little AIs. As I’ve mentioned before, when it comes to discussing AI, Simon is both mesmerized and frightened. He watches Daniel Shiffman’s neural networks tutorials twenty times in a row and practices his understanding of the mathematical concepts underlying the code (linear regression and gradient descent) for hours. Last week, Simon built a perceptron of his own. It was based on Daniel Shiffman’s code, but Simon added his own colors and physics, and played around with the numbers and the bias. You can see Simon working on this project step by step in the six videos below.

His original plan was to build two neural networks that would be connected to each other and communicate, but so far he has built only one perceptron.

[Six videos: Simon building his perceptron step by step]
Simon gets serious with Linear Regression (Machine Learning)

Simon has been working on a very complicated topic for the past couple of days: Linear Regression. In essence, it is the math behind much of machine learning: fitting a line to data is one of the simplest forms of learning from examples.

Simon was watching Daniel Shiffman’s tutorials on Linear Regression that form session 3 of his Spring 2017 ITP “Intelligence and Learning” course (ITP stands for Interactive Telecommunications Program and is a graduate programme at NYU’s Tisch School of the Arts).

Daniel Shiffman’s current weekly live streams are also largely devoted to neural networks, so in a way, Simon has been preoccupied with related material for weeks now. This time around, however, he decided to make his own versions of Daniel Shiffman’s lectures (a whole Linear Regression playlist). He has been busy with in-camera editing and has written a summary of one of the Linear Regression tutorials (he actually sat there transcribing what Daniel said) in the form of an interactive webpage! This Linear Regression webpage is online at https://simon-tiger.github.io/linear-regression/, and the Gradient Descent addendum Simon made later is at https://simon-tiger.github.io/linear-regression/gradient_descent/interactive/ and https://simon-tiger.github.io/linear-regression/gradient_descent/random/

And here come the videos from Simon’s Linear Regression playlist, the first one being an older video you may have already seen:

Here Simon shows his interactive Linear Regression webpage:

A lecture on Anscombe’s Quartet (a famous example from statistics: four datasets with nearly identical summary statistics that look completely different when plotted):

Then comes a lecture on Scatter Plots and Residual Plots, as well as combining the Residual Plot with Anscombe’s Quartet, based upon video 3.3 of Intelligence and Learning. Simon made a mistake graphing the residual plot but corrected himself in an addendum (at the end of the video):

Polynomial Regression:

And finally, Linear Regression with the Gradient Descent algorithm and how the learning works, based upon Daniel Shiffman’s tutorial 3.4 of Intelligence and Learning:
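As a companion to the video, here is a minimal Python sketch of linear regression trained with gradient descent — my illustration of the technique Simon covers, with made-up toy data:

```python
import random

# toy data scattered around the line y = 2x + 1
data = [(x, 2 * x + 1 + random.uniform(-0.5, 0.5)) for x in range(20)]

m, b = 0.0, 0.0   # slope and intercept, both starting at zero
lr = 0.005        # learning rate

for _ in range(5000):
    # gradients of the mean squared error with respect to m and b
    dm = sum(2 * (m * x + b - y) * x for x, y in data) / len(data)
    db = sum(2 * (m * x + b - y) for x, y in data) / len(data)
    m -= lr * dm  # step downhill along each gradient
    b -= lr * db

print(m, b)  # should end up close to 2 and 1
```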

Simon explains Linear Regression (Machine Learning)

In the two videos below, Simon writes a JavaScript program using Linear Regression in the Atom editor and gives a whiteboard lecture on the Linear Regression algorithm, both following a tutorial on Linear Regression by Daniel Shiffman.

Simon made a mistake in the formula using the sigma operator. He corrected it later. It should be i=1 (not i=0).
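For reference, the standard least-squares formulas (the textbook form, with the sum correctly running from $i = 1$ to $n$) are:

$$
m = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n} (x_i - \bar{x})^2}, \qquad b = \bar{y} - m\,\bar{x}
$$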
