# The Neural Nets are here!

Simon has started building neural networks in Python! For the moment, he has succeeded in making two working neural nets (a Perceptron and a Feed Forward neural net). He used the sigmoid activation function for both. The code is partially derived from Siraj Raval’s “The Math of Intelligence” tutorials.
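To give an idea of what a sigmoid perceptron computes, here is a minimal sketch in plain Python. This is not Simon’s code; the function names (`sigmoid`, `perceptron_output`) and the example weights are mine, chosen only to illustrate the idea of a weighted sum passed through the sigmoid squashing function.

```python
import math

def sigmoid(x):
    # Squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def perceptron_output(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias, passed through the sigmoid
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

print(sigmoid(0))  # 0.5 -- the sigmoid's midpoint
print(perceptron_output([1.0, 0.0], [0.5, -0.5], 0.0))
```

Because the sigmoid is smooth, its output can be interpreted as a confidence between 0 and 1 rather than a hard yes/no, which is what makes gradient-based training possible.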

The Feed Forward net was tougher to build:

Simon’s nets run locally (on our home PC), but he will need more computational power for more complex future projects, so he signed up for a wonderful online resource called FloydHub! FloydHub is sort of a Heroku for deep learning, a Platform-as-a-Service for training and deploying deep learning models in the cloud. It runs on Amazon’s infrastructure, which Simon could have used directly, too, but that would have been a lot more expensive and tedious to set up.

Simon’s next step will be another supervised learning project, a Recurrent Neural Net that will generate text. He has already started building it and fed it one book to read! In this video he explains how character-based text generators work:
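The core idea behind character-based text generation is simple: the model looks at the characters it has seen so far and predicts the next one. A real recurrent net learns this with trained weights; the sketch below uses only next-character frequency counts (a far simpler model than an RNN), but it shows the same character-level loop. The names (`train_char_model`, `generate`) and the tiny training string are mine.

```python
import random
from collections import defaultdict, Counter

def train_char_model(text):
    # Count which character tends to follow each character in the text
    counts = defaultdict(Counter)
    for current, nxt in zip(text, text[1:]):
        counts[current][nxt] += 1
    return counts

def generate(model, seed, length, rng):
    # Repeatedly sample a likely next character and append it
    out = seed
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        chars = list(followers.keys())
        weights = list(followers.values())
        out += rng.choices(chars, weights=weights)[0]
    return out

model = train_char_model("hello world, hello neural nets")
print(generate(model, "he", 20, random.Random(0)))
```

An RNN replaces the frequency table with a hidden state that remembers context further back than one character, which is why it can produce much more coherent text after reading a whole book.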

# Simon building a Perceptron in Processing

Simon has already built a Perceptron before, several months ago, while following along with Daniel Shiffman’s Coding Train channel. This time around, he is writing his own code and doing all the matrix calculations himself. He hasn’t finished programming this network yet, but it’s a good start:
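Doing the matrix calculations yourself mostly means implementing matrix multiplication with plain loops instead of calling a library. A minimal sketch (my own illustration, not Simon’s code) of multiplying a weight matrix by an input column vector:

```python
def mat_mul(a, b):
    # Multiply matrix a (m x n) by matrix b (n x p) using plain loops
    rows, shared, cols = len(a), len(b), len(b[0])
    assert all(len(row) == shared for row in a), "inner dimensions must match"
    result = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            for k in range(shared):
                result[i][j] += a[i][k] * b[k][j]
    return result

weights = [[0.5, -0.5],
           [0.25, 0.75]]
inputs = [[1.0],
          [2.0]]
print(mat_mul(weights, inputs))  # [[-0.5], [1.75]]
```

In a neural net, each layer’s forward pass is exactly this: weights times inputs, then an activation function applied to every entry of the result.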

Doing Matrices in Khan Academy’s Precalculus course:

# Simon’s own little neural network

This is one of Simon’s most enchanting and challenging projects so far: working on his own little AIs. As I’ve mentioned before, when it comes to discussing AI, Simon is both mesmerized and frightened. He watches Daniel Shiffman’s neural networks tutorials twenty times in a row and practices his understanding of the mathematical concepts underlying the code (linear regression and gradient descent) for hours. Last week, Simon built a perceptron of his own. It was based on Daniel Shiffman’s code, but Simon added his own colors and physics, and played around with the numbers and the bias. You can see Simon working on this project step by step in the six videos below.
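Linear regression with gradient descent, the math underlying that code, fits a line y = m·x + b by repeatedly nudging m and b downhill along the error surface. A minimal sketch, with my own function name and default step settings:

```python
def fit_line(points, rate=0.01, steps=5000):
    # Gradient descent on the mean squared error of y = m*x + b
    m, b = 0.0, 0.0
    n = len(points)
    for _ in range(steps):
        dm = db = 0.0
        for x, y in points:
            error = (m * x + b) - y
            dm += 2 * error * x / n   # gradient of MSE with respect to m
            db += 2 * error / n       # gradient of MSE with respect to b
        m -= rate * dm
        b -= rate * db
    return m, b

m, b = fit_line([(1, 2), (2, 4), (3, 6)])
print(m, b)  # close to 2 and 0, since the points lie on y = 2x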

His original plan was to build two neural networks that would be connected to each other and communicate, but so far he has only built one perceptron.

# Multiple Perceptrons. XOR.

Simon explains how to solve XOR in a simple neural network with multiple perceptrons. Based on Daniel Shiffman’s live stream number 98 on neural networks.
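The reason XOR needs multiple perceptrons is that it is not linearly separable: no single line can split its true cases from its false ones, so no single perceptron can compute it. Combining three perceptrons does work. A minimal sketch with hand-picked weights (my own illustration; a trained network would learn weights like these):

```python
def step_perceptron(inputs, weights, bias):
    # Classic perceptron: weighted sum plus bias, then a hard threshold
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

def xor(a, b):
    # XOR = AND(OR(a, b), NAND(a, b)): two hidden perceptrons
    # feed their outputs into a third
    or_out = step_perceptron([a, b], [1, 1], -0.5)
    nand_out = step_perceptron([a, b], [-1, -1], 1.5)
    return step_perceptron([or_out, nand_out], [1, 1], -1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```

This layering is the whole point of the example: stacking perceptrons lets the network carve out regions that a single straight line cannot.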

# Neural Networks Coding Challenge

Simon completes the Neural Networks Coding Challenge (in Processing, Java) that he had followed in the Intelligence and Learning Livestream last Friday. In the videos below he also talks about what neural networks are and tries to add a line object (something he had suggested in the live chat).

# Neural Networks. Perceptron and Perceptron Steering in JavaScript

Simon has translated Examples 10.1 (Perceptron) and 10.2 (Perceptron Steering) from Daniel Shiffman’s Nature of Code book, Chapter 10 (Neural Networks) into JavaScript on Codepen.io. Both projects are available online; you can play with them and see the code on Simon’s Neural Networks collection page at https://codepen.io/collection/DbBqJj/

Simon plans to place more JavaScript translations of neural network examples from Daniel Shiffman’s book on the same page in the course of this week.

The first of the videos below shows Simon talking about his translation of the Perceptron. In the second video, he is showing the Perceptron Steering project, a combination of steering behaviours and a neural network (the autonomous agent in the program gets a “brain” with one “neuron” that allows it to seek the target closest to the moving circle).
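The one-“neuron” brain in a steering perceptron works roughly like this: each target contributes one input (the pull vector toward it), the neuron outputs a weighted sum of those pulls as the steering force, and the perceptron learning rule nudges each weight in proportion to its input and the error. A heavily simplified sketch in Python rather than the original JavaScript; all the function names and the training-rule details here are mine:

```python
def seek_forces(position, targets):
    # One input per target: the vector pointing from the agent to it
    return [(tx - position[0], ty - position[1]) for tx, ty in targets]

def neuron_output(forces, weights):
    # The single neuron's output is the weighted sum of the pull vectors
    fx = sum(f[0] * w for f, w in zip(forces, weights))
    fy = sum(f[1] * w for f, w in zip(forces, weights))
    return fx, fy

def train_step(position, desired, targets, weights, rate=0.001):
    # Perceptron rule: each weight moves in proportion to its own input
    # times the error between the desired and actual steering output
    forces = seek_forces(position, targets)
    fx, fy = neuron_output(forces, weights)
    ex, ey = desired[0] - fx, desired[1] - fy
    return [w + rate * (ex * f[0] + ey * f[1])
            for f, w in zip(forces, weights)]

# One training step: the desired force points at the first target,
# so that target's weight grows while the other stays put
w = train_step((0.0, 0.0), (1.0, 0.0), [(1.0, 0.0), (0.0, 1.0)], [0.0, 0.0])
print(w)
```

Over many frames the weights of targets that reduce the error grow, which is why the agent ends up favouring the nearest target without ever being told which one that is.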