More Numberphile-inspired stuff! 

Simon has been studying Mersenne primes (2^n – 1) and their relation to perfect numbers via the Numberphile channel, where he heard Matt Parker say that no one has proved there are no odd perfect numbers (i.e., that perfect numbers are always even). In the video below, Simon tries to prove that all perfect numbers are even. Here is Simon’s proof: when listing the factors of a perfect number, you start at 1 and keep doubling, but when you reach one above a Mersenne prime, you switch to the Mersenne prime and then keep doubling again. Since doubling 1 gives 2, 2 is ALWAYS a factor of any perfect number, which makes them all even (by definition, an even number is one divisible by 2):

 

dsc_3541483407704.jpg
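Simon’s doubling pattern matches Euclid’s construction: whenever 2^p – 1 is a Mersenne prime, 2^(p–1) × (2^p – 1) is a perfect number. A quick Python sketch (my illustration, not Simon’s code) shows the construction and the factor pattern he describes:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def perfect_from_mersenne(p):
    """Euclid's construction: if 2^p - 1 is prime, 2^(p-1) * (2^p - 1) is perfect."""
    m = 2**p - 1                # candidate Mersenne prime
    if not is_prime(m):
        return None
    return 2**(p - 1) * m

def factors(n):
    """Proper divisors of n (everything below n that divides it)."""
    return [d for d in range(1, n) if n % d == 0]

print(perfect_from_mersenne(2))   # 6  (2^2 - 1 = 3 is prime, so 2 * 3 = 6)
print(factors(28))                # [1, 2, 4, 7, 14] -- doubling, then 7, then doubling
```

The factor list of 28 shows exactly the pattern Simon noticed: 1, 2, 4 (doubling), then the Mersenne prime 7, then 14 (doubling again), and 2 is always in there.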

More topics Simon learned about from the Numberphile channel included:

The Stern-Brocot Sequence:

Stern-Brocot Sequence 16 Jan 2018
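The sequence behind the Stern-Brocot tree is Stern’s diatomic sequence: a(0) = 0, a(1) = 1, a(2n) = a(n), and a(2n+1) = a(n) + a(n+1). A small Python sketch (my illustration, not Simon’s code):

```python
def stern(n):
    """Stern's diatomic sequence: a(0)=0, a(1)=1,
    a(2n)=a(n), a(2n+1)=a(n)+a(n+1)."""
    if n < 2:
        return n
    if n % 2 == 0:
        return stern(n // 2)
    return stern(n // 2) + stern(n // 2 + 1)

print([stern(n) for n in range(12)])
# [0, 1, 1, 2, 1, 3, 2, 3, 1, 4, 3, 5]
```

A famous property: the ratios stern(n)/stern(n+1) list every positive fraction exactly once, in lowest terms.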

Prime Factors:

Prime Factors 16 Jan 2018
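Prime factorization can be sketched with simple trial division; this Python version (an illustration, not Simon’s program) repeatedly divides out the smallest remaining factor:

```python
def prime_factors(n):
    """Trial division: divide out each prime factor as many times as it fits."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever is left over is itself prime
        factors.append(n)
    return factors

print(prime_factors(360))   # [2, 2, 2, 3, 3, 5]
```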

Checking Mersenne primes using the Lucas-Lehmer sequence. Simon’s desktop could only calculate this far:

Checking Mersenne Primes 16 Jan 2018
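The Lucas-Lehmer test says that for an odd prime p, 2^p – 1 is prime exactly when s(p–2) ≡ 0 (mod 2^p – 1), where s(0) = 4 and s(k) = s(k–1)² – 2. A Python sketch of the computation (not Simon’s code):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test for an odd prime exponent p:
    2^p - 1 is prime iff s(p-2) == 0 mod (2^p - 1),
    with s(0) = 4 and s(k) = s(k-1)^2 - 2."""
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Test odd prime exponents only (the test is defined for those).
primes = [3, 5, 7, 11, 13, 17, 19, 23]
print([p for p in primes if lucas_lehmer(p)])   # [3, 5, 7, 13, 17, 19]
```

Note that 2^11 – 1 = 2047 = 23 × 89 fails the test even though 11 is prime, which is why each exponent has to be checked individually.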

The 10958 problem. Natural numbers from 0 to 11111 are written in terms of the digits 1 to 9 in two different ways: first with the digits in increasing order from 1 to 9, and second in decreasing order. This is done using the operations of addition, subtraction, multiplication, division, and exponentiation. In both cases there are no missing numbers except one, namely 10958 in the increasing case (Source). The photo below comes from the source paper (it was not typed by Simon), but it is something he studied carefully:

10958 Problem 17 Jan 2018

Simon’s notes on the 10958 problem:

dsc_3552884572610.jpg
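To get a feel for the rules of the problem, here is a small Python illustration (these particular identities for 100 are well-known examples, not taken from Simon’s notes): the digits 1 through 9 appear once each, in increasing or decreasing order, joined only by arithmetic operations.

```python
# Digits 1..9 in increasing order, arithmetic operations only:
increasing = "1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 * 9"
# The same target with the digits in decreasing order:
decreasing = "9 * 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1"

print(eval(increasing))   # 100
print(eval(decreasing))   # 100
```

The puzzle is that such an expression exists for every number from 0 to 11111 in the increasing case — except 10958.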

The Magic Square (adding up the numbers on the sides, diagonals, or corners always results in the number you picked; it works for numbers between 21 and 65):

dsc_3526717765599.jpg

dsc_35251678381008.jpg

Simon also got his little sis interested in the Magic Square:

dsc_35351933176286.jpg

dsc_3539119604192.jpg

dsc_35441164956673.jpg

And, of course, the Square-Sum problem, which we’ve already talked about in the previous post.

dsc_3500164521074.jpg

Simon’s 3D version of the Square-Sum problem:

Square-Sum Problem 3D 17 Jan 2018
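The Square-Sum problem asks whether the numbers 1 to n can be arranged so that every adjacent pair sums to a perfect square. Simon solved it manually and in JavaScript; here is a backtracking sketch in Python (my illustration, not Simon’s code):

```python
from math import isqrt

def square_sum_path(n):
    """Backtracking search for an ordering of 1..n where every
    adjacent pair sums to a perfect square."""
    def is_square(x):
        return isqrt(x) ** 2 == x

    # For each number, precompute which others it may sit next to.
    neighbors = {i: [j for j in range(1, n + 1)
                     if j != i and is_square(i + j)]
                 for i in range(1, n + 1)}

    def extend(path, remaining):
        if not remaining:
            return path
        for nxt in neighbors[path[-1]]:
            if nxt in remaining:
                result = extend(path + [nxt], remaining - {nxt})
                if result:
                    return result
        return None

    for start in range(1, n + 1):
        result = extend([start], set(range(1, n + 1)) - {start})
        if result:
            return result
    return None

path = square_sum_path(15)
print(path)   # one valid ordering of 1..15
```

n = 15 is the first case the Numberphile video tackles; the search is really a hunt for a Hamiltonian path in the graph whose edges connect numbers that sum to a square.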


The Square-Sum Problem

Simon has become a full-blown Numberphile fan over the past couple of days. He had already watched two Matt Parker videos before, but it was this week that he got seriously hooked on the channel, and it all started with the Square-Sum Problem video!

Simon recorded and edited two videos of his own (in OBS) trying to solve the Square-Sum Problem, manually and using JavaScript code:

 

 

Simon’s Fibonacci function and Fibonacci counter in p5.js

Simon came up with this Fibonacci function while taking a walk downtown:

Schermafbeelding 2017-12-23 om 02.41.53

f(0) = 0

f(1) = 1

f(n) = f(n-1)+f(n-2)
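Simon’s definition translates directly into code; here is a Python version (his own program was in JavaScript/p5.js):

```python
def f(n):
    """Direct translation of the recursive definition above."""
    if n < 2:
        return n        # f(0) = 0, f(1) = 1
    return f(n - 1) + f(n - 2)

print([f(n) for n in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```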

When we got home, he used the function to build a Fibonacci counter in p5.js:

You can play with Simon’s Fibonacci counter online at: https://alpha.editor.p5js.org/simontiger/sketches/Skhr3o8Gf

The idea about the Fibonacci function struck Simon when he was looking down at the cobbles under his feet. “Look, Mom! It’s a golden rectangle!”, he shouted:

DSC_3157

He had read that the golden ratio has a direct connection to the Fibonacci sequence. The same evening, he took out his compasses to draw a golden rectangle (this time not his own invention, but following the steps from his Murderous Maths book):

DSC_3168

DSC_3169

If you turn the page, the smaller rectangle is a golden rectangle as well, and if you slice a square off of it, the remaining rectangle will also have the golden proportions. You can continue doing this infinitely. The sizes of the rectangles correspond exactly to the numbers in the Fibonacci sequence, which makes these drawings an illustration of the sequence.

DSC_3170

The next day, Simon showed his function to his math teacher. Below are the Fibonacci sequence numbers he got from his self-made JavaScript program. After a certain point, the computer started taking too long to compute the next number in the sequence (several seconds per number), but it didn’t crash.

DSC_3164
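The slowdown is expected: the naive recursive definition recomputes the same values over and over, so its running time grows exponentially with n. Caching the results (memoization) fixes this; a Python sketch of the idea (not Simon’s code):

```python
from functools import lru_cache

@lru_cache(maxsize=None)     # remember every result already computed
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# With caching, each value is computed only once, so even large
# indices return instantly instead of taking seconds.
print(fib(200))
```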

Live Stream #4 on December 14. Living Code > Vectors.

Simon debuted with his own coding course last week! The course is called “Living Code” and Simon has already planned all its sessions for the year ahead. He is going to teach the course as part of his live streams (once every two weeks, on Thursday evenings at 17:00 CET), although not every live stream will include a Living Code session.

In the first Living Code session (that was live on December 14), Simon introduced the course and its Chapter 1: Vectors. He talked about mathematical operations with vectors, their magnitude, acceleration and normalisation.
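The vector operations from the session can be sketched in a few lines; here is a minimal Python version (the course itself uses p5.js, whose createVector objects have the same operations):

```python
from math import sqrt

class Vector:
    """Minimal 2D vector with the operations covered in the session."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def add(self, other):
        return Vector(self.x + other.x, self.y + other.y)

    def sub(self, other):
        return Vector(self.x - other.x, self.y - other.y)

    def mult(self, k):
        return Vector(self.x * k, self.y * k)

    def mag(self):
        """Magnitude: the vector's length, by Pythagoras."""
        return sqrt(self.x**2 + self.y**2)

    def normalize(self):
        """Scale to length 1 while keeping the direction (heading)."""
        m = self.mag()
        return Vector(self.x / m, self.y / m) if m else Vector(0, 0)

v = Vector(3, 4)
print(v.mag())              # 5.0
print(v.normalize().mag())  # 1.0 (up to floating-point rounding)
```

Acceleration in the physics part of the session is just another vector added onto a velocity vector each frame.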

There were no crashes or technical issues this time, and Simon remained amazingly focused and organised during the stream; he had actually created a schedule for himself beforehand (without any prompting on my part):

Timestamps for this live session (new):
0:00:00 – 0:05:00: Announcements
0:05:00 – 0:15:00: Starting Question & Answer
0:15:00 – 0:20:00: Intro to Living Code
0:20:00 – 0:30:00: What is a Vector?
0:30:00 – 0:40:00: Vector math: subtract, multiply, divide
0:40:00 – 0:50:00: Vector math: magnitude, heading, normalize
0:50:00 – 1:00:00: Physics: acceleration, (maybe) jerk
1:00:00 – 1:10:00: Ending Question & Answer

Please subscribe to our YouTube channel and you won’t miss a video or a live session!

Below is the archived version of Simon’s first Living Code course lessons:

The Neural Nets are here!

Simon has started building neural networks in Python! For the moment, he has succeeded in making two working neural nets (a Perceptron and a Feed Forward neural net). He used the sigmoid activation function for both. The code is partially derived from Siraj Raval’s “The Math of Intelligence” tutorials.

ML Perceptron 10 Dec 2017
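A sigmoid perceptron of this kind can be sketched in pure Python. This is my illustration of the general shape of such code (gradient steps using the sigmoid derivative), not Simon’s actual program:

```python
import math
import random

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Tiny training set: the target is simply the first input value.
data = [([0, 0, 1], 0), ([1, 1, 1], 1), ([1, 0, 1], 1), ([0, 1, 1], 0)]

random.seed(1)
weights = [random.uniform(-1, 1) for _ in range(3)]

for _ in range(10000):
    for inputs, target in data:
        out = sigmoid(sum(w * i for w, i in zip(weights, inputs)))
        err = target - out
        # Gradient step, scaled by the sigmoid derivative out * (1 - out).
        weights = [w + err * out * (1 - out) * i
                   for w, i in zip(weights, inputs)]

for inputs, target in data:
    out = sigmoid(sum(w * i for w, i in zip(weights, inputs)))
    print(inputs, round(out), target)   # predictions match the targets
```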

The FF was tougher to build:

Simon’s nets run locally (on our home PC), but he will need more computational power for more complex future projects, so he signed up for a wonderful online resource called FloydHub! FloydHub is a sort of Heroku for deep learning: a Platform-as-a-Service for training and deploying deep learning models in the cloud. It runs on Amazon’s infrastructure, which Simon could also use directly, but that would have been a lot more expensive and more tedious to set up.

Simon’s next step will be another supervised learning project, a Recurrent Neural Net that will generate text. He has already started building it and fed it one book to read! In this video he explains how character-based text generators work:

Cellular Automata

Simon’s new take on cellular automata:

Wolfram CA 9 Dec 2017

Some results would make fancy knitting patterns!

In case you wonder, what on earth are cellular automata:

A cellular automaton (pl. cellular automata, abbrev. CA) is a discrete model studied in computer science, mathematics, physics, theoretical biology and microstructure modeling.

A cellular automaton consists of a regular grid of cells, each in one of a finite number of states, such as on and off. The grid can be in any finite number of dimensions. For each cell, a set of cells called its neighborhood is defined relative to the specified cell. An initial state (time t = 0) is selected by assigning a state for each cell. A new generation is created (advancing t by 1), according to some fixed rule (generally, a mathematical function) that determines the new state of each cell in terms of the current state of the cell and the states of the cells in its neighborhood.
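The update rule is easy to sketch for the one-dimensional, two-state case (Wolfram’s elementary cellular automata, like the ones in Simon’s video). A Python illustration (not Simon’s code):

```python
def step(cells, rule):
    """One generation of an elementary (1D, two-state) CA.
    Each cell's new state is read off from the rule number, indexed
    by the 3-bit pattern of its left neighbor, itself, and its right
    neighbor (with wrap-around at the edges)."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

# Rule 30 from a single live cell -- the classic Wolfram example.
cells = [0] * 15
cells[7] = 1
for _ in range(8):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, 30)
```

The rule number itself encodes the lookup table: bit k of 30 is the new state for the neighborhood whose three cells spell out k in binary.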

The concept was originally discovered in the 1940s by Stanislaw Ulam and John von Neumann while they were contemporaries at Los Alamos National Laboratory. While studied by some throughout the 1950s and 1960s, it was not until the 1970s and Conway’s Game of Life, a two-dimensional cellular automaton, that interest in the subject expanded beyond academia. In the 1980s, Stephen Wolfram engaged in a systematic study of one-dimensional cellular automata, or what he calls elementary cellular automata; his research assistant Matthew Cook showed that one of these rules is Turing-complete. Wolfram published A New Kind of Science in 2002, claiming that cellular automata have applications in many fields of science. These include computer processors and cryptography. (Wikipedia)

Shortest path finder in Processing

Simon created a wonderful project in Processing – a path finder that looks for the shortest path to reach the green cell, avoiding the obstacles. Every time the path finder fails, it tries again. Simon also built a counter for the number of tries. The next step is turning the grid into a game environment and training the path finder, Simon says, i.e. applying reinforcement learning. Simon thinks he should use the Q function in this stochastic environment, but is still very timid about its implementation. In the third video below, he explains how this project could help him take his first cautious steps in the direction of machine learning.