This blog is about Simon, a gifted young mathematician and programmer who had to move from Amsterdam to Antwerp to be able to study at the level that fits his talent, i.e. through homeschooling. Visit https://simontiger.com

Simon’s latest independent coding project involved some biology lessons! He loves the channel Primer by Justin Helps and watched his evolution series many times, studying the rules for species’ survival and multiplication. This resulted in two interactive evolution simulations, in both of which Simon recreated the rules he learned. The first simulation doesn’t involve natural selection and is based on these two videos: Simulating Competition and Logistic Growth and Mutations and the First Replicators.

Simon has programmed this game of Tic-Tac-Tic-Tac-Toe-Toe in p5.js from scratch. He and his sister have had hours of fun playing it (and she has turned out to be better at this strategic game):

In every live session, Daniel Shiffman mentions Simon several times, usually because Simon gives good feedback and advice. On the other end, Simon is invigorated and jumps about the room, sometimes resulting in serious bumps against the furniture.

This has been one of Simon’s most ambitious (and successful) projects so far and a beautiful grand finale of 2019, also marking his channel reaching 1K subscribers. The project – approximating Euler’s number (e) in a very weird way – is based upon a Putnam exam puzzle that Simon managed to prove:

The main part of the project was inspired by 3Blue1Brown Grant Sanderson’s guest appearance on Numberphile called Darts in Higher Dimensions, showing how one’s expected score would end up being e to the power of pi/4. Simon automated the game and used the visualization to approximate e. Below is the main video, Approximating pi and e with Randomness. You can run the project online at: https://editor.p5js.org/simontiger/present/fNl0aoDtW

Simon saw a prototype of this Galton Board in a video about maths toys (it works similarly to a sand timer in a see-through container). He created his digital simulation using the p5.js online editor, free for everyone to enjoy:

I’ve been terrible at keeping this blog up to date. One of Simon’s best projects in December 2019 was creating a chess robot, and I haven’t even shared it here.

We were joking that this is Simon’s baby and her name is Chessy. Simon also made an improved version with a drop-down menu for choosing a difficulty level of 1 to 5 steps ahead (warning: levels 4 and 5 may run quite slowly): https://chess-ai-user-friendly--simontiger.repl.co/

This amazing sentence is generated by a Markov text-generation algorithm. What is a Markov algorithm? Simply put, it extracts rules from a source text and then generates a new text that also follows those rules. The rules are often called the Markov Blanket, and the new text is also called the Markov Chain. OK, how does this all work?

Let’s take an example: let’s consider the source text to be “Hello, world!”. Then we pick a number called the order. The higher the number, the more sense the text makes. We’ll pick 1 for the first examples; we’ll examine what happens with higher numbers later.

Then we generate the Markov Blanket. This is a deterministic process. We start from the beginning: “H”. So we put H in our Markov Blanket. Then we come across “e”. So we put e in our Markov Blanket, but to specify that it comes after H, we connect H to e with an arrow. Then we stumble on “l”. So we put l in our Markov Blanket and, again, to specify that it comes after e, we connect e to l with an arrow.

Now, here’s where it gets interesting. What’s next? Well, it’s “l” again. So now we connect l to itself, by an arrow. This is interesting because it’s no longer a straight line!

And we keep going. Once we’re done, our Markov Blanket will look something like this:
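Since the blanket boils down to “each character points to the characters that can follow it”, the same structure can be written as a Python dictionary. A minimal sketch (the setdefault call is just a compact way to build the lists):

```python
# Build the order-1 Markov Blanket of "Hello, world!" as a dictionary
# mapping each character to the list of characters seen right after it.
text = "Hello, world!"

blanket = {}
for cur, nxt in zip(text, text[1:]):  # every pair of neighbouring characters
    blanket.setdefault(cur, []).append(nxt)

# "l" is the interesting entry: it is followed by "l", "o" or "d" --
# that's exactly the fork described above.
print(blanket["l"])  # ['l', 'o', 'd']
print(blanket["H"])  # ['e']
```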

Once we’ve created our Markov Blanket, then we start generating the Markov Chain from it. Unlike the Markov Blanket, generating the Markov Chain is a stochastic process.

This is just a process of wandering about the Markov Blanket and noting down where we go. One way to do this is just to start from the beginning and follow the path. And whenever we come across some sort of fork, we just spin a wheel to see where we go next. For example, here are some possible Markov Chains:
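Here’s a minimal sketch of that wheel-spinning walk over the “Hello, world!” blanket (built inline so the snippet runs on its own):

```python
import random

text = "Hello, world!"

# Order-1 blanket: each character maps to the characters that can follow it.
blanket = {}
for cur, nxt in zip(text, text[1:]):
    blanket.setdefault(cur, []).append(nxt)

# Wander: start at "H", and at every fork spin the wheel (random.choice)
# until we reach a character with no outgoing arrows ("!" here).
chain = "H"
while chain[-1] in blanket:
    chain += random.choice(blanket[chain[-1]])
print(chain)  # e.g. "Hello, world!", "Held!" or "Hello, worllo, world!"
```

Every run starts with “He” (the only arrow out of H) and ends with “d!” (the only dead end), but the middle can loop through the fork at “l” any number of times.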

That was an easy one, so let’s do something more complex with code.

First, just an interface to enter the text and the order:

text = ""  # Variable to hold the text
print("Type your text here (type END to end it):")
while True:
    line = input("")  # Read a line of text from standard input
    if line != "END":
        text += line + "\n"  # If we didn't enter END, add that line to the text
    else:
        break  # If we entered END, the text has ended
text = text[:-1]  # Remove the last line break
order = int(input("Type the order (how much it makes sense) here: "))
input("Generate me a beautiful text")  # Just to make it dramatic, print this message, and ask the user to hit ENTER to proceed

Next, the Markov Blanket. Here, we store it in a dictionary, and store every possible next letter in a list:

def markov_blanket(text, order):
    result = {}  # The Markov Blanket
    for i in range(len(text) - order + 1):  # For every n-gram
        ngram = ""
        for off in range(order):
            ngram += text[i + off]
        if ngram not in result:  # If we didn't see it yet
            result[ngram] = []
        if i < len(text) - order:  # If we didn't reach the end
            result[ngram].append(text[i + order])  # Add the next letter as a possibility
    return result  # Give the result back

Huh? What is this code?

This is what happens when we pick an order greater than 1. Then, instead of making the Markov Blanket for every single character, we make it for every group of characters.
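Concretely, here’s a sketch of the order-2 blanket of “Hello, world!” (recomputed inline with string slicing, so the snippet runs on its own):

```python
text = "Hello, world!"
order = 2

# Order-2 blanket: map each 2-character n-gram to its possible next letters.
blanket = {}
for i in range(len(text) - order + 1):
    ngram = text[i:i + order]   # the same n-gram the inner loop above builds
    blanket.setdefault(ngram, [])
    if i < len(text) - order:   # the last n-gram has nothing after it
        blanket[ngram].append(text[i + order])

print(blanket["He"])  # ['l']
print(blanket["o,"])  # [' ']
print(blanket["d!"])  # []
```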

For example, if we pick 2, then we make the Markov Blanket for the 1st and 2nd letters, the 2nd and 3rd, the 3rd and 4th, the 4th and 5th, and so on. When we generate the Markov Chain, we squash the n-grams that we visit together. So next, the Markov Chain:

import random  # Needed for random.choice

def markov_chain(blanket):
    keys = blanket.keys()
    ngram = random.choice(list(keys))  # Starting point
    new_text = ngram
    while True:
        try:
            nxt = random.choice(blanket[ngram])  # Choose a next letter
            new_text += nxt  # Add it to the text
            ngram += nxt  # Add it to the n-gram and remove the 1st character
            ngram = ngram[1:]
        except IndexError:  # If we can't choose a next letter, because there is none
            break
    return new_text  # Give the result back

# Now that we know how to do the whole thing, do the whole thing!
new_text = markov_chain(markov_blanket(text, order))
print(new_text)  # Print the new text out

OK, now let’s run this:

Type your text here (type END to end it):
A rainbow is a meteorological phenomenon that is caused by reflection, refraction and dispersion of light in water droplets resulting in a spectrum of light appearing in the sky. It takes the form of a multicoloured circular arc. Rainbows caused by sunlight always appear in the section of sky directly opposite the sun.
Rainbows can be full circles. However, the observer normally sees only an arc formed by illuminated droplets above the ground, and centered on a line from the sun to the observer's eye.
In a primary rainbow, the arc shows red on the outer part and violet on the inner side. This rainbow is caused by light being refracted when entering a droplet of water, then reflected inside on the back of the droplet and refracted again when leaving it.
In a double rainbow, a second arc is seen outside the primary arc, and has the order of its colours reversed, with red on the inner side of the arc. This is caused by the light being reflected twice on the inside of the droplet before leaving it.
END
Type the order (how much it makes sense) here: 5
Generate me a beautiful text

And……..it..hangs.

Why did it do that?

You see, this is not such a good method. What if our program generated a Markov Blanket that didn’t have an end? Well, our program wouldn’t even get to the end, and it would just wander around and around and around, and never give us a result! Or even if it did, it would be infinite!
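To see the problem in miniature (a deliberately tiny, hand-made example): the source text “aaaa” gives a blanket where every arrow out of “a” loops straight back to “a”, so the walk can never reach a dead end:

```python
import random

# The order-1 blanket of "aaaa": every arrow from "a" points back to "a".
blanket = {"a": ["a", "a", "a"]}

chain = "a"
for _ in range(20):  # bounded here only so this demo terminates!
    chain += random.choice(blanket[chain[-1]])
print(chain)  # "aaaaaaaaaaaaaaaaaaaaa" -- a while True loop would never stop
```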

So how do we avoid this?

Well, we set another, much bigger number, let’s say 5000, to be a cutoff value. If we don’t get to the end within 5000 steps, we give up and output early. Let’s run this again…
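Sketched in code, the fix is a small change to markov_chain: swap the while True for a bounded loop (the name max_steps, and 5000 as its default, are just this post’s choices):

```python
import random

def markov_chain(blanket, max_steps=5000):
    ngram = random.choice(list(blanket.keys()))  # starting point
    new_text = ngram
    for _ in range(max_steps):  # give up after max_steps letters
        try:
            nxt = random.choice(blanket[ngram])  # choose a next letter
        except IndexError:  # no next letter: a real ending, stop normally
            break
        new_text += nxt            # add it to the text
        ngram = (ngram + nxt)[1:]  # slide the n-gram window along
    return new_text
```

A blanket that loops forever now comes back after at most max_steps extra letters instead of hanging.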

And now, it doesn’t hang anymore! Snippets of example generated text:

It takes the sun to the ground, and violet on the observer’s eye.

This rainbow, a second arc formed by illuminated droplets resulting it. In a primary rainbow is a meteorological phenomenon the back of the ground, and has the sky. It takes the order of its coloured circles. However, the sun.

Rainbow, a second arc shows red on a line from the section of light in water droplet and has the sun.

In a double rainbow is caused by illuminated droplet on the outer part and refracted when leaving in a spectrum of a multicoloured circles. However, the droplet of water droplets resulting it. In a double rainbow is a meteorological phenomenon the droplets resulting in a spectrum of a multicoloured circular arc. Rainbow is caused by the inner side the observer’s eye