
Simon’s Random Number Generator

This one is from back in mid-October; I forgot to post it here.

Simon created a random number generator that generates a frequency, picks it back up, and then calculates the error between the generated frequency and the one it picked up. The project is Simon’s community contribution for a Coding Train challenge: https://thecodingtrain.com/CodingChallenges/151-ukulele-tuner.html

Link to project: https://editor.p5js.org/simontiger/sketches/eOXdkP7tz
Link to the random number plots: https://www.wolframcloud.com/env/monajune0/ukalele%20tuner%20generated%20random%20number%20analysis.nb
Link to Daniel Shiffman’s live stream featured at the beginning of this vid: https://youtu.be/jKHgVdyC55M
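For readers curious how such a tuner-turned-random-number-generator can work, here is a minimal p5.js sketch of the generate-then-detect loop (it assumes the p5.sound library; detecting the pitch as the loudest FFT bin is my simplification, not necessarily what Simon’s code does):

```javascript
// Play a random frequency, "pick it back up" from the spectrum,
// and report the error between the two.
let osc, fft, target;

function setup() {
  createCanvas(500, 100);
  osc = new p5.Oscillator('sine');
  fft = new p5.FFT(0.8, 1024);
  target = random(200, 800); // the "generated" frequency, in Hz
}

function mousePressed() {
  userStartAudio(); // browsers only start audio after a user gesture
  osc.freq(target);
  osc.start();
}

function draw() {
  background(0);
  const spectrum = fft.analyze();
  let peak = 0; // index of the loudest frequency bin
  for (let i = 1; i < spectrum.length; i++) {
    if (spectrum[i] > spectrum[peak]) peak = i;
  }
  const detected = (peak * sampleRate()) / 2 / spectrum.length;
  fill(255);
  text(`target: ${target.toFixed(1)} Hz`, 10, 30);
  text(`detected: ${detected.toFixed(1)} Hz, error: ${(detected - target).toFixed(1)} Hz`, 10, 60);
}
```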

Plot of the random numbers generated by Simon’s ukulele tuner random number generator (plotted in Wolfram Mathematica)

Interview with Simon on Repl.it

Repl.it has published a cool interview with Simon! It was interesting how Simon’s struggle to answer some of the more general questions gave me another glimpse into his beautiful mind, which doesn’t tolerate crude dimensionality reductions. The first question, “If you could sum yourself up in one sentence, how would you do it?” really upset him, because he said he just couldn’t figure out a way to sum himself up in one sentence. This is precisely the same reason why Simon has had trouble performing trivial oral English exam tasks, like picking some items from a list and saying why he liked or disliked them. The way he sees the world, some things are simply unfathomable, or in any case extremely complex, too complex to imagine one can sum them up in one sentence or come up with a chain of causes and consequences of liking something on the spot. He often tells me he sees the patterns, the details. Seeing objects or events in such complexity may mean it feels inappropriate, irresponsible, plain wrong to Simon to reduce them to a short string of characters.

This made me reflect upon how Simon keeps shaking me awake. I used to find nothing wrong with playing the reductionist game, and frankly, had I been asked to sum myself up in one sentence, I would have readily come up with something like “a Russian journalist and a home educator”. It’s thanks to Simon that I am waking up to see how inaccurate that is. I begin to see how many of the games we play in our society force us to zoom out too far, to generalize too much. How often we just plug something in, pretending we can answer impossible questions about the hugely complicated world around us and inside us! Well, Simon often honestly tells me that he just doesn’t have the answer.

For that first question in the interview, I suggested Simon answer something like “it’s more difficult to sum myself up in one sentence than to prove that e is irrational”, to which he replied: “But Mom, to prove that e is irrational is easy! It’s hard to prove that Pi is irrational!”

I must add that at the same time, Simon has really enjoyed the fact that Repl.it has written a developer spotlight about him as well as the social interaction on Twitter that the piece has initiated. It gave him a tangible sensation of belonging to the programming community, of being accepted and appreciated, and inspired him to work on his new projects in Repl.it.


What Simon did instead of taking the SAT on Saturday

On Saturday morning, Simon didn’t go to the SAT examination location, although we had registered him to try taking the SAT (with great difficulty, because he is so young). In the course of the past few weeks, after trying a couple of practice SAT tests on the Khan Academy website, we discovered that the test doesn’t reveal the depth of Simon’s mathematical talent: the tasks don’t touch the fields he is mostly busy with, like trigonometry, topology or calculus, and require that he instead solve much more primitive problems in a strictly timed fashion, while Simon prefers taking his time to explore more complex projects. This is what happens with most standardized tests: Simon has the knowledge but not the speed (because he hasn’t been training these narrow skills for hours on end, as his older peers at school have). Nor does he have the desire to play the game (get that grade, guess the answers he doesn’t know); he doesn’t see the point. So what did he do instead on his Saturday? He had a good night’s sleep (instead of having to show up at the remote SAT location at 8 a.m.) and then he…

built an A.I.: a neural network that controls cars moving along a highway, trained with a genetic algorithm! The cars use rays to sense and avoid the walls of the highway; in other words, he implemented neuroevolution. What better illustration does one need to juxtapose true achievement with what today’s school system often wants us to view as achievement, getting a high grade on a test? The former is a beautiful page from Simon’s portfolio, showing what he genuinely can do, a real-life skill, something he is passionately motivated to explore deeper, without seeking a reward, his altruistic contribution to the world, if you will. The latter says no more than how well one has been trained to apply certain strategies in a competitive setting.
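For readers curious what neuroevolution involves, here is a minimal sketch of the generation step (a generic illustration of the technique, not Simon’s actual code; the Car class and the brain.copy()/mutate() methods are assumptions modeled on the kind of toy neural network library used in Daniel Shiffman’s tutorials):

```javascript
// One neuroevolution step: breed the next generation of cars from the
// current one, favouring cars that performed better.
function nextGeneration(cars) {
  // Assumed fitness: how far a car travelled before hitting a wall.
  const totalFitness = cars.reduce((sum, car) => sum + car.fitness, 0);

  // Pick a parent with probability proportional to its fitness.
  function pickParent() {
    let r = Math.random() * totalFitness;
    for (const car of cars) {
      r -= car.fitness;
      if (r <= 0) return car;
    }
    return cars[cars.length - 1]; // guard against floating-point leftovers
  }

  // Each child is a mutated copy of a (probably fit) parent's network.
  return cars.map(() => {
    const childBrain = pickParent().brain.copy(); // clone the parent's net
    childBrain.mutate(0.1);                       // randomly tweak some weights
    return new Car(childBrain);                   // hypothetical Car class
  });
}
```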

Simon’s code is online: https://repl.it/@simontiger/Raytracing-AI

Simon has put this version on GitHub: https://github.com/simon-tiger/Raycasting-A.I.

He has also created an improved version with a better fitness function. “In the improved version, there’s a feature that only shows the best car (and you can toggle that feature on and off). And most importantly, I am now casting relative to where it’s going (so the linearity is gone, but it jiggles a lot, so I might linear interpolate it),” Simon explains. You can play with the improved version here: https://repl.it/@simontiger/Raycasting-AI-Improved

Finally, Simon is currently working on a version that combines all three: the original, the improved version, and the version with relative directions (work in progress): https://repl.it/@simontiger/Raytracing-AI-Full

“I am eventually going to make a version of this using TensorFlow.js, because with the toy library I’m using now it’s surprisingly linear. I’m going to put more hidden layers in the network.”
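For the curious, a TensorFlow.js version of such a car brain might look roughly like this (a hypothetical sketch: the layer sizes, the number of rays and the two-way steering output are my assumptions, not Simon’s actual architecture):

```javascript
// Hypothetical TensorFlow.js car brain: ray distances in, steering out.
// Extra hidden layers give the network the non-linearity Simon mentions.
import * as tf from '@tensorflow/tfjs';

const NUM_RAYS = 8; // one input per distance sensor (assumed)

const brain = tf.sequential();
brain.add(tf.layers.dense({ inputShape: [NUM_RAYS], units: 16, activation: 'relu' }));
brain.add(tf.layers.dense({ units: 16, activation: 'relu' })); // extra hidden layer
brain.add(tf.layers.dense({ units: 2, activation: 'softmax' })); // e.g. [left, right]

// One forward pass; tf.tidy() disposes the intermediate tensors.
function decide(rayDistances) {
  return tf.tidy(() => {
    const input = tf.tensor2d([rayDistances]); // shape [1, NUM_RAYS]
    return brain.predict(input).dataSync();    // probabilities for each action
  });
}
```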

The raytracing part of the code largely comes from Daniel Shiffman.
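At the heart of that raycasting is a line-segment intersection test. Here is a sketch of it, following the math from Daniel Shiffman’s raycasting coding challenge (the variable names are mine):

```javascript
// Does a ray (position + direction) hit a wall (segment from a to b)?
// Returns the intersection point, or null if the ray misses the wall.
function castRay(ray, wall) {
  const x1 = wall.a.x, y1 = wall.a.y, x2 = wall.b.x, y2 = wall.b.y;
  const x3 = ray.pos.x, y3 = ray.pos.y;
  const x4 = ray.pos.x + ray.dir.x, y4 = ray.pos.y + ray.dir.y;

  const den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4);
  if (den === 0) return null; // ray is parallel to the wall

  const t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den;
  const u = -((x1 - x2) * (y1 - y3) - (y1 - y2) * (x1 - x3)) / den;

  // t within the wall segment, u positive along the ray's direction.
  if (t > 0 && t < 1 && u > 0) {
    return { x: x1 + t * (x2 - x1), y: y1 + t * (y2 - y1) };
  }
  return null;
}
```

The car casts several such rays at different angles and feeds the resulting distances into its neural network.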

Simon’s two other videos about this project, which was fully completed in one day:

Part 1
Part 2



Domain Coloring with Complex Functions in Wolfram Mathematica

Simon has been completely carried away by Wolfram Mathematica. He keeps starting new projects just to try something out. After working on his Knot Theory book for days and making beautiful illustrations in Wolfram, he switched over to domain coloring. The images below are some impressions of his experiments with the color function; he hasn’t applied the complex function itself yet.
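Domain coloring maps every point of the complex plane to a color: typically the argument (angle) of f(z) becomes the hue and the modulus becomes the brightness. Here is a minimal sketch of the idea in p5.js rather than Simon’s Mathematica code, with f(z) = z² as an example:

```javascript
// Domain coloring sketch: hue from arg(f(z)), brightness from |f(z)|.
function setup() {
  createCanvas(400, 400);
  colorMode(HSB, 360, 100, 100);
  noLoop();
}

// Example complex function f(z) = z^2, with z = re + i*im.
function f(re, im) {
  return [re * re - im * im, 2 * re * im];
}

function draw() {
  loadPixels();
  for (let x = 0; x < width; x++) {
    for (let y = 0; y < height; y++) {
      // Map the pixel to the square [-2, 2] x [-2, 2] of the complex plane.
      const re = map(x, 0, width, -2, 2);
      const im = map(y, 0, height, 2, -2);
      const [fr, fi] = f(re, im);
      const hue = (degrees(atan2(fi, fr)) + 360) % 360;          // argument -> hue
      const bri = 100 * (1 - 1 / (1 + sqrt(fr * fr + fi * fi))); // modulus -> brightness
      set(x, y, color(hue, 80, bri));
    }
  }
  updatePixels();
}
```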

Another new project he started has been Poisson disc sampling.

“Wolfram is the most advanced language! It has the most built-in stuff in it! At Wolfram, they are working so hard that the knowledge base is changing every second!” Simon screams out as he pauses the Elementary Introduction to the Wolfram Language book (he was reading it at first and now binge-watches it as a series of video tutorials). Simon has been especially blown away by the free-form linguistic input: “From plain English it somehow computes the results and maybe even native Mathematica syntax!” Wolfram also “has an entire section which is machine learning!”

Simon started a free trial about a week ago, but I guess we’re buying a subscription.

Starting out
Only brightness left to do
Color function done

Fluid Dynamics: Laughing and Crying

Simon was watching Daniel Shiffman’s live coding lesson on Wednesday, and when fluid dynamics and the Navier–Stokes equations came up (they describe the motion of fluids and are used to model currents and flow), Simon remarked in the live chat that the Navier–Stokes equations are actually the subject of one of the seven most important unsolved math problems, and that one can get a million-dollar prize, awarded by the Clay Mathematics Institute, for solving it.

(I looked this up on Wikipedia and saw that it has not yet been proven whether solutions always exist in 3D and, if they do exist, whether they are “smooth”, i.e. infinitely differentiable at all points in the domain.)
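For reference, the incompressible Navier–Stokes equations the prize problem is about read:

$$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p + \nu \nabla^2 \mathbf{u} + \mathbf{f}, \qquad \nabla \cdot \mathbf{u} = 0,$$

where $\mathbf{u}$ is the fluid’s velocity field, $p$ the pressure, $\rho$ the density, $\nu$ the kinematic viscosity, and $\mathbf{f}$ an external force; the second equation expresses incompressibility.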

We had read an in-depth history of the Navier–Stokes equations in Ian Stewart’s book several weeks ago, but I must confess I didn’t remember much of what we’d read anymore. “Is it that chapter where Stewart describes how Fourier’s paper got rejected by the French Academy of Sciences because his proof wasn’t rigorous enough?” I asked Simon. “No, Mom, don’t you remember? That was Chapter 9, about the Fourier transform! And the Navier–Stokes equations were Chapter 10!” “Oh, and the Fourier transform was also the one with a lot about the violin string, right?” “No!” Simon really laughs at me by now. “That was in Chapter 8, about the wave equation! You keep being one chapter behind in everything you say!” Simon honestly finds it hilarious that I can’t seem to retain the information about all of these equations after reading it once. I love his laugh, even when he’s laughing at me.

Today though, he was weeping inconsolably and there was nothing I could do. Daniel Shiffman had to cancel the live session about CFD, computational fluid dynamics. Simon had been waiting impatiently for this stream. My guess is that it’s because his favourite teacher would be talking about something interesting from a purely mathematical point of view, a cocktail of all the things he enjoys most. And because he never seems to be able to postpone the joy of learning. He once explained to me that if he has this drive inside of him to conduct a certain experiment or watch a certain tutorial now, he simply can’t wait, because later he doesn’t seem to get the same kick out of it anymore.

I’m baking Simon’s favourite apple pie to cheer him up. Here are a couple more screenshots of him taking part in the Wednesday lesson:


Simon took part in a Coding Train livestream in Paris!

Simon and Daniel Shiffman after the livestream

The video below is part of Daniel Shiffman’s livestream hosted by GROW Le Tank in Paris on 6 January 2019 about KNN, machine learning, transfer learning and image recognition. Daniel kindly allowed Simon to take the stage for a few minutes to make a point about image compression (the algorithm that Daniel used was sort of a compression algorithm):

Here is another recording (in two parts) of the same moment, from a different angle:



Simon and Daniel Shiffman

Today is one of the most beautiful days in Simon’s life: NYU Associate Professor and creator of the Coding Train Daniel Shiffman has been Simon’s guardian angel, role model and the source of all the knowledge Simon has accumulated so far (in programming, math, community ethics and English), and today Simon got to meet him for the first time in real life!

Daniel Shiffman posted:


Simon’s Decision Tree Library

Simon has just created a decision tree library, called “Decision”, that helps build decision trees and forests (machine learning). He has also tried writing unit tests for the first time, and his code passed several of them! Once Simon’s library is on GitHub, he also plans to hook it up to the continuous integration service CircleCI, so that no merge can happen unless the tests pass. In this video, Simon explains what a decision tree is and shows his library and his test decision trees.

Simon’s library on GitHub (with a huge Readme that Simon wrote himself): https://github.com/simon-tiger/decision

Simon’s library on CircleCI: https://circleci.com/gh/simon-tiger/decision/3

Simon’s unit tests:

[Screenshot: Simon’s unit tests, 20 February 2018]
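To give an idea of what a unit test for a decision tree library might look like, here is a hypothetical Jest-style example (the DecisionTree API shown is my assumption, not necessarily the API of Simon’s library):

```javascript
// Hypothetical unit test: a tree trained on a trivially separable
// dataset should classify the training examples correctly.
const { DecisionTree } = require('./decision'); // assumed module/export name

test('classifies a trivially separable dataset', () => {
  const data = [
    { weather: 'sunny', play: 'yes' },
    { weather: 'rainy', play: 'no' },
  ];
  const tree = new DecisionTree();
  tree.train(data, 'play'); // 'play' is the target attribute
  expect(tree.classify({ weather: 'sunny' })).toBe('yes');
  expect(tree.classify({ weather: 'rainy' })).toBe('no');
});
```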


Magic around New Year’s Eve

This magical time of the year, Simon’s craziest, most daring dreams come true! First, his guru from New York University, Daniel Shiffman, sends Simon his book, and the words he writes in it are the most beautiful words anyone has ever told him. Then, on the last day of the awesome year 2017, Simon’s other hero, the glamorous knight of AI Siraj Raval, materialises in our living room, directly from YouTube! Happy New Year full of miracles and discoveries, everyone!


Daniel Shiffman’s book “The Nature of Code”, which Simon had already largely read online, is now also his bedtime reading. It comforted him recently when he was in pain: he cuddled up on the sofa with this big friendly tome on his lap.


Daniel Shiffman signed the book for Simon.


Siraj Raval stepped out of the YouTube screen straight into our Antwerp apartment on December 31. Simon has been following Siraj’s channel for months, learning about the types of neural networks and the math behind machine learning. It is thanks to Siraj’s explanations that Simon has been able to build his first neural nets.



The Neural Nets are here!

Simon has started building neural networks in Python! For the moment, he has succeeded in making two working neural nets (a perceptron and a feed-forward neural net), using the sigmoid activation function for both. The code is partially derived from Siraj Raval’s “The Math of Intelligence” tutorials.

[Screenshot: ML perceptron, 10 December 2017]
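Simon’s nets are in Python, but the perceptron idea is compact enough to sketch in a few lines of JavaScript (an illustration of the same concept, not Simon’s code): weighted inputs pass through a sigmoid, and the weights shift a little after every example.

```javascript
// Minimal sigmoid perceptron: predict, then nudge weights toward the target.
function sigmoid(x) { return 1 / (1 + Math.exp(-x)); }

class Perceptron {
  constructor(n) {
    this.weights = Array.from({ length: n }, () => Math.random() * 2 - 1);
    this.bias = Math.random() * 2 - 1;
  }
  predict(inputs) {
    const sum = inputs.reduce((acc, x, i) => acc + x * this.weights[i], this.bias);
    return sigmoid(sum);
  }
  train(inputs, target, lr = 0.1) {
    const guess = this.predict(inputs);
    // Gradient of the squared error through the sigmoid.
    const delta = (target - guess) * guess * (1 - guess);
    this.weights = this.weights.map((w, i) => w + lr * delta * inputs[i]);
    this.bias += lr * delta;
  }
}

// Example: learn the (linearly separable) OR function.
const p = new Perceptron(2);
const data = [[[0, 0], 0], [[0, 1], 1], [[1, 0], 1], [[1, 1], 1]];
for (let epoch = 0; epoch < 5000; epoch++) {
  for (const [x, y] of data) p.train(x, y);
}
console.log(data.map(([x]) => p.predict(x).toFixed(2))); // roughly [0, 1, 1, 1]
```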

The feed-forward net was tougher to build:

Simon’s nets run locally (on our home PC), but he will need more computational power for more complex future projects, so he has signed up for a wonderful online resource called FloydHub! FloydHub is sort of a Heroku for deep learning: a Platform-as-a-Service for training and deploying deep learning models in the cloud. It runs on Amazon’s infrastructure, which Simon could have used directly, too, but that would have been a lot more expensive and more tedious to set up.

Simon’s next step will be another supervised learning project: a recurrent neural net that generates text. He has already started building it and has fed it one book to read! In this video, he explains how character-based text generators work:
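In a nutshell, a character-based generator turns text into a sequence of character indices, learns to predict a probability for every possible next character, and then grows new text by sampling one character at a time. Here is a sketch of that sampling loop (the predictNextChar() model call is a placeholder for the trained network, not Simon’s code):

```javascript
// Character-level text generation: sample the next character from the
// probability distribution the (hypothetical) trained model outputs.
const text = 'to be or not to be';
const chars = Array.from(new Set(text)).sort(); // the vocabulary

// Draw a random index according to a probability distribution.
function sample(probs) {
  let r = Math.random();
  for (let i = 0; i < probs.length; i++) {
    r -= probs[i];
    if (r <= 0) return i;
  }
  return probs.length - 1;
}

function generate(model, seed, length) {
  let result = seed;
  for (let i = 0; i < length; i++) {
    const probs = model.predictNextChar(result); // placeholder model call
    result += chars[sample(probs)];
  }
  return result;
}
```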