This blog is about Simon, a young gifted mathematician and programmer who had to move from Amsterdam to Antwerp to be able to study at a level that fits his talent, i.e. through homeschooling. Visit https://simontiger.com
December was all about computer science and machine learning. Simon endlessly watched Welch Labs’ fantastic but freakishly challenging series Learning to See and even showed me all 15 episodes, patiently explaining every concept as we went along (like underfitting and overfitting; recall, precision and accuracy; bias and variance). Below is the table of contents he made for the series:
While watching the series, he also calculated the solutions to some of the problems that Welch Labs presented, like the question about the number of possible rules (= grains of sand) for a simple ML problem if memorisation is applied. His answer was that the grains of sand would cover all land on earth:
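His worked-out numbers aren’t reproduced here, but the flavour of the counting fits in a few lines of Python. The setup below (a 3×3 binary image with a yes/no label) is my illustrative assumption, not necessarily the exact problem from the series:

```python
# Counting rules for a memorisation-based classifier.
# Illustrative assumptions (mine, not necessarily the series' exact setup):
# a 3x3 black-and-white image and a binary yes/no label.
pixels = 9
images = 2 ** pixels        # 512 distinct possible input images
labels = 2                  # a "rule" assigns one label to every possible image
rules = labels ** images    # 2**512 distinct rules in total

print(images)               # 512
print(len(str(rules)))      # 155 -- the number of rules has 155 digits
```

A 155-digit count of rules is why memorisation blows up: one grain of sand per rule is far more sand than the earth holds.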
Simon loved the historical and philosophical part of the course, too, especially the juxtaposition of memorising vs. learning, the importance of learning to make assumptions, the futility of bias-free learning, and the beautiful quotes from Richard Feynman!
I have since then found another Feynman quote that fits Simon’s learning style perfectly (and, I believe, is the recipe for anyone’s successful learning, as opposed to teaching to the test): “Study hard what interests you the most in the most undisciplined, irreverent and original manner possible.” We have discussed the possibility of continuing at the university again. I have also asked Simon how he sees himself applying his knowledge down the road, trying to understand what academic or career goals he may have set for himself, if any. Does he have a picture of himself five years from now? Where does he want to be by then? He got very upset, just like when he was asked to sum himself up in one sentence for an interview last spring. “Mom, I’m just having fun!”
The end of 2019 was packed with logic. Simon even started programming an AI that would solve logic puzzles; here is the beginning of that unfinished project (he has since switched to programming a chess AI instead). In the two videos below, he explains the puzzle he used as an example and outlines his plan for building the AI (the puzzles come from Brilliant.org):
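The specific puzzle is explained in the videos; to give a flavour of the brute-force approach a solver for such puzzles can take, here is a tiny Python example for a classic knights-and-knaves puzzle (my example, not the Brilliant one):

```python
from itertools import product

# Classic knights-and-knaves: knights always tell the truth, knaves always lie.
# A says: "We are both knaves." Who is what?
# Encode each person as True (knight) or False (knave). A's statement is true
# exactly when A is a knight, so keep every assignment where that holds.
solutions = [(a, b)
             for a, b in product([False, True], repeat=2)
             if ((not a) and (not b)) == a]

print(solutions)  # [(False, True)] -> A is a knave, B is a knight
```

The same pattern, enumerating every possible world and keeping the consistent ones, scales to much bigger logic puzzles than this one.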
And here are some impressions of Simon working on the puzzles and showing them to his sis:
I’ve been terrible at keeping this blog up to date. One of Simon’s best projects in December 2019 was creating a chess robot, and I haven’t even shared it here.
We were joking that this is Simon’s baby and her name is Chessy. Simon also made an improved version with a drop-down menu that lets you choose a difficulty level of 1 to 5 moves of look-ahead (warning: levels 4 and 5 may run quite slowly): https://chess-ai-user-friendly–simontiger.repl.co/
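The difficulty setting corresponds to how many moves ahead the AI searches, which is the depth of a minimax game-tree search. Here is a hedged sketch of that idea in Python, using a tiny hand-made game tree rather than chess (a toy of mine, not Simon’s code):

```python
# Minimax: the maximizing player assumes the opponent will minimize, and
# searches "depth" moves ahead. Tiny hand-made game tree, not actual chess.
def minimax(state, depth, maximizing, children, score):
    kids = children(state)
    if depth == 0 or not kids:          # search horizon reached, or game over
        return score(state)
    values = [minimax(k, depth - 1, not maximizing, children, score)
              for k in kids]
    return max(values) if maximizing else min(values)

# a two-ply toy game: we choose branch "a" or "b", then the opponent replies
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": 5, "b1": -2, "b2": 9}

value = minimax("root", 2, True,
                lambda s: tree.get(s, []),
                lambda s: scores.get(s, 0))
print(value)  # 3: branch "a" guarantees min(3, 5) = 3, branch "b" only -2
```

Searching deeper multiplies the number of positions to evaluate, which is why levels 4 and 5 are so much slower than level 1.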
In reaction to Yuval Noah Harari’s book Homo Deus (the paragraph about a-mortals being anxious about dying in an accident):
With individual intelligences, you can have the car that’s driving down the street not knowing that you are going to be crossing the street at that point in time and then poof! You got yourself an accident. With collective intelligence though, that doesn’t happen. Because the whole definition of knowing something or not knowing something breaks down. The members of collective intelligence don’t have the notion of knowing something. It’s only the “central intelligence” that the members are hooked up to that has the notion of knowing something. Which means that you can have the central intelligence deciding that a car driving down the street does not create an accident with the person crossing the street.
The story written by the AsiBot and Dutch bestselling author Ronald Giphart forms a new, tenth chapter in Isaac Asimov’s classic I, Robot (which originally contained only nine chapters). The AsiBot was fed ten thousand books in Dutch to master literary language, and it can already produce a couple of paragraphs on its own, but a longer coherent story remains out of reach. This is where a human writer, Ronald Giphart, stepped in. It was he who decided which of the sentences written by the AsiBot stayed and which should be thrown out. The reader doesn’t know which sentences were written (or edited) by the human writer and which are pure robot literature. Starting from November 6, anyone (speaking Dutch) can try writing with the AsiBot at www.asibot.nl.
Simon was very excited about this news and recorded a short video where he explains how such “synthetic literature” neural nets work (based on what he learned from Siraj Raval’s awesome YouTube classes):
My phone froze so we had to make the second part as a separate video:
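The recurrent neural nets Simon describes are beyond a blog-sized snippet, but the core idea, predicting the next character from the characters before it, can be shown with a much simpler stand-in: a character-level Markov chain (a toy of mine, nothing to do with AsiBot’s actual model):

```python
import random

random.seed(0)

# A character-level Markov chain: record which character followed each
# 3-character context in the training text, then generate by sampling.
# Toy Dutch training text: "the robot writes and the human cuts".
text = "de robot schrijft en de mens schrapt en de robot schrijft en de mens schrapt"
order = 3
model = {}
for i in range(len(text) - order):
    model.setdefault(text[i:i + order], []).append(text[i + order])

state = text[:order]
out = state
for _ in range(40):
    nxt = random.choice(model.get(state, [" "]))  # fall back to a space if unseen
    out += nxt
    state = out[-order:]
print(out)
```

A real system like the one in Simon’s video learns these continuation probabilities with a neural network instead of a lookup table, which is what lets it generalise beyond phrases it has literally seen.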
This is one of Simon’s most enchanting and challenging projects so far: working on his own little AIs. As I’ve mentioned before, when it comes to discussing AI, Simon is both mesmerized and frightened. He watches Daniel Shiffman’s neural networks tutorials twenty times in a row and practices his understanding of the mathematical concepts underlying the code (linear regression and gradient descent) for hours. Last week, Simon built a perceptron of his own. It was based on Daniel Shiffman’s code, but Simon added his own colors and physics, and played around with the numbers and the bias. You can see Simon working on this project step by step in the six videos below.
His original plan was to build two neural networks that would be connected to each other and communicate; so far he has built only one perceptron.
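The whole exercise fits in a few dozen lines. Here is a hedged Python sketch of the same perceptron task from the Shiffman tutorials, deciding whether a point lies above or below a line (Simon’s own version, with its colors and physics, is in Processing/Java):

```python
import random

random.seed(42)

# A minimal perceptron: learn whether a point lies above or below y = x.
lr = 0.01                                           # learning rate
weights = [random.uniform(-1, 1) for _ in range(3)]  # weights for x, y, bias

def guess(inputs):
    total = sum(w * v for w, v in zip(weights, inputs))
    return 1 if total >= 0 else -1

def train(inputs, target):
    error = target - guess(inputs)   # 0 if correct, +2 or -2 if wrong
    for i in range(3):
        weights[i] += error * inputs[i] * lr

# label is +1 when y > x, else -1; the constant 1 is the bias input
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]
for _ in range(20):                  # a few training epochs
    for x, y in points:
        train((x, y, 1), 1 if y > x else -1)

accuracy = sum(guess((x, y, 1)) == (1 if y > x else -1)
               for x, y in points) / len(points)
print(accuracy)
```

The gradient-descent and linear-regression ideas Simon practiced are hiding in the `train` function: every mistake nudges the weights a small step in the direction that would have reduced the error.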
In the videos below, Simon is building a Codota demo in Java. Codota is an AI programming assistant that looks for solutions on GitHub and other public sources and suggests them in real time as it recognizes your code. At the moment it’s only available for Java and only for three editors (here, Eclipse), so its use is quite limited, but the website says that other languages will follow soon. Since Simon normally uses Processing for Java, he can’t really use Codota for most of his projects. It has been an interesting exercise though (and I was surprised at how skillful he is at writing Java in Eclipse, which is quite different from Processing), and a glimpse into the future. There’s no doubt assistants such as Codota will very soon become a common companion. Simon had Codota resolve one error for him and was very happy about that. He said Codota was his friend. He was reluctant to turn its speech functions on, however. Simon has a slight fear of full-blown AI and, at the same time, a fascination with it and a desire to learn how it works.
“Mommy! Genetic algorithm is AI, ML and DL all at the same time! Scary information. It’s scary information for me,” Simon stares at me, a brand new Daniel Shiffman tutorial on intelligence and learning paused on the screen. I come up to my little boy and hold him. We talk about AI and his fears. Does he sense the grandeur, the tsunami of technological change that is about to engulf us? “I quite like my life as it currently is”, he once told me during a similar conversation while trying to pinpoint why he sometimes feels afraid of AI.
After a short break he resumes watching Daniel Shiffman talk about the final and most exciting chapter of his book The Nature of Code. Later the same evening Simon attempts to write his own genetic algorithm. He hasn’t finished yet when I call him to bed. He saves the code to resume tomorrow and sighs: “Last night, the sleeping lasted so long!” On his screen, I see written in Java:
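The Nature of Code chapter Simon was following famously evolves random text toward a target phrase, and a compact Python sketch of that classic exercise looks roughly like this (my toy, not the Java on Simon’s screen):

```python
import random

random.seed(1)

# Evolve random strings toward a target phrase with a genetic algorithm.
TARGET = "to be or not to be"
CHARS = "abcdefghijklmnopqrstuvwxyz "
POP, MUT = 200, 0.01

def fitness(s):                      # number of characters already correct
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):                       # flip each character with probability MUT
    return "".join(random.choice(CHARS) if random.random() < MUT else c
                   for c in s)

def crossover(a, b):                 # single-point crossover of two parents
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

def select(pop):                     # tournament selection: best of 5 random
    return max(random.sample(pop, 5), key=fitness)

pop = ["".join(random.choice(CHARS) for _ in TARGET) for _ in range(POP)]
gen = 0
while TARGET not in pop and gen < 2000:
    elite = max(pop, key=fitness)    # elitism: always keep the current best
    pop = [elite] + [mutate(crossover(select(pop), select(pop)))
                     for _ in range(POP - 1)]
    gen += 1
print(gen, max(pop, key=fitness))
```

Selection, crossover and mutation are the three moving parts Simon was wiring together, and the “scary” point he made holds: the same loop works whether the thing being evolved is a string, a neural network, or a behavior.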