Coding, Community Projects, Computer Science, Contributing, Geometry Joys, JavaScript, Lingua franca, live stream, Machine Learning, Math and Computer Science Everywhere, Math Riddles, Math Tricks, Milestones, Murderous Maths, Notes on everyday life, Physics, Python, Set the beautiful mind free, Simon makes gamez, Simon teaching, Simon's Own Code, Simon's sketch book, Thoughts about the world

New Friends. New Horizons.

This is a little video compilation of a few moments I captured of Simon talking to his new peers in April 2020.

Thanks to the lockdown, Simon’s got new friends. For a little over a month now, he has been part of exciting daily discussions, challenging coding sessions and just playing together with his new gang (warning: playing always involves math). We’ve never seen him like this before, so drawn to socializing with his peers, even taking the lead in some meetings and initiating streams.

And then we realized: this is how social Simon is once he meets his tribe and can communicate in his language, at his level. Most of his new friends are in their late teens and early twenties. Most of them didn’t use to hang out together before the crisis, probably busy with college, commuting, etc. The extraordinary circumstances around covid-19 have freed up some extra online time for many talented young people, creating better chances to meet like-minded peers across the world. Finally, Simon has a group of friends he can really relate to, share what he is working on with, and ask for constructive help. And even though he is the youngest in the group, he is being treated as an equal. It’s beautiful to overhear his conversations and the laughter he shares with the guys (even though sometimes I wish he wasn’t listening to a physics lecture simultaneously, his speakers producing a whole cacophony of sound effects, but he likes it that way and seems to be able to process two incoming feeds at once).

Last week, Simon took part in a World Science Scholars workshop by Dr. Ruth Gotian, an internationally recognized mentorship expert. The workshop was about, you guessed it, how to go about finding a mentor. One of the things that struck me most in Dr. Gotian’s presentation was her mentioning the importance of ‘communities of practice’. I looked it up on Etienne Wenger’s site (the educational theorist who actually came up with the term in the 1990s):

A community of practice is a group of people who share a concern or a passion for something they do, and learn how to do it better as they interact regularly. This definition reflects the fundamentally social nature of human learning. It is very broad. It applies to a street gang, whose members learn how to survive in a hostile world, as well as a group of engineers who learn how to design better devices or a group of civil servants who seek to improve service to citizens. Their interactions produce resources that affect their practice (whether they engage in actual practice together or separately).

It is through the process of sharing information and experiences with the group that members learn from each other, and have an opportunity to develop personally and professionally, Wenger wrote in 1991. But communities of practice aren’t a new thing. In fact, they are the oldest way to acquire and perfect one’s skills. John Dewey relied on this phenomenon in his principle of learning through occupation.

It has been almost spooky to observe this milestone in Simon’s development and learn the sociological term for it the same month, as if some cosmic puzzle has clicked together.

Of course, it would be a misrepresentation to say nothing of the internal conflict the new social reality unveiled in my mothering heart as I struggled to accept that Simon started skipping Stephen Wolfram’s livestreams in favour of coding together with his new friends. 👬 Yet even those little episodes of friction we experienced have eventually led us to understand Simon better. We sat down for what turned into a very eye-opening talk, during which Simon asked me to take down the framed Domain of Science posters we’d recently put up above his desk and pointed to the infographics on the posters that represented the areas of his greatest interest.

we got our posters at the DFTBA shop

Simon simply guided us through the Doughnut of Knowledge, Map of Physics, Map of Computer Science and Map of Mathematics posters as if we were on a tour inside his head. And he made it clear to us that he seriously preferred pure mathematics, theoretical computer science and computer architecture and programming to applied mathematics (anything applied, really) and even computational physics, even though he genuinely enjoyed cosmology and Wolfram’s books.

“Mom, you always think that what you’re interested in is also what I’m interested in,” he told me openheartedly. It was at that moment that it hit me he had grown up enough to gain a clearer vision of his path (or rather, his web). That I no longer needed to absolutely expose him to the broadest possible plethora of the arts and sciences within the doughnut of knowledge, but that from now on, I can trust him even more as he ventures upon his first independent steps in the direction he has chosen for himself, leaning back on me when necessary.

So far, in just one month, Simon has led a live covid-19 simulation stream, programming in JS as he got live feedback from his friends; cooperated on a 3D rendering engine in turtle (🤯); co-created Twitch overlays; and participated in over a hundred Clashes of Code (compelling coding battles) and multiple code katas (programming exercises with a bow to the Japanese concept of kata in the martial arts).

example of a Clash of Code problem
This is Simon’s code in one of the Clashes of Code (he won this round against 6 other players). Such programming battles last somewhere between 5 and 15 minutes and come in three modes: fastest mode (you must complete the puzzle as fast as possible), shortest mode (you must solve it with the smallest code size) and reverse mode (you have to guess what to do by observing the provided set of tests). Simon especially likes the last mode, because you have to find the code by finding patterns in the given test cases, which appeals to his mathematical talent.
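To give a feel for reverse mode, here is a made-up miniature round (our own illustration, not an actual CodinGame puzzle): only the test cases are shown, and players must infer the hidden rule from the pattern alone.

```python
# Hypothetical reverse-mode round: only these test cases are visible.
# Players must deduce the hidden rule purely from the pattern.
tests = [(1, 1), (2, 4), (3, 9), (10, 100)]

# A plausible guess: the output is the square of the input.
def guess(n):
    return n * n

# The guess wins the round if it reproduces every visible test case.
assert all(guess(x) == y for x, y in tests)
print("rule found: n -> n*n")
```

Spotting that the outputs are perfect squares is exactly the kind of pattern-matching the mode rewards.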
CJ from the Coding Garden discussing Simon’s solution
Simon working on a 3D renderer project together with his friends
Simon came up with a plan to work on the 3D renderer

Last month, ten young programmers including Simon formed a separate “Secret Editors’ Club Riding Every Train” group on Discord, uniting some “nice and active” people who met on The Coding Train channel (they also included Dan Shiffman in the group). Simon really enjoys long voice chats with the other secret editors, going down the rabbit holes of math proofs and computer algorithms. Last Tuesday, he was ecstatic recounting his 3-hour call with his new peer Maxim during which Simon managed to convince Maxim that 0.999… equals 1 by “presenting a written proof that involved Calculus”:

We even talked about infinity along the way, aleph null and stuff. There was a part where he almost won, because of the proof I showed him when we talked about infinities. I was almost stumped.
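A typical calculus-flavoured argument (a sketch; we don’t know which proof Simon actually presented) treats the repeating decimal as a geometric series:

```latex
0.\overline{9} \;=\; \sum_{n=1}^{\infty} \frac{9}{10^{n}}
\;=\; \frac{9/10}{1 - 1/10} \;=\; 1 .
```

Equivalently, for every n the difference 1 − 0.99…9 (n nines) equals 10⁻ⁿ, which tends to 0, so the limit of the partial decimals is exactly 1.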

The guys have now inspired Simon to take part in the Spring Challenge 2020 on CodingGame.com, a whole new adventure. To us, the lockdown experience has felt like an extra oxygen valve gone open in our world, another wall gone down, another door swung open, all allowing Simon to breathe, move and see a new horizon.

Simon trying to explain why he didn’t fulfil a promise. He has finally found the people who speak his language 🙂
Computer Science, Machine Learning, Math and Computer Science Everywhere, Milestones, Murderous Maths, neural networks, Notes on everyday life, Simon's sketch book

Math for Neural Networks and Calculus Fundamentals via Brilliant.org

A little over a month ago, Simon picked up neural networks again (something he had tried a while ago but couldn’t grasp intuitively). He started the Artificial Neural Networks course on Brilliant.org and covered vectors, matrices, optimisation, perceptrons and multilayer perceptrons fairly quickly, and even built his first perceptron in Python from scratch (I will publish a video about this project shortly). As soon as he reached the chapter on Backpropagation, however, he realised his current knowledge of Calculus wasn’t enough. This is how Simon, completely on his own, decided to get back to studying Calculus (something he had lost interest in last year). After devouring several chapters of the Calculus Fundamentals course, Simon told me he was now ready to do Backpropagation (nearly done now). On to convolutional neural networks (the next chapter in the course)!
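A from-scratch perceptron of the kind described fits in a few lines. This is a minimal sketch of the classic algorithm, not Simon’s actual code:

```python
# Minimal perceptron trained on the AND function (a sketch, not Simon's code).
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Step activation: fire if the weighted sum crosses the threshold.
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Perceptron learning rule: nudge weights toward the target.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in samples])  # prints [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop settles on correct weights; backpropagation only becomes necessary once hidden layers are added.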

As of today, these are his progress stats:

Below are some impressions of doing Calculus Fundamentals.

On Saturday, March 7 Simon yelled: “Now I understand it! The Chain Rule!” — “But I remember you tried to explain Chain Rule to me a while ago”, I said. — “But I didn’t understand it intuitively!”
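The rule that finally clicked, for reference:

```latex
\frac{d}{dx}\,f\bigl(g(x)\bigr) \;=\; f'\bigl(g(x)\bigr)\cdot g'(x),
\qquad\text{e.g.}\quad
\frac{d}{dx}\,\sin\!\left(x^{2}\right) \;=\; 2x\cos\!\left(x^{2}\right).
```

It is also the workhorse of backpropagation, which is exactly why Simon needed it for the neural networks course.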
Since the lockdown began, Simon and his math tutor have started meeting over Zoom instead of holding their biweekly sessions at our place.
Logic, Machine Learning, Milestones, Murderous Maths, Notes on everyday life, Set the beautiful mind free, Simon's sketch book

Learning to See. On Machine Learning and learning in general.

December was all about computer science and machine learning. Simon endlessly watched Welch Labs’ fantastic but freakishly challenging series Learning to See and even showed me all 15 episodes, patiently explaining every concept as we went along (like underfitting and overfitting, recall, precision and accuracy, bias and variance). Below is the table of contents he made of the series:

While watching the series, he also calculated the solutions to some of the problems that Welch Labs presented, like the question about the number of possible rules (the “grains of sand” in the series’ metaphor) for a simple ML problem if memorisation is applied. His answer was that the grains of sand would cover all land on earth:
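The combinatorics behind such a claim can be sketched with a back-of-the-envelope count (our own numbers for illustration, not necessarily those from the Welch Labs episode): a binary classifier on n binary pixels has 2ⁿ possible inputs, and memorisation must be able to represent any of the 2^(2ⁿ) possible labelings.

```python
import math

# Back-of-the-envelope rule counting (our own illustrative numbers,
# not necessarily those from the Welch Labs episode).
n_pixels = 25                      # a tiny 5x5 binary image
n_inputs = 2 ** n_pixels           # distinct possible images
# Each rule assigns 0 or 1 to every image, so there are 2**n_inputs rules.
# That number is too large to print, so count its decimal digits instead.
digits = int(n_inputs * math.log10(2)) + 1
print(f"{n_inputs:,} possible images")
print(f"the rule count has {digits:,} decimal digits")
```

Even for a 5×5 image the number of memorisable rules has over ten million digits, which is why generalization, not memorisation, is the only viable strategy.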

Simon loved the historical/philosophical part of the course, too. Especially the juxtaposition of memorising vs. learning, the importance of learning to make assumptions, the futility of bias-free learning, and the beautiful quotes from Richard Feynman!

screenshot from Welch Labs Learning to See [Part 5: To Learn is to Generalize]

I have since then found another Feynman quote that fits Simon’s learning style perfectly (and, I believe, is the recipe for anyone’s successful learning, as opposed to teaching to the test): “Study hard what interests you the most in the most undisciplined, irreverent and original manner possible.” We have discussed the possibilities of continuing at the university again. I have also asked Simon how he sees himself applying his knowledge down the road, trying to understand what academic or career goals he may have set for himself, if any. Does he have a picture of himself five years from now, where does he want to be by then? He got very upset, just like when asked to sum himself up in one sentence for an interview last spring. “Mom, I’m just having fun!”

A beautiful humbling lesson for me.

Computer Science, Good Reads, Logic, Machine Learning, Notes on everyday life, Philosophy, Set the beautiful mind free

A Universal Formula for Intelligence

Is there an equation for intelligence? Yes. It’s F = T ∇ Sτ.

Prior to a World Science Scholars live session on November 25, Simon had been asked to watch this TED talk given by a prominent computer scientist and entrepreneur, Alex Wissner-Gross, on intelligent behavior and how it arises. Upon watching the talk, Simon and I discovered that the main idea presented by Wissner-Gross can serve as a beautiful scientific backbone to self-directed learning and explain why standardized and coercive instruction contradicts the very essence of intelligence and learning.

Alex Wissner-Gross:

What you’re seeing is probably the closest equivalent to an E = mc² for intelligence that I’ve seen. So what you’re seeing here is a statement of correspondence that intelligence is a force, F, that acts so as to maximize future freedom of action. It acts to maximize future freedom of action, or keep options open, with some strength T, with the diversity of possible accessible futures, S, up to some future time horizon, tau. In short, intelligence doesn’t like to get trapped. Intelligence tries to maximize future freedom of action and keep options open.

And so, given this one equation, it’s natural to ask, so what can you do with this? Does it predict artificial intelligence?
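Spelled out in the notation of the talk (our transcription of the formula Wissner-Gross shows on screen):

```latex
F \;=\; T \,\nabla S_{\tau}
```

where F is the intelligence “force”, T its strength, and S_τ the entropy of the diversity of accessible futures up to the time horizon τ.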

Recent research in cosmology has suggested that universes that produce more disorder, or “entropy,” over their lifetimes should tend to have more favorable conditions for the existence of intelligent beings such as ourselves. But what if that tentative cosmological connection between entropy and intelligence hints at a deeper relationship? What if intelligent behavior doesn’t just correlate with the production of long-term entropy, but actually emerges directly from it?

As an example, Wissner-Gross went on to demonstrate a software engine called Entropica, designed to maximize the production of long-term entropy of any system that it finds itself in. Entropica was able to pass multiple animal intelligence tests, play human games, and even earn money trading stocks, all without being instructed to do so. Note that Entropica wasn’t given learning goals: it simply decided to learn to balance a ball on a pole (just like a child decides to stand upright), decided to use “tools”, decided to apply cooperative ability in a model experiment (just like animals sometimes pull two cords simultaneously to release food), taught itself to play games and to keep up connections in a network (network orchestration), and solved logistical problems with the use of a map. Finally, Entropica spontaneously discovered and executed a buy-low, sell-high strategy on a simulated range-traded stock, successfully growing assets. It learned risk management.

The urge to take control of all possible futures is a more fundamental principle than intelligence itself: general intelligence may in fact emerge directly from this sort of control-grabbing, rather than vice versa.

In other words, if you give the agent control, it becomes more intelligent.

“How does it seek goals? How does the ability to seek goals follow from this sort of framework? And the answer is, the ability to seek goals will follow directly from this in the following sense: just like you would travel through a tunnel, a bottleneck in your future path space, in order to achieve many other diverse objectives later on, or just like you would invest in a financial security, reducing your short-term liquidity in order to increase your wealth over the long term, goal seeking emerges directly from a long-term drive to increase future freedom of action”.

The main concept we can pass on to the new generation to help them build artificial intelligences or to help them understand human intelligence, according to Alex Wissner-Gross, is the following: “Intelligence should be viewed as a physical process that tries to maximize future freedom of action and avoid constraints in its own future. Intelligence is a physical process that resists future confinement.”

Simon’s reaction to Alex Wissner-Gross’s TED Talk was: “But this means school only makes you less intelligent!” (in the sense that school reduces your chances at seeking goals yourself, introduces constraints on your future development).

Simon asking his question during the live session with neuroscientist Suzana Herculano-Houzel

During the actual live session, neuroscientist Suzana Herculano-Houzel, famous for inventing a method to count the exact number of neurons in the human brain and for her comparative studies of various species, defined intelligence as behavioral and cognitive flexibility: flexibility as a choice to do something else than what would happen inevitably, no longer being limited to purely responding to stimuli; flexibility in decisions that allow you to stay flexible. Generally speaking, the more flexibility, the more intelligence.

Animals with a cerebral cortex gained a past and a future, Professor Herculano-Houzel explained. Learning is one of the results of flexible cognition. Here learning is understood as solving problems. Hence making predictions and decisions is all about maximizing future flexibility, which in turn allows for more intelligence and learning. This is a very important guideline for educational administrations, governments and policy makers: allow for flexibility. There is a problem with defining intelligence as producing desired outcomes, Herculano-Houzel pointed out while answering one of the questions from students.

Replying to Simon’s question about whether we can measure intelligence in any way and what the future of intelligence tests could be like, Professor Herculano-Houzel said she really liked Simon’s definition of IQ testing as a “glorified dimensionality reduction”. Simon doesn’t believe anything multidimensional fits on a bell curve and can possibly have a normal distribution.

Professor Herculano-Houzel’s answer:

Reducing a world of capacities and abilities into one number, you can ask “What does that number mean?” I think you’d find it interesting to read about the history of the IQ test, how it was developed and what for, and how it got coopted, distorted into something else entirely. It’s a whole other story. To answer your question directly, can we measure intelligence? First of all, do you have a definition for intelligence? Which is why I’m interested in pursuing this new definition of intelligence as flexibility. If that is an operational definition, then yes, we can measure flexibility. How do we measure flexibility?

The professor went on to demonstrate several videos of researchers giving lemurs and dogs pieces of food partially covered by a plastic cylinder. The animals had to figure out on their own how to get to the treat.

You see, the animal is not very flexible, trying again and again, acting exactly as before. And the dog that has figured it out already made its behavior flexible. It can be measured how long it takes for an animal to figure out that it has to be flexible, which you could call problem solving. Yes, I think there are ways to measure that and it all begins with a clear definition of what you want to measure.

As a side note, Professor Herculano-Houzel also mentioned in her course and in her live session that she had discovered that a higher number of neurons in different species was correlated with longevity. Gaining flexibility and a longer life: it’s like having your cake and eating it too! We are only starting to explore defining intelligence, and it’s clear that the biophysical capability (how many neurons one has) is only a starting point. It is through our experiences of the world that we gain our ability and flexibility; that is what learning is all about, the professor concluded.

Coding, Community Projects, Contributing, Experiments, JavaScript, live stream, Machine Learning, Milestones, Physics, Simon's Own Code

Simon’s Random Number Generator

This one is from mid-October; I forgot to post it here earlier.

Simon created a random number generator that generates a frequency, and then picks it back up. It then calculates the error between the generated frequency and the picked-up frequency. In Simon’s words, “this is one of my community contributions for a Coding Train challenge”: https://thecodingtrain.com/CodingChallenges/151-ukulele-tuner.html
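The idea can be sketched end to end as a simulation (our own toy version in Python, not Simon’s p5.js code): pick a random target frequency, “detect” it with some measurement noise standing in for the microphone pitch detection, and report the error between the two.

```python
import random

# Toy simulation of the tuner-as-RNG pipeline (not Simon's p5.js code).
random.seed(42)

def generate_frequency(low=220.0, high=880.0):
    """Pick a random target frequency in Hz."""
    return random.uniform(low, high)

def detect_frequency(true_freq, noise_hz=1.0):
    """Stand-in for pitch detection: the true pitch plus bounded noise."""
    return true_freq + random.uniform(-noise_hz, noise_hz)

target = generate_frequency()
measured = detect_frequency(target)
error = abs(measured - target)
print(f"target {target:.2f} Hz, measured {measured:.2f} Hz, error {error:.3f} Hz")
```

In the real project the “detection” step goes through the speaker and microphone, which is what makes the measured error interesting to plot.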

Link to project: https://editor.p5js.org/simontiger/sketches/eOXdkP7tz
Link to the random number plots: https://www.wolframcloud.com/env/monajune0/ukalele%20tuner%20generated%20random%20number%20analysis.nb
Link to Daniel Shiffman’s live stream featured at the beginning of this vid: https://youtu.be/jKHgVdyC55M

plot of the random numbers generated by Simon’s ukulele tuner random number generator (plotted in Wolfram Mathematica)
Coding, In the Media, Machine Learning, Milestones, Murderous Maths, neural networks, Notes on everyday life, Set the beautiful mind free, Simon's Own Code

Interview with Simon on Repl.it

Repl.it has published a cool interview with Simon! It was interesting how Simon’s struggle to answer some of the more general questions gave me another glimpse into his beautiful mind, which doesn’t tolerate crude dimensionality reductions. The first question, “If you could sum yourself up in one sentence, how would you do it?” really upset him, because he said he just couldn’t figure out a way to sum himself up in one sentence. This is precisely the same reason why Simon has had trouble performing trivial oral English exam tasks, like picking some items from a list and saying why he liked or disliked them. The way he sees the world, some things are simply unfathomable, or in any case extremely complex, too complex to imagine one can sum them up in one sentence or come up with the chain of causes and consequences of liking something on the spot. He often tells me he sees the patterns, the details. Seeing objects or events in such complexity may mean it feels inappropriate, irresponsible, plain wrong to Simon to reduce those objects and events to a short string of characters.

This made me reflect upon how Simon keeps shaking me awake. I used to find nothing wrong with playing the reductionist game and frankly, had I been asked to sum myself up in one sentence, I would have readily come up with something like “a Russian journalist and a home educator”. It’s thanks to Simon that I am waking up to see how inaccurate that is. I begin to see how many games that we play in our society are forcing us to zoom out too far, to generalize too much. How often don’t we just plug something in, pretending we can answer impossible questions about the hugely complicated world around us and inside us! Well, Simon often honestly tells me that he just doesn’t have the answer.

For that first question in the interview, I suggested Simon answer something like “it’s more difficult to sum myself up in one sentence than to prove that e is irrational”, to which he replied: “But Mom, to prove that e is irrational is easy! It’s hard to prove that Pi is irrational!”
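For the curious, the proof Simon calls easy (usually attributed to Fourier) really is short: suppose e = a/b for positive integers a and b, and consider

```latex
x \;=\; b!\left(e - \sum_{n=0}^{b} \frac{1}{n!}\right)
\;=\; \sum_{n=b+1}^{\infty} \frac{b!}{n!}
\;<\; \sum_{k=1}^{\infty} \frac{1}{(b+1)^{k}}
\;=\; \frac{1}{b} \;\le\; 1 .
```

If e were rational, x would be a positive integer (b!·e and every term b!/n! with n ≤ b are integers), yet the estimate shows 0 < x < 1, a contradiction. The irrationality of π, as Simon says, is genuinely harder.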

I must add that at the same time, Simon has really enjoyed the fact that Repl.it has written a developer spotlight about him, as well as the social interaction on Twitter that the piece initiated. It gave him a tangible sensation of belonging to the programming community, of being accepted and appreciated, and inspired him to work on his new projects in Repl.it.

Coding, JavaScript, Machine Learning, Milestones, Murderous Maths, neural networks, Notes on everyday life, Set the beautiful mind free, Simon teaching, Simon's Own Code, Simon's sketch book

What Simon did instead of taking the SAT on Saturday

On Saturday morning, Simon didn’t go to the SAT examination location, although we had registered him to try taking the SAT (with great difficulties, because he is so young). In the course of the past few weeks, after trying a couple of practice SAT tests on the Khan Academy website, we discovered that the test doesn’t reveal the depth of Simon’s mathematical talent (the tasks don’t touch the fields he is mostly busy with, like trigonometry, topology or calculus, and instead require that he solve much more primitive problems in a strictly timed fashion, while Simon prefers taking time to explore more complex projects). This is what happens with most standardized tests: Simon does have the knowledge but not the speed (because he hasn’t been training these narrow skills for hours on end as his older peers at school have). Nor does he have the desire to play the game (get that grade, guess the answers he doesn’t know); he doesn’t see the point. What did he do instead on his Saturday? He had a good night’s sleep (instead of having to show up at the remote SAT location at 8 a.m.) and then he…

built an A.I. applying a genetic algorithm: a neural network controlling cars moving on a highway! The cars use rays to avoid the walls of the highway. The whole thing implements neuroevolution. What better illustration does one need to juxtapose true achievement with what today’s school system often wants us to view as achievement – getting a high grade on a test? The former is a beautiful page from Simon’s portfolio, showing what he really, genuinely can do, a real life skill, something he is passionately motivated to explore deeper, without seeking a reward, his altruist contribution to the world, if you will. The latter says no more than how well one has been trained to apply certain strategies, in a competitive setting.
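The neuroevolution loop behind such a project can be sketched in a few lines (a toy version under our own assumptions, not Simon’s actual code): each car’s “brain” is a weight vector, a fitness function scores how well it drives, and the best performers are mutated into the next generation.

```python
import random

# Toy neuroevolution loop (our own sketch, not Simon's code): evolve a
# weight vector toward a target "driving policy" via fitness + mutation.
random.seed(1)
TARGET = [0.5, -0.2, 0.8]          # stand-in for an ideal set of weights

def fitness(weights):
    # Higher is better: negative squared distance to the target policy.
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def mutate(weights, rate=0.1):
    # Add small Gaussian noise to every weight.
    return [w + random.gauss(0, rate) for w in weights]

population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]                      # keep the best "cars"
    population = elite + [mutate(random.choice(elite)) for _ in range(40)]

best = max(population, key=fitness)
print("best fitness:", round(fitness(best), 4))
```

In the real project the fitness would come from how far a car drives before hitting a wall, and the weight vector would parameterize the neural network reading the ray sensors.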

Simon’s code is online: https://repl.it/@simontiger/Raytracing-AI

Simon has put this version on GitHub: https://github.com/simon-tiger/Raycasting-A.I.

He has also created an improved version with an improved fitness function. “In the improved version, there’s a feature that only shows the best car (and you can toggle that feature on and off). And most importantly, I am now casting relative to where it’s going (so the linearity is gone, but it jiggles a lot, so I might linear interpolate it)”, Simon explains. You can play with the improved version here: https://repl.it/@simontiger/Raycasting-AI-Improved

Finally, Simon is currently working on a version that combines all three versions: the original, the improved and the version with relative directions (work in progress): https://repl.it/@simontiger/Raytracing-AI-Full

“I am eventually going to make a version of this using TensorFlow.js because with the toy library I’m using now it’s surprisingly linear. I’m going to put more hidden layers in the network”.

The raytracing part of the code largely comes from Daniel Shiffman.

Simon’s two other videos about this project, which was fully completed in one day:

Part 1
Part 2


Coding, Logic, Machine Learning, Milestones, Murderous Maths, Notes on everyday life, Simon's Own Code

Domain Coloring with Complex Functions in Wolfram Mathematica

Simon has been completely carried away by Wolfram Mathematica. He keeps starting new projects, just to try something out. After working on his Knot Theory book for days, and making beautiful illustrations in Wolfram, he switched over to domain coloring. The images below are some impressions of his experimenting with the color function. He hasn’t applied the complex function yet.

Another new project he started has been Poisson disc sampling.

“Wolfram is the most advanced language! It has the most built-in stuff in it! At Wolfram, they are working so hard that the knowledge base is changing every second!” Simon screams out as he pauses the Elementary Introduction to the Wolfram Language book (he was reading it at first and now binge-watches it as a series of video tutorials). Simon has been especially blown away by the free-form linguistic input: “From plain English it somehow computes the results and maybe even native Mathematica syntax!” Wolfram also “has an entire section which is machine learning!”

Simon started a free trial about a week ago, but I guess we’re buying a subscription.

Starting out
Only brightness left to do
Color function done
Coding, Coding Everywhere, Group, Machine Learning, Milestones, Murderous Maths, Notes on everyday life, Physics

Fluid Dynamics: Laughing and Crying

Simon was watching Daniel Shiffman’s live coding lesson on Wednesday, and when fluid dynamics and the Navier–Stokes equations came up (describing the motion of fluid in substances and used to model currents and flow), Simon remarked in the live chat that the Navier–Stokes equations are actually one of the seven most important unsolved math problems and one can get a million-dollar prize for solving them, awarded by the Clay Mathematics Institute.

(I looked this up on Wikipedia and saw that it has not yet been proven whether solutions always exist in 3D and, if they do exist, whether they are “smooth” or infinitely differentiable at all points in the domain).
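For reference, the incompressible Navier–Stokes equations in question (momentum and mass conservation):

```latex
\frac{\partial \mathbf{u}}{\partial t}
+ (\mathbf{u}\cdot\nabla)\,\mathbf{u}
\;=\; -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad
\nabla\cdot\mathbf{u} \;=\; 0,
```

where u is the velocity field, p the pressure, ρ the density, ν the kinematic viscosity and f the body forces.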

We had read an in-depth history of the Navier–Stokes equations in Ian Stewart’s book several weeks ago, but I must confess I didn’t remember much of what we’d read anymore. “Is it that chapter where Stewart describes how Fourier’s paper got rejected by the French Academy of Sciences because his proof wasn’t rigorous enough?” I asked Simon. – “No, Mom, don’t you remember? That was Chapter 9, about the Fourier Transform! And the Navier–Stokes equations were Chapter 10!” – “Oh, and the Fourier Transform was also the one where there was a lot about the violin string, right?” – “No!” Simon really laughs at me by now. “That was in Chapter 8, about the wave equation! You keep being one chapter behind in everything you say!” Simon honestly finds it hilarious how I can’t seem to retain the information about all of these equations after reading it once. I love his laugh, even when he’s laughing at me.

Today though, he was weeping inconsolably and there was nothing I could do. Daniel Shiffman had to cancel the live session about CFD, computational fluid dynamics. Simon had been waiting impatiently for this stream. My guess: because it’s his favourite teacher talking about something interesting from a purely mathematical view, a cocktail of all the things he enjoys most. And because he never seems to be able to postpone the joy of learning. He explained to me once that if he has this drive inside of him to conduct a certain experiment or watch a certain tutorial now, he simply can’t wait, because later he doesn’t seem to get the same kick out of it anymore.

I’m baking Simon’s favourite apple pie to pep him up. Here are a couple more screenshots of him taking part in the Wednesday lesson:

Coding, Community Projects, Contributing, JavaScript, live stream, Machine Learning, Milestones, Murderous Maths, neural networks, Notes on everyday life, Set the beautiful mind free, Simon teaching, Trips

Simon took part in a Coding Train livestream in Paris!

Simon and Daniel Shiffman after the livestream

The video below is part of Daniel Shiffman’s livestream hosted by GROW Le Tank in Paris on 6 January 2019 about KNN, machine learning, transfer learning and image recognition. Daniel kindly allowed Simon to take the stage for a few minutes to make a point about image compression (the algorithm that Daniel used was sort of a compression algorithm):

Here is a different recording (in two parts) of the same moment from a different angle: