Coding, Logic, Math and Computer Science Everywhere, Milestones, Simon teaching, Simon's sketch book, Together with sis

Solving Logical Puzzles

The end of 2019 was packed with logic. Simon even started programming an AI that would solve logical puzzles; here is the beginning of this unfinished project (he later switched to programming a chess AI instead). In the two videos below, he explains the puzzle he used as an example and outlines his plan for building the AI (the puzzles come from Brilliant.org):

And here are some impressions of Simon working on the puzzles and showing them to his sis:

Computer Science, Good Reads, Logic, Machine Learning, Notes on everyday life, Philosophy, Set the beautiful mind free

A Universal Formula for Intelligence

Is there an equation for intelligence? Yes. It’s F = T ∇ S_τ.

Prior to a World Science Scholars live session on November 25, Simon had been asked to watch this TED talk given by a prominent computer scientist and entrepreneur, Alex Wissner-Gross, on intelligent behavior and how it arises. After watching the talk, Simon and I discovered that the main idea presented by Wissner-Gross can serve as a beautiful scientific backbone to self-directed learning and explain why standardized and coercive instruction contradicts the very essence of intelligence and learning.

Alex Wissner-Gross:

What you’re seeing is probably the closest equivalent to an E = mc² for intelligence that I’ve seen. So what you’re seeing here is a statement of correspondence that intelligence is a force, F, that acts so as to maximize future freedom of action. It acts to maximize future freedom of action, or keep options open, with some strength T, with the diversity of possible accessible futures, S, up to some future time horizon, tau. In short, intelligence doesn’t like to get trapped. Intelligence tries to maximize future freedom of action and keep options open.

And so, given this one equation, it’s natural to ask, so what can you do with this? Does it predict artificial intelligence?
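
For reference, the terms from the quote can be collected into the formula from the slide (our own restatement of the notation, not an equation taken from the session materials):

```latex
% F   : intelligence, treated as a force
% T   : the strength with which that force acts
% S   : the diversity (entropy) of accessible futures
% tau : the future time horizon up to which those futures are counted
F = T \, \nabla S_{\tau}
```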

Recent research in cosmology has suggested that universes that produce more disorder, or “entropy,” over their lifetimes should tend to have more favorable conditions for the existence of intelligent beings such as ourselves. But what if that tentative cosmological connection between entropy and intelligence hints at a deeper relationship? What if intelligent behavior doesn’t just correlate with the production of long-term entropy, but actually emerges directly from it?

As an example, Wissner-Gross went on to demonstrate a software engine called Entropica, designed to maximize the production of long-term entropy of any system it finds itself in. Entropica was able to pass multiple animal intelligence tests, play human games, and even earn money trading stocks, all without being instructed to do so. Note that Entropica wasn’t given learning goals: it simply decided to learn to balance a ball on a pole (just like a child decides to stand upright), decided to use “tools”, decided to apply cooperative ability in a model experiment (just like animals sometimes pull two cords simultaneously to release food), taught itself to play games, to orchestrate a network (keeping up connections in a network), and to solve logistical problems with the use of a map. Finally, Entropica spontaneously discovered and executed a buy-low, sell-high strategy on a simulated range-traded stock, successfully growing assets. It learned risk management.

The urge to take control of all possible futures is a more fundamental principle than intelligence itself; general intelligence may in fact emerge directly from this sort of control-grabbing, rather than vice versa.

In other words, if you give the agent control, it becomes more intelligent.

“How does it seek goals? How does the ability to seek goals follow from this sort of framework? And the answer is, the ability to seek goals will follow directly from this in the following sense: just like you would travel through a tunnel, a bottleneck in your future path space, in order to achieve many other diverse objectives later on, or just like you would invest in a financial security, reducing your short-term liquidity in order to increase your wealth over the long term, goal seeking emerges directly from a long-term drive to increase future freedom of action”.

The main concept we can pass on to the new generation to help them build artificial intelligences or to help them understand human intelligence, according to Alex Wissner-Gross, is the following: “Intelligence should be viewed as a physical process that tries to maximize future freedom of action and avoid constraints in its own future. Intelligence is a physical process that resists future confinement”.

Simon’s reaction to Alex Wissner-Gross’s TED Talk was: “But this means school only makes you less intelligent!” (in the sense that school reduces your chances of seeking goals yourself and introduces constraints on your future development).

Simon asking his question during the live session with neuroscientist Suzana Herculano-Houzel

During the actual live session, neuroscientist Suzana Herculano-Houzel, famous for inventing a method to count the exact number of neurons in the brain and for her comparative studies of various species, defined intelligence as behavioral and cognitive flexibility: flexibility as the choice to do something other than what would happen inevitably, no longer being limited to purely responding to stimuli; flexibility in decisions that allow you to stay flexible. Generally speaking, the more flexibility, the more intelligence.

Animals with a cerebral cortex gained a past and a future, Professor Herculano-Houzel explained. Learning is one of the results of flexible cognition; here learning is understood as solving problems. Hence making predictions and decisions is all about maximizing future flexibility, which in turn allows for more intelligence and learning. This is a very important guideline for educational administrations, governments and policymakers: allow for flexibility. There is a problem with defining intelligence as producing desired outcomes, Herculano-Houzel pointed out while answering one of the questions from students.

Replying to Simon’s question about whether we can measure intelligence in any way and what the future of intelligence tests could be like, Professor Herculano-Houzel said she really liked Simon’s definition of IQ testing as “glorified dimensionality reduction”. Simon doesn’t believe that anything multidimensional fits on a bell curve and can possibly have a normal distribution.

Professor Herculano-Houzel’s answer:

Reducing a world of capacities and abilities into one number, you can ask “What does that number mean?” I think you’d find it interesting to read about the history of the IQ test, how it was developed and what for, and how it got coopted, distorted into something else entirely. It’s a whole other story. To answer your question directly, can we measure intelligence? First of all, do you have a definition for intelligence? Which is why I’m interested in pursuing this new definition of intelligence as flexibility. If that is an operational definition, then yes, we can measure flexibility. How do we measure flexibility?

The Professor went on to show several videos of researchers giving lemurs and dogs pieces of food partially covered by a plastic cylinder. The animals had to figure out on their own how to get to the treat.

You see, the animal is not very flexible, trying again and again, acting exactly as before. And the dog that has figured it out already made its behavior flexible. It can be measured how long it takes for an animal to figure out that it has to be flexible, which you could call problem solving. Yes, I think there are ways to measure that and it all begins with a clear definition of what you want to measure.

As a side note, Professor Herculano-Houzel also mentioned in her course and in her live session that she had discovered that a higher number of neurons in different species is correlated with longevity. Gaining flexibility and a longer life: it’s like having your cake and eating it too! We are only starting to explore how to define intelligence, and it’s clear that the biophysical capability (how many neurons one has) is only a starting point. It is through our experiences of the world that we gain our ability and flexibility; that is what learning is all about, the Professor concluded.

Coding, Computer Science, Experiments, JavaScript, Logic, Murderous Maths, Simon teaching, Simon's sketch book

Nash Equilibrium

Simon explaining the Nash Equilibrium with a little game in p5.js. Play it yourself at: https://editor.p5js.org/simontiger/sketches/lfP4dKGCs
Inspired by the TED-Ed video “Why do competitors open their stores next to one another?” by Jac de Haan.
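
For readers who want to poke at the idea without opening the p5.js sketch, here is a plain-JavaScript toy (our own sketch, not Simon’s code) of Hotelling’s “two vendors on a beach” game behind that TED-Ed video: each vendor repeatedly moves to the spot that captures the most customers given where the rival stands, and both end up side by side in the middle, which is exactly the Nash equilibrium.

```javascript
// Customers are spread evenly over spots 0..N-1 and always walk to the nearest vendor.
const N = 101; // number of spots on the beach

// How many customers a vendor at position `a` gets when the rival sits at `b`
// (ties are split evenly).
function payoff(a, b) {
  let customers = 0;
  for (let x = 0; x < N; x++) {
    const da = Math.abs(x - a), db = Math.abs(x - b);
    if (da < db) customers += 1;
    else if (da === db) customers += 0.5;
  }
  return customers;
}

// Best response: the position that wins the most customers against the rival's spot.
function bestResponse(b) {
  let best = 0, bestPay = -1;
  for (let a = 0; a < N; a++) {
    const pay = payoff(a, b);
    if (pay > bestPay) { bestPay = pay; best = a; }
  }
  return best;
}

// Start the vendors at opposite ends of the beach and let them take turns relocating.
let a = 0, b = N - 1;
for (let round = 0; round < N; round++) {
  a = bestResponse(b);
  b = bestResponse(a);
}
console.log(a, b); // 50 50: both in the middle, right next to each other (the Nash equilibrium)
```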

Computer Science, Crafty, Logic, Simon's sketch book

Simon crafting a search engine with sticky notes

Simon working on a simplified version of a search engine, including just a few documents, and performing calculations to determine how many searches one should do before creating an index of all the documents becomes the more efficient option (something he has picked up in Brilliant.org’s Computer Science course).
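
The trade-off Simon was calculating can be sketched in a few lines of JavaScript (a toy model with made-up numbers, not the Brilliant.org exercise itself): scanning every document word by word costs roughly the same work on every search, while building an inverted index costs that work once and then makes each lookup almost free, so the index pays for itself after just a couple of searches.

```javascript
// Toy cost model (assumed numbers): D documents ("sticky notes"), W words each.
const D = 8, W = 20;                 // tiny corpus, like a handful of sticky notes
const scanCostPerSearch = D * W;     // linear scan: look at every word, every time
const indexBuildCost = D * W;        // building the index: look at every word once
const indexLookupCost = 1;           // afterwards a lookup is (roughly) one step

// After how many searches does building the index become the cheaper option?
for (let searches = 1; searches <= 4; searches++) {
  const scanning = searches * scanCostPerSearch;
  const indexing = indexBuildCost + searches * indexLookupCost;
  console.log(searches, scanning, indexing, indexing < scanning ? "index wins" : "scan wins");
}

// A minimal inverted index: word -> list of document ids.
function buildIndex(docs) {
  const index = new Map();
  docs.forEach((text, id) => {
    for (const word of text.toLowerCase().split(/\s+/)) {
      if (!index.has(word)) index.set(word, []);
      index.get(word).push(id);
    }
  });
  return index;
}

const docs = ["the cat sat", "the dog ran", "a cat ran home"];
console.log(buildIndex(docs).get("cat")); // [0, 2]
```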

screenshot from Brilliant.org’s Computer Science course
Computer Science, Contributing, Group, Logic, Math and Computer Science Everywhere, Milestones, Murderous Maths, Notes on everyday life

Brilliant Discussions

This is an example of the learning style that Simon enjoys most. He really likes doing the daily challenges on Brilliant.org and sometimes later discusses them with other participants or even writes wikis!

Simon writing an explanation on Brilliant.org’s discussion page about a Computer Science Fundamentals daily challenge. Link to the full discussion: https://brilliant.org/daily-problems/what-variable-1/
The problem and Simon’s answer
Simon’s contribution to the discussion
Crafty, Geometry Joys, Good Reads, Logic, Murderous Maths, Simon teaching, Simon's sketch book

Attractiveness vs. Personality

Debunking the stereotype that all attractive guys/girls are mean, something Simon has learned from MajorPrep and from the book How Not to Be Wrong by Jordan Ellenberg. The slope in dark blue pen shows our scope of attention, a pretty narrow part of the actually diverse field of choices.
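
A quick way to see why the narrow “scope of attention” creates the illusion: the little simulation below (our own sketch with made-up uniform scores, not an example from the book) draws random, completely independent attractiveness and niceness scores, then measures the correlation only among the people whose combined score clears a high bar.

```javascript
// Pearson correlation of two equal-length arrays.
function correlation(xs, ys) {
  const n = xs.length;
  const mx = xs.reduce((s, v) => s + v, 0) / n;
  const my = ys.reduce((s, v) => s + v, 0) / n;
  let sxy = 0, sxx = 0, syy = 0;
  for (let i = 0; i < n; i++) {
    sxy += (xs[i] - mx) * (ys[i] - my);
    sxx += (xs[i] - mx) ** 2;
    syy += (ys[i] - my) ** 2;
  }
  return sxy / Math.sqrt(sxx * syy);
}

// Independent scores in the whole population.
const attract = [], nice = [];
for (let i = 0; i < 100000; i++) {
  attract.push(Math.random());
  nice.push(Math.random());
}
console.log("whole population:", correlation(attract, nice).toFixed(2)); // ~0.00

// "Scope of attention": only people whose combined score clears a high bar.
const a2 = [], n2 = [];
for (let i = 0; i < attract.length; i++) {
  if (attract[i] + nice[i] > 1.4) { a2.push(attract[i]); n2.push(nice[i]); }
}
console.log("within our scope of attention:", correlation(a2, n2).toFixed(2)); // clearly negative
```
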
Computer Science, Crafty, Logic, Math and Computer Science Everywhere, Murderous Maths, Simon teaching, Simon's sketch book, Together with sis

The Diffie-Hellman key exchange algorithm

This is Simon explaining the Diffie-Hellman key exchange (also called the Diffie-Hellman protocol). He first explained the algorithm by mixing watercolours (each color representing a key/number) and then mathematically. The algorithm allows two parties (marked “you” and “your friend” in Simon’s diagram) with no prior knowledge of each other to establish a shared secret key over an insecure channel (a public area with an “eavesdropper”). This key can then be used to encrypt subsequent communications using a symmetric-key cipher. Simon calls it “a neat algorithm”. A small numeric sketch of the exchange follows after the photos below. Later the same night, he also gave me a lecture on a similar but more complicated algorithm called RSA. Simon first learned about this on Computerphile and then also saw a video about the topic on MajorPrep. And here is another MajorPrep video on modular arithmetic.

originally there are two private keys (a and b) and one public key g
Neva helping Simon to mix the colors representing each key
Mixing g and b to create the public key for b
Mixing the public and the private keys to create a unique shared key
Done! Both a and b have a unique shared key (purplish)
Simon now expressed the same in mathematical formulas
Simon explained that the ≡ symbol (three stripes) means congruence in its modular arithmetic sense: a and b are congruent modulo n if they leave the same remainder when divided by n, i.e. their difference is a multiple of n
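
Here is the numeric version of the colour-mixing above, as a small JavaScript sketch (the numbers are toy values of our own choosing; real uses of the protocol pick enormous primes):

```javascript
// A minimal numeric sketch of the Diffie-Hellman exchange Simon painted above.
const p = 23n; // public prime modulus
const g = 5n;  // public base ("the shared starting colour")

// Fast modular exponentiation, so the same code also works with big numbers.
function powMod(base, exp, mod) {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

const a = 6n;  // "your" private key, never sent anywhere
const b = 15n; // "your friend's" private key, never sent anywhere

const A = powMod(g, a, p); // public value you send: g^a mod p
const B = powMod(g, b, p); // public value your friend sends: g^b mod p

// Each side mixes the other's public value with their own private key:
const yourShared   = powMod(B, a, p); // (g^b)^a mod p
const friendShared = powMod(A, b, p); // (g^a)^b mod p

console.log(A, B);                     // what an eavesdropper sees: 8n 19n
console.log(yourShared, friendShared); // 2n 2n: the same secret on both sides
```
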
Computer Science, Electronics, Geometry Joys, Logic, Math and Computer Science Everywhere, Murderous Maths, Notes on everyday life, Simon's sketch book, Trips

Doing math and computer science everywhere

One more blog post with impressions from our vacation on the Côte d’Azur in France. Don’t even think of bringing Simon to the beach or the swimming pool without a sketchbook to do some math or computer science!

This is something Simon experimented with extensively last time we were in France: the block-stacking (or book-stacking) problem. See the short sketch of the math after these photos.
Simon wrote this from memory to teach another boy at the pool about ASCII binary. The boy actually seemed to find it interesting. A couple days later two older boys approached him at the local beach and told him that they knew who he was, that he was Simon who only talked about math. Then the boys ran away and Simon ran after them saying “Sorry!” We have explained to him that he doesn’t have to say sorry for loving math and for being the way he is.
Drinking a cocktail at the beach always comes with a little lecture. This time, the truth tables.
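
The block-stacking result Simon was exploring in the first photo can be checked in a couple of lines of JavaScript (our own sketch of the standard textbook formula, not Simon’s notes): with n identical blocks of length 1, the furthest the stack can overhang the table edge is half of the n-th harmonic number, 1/2 · (1 + 1/2 + ... + 1/n).

```javascript
// Maximum overhang (in block lengths) achievable with n identical blocks.
function maxOverhang(n) {
  let harmonic = 0;
  for (let k = 1; k <= n; k++) harmonic += 1 / k;
  return harmonic / 2;
}

console.log(maxOverhang(4));  // ≈ 1.04: four books already clear the table edge completely
console.log(maxOverhang(31)); // ≈ 2.01: about 31 books are needed for a two-book overhang
```
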
history, Logic, Milestones, Murderous Maths, Notes on everyday life, Philosophy

Simon on: Will we ever live in a pure mathematical world?

In reaction to Yuval Noah Harari’s book Homo Deus (the part about humans evolving to break out of the organic realm and possibly breaking out of planet Earth):

When you cross the street there’s always a risk that an accident will happen that has a non-zero probability. If you live infinitely long, anything that has a non-zero probability can happen infinitely many times in your life. For example, if the event we are talking about is an accident, the first time it happens in your life, you’re already dead. So when you cross the street and want to live infinitely long there’s a risk that an accident will happen and you die. So we come to the conclusion that if you want to live infinitely long it’s not worth crossing the street. But there’s always a risk that you die, so if you live infinitely long, it’s not actually worth living. So we’ve got a little bit of a problem here. Unless you come to the more extreme idea of detaching yourself from the physical world altogether. And I’m not talking about the sort of thing where you don’t have a body, but somehow still exist in the physical world. I mean literally that you live in a pure mathematical world. Because in mathematics, you can have things that have zero probability of happening. You can have something definitely happening and you can also have something that is definitely not happening.

However, there’s another thing. How does mathematics actually work? There are these things called axioms and it’s sort of built up from that. What if we even do away with those axioms? Then we can actually do anything in that mathematical world. And what I mean by anything is really anything that you can get from any set of axioms that you can come up with. There’s a little bit of a problem with that: you can come to contradictions, it’s a little bit risky. We are really talking about the ultimate multiverse, we’re talking about quite controversial stuff here. The only way anyone can come up with this is by pushing to the extremes.
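
The probabilistic step in Simon’s street-crossing argument can be made concrete (our own illustration, with an invented per-crossing risk): if each crossing independently goes wrong with some fixed probability p > 0, the chance of surviving n crossings is (1 − p)^n, which heads to zero as n grows without bound.

```javascript
// Survival probability after n independent crossings, each with risk p.
const p = 1e-6; // an assumed, tiny per-crossing risk
for (const n of [1e3, 1e6, 1e7, 1e8]) {
  console.log(n, Math.pow(1 - p, n));
}
// 1000 -> ≈0.999, 1e6 -> ≈0.37, 1e7 -> ≈0.000045, 1e8 -> ≈3.7e-44
```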

Computer Science, Crafty, Electricity, Electronics, Engineering, Logic, Milestones, motor skills, Simon teaching

Simon building an 8-bit computer from scratch. Parts 1 & 2.

Parts 1 and 2 in Simon’s new series showing him attempting to build an 8-bit computer from scratch, using the materials from Ben Eater’s Complete 8-bit breadboard computer kit bundle.

Simon is learning this from Ben Eater’s playlist about how to build an 8-bit computer.

In Part 1, Simon builds the clock for the computer
In Part 2, Simon builds the A register (more registers to follow).
these little black chips are an inverter (six inverters in one package) and AND and OR gates (four AND gates and four OR gates per package)
this schematic represents the clock of the future 8-bit computer
Simon and Neva thought the register with its LED lights resembled a birthday cake
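
For readers who have not watched Ben Eater’s series, here is a tiny software analogy (our own sketch, not a model of the real chips on the breadboard) of what the clock and the A register from Parts 1 and 2 do together: on each clock pulse the register copies the value on the bus, but only while its load line is enabled; otherwise the LEDs keep showing whatever it already holds.

```javascript
// A clocked 8-bit register with a "load" control line.
class Register {
  constructor(bits = 8) {
    this.bits = bits;
    this.value = 0;
  }
  // Called once per rising clock edge.
  tick(bus, load) {
    if (load) this.value = bus & ((1 << this.bits) - 1); // keep only 8 bits
  }
  toString() {
    return this.value.toString(2).padStart(this.bits, "0"); // like the row of LEDs
  }
}

const A = new Register();
A.tick(0b00101010, true);  // load 42 from the bus on this clock pulse
A.tick(0b11111111, false); // load line off: the value is held, not overwritten
console.log(A.toString()); // "00101010"
```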