After a whole night working on my writing and not feeling very fresh in the morning, I told Simon about the three ages of life: in the young age, one can party all night long and the next morning feel like one has been sleeping like a rose; in the middle age, one parties all night long and the next morning feels like one has been partying all night long; and in the old age, one sleeps all night long and the next morning feels like one has been partying all night long. He immediately drew these pictures, telling me it’s just like *1-input 1-output logic gates*, but the only one that makes sense is the *OR*.

# Category Archives: Logic

# Brilliant’s Daily Challenges

Simon is doing an increasing load of Brilliant’s daily challenges.

Some more recent challenges:

# Simon’s graph theory thoughts about the overpopulation problem

In a perfect binary tree, every node has two children (except for the bottom nodes, which don’t have any children at all). This means one mind-blowing thing: the bottom row always has more nodes than the entire rest of the tree! Example: if there’s one node at the top of the tree, two nodes in the second row, four nodes in the third row and eight nodes in the bottom row, the bottom row has more nodes (8) than the remaining part of the tree (7). In general, a bottom row of 2^n nodes sits above only 2^n − 1 other nodes. I’ve been thinking about this, and I applied this to the real world:

The average number of children a parent has in the world is 2.23 (I’ve used an arithmetic mean, which is oversimplistic; I should probably have used the harmonic mean). Does this mean that currently, the number of children exceeds the number of parents? The definition of “children” I’m using is people who don’t have children, so the last row of nodes, so to speak. By “parents” I’m counting all generations. If you just want to talk about now, the parents living now, then you have to trim the top rows (the already dead generations). If the average number of children is 2 or more, are there going to be more children in the world than parents?

Well, in this model, I’m ignoring crossover. This means we should count every node in our tree as 2 people. So now, if the average number of children is **4** or more, there are going to be more children than parents. So what I said earlier was wrong. The average number of children *doesn’t* exceed 4, so there aren’t more children than parents. But the number of children today may still exceed the number of parent generations still alive.
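The tree fact behind Simon’s reasoning is easy to verify with a short sketch (mine, not from the original post): in a perfect binary tree, the bottom row alone outnumbers all the rows above it combined.

```python
def row_counts(levels):
    """For a perfect binary tree with the given number of levels,
    return (nodes in the bottom row, nodes in all rows above it)."""
    rows = [2**i for i in range(levels)]  # nodes per row: 1, 2, 4, 8, ...
    return rows[-1], sum(rows[:-1])

bottom, rest = row_counts(4)
print(bottom, rest)  # → 8 7, exactly Simon's example
```

However many levels you add, the bottom row of 2^n nodes always beats the 2^n − 1 nodes of the rest of the tree by exactly one.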

# Mind Your Decisions

For over a month, Simon has been fascinated by Presh Talwalkar’s channel Mind Your Decisions. The channel is full of short videos on famous math problems, logic riddles, proofs and mental math tricks. Simon has also ordered a compilation of Talwalkar’s five most interesting books, including “The Joy of Game Theory: An Introduction to Strategic Thinking”, that we are currently very much enjoying together, and four more, that Simon is reading on his own: “40 Paradoxes in Logic, Probability, and Game Theory”, “The Irrationality Illusion: How To Make Smart Decisions And Overcome Bias”, “The Best Mental Math Tricks”, and “Multiply Numbers By Drawing Lines”.

This one became Simon’s favourite brain teaser. It sounds like it’s filled with irrelevant information, but somewhat counterintuitively, every little bit of information in this puzzle helps! Here is the puzzle: *A mathematician tells a census taker he has 3 children. The product of their ages is 72 and the sum of their ages is the house number. The census taker tries to figure it out but explains he still does not know. The mathematician says, “Of course not. I forgot to tell you my oldest child loves chocolate chip cookies.” Now the census taker figures it out. What are the ages of the children?*
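The puzzle really can be solved by brute force, which shows why every clue matters. Here is a small sketch (my reconstruction, not Simon’s solution):

```python
from collections import Counter
from itertools import combinations_with_replacement

# All age triples (sorted ascending) whose product is 72.
triples = [t for t in combinations_with_replacement(range(1, 73), 3)
           if t[0] * t[1] * t[2] == 72]

# The census taker knows the sum (the house number) yet is still stuck,
# so the sum must be shared by more than one triple.
sums = Counter(sum(t) for t in triples)
ambiguous = [t for t in triples if sums[sum(t)] > 1]

# "My oldest child" implies there is a unique oldest child.
answer = [t for t in ambiguous if t[1] < t[2]]
print(answer)  # → [(3, 3, 8)]
```

Only the sum 14 is shared by two triples, (2, 6, 6) and (3, 3, 8), and the mention of an oldest child rules out the twins-on-top option.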

Simon has also picked up many nifty tricks and beautiful magic squares, both from the book and from the YouTube channel.

Multiplication by drawing lines has been a huge hit; Simon has also taught this method to his sister and a friend in Amsterdam:
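The trick works because counting line intersections along each diagonal is the same as multiplying digits place by place. A small sketch simulating it (mine, not Simon’s):

```python
def multiply_by_lines(a, b):
    """Simulate line multiplication: each digit is a bundle of parallel
    lines, and diagonal k collects the intersections of bundle i with
    bundle j whenever i + j = k."""
    da = [int(d) for d in str(a)]
    db = [int(d) for d in str(b)]
    diagonals = [0] * (len(da) + len(db) - 1)
    for i, x in enumerate(da):
        for j, y in enumerate(db):
            diagonals[i + j] += x * y  # intersections of the two bundles
    # Each diagonal is worth a power of ten; carries happen automatically.
    return sum(d * 10 ** (len(diagonals) - 1 - k)
               for k, d in enumerate(diagonals))

print(multiply_by_lines(12, 34))  # → 408
```

For 12 × 34 the diagonals hold 3, 10 and 8 intersections, i.e. 3 hundreds, 10 tens and 8 units, which is 408.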

# Learning to See. On Machine Learning and learning in general.

December was all about computer science and machine learning. Simon endlessly watched Welch Labs’ fantastic but freakishly challenging series Learning to See and even showed me all 15 episodes, patiently explaining every concept as we went along (like underfitting and overfitting, recall, precision and accuracy, bias and variance). Below is the table of contents he made of the series:

While watching the series, he also calculated the solutions to some of the problems that Welch Labs presented, like the question about the number of possible rules (= grains of sand) for a simple ML problem if memorisation is applied. His answer was that the grains of sand would cover all land on earth:
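As a rough reconstruction of that kind of count (the image size and the sand estimate below are my assumptions, not the numbers from the video): if a memorised “rule” fixes a yes/no answer for every possible 3×3 black-and-white image, the number of rules dwarfs any estimate of the grains of sand on Earth.

```python
# Assumed setup: 3x3 binary images, one yes/no answer memorised per image.
inputs = 2 ** 9           # 512 distinct 3x3 black-and-white images
rules = 2 ** inputs       # one yes/no choice per image → 2^512 rules
sand_estimate = 7.5e18    # a commonly quoted estimate of beach sand grains
print(rules > sand_estimate)  # → True, by well over a hundred orders of magnitude
```

This doubly exponential blow-up is exactly why pure memorisation cannot scale and learning has to make assumptions.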

Simon loved the historical/philosophical part of the course, too: especially the juxtaposition of memorising vs. learning, the importance of learning to make assumptions, the futility of bias-free learning, and the beautiful quotes from Richard Feynman!

I have since found another Feynman quote that fits Simon’s learning style perfectly (and, I believe, is the recipe for anyone’s successful learning, as opposed to teaching to the test): “Study hard what interests you the most in the most undisciplined, irreverent and original manner possible.” We have discussed the possibilities of continuing at the university again. I have also asked Simon how he sees himself applying his knowledge down the road, trying to understand what academic or career goals he may have set for himself, if any. Does he have a picture of himself five years from now? Where does he want to be by then? He got very upset, just like when asked to sum himself up in one sentence for an interview last spring. “Mom, I’m just having fun!”

A beautiful humbling lesson for me.

# Simon continues practicing Digital Computer Electronics

Reading the Digital Computer Electronics eBook (third edition):

# Too Many Twos Solution Proof

Simon has come up with equations to solve Too Many Twos, the puzzle mode of the Add ‘Em Up game:

x is the number of twos I used to clear out just a single two at a time.

y is the number of twos I used to clear out six twos at once.

We have two pieces of information. At the beginning, the twos are arranged in a pattern with 40 twos in it. And the number of twos I can use to clear out the whole grid is 25.

x + 6y = 40

x + y = 25

We thought we had solved it, but no! The reason is that, because of the way the twos are arranged, there were spots where exactly 6 twos neighboured an empty cell, and only one spot where there were more than 6. But our equations say that there must be 3 of those. The way I solved this problem was by considering a third variable:

z is the number of twos that I place without clearing any twos in the grid. So now our two equations look like this:

x + 6y – z = 40

x + y + z = 25

With a little bit of cleverness, though, we know that these are all integers. You can’t have 2.7 twos! That doesn’t exist! This means we can use some number theory to narrow it down. After solving these equations we get: x = 25 – y – z and y = 3 + 2z/5.

We’ve got a fraction. We need to choose z carefully for this to result in an integer! This is only true if z is divisible by 5.

I don’t want to check infinitely many solutions. Luckily, we know one more quite obvious thing: all of our variables must be non-negative. So if z gets too large, x will become negative. How large? Let’s just be lazy and use trial and error. Let’s draw a table. In our table we now only have four solutions that we need to check. The first one, with 0 z’s, clearly doesn’t work.
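The four candidates in Simon’s table can be reproduced with a tiny brute-force sketch (mine, not Simon’s):

```python
# Non-negative integer solutions of Simon's system:
#   x + 6y - z = 40
#   x +  y + z = 25
solutions = []
for z in range(26):
    for y in range(26):
        x = 25 - y - z          # from the second equation
        if x >= 0 and x + 6 * y - z == 40:
            solutions.append((x, y, z))
print(solutions)  # → [(22, 3, 0), (15, 5, 5), (8, 7, 10), (1, 9, 15)]
```

Only values of z divisible by 5 show up, exactly as the divisibility argument predicts, and only four candidates survive the non-negativity constraint.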

# Solving Logical Puzzles

The end of 2019 was packed with logic. Simon even started programming an AI that would solve logical puzzles; here is the beginning of this unfinished project (he switched to programming a chess AI instead). In the two vids below, he explains the puzzle he used as an example and outlines his plan to build the AI (the puzzles come from Brilliant.org):

And here are some impressions of Simon working on the puzzles and showing them to his sis:

# A Universal Formula for Intelligence

Is there an equation for intelligence? Yes. It’s F = T ∇ Sτ.

Prior to a World Science Scholars live session on November 25, Simon had been asked to watch this TED talk given by a prominent computer scientist and entrepreneur, Alex Wissner-Gross, on intelligent behavior and how it arises. Upon watching the talk, Simon and I discovered that the main idea presented by Wissner-Gross can serve as a beautiful scientific backbone to self-directed learning and explain why standardized and coercive instruction contradicts the very essence of intelligence and learning.

Alex Wissner-Gross:

What you’re seeing is probably the closest equivalent to an E = mc² for intelligence that I’ve seen. So what you’re seeing here is a statement of correspondence that intelligence is a force, F, that acts so as to maximize future freedom of action. It acts to maximize future freedom of action, or keep options open, with some strength T, with the diversity of possible accessible futures, S, up to some future time horizon, tau.

In short, intelligence doesn’t like to get trapped. Intelligence tries to maximize future freedom of action and keep options open. And so, given this one equation, it’s natural to ask, what can you do with this? Does it predict artificial intelligence?

Recent research in cosmology has suggested that universes that produce more disorder, or “entropy,” over their lifetimes should tend to have more favorable conditions for the existence of intelligent beings such as ourselves. But what if that tentative cosmological connection between entropy and intelligence hints at a deeper relationship? What if intelligent behavior doesn’t just correlate with the production of long-term entropy, but actually emerges directly from it?

As an example, Wissner-Gross went on to demonstrate a software engine called Entropica, designed to maximize the production of long-term entropy of any system that it finds itself in. Entropica was able to pass multiple animal intelligence tests, play human games, and even earn money trading stocks, all **without being instructed to do so**. Note that Entropica **wasn’t given learning goals**: it simply decided to learn to balance a ball on a pole (just like a child decides to stand upright), decided to use “tools”, decided to apply cooperative ability in a model experiment (just like animals sometimes pull two cords simultaneously to release food), taught itself to play games and network orchestration (keeping up connections in a network), and solved logistical problems with the use of a map. Finally, Entropica spontaneously discovered and executed a buy-low, sell-high strategy on a simulated range-traded stock, successfully growing assets. It learned risk management.

The urge to take control of all possible futures may be a more fundamental principle than intelligence itself: general intelligence may in fact emerge directly from this sort of control-grabbing, rather than vice versa.

In other words, if you give the agent control, it becomes more intelligent.

“How does it seek goals? How does the ability to seek goals follow from this sort of framework? And the answer is, the ability to seek goals will follow directly from this in the following sense: just like you would travel through a tunnel, a bottleneck in your future path space, in order to achieve many other diverse objectives later on, or just like you would invest in a financial security, reducing your short-term liquidity in order to increase your wealth over the long term, goal seeking emerges directly from a long-term drive to increase future freedom of action”.

The main concept we can pass on to the new generation to help them build artificial intelligences or to help them understand human intelligence, according to Alex Wissner-Gross, is the following: **“Intelligence should be viewed as a physical process that tries to maximize future freedom of action and avoid constraints in its own future. Intelligence is a physical process that resists future confinement.”**

Simon’s reaction to Alex Wissner-Gross’s TED Talk was: “But this means school only makes you less intelligent!” (in the sense that school reduces your chances at seeking goals yourself, introduces constraints on your future development).

During the actual live session, neuroscientist Suzana Herculano-Houzel, famous for inventing a method to count the exact number of neurons in the human brain and for her comparative studies of various species, defined intelligence as **behavioral and cognitive flexibility**: flexibility as **a choice to do something other than what would happen inevitably, no longer being limited to purely responding to stimuli**; flexibility in decisions that allow you to stay flexible. Generically speaking, **the more flexibility, the more intelligence.**

Animals with a cerebral cortex gained a past and a future, Professor Herculano-Houzel explained. Learning is one of the results of flexible cognition; here learning is understood as solving problems. **Hence making predictions and decisions is all about maximizing future flexibility, which in turn allows for more intelligence and learning.** This is a very important guideline for educational administrations, governments and policy makers: allowing for flexibility. There is a problem with defining intelligence as producing desired outcomes, Herculano-Houzel pointed out while answering one of the questions from students.

Replying to Simon’s question about whether we can measure intelligence in any way and what the future of intelligence tests could look like, Professor Herculano-Houzel said she really liked Simon’s definition of IQ testing as a “glorified dimensionality reduction”. Simon doesn’t believe anything multidimensional fits on a bell curve and can possibly have a normal distribution.

Professor Herculano-Houzel’s answer:

Reducing a world of capacities and abilities into one number, you can ask “What does that number mean?” I think you’d find it interesting to read about the history of the IQ test, how it was developed and what for, and how it got coopted, distorted into something else entirely. It’s a whole other story. To answer your question directly, can we measure intelligence? First of all, do you have a definition for intelligence? Which is why I’m interested in pursuing this new definition of intelligence as flexibility. If that is an operational definition, then yes, we can measure flexibility. How do we measure flexibility?

The Professor went on to demonstrate several videos of researchers giving lemurs and dogs pieces of food partially covered by a plastic cylinder. The animals had to figure out on their own how to get to the treat.

You see, the animal is not very flexible, trying again and again, acting exactly as before. And the dog that has figured it out already made its behavior flexible. It can be measured how long it takes for an animal to figure out that it has to be flexible, which you could call problem solving. Yes, I think there are ways to measure that and it all begins with a clear definition of what you want to measure.

As a side note, Professor Herculano-Houzel also mentioned in her course and in her live session that she had discovered that a higher number of neurons in different species is correlated with **longevity**. Gaining flexibility and a longer life: it’s like having your cake and eating it too! We are only starting to explore defining intelligence, and it’s clear that the biophysical capability (how many neurons one has) is only a starting point. It is through our experiences of the world that we gain our ability and flexibility; that is what learning is all about, the Professor concluded.

# Nash Equilibrium

Simon explaining the Nash Equilibrium with a little game in p5.js. Play it yourself at: https://editor.p5js.org/simontiger/sketches/lfP4dKGCs

Inspired by the TedEd video *Why do competitors open their stores next to one another?* by Jac de Haan.
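The intuition from that video can be checked with a brute-force sketch (mine, not Simon’s p5.js game): two vendors each pick a spot on a beach of 11 positions, every customer walks to the nearest vendor (ties split evenly), and a Nash equilibrium is a pair of spots where neither vendor can gain by moving alone.

```python
def payoff(a, b, spots=11):
    """Customers served by vendor A when A stands at a and B at b."""
    return sum(1.0 if abs(c - a) < abs(c - b)
               else 0.5 if abs(c - a) == abs(c - b)
               else 0.0
               for c in range(spots))

def is_nash(a, b, spots=11):
    """Neither vendor can do better by moving while the other stays put."""
    best_a = all(payoff(a, b, spots) >= payoff(a2, b, spots)
                 for a2 in range(spots))
    best_b = all(spots - payoff(a, b, spots) >= spots - payoff(a, b2, spots)
                 for b2 in range(spots))
    return best_a and best_b

equilibria = [(a, b) for a in range(11) for b in range(11) if is_nash(a, b)]
print(equilibria)  # → [(5, 5)]: both vendors end up in the middle
```

The only stable outcome is both vendors side by side in the middle of the beach, which is exactly why competitors open their stores next to one another.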