# Simon Builds a Chess AI with Minimax

I’ve been terrible at keeping this blog up to date. One of Simon’s best projects in December 2019 was creating a chess robot, and I haven’t even shared it here.

We were joking that this is Simon’s baby and her name is Chessy. Simon also made an improved version with a drop-down menu that lets you choose a difficulty level of 1 to 5 steps ahead (warning: levels 4 and 5 may run quite slowly): https://chess-ai-user-friendly--simontiger.repl.co/

Simon’s original 2-steps-ahead game: https://chess-ai--simontiger.repl.co/
Code: https://repl.it/@simontiger/Chess-AI

While researching how to apply the minimax algorithm, Simon relied on Sebastian Lague’s Algorithms Explained – minimax and alpha-beta pruning; Keith Galli’s How does a Board Game AI Work? (Connect 4, Othello, Chess, Checkers) – Minimax Algorithm Explained; a Medium article, Programming a Chess AI: A step-by-step guide to building a simple chess AI by Lauri Hartikka; of course, The Coding Train’s challenge Tic Tac Toe AI with Minimax; and What is the Minimax Algorithm? – Artificial Intelligence by Gaurav Sen.
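For readers new to the algorithm, here is a minimal sketch of minimax with alpha-beta pruning, the technique all of the tutorials above cover. The `game` interface (`isOver`, `evaluate`, `moves`, `apply`) is hypothetical, chosen for illustration; it is not taken from Simon’s actual chess code.

```javascript
// Minimal minimax with alpha-beta pruning over an abstract game.
// The `game` object's method names are illustrative, not Simon's API.
function minimax(game, board, depth, alpha, beta, maximizing) {
  if (depth === 0 || game.isOver(board)) return game.evaluate(board);
  let best = maximizing ? -Infinity : Infinity;
  for (const move of game.moves(board)) {
    const score = minimax(game, game.apply(board, move),
                          depth - 1, alpha, beta, !maximizing);
    if (maximizing) {
      best = Math.max(best, score);
      alpha = Math.max(alpha, best);
    } else {
      best = Math.min(best, score);
      beta = Math.min(beta, best);
    }
    if (beta <= alpha) break; // prune: the opponent will avoid this branch
  }
  return best;
}

// Tiny 2-ply "game" whose board is just a tree of leaf scores:
const treeGame = {
  isOver: b => typeof b === "number",
  evaluate: b => b,
  moves: b => b.map((_, i) => i),
  apply: (b, i) => b[i],
};
// The maximizer picks the branch whose worst case is best:
// minimax(treeGame, [[3, 5], [2, 9]], 2, -Infinity, Infinity, true) → 3
```

The same recursion powers Chessy: the depth parameter is exactly the “steps ahead” setting in the drop-down menu, which is why levels 4 and 5 run so slowly.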

Simon contributed his chess robot to the MINIMAX coding challenge page on the Coding Train website:

And naturally we’ve had a lot of fun simply playing with Chessy as a family:

# Crack Simulation in p5.js

Link to the interactive project and the code: https://editor.p5js.org/simontiger/sketches/n6-WZhMC3

Simon built a simple cellular automaton (rule 22) model for fracture. He had read about this model a couple of nights earlier in Stephen Wolfram’s “A New Kind of Science” and recreated it from memory.

Stephen Wolfram: “Even though no randomness is inserted from outside, the paths of the cracks that emerge from this model appear to a large extent random. There is some evidence from physical experiments that dislocations around cracks can form patterns that look similar to the grey and white backgrounds above” (p.375).
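For reference, the elementary automaton at the heart of the model fits in a few lines of JavaScript. This is only rule 22 itself (a cell turns on exactly when one of its three neighbors was on), not a reconstruction of Simon’s full fracture sketch.

```javascript
// One generation of elementary cellular automaton rule 22.
// A cell turns on iff exactly one of (left, self, right) was on,
// which is what produces the branching, crack-like patterns.
function rule22Step(cells) {
  return cells.map((_, i) => {
    const left = cells[i - 1] ?? 0;  // cells outside the row count as 0
    const right = cells[i + 1] ?? 0;
    const pattern = (left << 2) | (cells[i] << 1) | right;
    return (22 >> pattern) & 1;      // read bit `pattern` of 22 (binary 00010110)
  });
}

// Start from a single on cell and print a few generations.
let row = Array(11).fill(0);
row[5] = 1;
for (let g = 0; g < 4; g++) {
  console.log(row.join("").replace(/0/g, ".").replace(/1/g, "#"));
  row = rule22Step(row);
}
```

Run from a single seed, rule 22 grows the familiar nested, Sierpinski-like pattern; the crack model in the book runs it as a background through which the crack path wanders.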

# A Universal Formula for Intelligence

Is there an equation for intelligence? Yes. It’s F = T ∇ Sτ.

Prior to a World Science Scholars live session on November 25, Simon had been asked to watch this TED talk by Alex Wissner-Gross, a prominent computer scientist and entrepreneur, on intelligent behavior and how it arises. Watching the talk, Simon and I discovered that its main idea can serve as a beautiful scientific backbone for self-directed learning and can explain why standardized, coercive instruction contradicts the very essence of intelligence and learning.

Alex Wissner-Gross:

“What you’re seeing is probably the closest equivalent to an E = mc² for intelligence that I’ve seen. So what you’re seeing here is a statement of correspondence that intelligence is a force, F, that acts so as to maximize future freedom of action. It acts to maximize future freedom of action, or keep options open, with some strength T, with the diversity of possible accessible futures, S, up to some future time horizon, tau. In short, intelligence doesn’t like to get trapped. Intelligence tries to maximize future freedom of action and keep options open. And so, given this one equation, it’s natural to ask, so what can you do with this? Does it predict artificial intelligence?”
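Written out in standard notation, with each quantity as the talk defines it:

```latex
F = T \, \nabla S_{\tau}
```

where \(F\) is intelligence acting as a force to maximize future freedom of action, \(T\) is the strength with which it acts, \(S\) is the diversity of possible accessible futures, and \(\tau\) is the future time horizon up to which those futures are counted.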

Recent research in cosmology has suggested that universes that produce more disorder, or “entropy,” over their lifetimes should tend to have more favorable conditions for the existence of intelligent beings such as ourselves. But what if that tentative cosmological connection between entropy and intelligence hints at a deeper relationship? What if intelligent behavior doesn’t just correlate with the production of long-term entropy, but actually emerges directly from it?

As an example, Wissner-Gross went on to demonstrate a software engine called Entropica, designed to maximize the production of long-term entropy of any system it finds itself in. Entropica was able to pass multiple animal intelligence tests, play human games, and even earn money trading stocks, all without being instructed to do so. Note that Entropica wasn’t given learning goals: it simply decided to learn to balance a ball on a pole (just like a child deciding to stand upright), decided to use “tools”, cooperated in a model experiment (just as animals sometimes pull two cords simultaneously to release food), taught itself to play games, kept up the connections in a network (network orchestration), and solved logistical problems using a map. Finally, Entropica spontaneously discovered and executed a buy-low, sell-high strategy on a simulated range-traded stock, successfully growing assets. It learned risk management.

The urge to take control of all possible futures, Wissner-Gross argues, is a more fundamental principle than intelligence itself: general intelligence may in fact emerge directly from this sort of control-grabbing, rather than vice versa.

In other words, if you give the agent control, it becomes more intelligent.

“How does it seek goals? How does the ability to seek goals follow from this sort of framework? And the answer is, the ability to seek goals will follow directly from this in the following sense: just like you would travel through a tunnel, a bottleneck in your future path space, in order to achieve many other diverse objectives later on, or just like you would invest in a financial security, reducing your short-term liquidity in order to increase your wealth over the long term, goal seeking emerges directly from a long-term drive to increase future freedom of action”.

The main concept we can pass on to the new generation, to help them build artificial intelligences or to understand human intelligence, is, according to Alex Wissner-Gross, the following: “Intelligence should be viewed as a physical process that tries to maximize future freedom of action and avoid constraints in its own future. Intelligence is a physical process that resists future confinement”.

Simon’s reaction to Alex Wissner-Gross’s TED Talk was: “But this means school only makes you less intelligent!” (in the sense that school reduces your chances to seek goals yourself and introduces constraints on your future development).

During the actual live session, neuroscientist Suzana Herculano-Houzel, famous for inventing a method to count the exact number of neurons in the human brain and for her comparative studies of various species, defined intelligence as behavioral and cognitive flexibility: flexibility as a choice to do something other than what would happen inevitably, no longer being limited to purely responding to stimuli; flexibility in decisions that allow you to stay flexible. Generally speaking, the more flexibility, the more intelligence.

Animals with a cerebral cortex gained a past and a future, Professor Herculano-Houzel explained. Learning is one of the results of flexible cognition; here learning is understood as solving problems. Hence making predictions and decisions is all about maximizing future flexibility, which in turn allows for more intelligence and learning. This is a very important guideline for educational administrations, governments and policy makers: allow for flexibility. There is a problem with defining intelligence as producing desired outcomes, Herculano-Houzel pointed out while answering one of the questions from students.

Replying to Simon’s question about whether we can measure intelligence at all and what the future of intelligence tests could look like, Professor Herculano-Houzel said she really liked Simon’s definition of IQ testing as “glorified dimensionality reduction”. Simon doesn’t believe anything multidimensional fits on a bell curve and can possibly have a normal distribution.

“Reducing a world of capacities and abilities into one number, you can ask: what does that number mean? I think you’d find it interesting to read about the history of the IQ test, how it was developed and what for, and how it got coopted and distorted into something else entirely. It’s a whole other story. To answer your question directly, can we measure intelligence? First of all, do you have a definition of intelligence? Which is why I’m interested in pursuing this new definition of intelligence as flexibility. If that is an operational definition, then yes, we can measure flexibility. How do we measure flexibility?”

The professor went on to show several videos of researchers giving lemurs and dogs pieces of food partially covered by a plastic cylinder. The animals had to figure out on their own how to get to the treat.

“You see, this animal is not very flexible: it tries again and again, acting exactly as before. And the dog that has figured it out has already made its behavior flexible. You can measure how long it takes an animal to figure out that it has to be flexible, which you could call problem solving. Yes, I think there are ways to measure that, and it all begins with a clear definition of what you want to measure.”

As a side note, Professor Herculano-Houzel also mentioned in her course and in her live session that she had discovered that a higher number of neurons in different species correlates with longevity. Gaining flexibility and a longer life: it’s like having your cake and eating it too! We are only starting to explore definitions of intelligence, and it’s clear that the biophysical capability (how many neurons one has) is only a starting point. It is through our experiences of the world that we gain our ability and flexibility; that is what learning is all about, the professor concluded.

# Simon found a sentence in Stephen Wolfram’s book that sums it all up!

“When the overall behavior is complex, it becomes impossible to characterize it in any complete way by just a few numbers”, Stephen Wolfram writes in A New Kind of Science. Simon: “This is like the essence of my life!”

# Simon crafting a search engine with sticky notes

Simon has been working on a simplified version of a search engine, covering just a few documents, and performing calculations to determine how many searches you would have to run before building an index of all the documents pays off (something he picked up in Brilliant.org’s Computer Science course).
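The trade-off he was calculating is easy to see in code: scanning every document for every query costs a full pass per search, while an inverted index costs one full pass up front and then answers each query with a single lookup, so it pays off after only a few searches. A toy sketch of such an index (the function names and example documents here are mine, not from Simon’s sticky notes):

```javascript
// A toy inverted index: one pass over the documents builds a
// word → set-of-document-ids map; each search is then a lookup.
function buildIndex(docs) {
  const index = new Map();
  docs.forEach((text, id) => {
    for (const word of text.toLowerCase().split(/\W+/).filter(Boolean)) {
      if (!index.has(word)) index.set(word, new Set());
      index.get(word).add(id);
    }
  });
  return index;
}

function search(index, word) {
  return [...(index.get(word.toLowerCase()) ?? [])];
}

const docs = ["Cats chase mice", "Mice like cheese", "Cats nap"];
const index = buildIndex(docs);
// search(index, "cats") → [0, 2]
```

With N documents, N linear scans cost about as much as building the index once, which is exactly the kind of break-even arithmetic Simon was doing on paper.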

# World Science Scholars Feature Simon’s visit to CERN in a newsletter. The current course is about neurons. Reading Stephen Wolfram.

Simon’s September visit to CERN has been featured in a World Science Scholars newsletter:

Here’s our update on the World Science Scholars program. Simon has finished the first bootcamp course, on relativity theory and quantum mechanics, taught by one of the program’s founders, string theorist Professor Brian Greene, and has taken part in three live sessions: with Professor Brian Greene, Professor Justin Khoury (dark matter research and alternatives to the inflationary paradigm, such as the Ekpyrotic Universe), and Professor Barry Barish (one of the leading experts in gravitational waves and particle detectors, who won the Nobel Prize in Physics along with Rainer Weiss and Kip Thorne “for decisive contributions to the LIGO detector and the observation of gravitational waves”).

At the moment, there isn’t much going on. Simon is following the second course offered by the program at his own pace: a course on neuroscience and the statistics of brains by Professor Suzana Herculano-Houzel, called “Big Brains, Small Brains: The Conundrum of Comparing Brains and Intelligence”. The course is compiled from Professor Herculano-Houzel’s presentations at the World Science Festival, so it doesn’t seem to have been recorded specifically for the scholars, the way Professor Brian Greene’s course was.

Professor Herculano-Houzel has made “brain soup” (using a method called the “isotropic fractionator”) out of the brains of dozens of animal species and has counted exactly how many neurons different brains are made of. Contrary to Professor Greene’s course (mainly familiar material, as both relativity theory and quantum mechanics have been within Simon’s area of interest for quite some time), most of the material in this second course is very new to him, and possibly also less exciting. What helps, though, is the mathematical way in which the data is presented. After all, the World Science Scholars program is about interdisciplinary themes intertwined with mathematical thinking.

Another mathematical example: in Professor Herculano-Houzel’s course on brains we have witnessed nested patterns, as if they escaped from Stephen Wolfram’s book we’re reading now.

Simon has also contributed to the discussion pages, trying out an experiment in which a paper surface represented the cerebral cortex:

Simon: “Humans are not outliers because they’re outliers, they are outliers because there’s a hidden variable”.

Simon is looking forward to Stephen Wolfram’s course (which he is recording for World Science Scholars) and, of course, to the live sessions with him. The news that Stephen Wolfram will be the next lecturer has stimulated Simon to dive deep into his writings (we are already nearly 400 pages into his “bible”, A New Kind of Science) and sparked a renewed and more profound understanding of cellular automata and Turing machines, and of ways to connect those to our observations in nature. I’m pretty sure this is just the beginning.

It’s amazing to observe how quickly Simon grasps the concepts described in A New Kind of Science; on several occasions he has tried to recreate the examples he read about the night before.

# Zutopedia, a fun Computer Science Resource

Throughout the whole month of October, Simon really loved watching computer science and physics videos by Udi Aharoni, a researcher at IBM Research and creator of the Udiprod channel https://www.youtube.com/user/udiprod and the Zutopedia website http://www.zutopedia.com/. Simon’s favourite has been the Halting Problem video, which he also explained to his little sister.

In the example below, Simon applied a compression algorithm to a sentence by transforming it into a tree in which every letter carries its frequency in that sentence. “Can you get back to the sentence? You have to first transform the letters into ones and zeros using the tree (the tree is a way to encode it into ones and zeros that’s better than ASCII)”.
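What Simon describes is essentially Huffman coding: build a tree from letter frequencies so that frequent letters get short bit codes and rare letters get long ones. A sketch, assuming that is the scheme he used (the example sentence is mine):

```javascript
// Huffman coding: derive a prefix-free binary code from letter
// frequencies by repeatedly merging the two lightest tree nodes.
function huffmanCodes(text) {
  // Count letter frequencies.
  const freq = new Map();
  for (const ch of text) freq.set(ch, (freq.get(ch) ?? 0) + 1);
  // Each node: { weight, ch? (leaves), left?/right? (internal) }.
  let nodes = [...freq].map(([ch, weight]) => ({ ch, weight }));
  while (nodes.length > 1) {
    nodes.sort((a, b) => a.weight - b.weight);
    const [left, right, ...rest] = nodes;
    nodes = [...rest, { weight: left.weight + right.weight, left, right }];
  }
  // Walk the tree: left edge = "0", right edge = "1".
  const codes = new Map();
  (function walk(node, code) {
    if (node.ch !== undefined) { codes.set(node.ch, code || "0"); return; }
    walk(node.left, code + "0");
    walk(node.right, code + "1");
  })(nodes[0], "");
  return codes;
}

function encode(text, codes) {
  return [...text].map(ch => codes.get(ch)).join("");
}

const codes = huffmanCodes("see the tree");
const bits = encode("see the tree", codes);
// 8-bit ASCII would need 96 bits; the Huffman encoding is far shorter.
```

Because no code is a prefix of another, the bit string can be decoded unambiguously by walking the same tree, which is what Simon means by getting back to the sentence.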

# Brilliant Discussions

This is an example of the learning style Simon enjoys most: he really likes doing the daily challenges on Brilliant.org, and later sometimes discusses them with other participants or even writes wikis!

# Simon’s first steps in Stephen Wolfram’s Computational Universe

Simon has been enjoying Stephen Wolfram’s huge volume A New Kind of Science and is increasingly fascinated with Wolfram’s visionary ideas about the computational universe. We have been reading the 1500-page book every night for several weeks now, Simon voraciously soaking up the behaviour of hundreds of simple programs like cellular automata.

Wolfram’s main message is that, contrary to our intuition, simple rules can result in complex and often seemingly random behaviour, and that since humanity now has the computer as a tool to study and simulate that behaviour, this could open a beautiful new alternative to the existing models used in science. According to Wolfram, we may soon realise that the mathematical models we are currently using, based on equations and constraints rather than simple rules, are merely a historical artefact. I’m amazed at how much this is in line with the tentative thoughts Simon shared with me earlier this year, about how maths will be taken over by computer science and how algorithms are a more powerful tool than equations. When he came up with those ideas he hadn’t yet discovered Wolfram’s research and philosophy; he knew Wolfram only as the creator of Wolfram Mathematica and the Wolfram Language, both of which Simon greatly admires for being so advanced.

Last night, Simon was watching a TED talk Stephen Wolfram gave in 2010 about the possibility of computing the long-sought theory of everything, but not in the traditional mathematical way. “It’s about the universe!” Simon whispered to me wide-eyed, when I came to the living room to fetch him. “Mom, and you know who was in the audience there? Benoit Mandelbrot!” (Simon knows Mandelbrot died that same year; he is intrigued by the fact that his and Mandelbrot’s lifetimes actually overlapped by one year.)

We have been informed by the World Science Scholars program that Stephen Wolfram will be one of the professors preparing a course for this year’s scholars cohort, so Simon will have the unique experience of taking that course and engaging in a live session with Stephen Wolfram. It is breathtaking, a chance to connect with someone who is much older, renowned and accomplished, and at the same time so like-minded, a soulmate.

Inspired by reading Stephen Wolfram, Simon has revisited the world of cellular automata and Turing machines, and created a few beautiful Langton’s Ants:
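For readers who haven’t met Langton’s Ant: the ant turns right on a white cell, left on a black cell, flips the cell, and steps forward; out of this trivial rule comes a long chaotic phase and then, famously, an endlessly repeating diagonal “highway”. A minimal sketch (the grid size and wrapping edges are arbitrary choices of mine, not Simon’s):

```javascript
// Langton's Ant on a wrapping grid: turn right on white (0),
// left on black (1), flip the cell, step forward.
function runAnt(steps, size = 64) {
  const grid = Array.from({ length: size }, () => Array(size).fill(0));
  let x = size >> 1, y = size >> 1;
  let dir = 0; // 0 = up, 1 = right, 2 = down, 3 = left
  for (let i = 0; i < steps; i++) {
    dir = (dir + (grid[y][x] === 0 ? 1 : 3)) % 4; // right on white, left on black
    grid[y][x] ^= 1;                              // flip the cell
    if (dir === 0) y--;
    else if (dir === 1) x++;
    else if (dir === 2) y++;
    else x--;
    x = (x + size) % size;                        // wrap at the edges
    y = (y + size) % size;
  }
  return grid;
}

// The first four steps trace out a little 2×2 square of black cells;
// after roughly 10,000 steps the chaos resolves into the highway.
const grid = runAnt(11000);
```

In p5.js the returned grid can simply be drawn cell by cell each frame, which is presumably close to how Simon animated his ants.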

Simon has also watched a talk by Stephen Wolfram for the MIT course 6.S099: Artificial General Intelligence. He said it contained things about Wolfram Alpha that he didn’t know yet.