Computer Science, Good Reads, Logic, Machine Learning, Notes on everyday life, Philosophy, Set the beautiful mind free

A Universal Formula for Intelligence

Is there an equation for intelligence? Yes. It’s F = T ∇ Sτ.

Prior to a World Science Scholars live session on November 25, Simon had been asked to watch this TED talk by computer scientist and entrepreneur Alex Wissner-Gross on intelligent behavior and how it arises. Watching the talk, Simon and I discovered that the main idea Wissner-Gross presents can serve as a beautiful scientific backbone for self-directed learning, and can explain why standardized, coercive instruction contradicts the very essence of intelligence and learning.

Alex Wissner-Gross:

What you’re seeing is probably the closest equivalent to an E = mc² for intelligence that I’ve seen. So what you’re seeing here is a statement of correspondence that intelligence is a force, F, that acts so as to maximize future freedom of action. It acts to maximize future freedom of action, or keep options open, with some strength T, with the diversity of possible accessible futures, S, up to some future time horizon, tau. In short, intelligence doesn’t like to get trapped. Intelligence tries to maximize future freedom of action and keep options open.

And so, given this one equation, it’s natural to ask: what can you do with this? Does it predict artificial intelligence?

Recent research in cosmology has suggested that universes that produce more disorder, or “entropy,” over their lifetimes should tend to have more favorable conditions for the existence of intelligent beings such as ourselves. But what if that tentative cosmological connection between entropy and intelligence hints at a deeper relationship? What if intelligent behavior doesn’t just correlate with the production of long-term entropy, but actually emerges directly from it?
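The flavor of this idea can be shown with a toy sketch (my own illustration, nothing like the real Entropica): an agent on a small grid that always steps toward the cell from which the most distinct cells remain reachable within tau steps, a crude stand-in for maximizing “future freedom of action”.

```javascript
// Count distinct cells reachable from (x, y) in at most tau steps.
// Cells marked 1 in `walls` are blocked.
function reachableStates(walls, x, y, tau) {
  const rows = walls.length, cols = walls[0].length;
  const seen = new Set([x + ',' + y]);
  let frontier = [[x, y]];
  for (let step = 0; step < tau; step++) {
    const next = [];
    for (const [cx, cy] of frontier) {
      for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
        const nx = cx + dx, ny = cy + dy;
        if (nx < 0 || ny < 0 || nx >= cols || ny >= rows) continue;
        if (walls[ny][nx] === 1) continue;
        const key = nx + ',' + ny;
        if (!seen.has(key)) { seen.add(key); next.push([nx, ny]); }
      }
    }
    frontier = next;
  }
  return seen.size;
}

// Greedy "entropic" move: step to the neighbor with the most open future.
function entropicMove(walls, x, y, tau) {
  let best = null, bestCount = -1;
  for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
    const nx = x + dx, ny = y + dy;
    if (nx < 0 || ny < 0 || ny >= walls.length || nx >= walls[0].length) continue;
    if (walls[ny][nx] === 1) continue;
    const count = reachableStates(walls, nx, ny, tau);
    if (count > bestCount) { bestCount = count; best = [nx, ny]; }
  }
  return best;
}

// A short dead-end corridor above, an open room below: from the junction
// the agent heads down into the open room, refusing to get trapped.
const walls = [
  [0, 0, 1, 1, 1],
  [0, 1, 1, 1, 1],
  [0, 0, 0, 0, 0],
];
console.log(entropicMove(walls, 0, 1, 3)); // [ 0, 2 ]
```

The agent never gets told about goals or dead ends; preferring positions with more accessible futures is enough to steer it away from confinement.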

As an example, Wissner-Gross went on to demonstrate a software engine called Entropica, designed to maximize the production of long-term entropy of any system it finds itself in. Entropica was able to pass multiple animal intelligence tests, play human games, and even earn money trading stocks, all without being instructed to do so. Note that Entropica wasn’t given learning goals: it simply decided to learn to balance a ball on a pole (just like a child decides to stand upright), decided to use “tools”, decided to apply cooperative ability in a model experiment (just like animals sometimes pull two cords simultaneously to release food), taught itself to play games, performed network orchestration (keeping up connections in a network), and solved logistical problems with the use of a map. Finally, Entropica spontaneously discovered and executed a buy-low, sell-high strategy on a simulated range-traded stock, successfully growing assets. It learned risk management.

The urge to take control of all possible futures is a more fundamental principle than intelligence itself: general intelligence may in fact emerge directly from this sort of control-grabbing, rather than vice versa.

In other words, if you give the agent control, it becomes more intelligent.

“How does it seek goals? How does the ability to seek goals follow from this sort of framework? And the answer is, the ability to seek goals will follow directly from this in the following sense: just like you would travel through a tunnel, a bottleneck in your future path space, in order to achieve many other diverse objectives later on, or just like you would invest in a financial security, reducing your short-term liquidity in order to increase your wealth over the long term, goal seeking emerges directly from a long-term drive to increase future freedom of action”.

The main concept we can pass on to the new generation to help them build artificial intelligences or to help them understand human intelligence, according to Alex Wissner-Gross, is the following: “Intelligence should be viewed as a physical process that tries to maximize future freedom of action and avoid constraints in its own future. Intelligence is a physical process that resists future confinement”.

Simon’s reaction to Alex Wissner-Gross’s TED Talk was: “But this means school only makes you less intelligent!” (in the sense that school reduces your chances of seeking goals yourself and introduces constraints on your future development).

Simon asking his question during the live session with neuroscientist Suzana Herculano-Houzel

During the actual live session, neuroscientist Suzana Herculano-Houzel, famous for inventing a method to count the exact number of neurons in the human brain and for her comparative studies of various species, defined intelligence as behavioral and cognitive flexibility: flexibility as a choice to do something other than what would happen inevitably, no longer being limited to purely responding to stimuli; flexibility in decisions that allow you to stay flexible. Generally speaking, the more flexibility, the more intelligence.

Animals with a cerebral cortex gained a past and a future, Professor Herculano-Houzel explained. Learning is one of the results of flexible cognition; here, learning is understood as solving problems. Hence making predictions and decisions is all about maximizing future flexibility, which in turn allows for more intelligence and learning. This is a very important guideline for educational administrations, governments, and policy makers: allow for flexibility. There is a problem with defining intelligence as producing desired outcomes, Herculano-Houzel pointed out while answering one of the students’ questions.

Replying to Simon’s question about whether we can measure intelligence in any way and what the future of intelligence tests could be like, Professor Herculano-Houzel said she really liked Simon’s definition of IQ testing as a “glorified dimensionality reduction”. Simon doesn’t believe anything multidimensional fits on a bell curve or can possibly have a normal distribution.

Professor Herculano-Houzel’s answer:

Reducing a world of capacities and abilities into one number, you can ask “What does that number mean?” I think you’d find it interesting to read about the history of the IQ test, how it was developed and what for, and how it got co-opted and distorted into something else entirely. It’s a whole other story. To answer your question directly, can we measure intelligence? First of all, do you have a definition for intelligence? Which is why I’m interested in pursuing this new definition of intelligence as flexibility. If that is an operational definition, then yes, we can measure flexibility. How do we measure flexibility?

The Professor went on to show several videos of researchers giving lemurs and dogs pieces of food partially covered by a plastic cylinder. The animals had to figure out on their own how to get to the treat.

You see, this animal is not very flexible, trying again and again, acting exactly as before. And the dog that has already figured it out has made its behavior flexible. You can measure how long it takes an animal to figure out that it has to be flexible, which you could call problem solving. Yes, I think there are ways to measure that, and it all begins with a clear definition of what you want to measure.

As a side note, Professor Herculano-Houzel also mentioned in her course and in her live session that she had discovered that a higher number of neurons in different species correlates with longevity. Gaining flexibility and a longer life: it’s like having your cake and eating it too! We are only starting to explore how to define intelligence, and it’s clear that biophysical capacity (how many neurons one has) is only a starting point. It is through our experiences of the world that we gain our ability and flexibility; that is what learning is all about, the Professor concluded.

Coding, CSS, html, JavaScript, Milestones, Murderous Maths, Physics

Simon gets serious with Linear Regression (Machine Learning)

Simon has been working on a very complicated topic for the past couple of days: Linear Regression. In essence, it is some of the basic math behind machine learning.
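The core of it fits in a few lines. As a plain-JavaScript sketch of the math (my own, not code from Simon's webpage): the ordinary least squares slope m is the covariance of x and y divided by the variance of x, and the intercept b makes the line pass through the point of means.

```javascript
// Ordinary least squares fit of a line y = m*x + b.
function linearRegression(xs, ys) {
  const n = xs.length;
  const meanX = xs.reduce((a, v) => a + v, 0) / n;
  const meanY = ys.reduce((a, v) => a + v, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - meanX) * (ys[i] - meanY); // covariance term
    den += (xs[i] - meanX) ** 2;              // variance term
  }
  const m = num / den;
  const b = meanY - m * meanX; // line passes through (meanX, meanY)
  return { m, b };
}

// Points lying exactly on y = 2x + 1 recover m = 2, b = 1.
const { m, b } = linearRegression([0, 1, 2, 3], [1, 3, 5, 7]);
console.log(m, b); // 2 1
```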

Simon was watching Daniel Shiffman’s tutorials on Linear Regression that form session 3 of his Spring 2017 ITP “Intelligence and Learning” course (ITP stands for Interactive Telecommunications Program and is a graduate programme at NYU’s Tisch School of the Arts).

Daniel Shiffman’s current weekly live streams are also largely devoted to neural networks, so in a way, Simon has been preoccupied with related material for weeks now. This time around, however, he decided to make his own versions of Daniel Shiffman’s lectures (a whole Linear Regression playlist), has been busy with in-camera editing, and has written a summary of one of the Linear Regression tutorials (he actually sat there transcribing what Daniel said) in the form of an interactive webpage! This Linear Regression webpage is online at https://simon-tiger.github.io/linear-regression/ and the Gradient Descent addendum Simon made later is at https://simon-tiger.github.io/linear-regression/gradient_descent/interactive/ and https://simon-tiger.github.io/linear-regression/gradient_descent/random/

And here come the videos from Simon’s Linear Regression playlist, the first one being an older video you may have already seen:

Here Simon shows his interactive Linear Regression webpage:

A lecture on Anscombe’s Quartet (something from statistics):

Then comes a lecture on the Scatter Plot and the Residual Plot, as well as on combining the Residual Plot with Anscombe’s Quartet, based upon video 3.3 of Intelligence and Learning. Simon made a mistake graphing the residual plot but corrected himself in an addendum at the end of the video:
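To connect the two ideas: a residual is the vertical gap between an observed y and the fitted line's prediction at the same x, and plotting residuals is exactly what exposes the structure that Anscombe's Quartet hides behind identical summary statistics. A small sketch of mine (using Anscombe's published dataset II, which is clearly curved yet has roughly the same best-fit line, y = 0.5x + 3, as dataset I):

```javascript
// Residual = observed y minus the fitted line's prediction at that x.
function residuals(xs, ys, m, b) {
  return xs.map((x, i) => ys[i] - (m * x + b));
}

// Anscombe's dataset II.
const xs = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5];
const ys = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74];

// For a least-squares fit the residuals sum to (approximately) zero,
// even though plotting them against x would reveal a clear arch shape.
const r = residuals(xs, ys, 0.5, 3.0);
console.log(r.reduce((a, v) => a + v, 0).toFixed(2)); // near zero
```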

Polynomial Regression:

And finally, Linear Regression with Gradient Descent algorithm and how the learning works. Based upon Daniel Shiffman’s tutorial 3.4 on Intelligence and Learning:
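The gradient descent variant can be sketched like this (a minimal JavaScript version of mine, not a transcript of the tutorial code): instead of solving for m and b directly, you nudge them downhill along the error, one data point at a time, and the line gradually "learns" its way to the data.

```javascript
// Linear regression learned by gradient descent.
function fitGradientDescent(xs, ys, learningRate = 0.05, epochs = 2000) {
  let m = 0, b = 0;
  for (let e = 0; e < epochs; e++) {
    for (let i = 0; i < xs.length; i++) {
      const guess = m * xs[i] + b;
      const error = ys[i] - guess;
      // Each parameter takes a small step in the direction that shrinks error.
      m += error * xs[i] * learningRate;
      b += error * learningRate;
    }
  }
  return { m, b };
}

// Noiseless points on y = 2x + 1: the fit converges to m ≈ 2, b ≈ 1.
const { m, b } = fitGradientDescent([0, 0.25, 0.5, 0.75, 1], [1, 1.5, 2, 2.5, 3]);
console.log(m.toFixed(2), b.toFixed(2));
```

The learning rate and epoch count are illustrative; keeping x values between 0 and 1 (as here) is what keeps such a naive update rule stable.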


Coding, Java

Neural Networks Coding Challenge

Simon completes the Neural Networks Coding Challenge (in Processing, Java) that he had followed in the Intelligence and Learning Livestream last Friday. In the videos below he also talks about what neural networks are and tries to add a line object (something he had suggested in the live chat).
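Shiffman's classic entry point to neural networks is a single perceptron that learns to classify points as above or below a line. As a sketch of the kind of thing such a challenge involves (plain JavaScript here, not Simon's Processing/Java code, and only my guess at the specifics):

```javascript
// A single perceptron with a step activation and a bias weight.
function makePerceptron(nInputs, learningRate = 0.01) {
  // Small random initial weights; the last one acts as the bias weight.
  const weights = Array.from({ length: nInputs + 1 }, () => Math.random() * 2 - 1);
  return {
    guess(inputs) {
      let sum = weights[nInputs]; // bias
      for (let i = 0; i < nInputs; i++) sum += inputs[i] * weights[i];
      return sum >= 0 ? 1 : -1; // step activation
    },
    train(inputs, target) {
      const error = target - this.guess(inputs); // 0, +2, or -2
      for (let i = 0; i < nInputs; i++) weights[i] += error * inputs[i] * learningRate;
      weights[nInputs] += error * learningRate;
    },
  };
}

// Teach it the line y = x: points above get +1, points below get -1.
const p = makePerceptron(2);
for (let i = 0; i < 10000; i++) {
  const x = Math.random() * 2 - 1, y = Math.random() * 2 - 1;
  p.train([x, y], y > x ? 1 : -1);
}
console.log(p.guess([-0.9, 0.9]), p.guess([0.9, -0.9])); // expect 1 and -1
```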

 

Biology, Coding, Java, JavaScript, Milestones, Murderous Maths, Simon's Own Code, Space

Simulating Evolution: Evolutionary Steering Behaviors

 

On Wednesday Simon went on playing god (simulating evolution) and translated Daniel Shiffman’s Evolutionary Steering Behaviors Coding Challenge from JavaScript to Java. The goal of the challenge is to create a system where autonomous steering agents (smart rockets) evolve the behavior of eating food (green dots) and avoiding poison (red dots).

This challenge is part of the spring 2017 “Intelligence and Learning” course at NYU’s Tisch School of the Arts Interactive Telecommunications Program. Simon was especially happy to find out that Daniel Shiffman left a couple of personal comments praising Simon’s progress and offering help in pushing his code to Daniel’s GitHub repo.

Here is Simon’s translation on GitHub: https://github.com/simon-tiger/steering-behaviors-evolution

The rockets have their own DNA consisting of four genes: an attraction weight toward food, an attraction weight toward poison, a perception radius for food, and a perception radius for poison.
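A minimal sketch of that DNA and its mutation (JavaScript here, though Simon's translation is in Java; the exact value ranges are my illustrative choices, not the challenge's):

```javascript
// Four genes per rocket: steering weights and perception radii.
function randomDNA() {
  return {
    foodWeight: Math.random() * 4 - 2,    // how strongly to steer toward food
    poisonWeight: Math.random() * 4 - 2,  // how strongly to steer toward poison
    foodPerception: Math.random() * 100,  // how far away food is noticed
    poisonPerception: Math.random() * 100,
  };
}

// Evolution needs variation: a child copies its parent's genes,
// each with a small chance of a random nudge.
function mutate(dna, rate = 0.01) {
  const child = { ...dna };
  for (const gene of Object.keys(child)) {
    if (Math.random() < rate) child[gene] += (Math.random() - 0.5) * 0.2;
  }
  return child;
}

const parent = randomDNA();
const child = mutate(parent, 1.0); // rate 1.0 forces every gene to mutate
console.log(Object.keys(child)); // the four genes
```

Selection then does the rest: rockets whose genes lead them to food survive longer and reproduce, so attraction to food and aversion to poison emerge on their own.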

The challenge step by step: