This is Simon explaining Diffie-Hellman key exchange (also called the Diffie-Hellman protocol). He first explained the algorithm by mixing watercolours (a colour representing a key/number) and then mathematically. The algorithm allows two parties (marked “you” and “your friend” in Simon’s diagram) with no prior knowledge of each other to establish a shared secret key over an insecure channel (a public area with an “eavesdropper”). This key can then be used to encrypt subsequent communications using a symmetric-key cipher. Simon calls it “a neat algorithm”. Later the same night, he also gave me a lecture on a similar but more complicated algorithm called RSA. Simon first learned about this on Computerphile and then also saw a video about the topic on MajorPrep. And here is another MajorPrep video on modular arithmetic.
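The exchange Simon described is easy to sketch in a few lines of code. Here is a toy version in Python with deliberately tiny public numbers (real implementations use primes hundreds of digits long); the two roles follow the “you” and “your friend” labels from his diagram:

```python
import random

# Toy Diffie-Hellman with small illustrative numbers.
p = 23   # public prime modulus
g = 5    # public generator

# Each party picks a private key and publishes only g^private mod p.
a = random.randint(2, p - 2)          # "you": private key
b = random.randint(2, p - 2)          # "your friend": private key
A = pow(g, a, p)                      # sent over the insecure channel
B = pow(g, b, p)                      # sent over the insecure channel

# Both sides compute the same shared secret without ever sending it:
shared_you = pow(B, a, p)             # (g^b)^a mod p
shared_friend = pow(A, b, p)          # (g^a)^b mod p
assert shared_you == shared_friend
```

The eavesdropper sees only `p`, `g`, `A` and `B`; recovering the private exponents from those is the discrete logarithm problem, which is what makes the scheme secure at realistic key sizes.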
One more blog post with impressions from our vacation at the Côte d’Azur in France. Don’t even think of bringing Simon to the beach or the swimming pool without a sketchbook to do some math or computer science!
In reaction to Yuval Noah Harari’s book Homo Deus (the part about humans evolving to break out of the organic realm and possibly breaking out of planet Earth):
When you cross the street, there’s always a non-zero probability that an accident will happen. If you live infinitely long, anything that has a non-zero probability can happen infinitely many times in your life. For example, if the event we are talking about is an accident, the first time it happens in your life, you’re already dead. So when you cross the street and want to live infinitely long, there’s a risk that an accident will happen and you die. So we come to the conclusion that if you want to live infinitely long, it’s not worth crossing the street. But there’s always a risk that you die, so if you live infinitely long, it’s not actually worth living. So we’ve got a little bit of a problem here. Unless you come to the more extreme idea of detaching yourself from the physical world altogether. And I’m not talking about the sort of thing where you don’t have a body but somehow still exist in the physical world. I mean literally that you live in a pure mathematical world. Because in mathematics, you can have things that have zero probability of happening. You can have something definitely happening and you can also have something that is definitely not happening.
However, there’s another thing. How does mathematics actually work? There are these things called axioms and it’s sort of built up from that. What if we even do away with those axioms? Then we can actually do anything in that mathematical world. And what I mean by anything is really anything that you can derive from any set of axioms that you can come up with. There’s a little bit of a problem with that: you can come to contradictions, it’s a little bit risky. We are really talking about the ultimate multiverse, we’re talking about quite controversial stuff here. The only way anyone can come up with this is by pushing to the extremes.
Parts 1 and 2 in Simon’s new series showing him attempting to build an 8-bit computer from scratch, using the materials from Ben Eater’s Complete 8-bit breadboard computer kit bundle.
Simon is learning this from Ben Eater’s playlist about how to build an 8-bit computer.
It’s all Ben Eater’s fault! Simon is more of a software and pure math champion, but Ben Eater’s videos have sparked Simon’s interest in logic and electronics anew. Back in mid-July (yes, I know, I’m a little behind with the blog), while waiting for his Complete 8-bit breadboard computer kit bundle to arrive from the US, Simon was playing with virtual circuits that he built on two wonderful platforms: Circuitverse.org and Logic.ly. You can view Simon’s page on Circuitverse at https://circuitverse.org/users/7241
Simon’s favourite was building the Master-Slave JK Flip-Flop https://circuitverse.org/simulator/edit/20037
Simon gave me a whole lecture on the differences between Sequential and Combinational Logic: in the former, there is a feedback loop (the output feeds back into somewhere else in the circuit), while the latter has everything going in one direction (the inputs come in and the outputs go out).
It’s a little bit like the difference between a feed-forward neural network, where the output only depends on the input, and a recurrent neural network, where the output also depends on what the output was previously.
Here’s a problem with sequential logic circuits: they go crazy like this very often (confused NOR gate). That’s why most sequential logic circuits have a clock in them. A clock acts like a delay so that it won’t go crazy.
That’s the power of sequential logic: you can have the same input but a different output. This is useful for storing data: I release the input, but the data is stored. It can only be achieved in sequential logic.
The delay comes in error detection (on the rising edge of the square wave).
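The feedback loop Simon describes can be simulated in a few lines of Python. Below is a sketch of an SR latch built from two cross-coupled NOR gates (a simpler cousin of the Master-Slave JK flip-flop he built on Circuitverse), showing exactly his point: after the inputs are released, the stored bit stays put.

```python
def nor(x, y):
    return int(not (x or y))

def sr_latch(s, r, q, q_bar):
    """One settling pass of a cross-coupled NOR SR latch.
    The feedback loop is what makes it sequential: the outputs
    feed back into the gate inputs."""
    for _ in range(4):                 # iterate until the loop settles
        q_new = nor(r, q_bar)
        q_bar_new = nor(s, q_new)
        q, q_bar = q_new, q_bar_new
    return q, q_bar

q, q_bar = 0, 1                        # start in the "reset" state
q, q_bar = sr_latch(1, 0, q, q_bar)   # Set -> Q becomes 1
q, q_bar = sr_latch(0, 0, q, q_bar)   # inputs released, Q stays 1: memory!
print(q)  # 1
```

The fixed number of settling passes plays the role of the clock delay Simon mentions: without letting the loop settle in an orderly way, the simulated gates would oscillate, just like his “confused NOR gate”.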
The following circuits are built in Logicly https://logic.ly/demo
Walking home from the swimming pool (where he and Neva had been jumping into the water exactly 24 times, calling out all the permutations of 1, 2, 3 and 4), Simon suddenly stopped to tell me that some day, mathematics may become engulfed by computer science. Apparently, this was what he had been thinking about the whole time he kept silent on the way. Once we got home, I sat down to listen to the elaborate proof he had devised for his hypothesis. Here it comes, in his own words:
Someday mathematics may become computer science because most of mathematics uses simple equations and stuff like that, but computer science uses algorithms instead. And of course, algorithms are more powerful than equations. Let me just give you an example.
There’s this set of numbers called algebraic numbers, and there’s this set of numbers called computable numbers. The algebraic numbers are everything you can make with simple equations (finite polynomials), so not like trig numbers, which are actually infinite polynomials, just simple finite equations with arithmetic and powers. Computable numbers, however, are a set of numbers that you can actually make with a finite algorithm. It may not represent a finite equation, but the rules for the equation have to be finite. So the algorithm that generates that equation has to be finite. It’s pretty easy to see that every algebraic number is by definition computable, because the algorithm would basically just be the equation itself.
Is every computable number algebraic? Well, we can easily disprove that. It took a very long time to prove that Pi is not algebraic, that it is transcendental, as it’s called. But Pi is computable, of course, because, well, that’s how we know what Pi is, to 26 trillion decimal places. So there you go. That’s a number that is computable but not algebraic. So the Euler diagram now looks like this:
Now we look back at the beginning and we see that algebraic numbers have to do with equations and computable numbers have to do with algorithms. And because the set of all algebraic numbers sits inside the set of all computable numbers, as we’ve just proved, the set of computable numbers has more numbers than the set of algebraic numbers. We have given just one example of how algorithms are more powerful than equations.
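Simon's distinction can be made concrete in a few lines of Python. Both snippets below are finite algorithms: one approximates Pi (computable but not algebraic, here via the Nilakantha series), the other approximates the square root of 2 (algebraic, a root of x² − 2, here via Newton's method). The choice of series and method is mine, for illustration.

```python
def pi_approx(terms):
    """A finite algorithm approximating Pi (Nilakantha series).
    The rules are finite even though Pi's expansion is not."""
    total, sign = 3.0, 1.0
    for k in range(2, 2 * terms + 2, 2):
        total += sign * 4.0 / (k * (k + 1) * (k + 2))
        sign = -sign
    return total

def sqrt2_approx(iterations):
    """An algebraic number is also computable: Newton's method
    on x^2 - 2 = 0 is a finite algorithm for sqrt(2)."""
    x = 1.0
    for _ in range(iterations):
        x = (x + 2.0 / x) / 2.0
    return x

print(round(pi_approx(1000), 6))   # 3.141593
print(round(sqrt2_approx(6), 6))   # 1.414214
```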
What about the mathematics that deals with numbers that are incomputable? – I asked.
Well, that’s set theory, a different branch of mathematics. I meant applied mathematics, the mathematics that has application.
Simon is looking at his subscriber count on YouTube. We speculate whether he will get to 1000 before the end of the academic year. Simon tells me that subscriber counts are just another example of Benford’s law in action. What is Benford’s law? – I ask.
“If you take some data that spans a few orders of magnitude and take the leading digits of all numbers, then you’re most likely not going to get a uniform distribution. Instead, 30% of the time, the numbers will start with a 1, a little bit less of the time – with a 2, even less – with a 3, and so on all the way to 9 (which has a low chance of occurring). For example, the populations of countries would follow that law. If something is not random enough, though (like human height in meters), then it wouldn’t follow that law. If something is too random, it also wouldn’t follow that law.”
Simon explains further: “Consider YouTube subscriber count over time. If you have 100 subscribers, then getting up to 200 is an increase of 100% (which is pretty big). But getting from 200 to 300 is only a 50% increase. From 900 to 1000 is just 11%.”
Then his dad asks: “What about going from 1000 subscribers to 1100 subscribers?”
“Well, Benford’s law only cares about the leading digit (and that’s what you want to increase as well). So you don’t want to increase from 1000 to 1100, you want to increase from 1000 to 2000! In other words, start a new Benford’s law ‘Epoch’.”
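Simon's claim is easy to check empirically. Here is a short Python sketch (my own illustration, not from his lecture) using the leading digits of the first 1000 powers of 2, a data set that spans many orders of magnitude, compared against Benford's predicted frequency log10(1 + 1/d):

```python
from collections import Counter
from math import log10

# Leading digits of the first 1000 powers of 2.
leading = Counter(str(2 ** n)[0] for n in range(1, 1001))

for d in "123456789":
    observed = leading[d] / 1000
    predicted = log10(1 + 1 / int(d))   # Benford's formula
    print(d, f"observed {observed:.3f}", f"predicted {predicted:.3f}")
```

Running it shows the digit 1 appearing roughly 30% of the time, tapering off toward 9, exactly the skew Simon describes.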
When we arrived at the MathsJam last Tuesday, we heard a couple of people speak Russian. One of them turned out to be a well-known Russian puzzle inventor, Vladimir Krasnoukhov, who presented us with one colorful puzzle after another, seemingly producing them out of thin air. What a feast! Simon got extremely excited about several puzzles, especially one elegant three-piece figure (that turned out to have no possible solution, and that’s what Simon found particularly appealing) and a cube that required graph theory to solve it (Simon has tried solving the latter in Wolfram Mathematica after we got home, but hasn’t succeeded so far).
Vladimir told us he had been making puzzles for over 30 years and had more than 4 thousand puzzles at home. Humble and electrified with childlike enthusiasm, he explained every puzzle he gave to Simon, but without imposing questions or overbearing instructions. He didn’t even want a thank-you for all his generosity!
Vladimir also gave us two issues of the Russian kids’ science magazine Kvantik, with his articles published in them. One of the articles was an April Fools’ joke about trying to construct a Penrose impossible triangle, asking readers to spot the step where the mistake was hidden:
Simon was very enthusiastic about trying to actually physically follow the steps, even though he realized it would get impossible at some point:
You can find out more about Vladimir Krasnoukhov’s puzzles on planetagolovolomok.ru
“Mom, how long would it take a supercomputer running at 10^15 additions per second to calculate the 1000th Fibonacci number?”
Simon learned about this problem in the new course he is following on Brilliant.org: Computer Science Algorithms. He worked it out on an A3 sketchbook sheet and got the answer right: it would take longer than the age of the Universe!
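A rough reconstruction of the arithmetic in Python. I'm assuming the exercise meant the naive doubly-recursive Fibonacci algorithm, which performs roughly fib(n) additions (the fast iterative method below, used here only to count that work, finishes instantly):

```python
def fib(n):
    # Fast iterative Fibonacci, used only to size the problem.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

additions = fib(1000)                  # naive recursion does ~fib(1000) additions
seconds = additions / 10**15           # supercomputer: 1e15 additions per second
age_of_universe = 4.3e17               # seconds, roughly

print(additions > 10**208)             # True: over 10^208 additions
print(seconds > age_of_universe)       # True: vastly longer than the Universe's age
```

The ratio comes out to something on the order of 10^176 ages of the Universe, so Simon's answer checks out.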
Simon has already finished the Computer Science Fundamentals course! It has been Simon’s idea to take up the courses on Brilliant.org again and he has been working independently, driven entirely by his intrinsic motivation.
The course has also inspired Simon to work on a very large scale project: record a series of tutorials where he explains all the best known sorting algorithms and comes up with the Python code for them on his RaspberryPi!
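As a taste of what such a tutorial might cover, here is a minimal sketch of one classic sorting algorithm, insertion sort, in plain Python (the choice of algorithm is mine, not necessarily one from Simon's series):

```python
def insertion_sort(items):
    """Sort a list by growing a sorted prefix one element at a time."""
    items = list(items)                # work on a copy
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]    # shift larger elements right
            j -= 1
        items[j + 1] = key             # drop the key into its slot
    return items

print(insertion_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```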