In the end, he got tired of writing all the coordinates for the terrain vertices, but he did get quite far.
Applying Box2D to translate from pixels into mm:
Our Kinect adapter has finally arrived! Simon had been waiting for about a month and a half and could hardly wait to try out the code he had already written ages ago, following Daniel Shiffman's computer vision tutorials. The code involves processing the pixels of the depth image and mapping depth to brightness. Simon also learned how to find the average location of all the pixels that fall between a minimum and a maximum depth threshold (useful for basic hand tracking).
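The averaging step from the tutorial can be sketched without a Kinect: given a flat array of depth readings, sum the (x, y) positions of every pixel that falls between the two thresholds and divide by the count. The function name and the toy data here are made up for illustration; this is the idea from the tutorial, not Simon's actual code.

```javascript
// Average location of all pixels within a depth range,
// over a flat grayscale depth array (row-major, like Processing's pixels[]).
function averageLocation(depth, width, minDepth, maxDepth) {
  let sumX = 0, sumY = 0, count = 0;
  for (let i = 0; i < depth.length; i++) {
    const d = depth[i];
    if (d >= minDepth && d <= maxDepth) {
      sumX += i % width;             // 1D index back to x...
      sumY += Math.floor(i / width); // ...and back to y
      count++;
    }
  }
  if (count === 0) return null;      // nothing in range
  return { x: sumX / count, y: sumY / count };
}
```

Drawing an ellipse at that average point is what puts the "blob" on a hand held inside the depth range.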
On Saturday Simon picked up computer vision again, something he had tried back in February but had gotten stuck on. This time around, he had built up better theoretical knowledge and sketched out a rough plan in advance. He managed to complete the first two tasks from the plan, following Daniel Shiffman's brilliant Color Tracking and Motion Detection tutorials.
Here he explains how colour tracking in computer vision works:
Simon programmed his camera to track anything red. He was careful not to wear anything red himself and tried to get the computer to find the only red object within its vision, a red building block, and mark it with "a blob" (an ellipse):
Then Simon made the computer not only track the colour and mark it with a blob, but also show all the matching colour pixels it picked up (by changing them to white):
Simon added one more red object into the picture. The blob was now choosing the average point between the two red objects:
Simon changed the blob colour:
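The colour-tracking idea behind these videos can be sketched in plain JavaScript over a flat RGBA array (the same layout as p5's pixels[]). The names here are illustrative, not Simon's code: every pixel within a colour-distance threshold of the target red contributes to a running average, and the blob is drawn at that average. With two red objects in view, the average lands between them, which is exactly what the video shows.

```javascript
// Average location of all pixels close enough to a target colour.
// pixels: flat RGBA array (4 entries per pixel), width: image width.
function trackColor(pixels, width, target, threshold) {
  let sumX = 0, sumY = 0, count = 0;
  const n = pixels.length / 4;
  for (let p = 0; p < n; p++) {
    const dr = pixels[p * 4]     - target[0];
    const dg = pixels[p * 4 + 1] - target[1];
    const db = pixels[p * 4 + 2] - target[2];
    // squared colour distance: no sqrt needed when comparing to a threshold
    if (dr * dr + dg * dg + db * db < threshold * threshold) {
      sumX += p % width;
      sumY += Math.floor(p / width);
      count++;
    }
  }
  return count > 0 ? { x: sumX / count, y: sumY / count, count } : null;
}
```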
Motion detection. This means comparing the pixels of each video frame with the previous frame to detect change, a technique also known as frame differencing. Where nothing has changed (an object is still), the computer shows white; where the pixels have changed (an object is moving), it shows black. Simon programmed this using a threshold on the squared colour distance between frames.
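A minimal frame-differencing sketch, assuming two flat RGBA arrays of the same size (names are illustrative, not Simon's code):

```javascript
// Compare each pixel of the current frame with the previous frame.
// Changed pixels (motion) come out black, unchanged pixels white.
function frameDifference(prev, curr, threshold) {
  const out = new Array(curr.length);
  for (let i = 0; i < curr.length; i += 4) {
    const dr = curr[i]     - prev[i];
    const dg = curr[i + 1] - prev[i + 1];
    const db = curr[i + 2] - prev[i + 2];
    const d2 = dr * dr + dg * dg + db * db;       // distance squared, no sqrt
    const v = d2 > threshold * threshold ? 0 : 255; // moving -> black, still -> white
    out[i] = out[i + 1] = out[i + 2] = v;
    out[i + 3] = 255; // opaque
  }
  return out;
}
```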
– From Dan, of course! I’ve compared the two formulas for converting from 2D to 1D. (The width stands for the width of the canvas).
From 2D to 1D in Processing:
x + y * width
From 2D to 1D in p5:
(x + y * width) * 4
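As a quick check, here are the two formulas as plain JavaScript functions (the `width` value is made up for the example). Note the parentheses in the p5 version: the whole 2D index is multiplied by 4, because every pixel takes four array slots.

```javascript
// Converting an (x, y) position into a 1D pixel-array index.
const width = 100; // canvas width, chosen just for this example

function processingIndex(x, y) {
  return x + y * width;       // Processing: one colour value per array slot
}

function p5Index(x, y) {
  return (x + y * width) * 4; // p5: four slots (R, G, B, A) per pixel
}
```

So the p5 index is always exactly four times the Processing index for the same pixel.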
But why did I say, “from 2D to 1D”? Because these formulas relate to the formulas for converting from Processing to p5 and vice versa. How do they relate? Through the 4 in the p5 formula, because p5 stores four values (R, G, B, A) for every pixel. Here are the formulas (i is the index into the pixel array):
p5 to Processing:
i / 4
Processing to p5:
i * 4
For example, in the video I divide 365 (i) by 4 and get 91.25. Here is a table of what the decimal place means:
.00 – red
.25 – green
.50 – blue
.75 – alpha
In the example with 91.25, the decimal part is .25, which means the index points at the green value of pixel 91. This is why I wrote in Processing:
color(51, 255, 51, 255)
So this is how I got the green value into Processing.
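Both conversions fit in two tiny functions; `channelOf` is my own illustrative helper (not from the post) that reads the same information the decimal place encodes:

```javascript
// p5 index -> Processing index: which pixel the p5 index belongs to.
function p5ToProcessing(i) {
  return Math.floor(i / 4);
}

// Processing index -> p5 index: the first (red) slot of that pixel.
function processingToP5(i) {
  return i * 4;
}

// The remainder (what the decimal place encodes) picks the channel.
function channelOf(i) {
  return ["red", "green", "blue", "alpha"][i % 4];
}
```

For i = 365 this gives pixel 91 and the green channel, matching the 91.25 example.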
This post is really about converting from p5 to Processing and vice versa. That part I figured out myself. You could say that is all I explained in this post, but really I explained that and converting from 2D to 1D. That's why I added this list:
Converting from 2D to 1D
Converting from Processing to p5 and vice versa
Simon writing this post in html
Simon wrote this table in html
Today Simon showed me what he learned about pixel sorting in Processing (Java) by doing this coding challenge by Daniel Shiffman. Using a “selection sort” algorithm, he sorted the pixels of a sunflower image by brightness and hue. The results were amazing: the sorted image had an impressionistic effect, like a Van Gogh painting. Maybe Van Gogh also sorted pixels?
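The selection sort at the heart of the challenge can be sketched on a plain array of brightness values instead of a full image (the sorting direction and names here are illustrative, not Simon's exact code): repeatedly find the darkest remaining pixel and swap it into place.

```javascript
// Selection sort over brightness values, as in pixel sorting:
// each pass finds the smallest remaining value and swaps it forward.
function sortPixelsByBrightness(pixels) {
  const a = pixels.slice(); // don't modify the original
  for (let i = 0; i < a.length; i++) {
    let min = i;
    for (let j = i + 1; j < a.length; j++) {
      if (a[j] < a[min]) min = j;   // darkest pixel still unsorted
    }
    [a[i], a[min]] = [a[min], a[i]]; // swap it into position i
  }
  return a;
}
```

On a real image the same loop runs over the pixels[] array, comparing brightness() (or hue()) of each colour, which is what produces the painterly bands.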
Here the pixels are sorted at random in black and white:
Here Simon added max RGB values:
Here the pixels are cloned from the image on the left:
The same, but in an easier way, by using the img.get() function:
Pixels sorted by brightness:
Pixels sorted by hue:
Beautiful, isn’t it?