Processing meets Box2D and blob detection
For the Programming II Workshop at our Interface Culture department I decided to do a small experiment with Box2D. I have wanted to play around with Box2D for a long time; merging real-world objects with virtual ones especially fascinates me. I'm not a big fan of the common Augmented Reality stuff, but some of it is really cool and inspires me. Here are some projects I drew my inspiration from: EdgeBomber, Laser Sound test, Phun, Crayon Physics, 2d Sketches becomes 3d Reality, ILoveSketch, MotionBeam, Tangible Fire Controlls.
Now for the technical stuff. In my experiment I am using the Blobscanner library and Daniel Shiffman's Box2D code. It is really a small experiment; I just wanted to find out how difficult it is to combine camera data with virtual data. For the first test I am using a simple .jpg file with 3 rectangles. This example works pretty well. The next map has some diagonal rectangles, and the first problems appear: the upper right rectangles are drawn in the wrong direction, which is why the physics simulation fails. At the moment I am using the Surface object of Box2D for drawing more complex objects. For the surface object, the direction of drawing is very important: you have to draw counter-clockwise, so that the normal vectors do not point inside the object (check chapter 4.4, Polygon Shapes). Actually, using a polygon object would make more sense than the surface object...
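To get a feel for the winding problem, here is a minimal sketch in plain Java (the `WindingCheck` class and its method names are my own illustration, not part of Shiffman's Box2D code): it computes the signed area of a polygon with the shoelace formula and reverses the vertex order when the polygon turns out to be clockwise, so the normals end up pointing outward.

```java
public class WindingCheck {

    // Signed area via the shoelace formula. In a y-up coordinate system a
    // positive result means counter-clockwise; note that in Processing's
    // screen coordinates (y pointing down) the sign is flipped.
    static float signedArea(float[][] pts) {
        float sum = 0;
        for (int i = 0; i < pts.length; i++) {
            float[] a = pts[i];
            float[] b = pts[(i + 1) % pts.length];
            sum += a[0] * b[1] - b[0] * a[1];
        }
        return sum / 2;
    }

    // Return the vertices in counter-clockwise order, reversing them if
    // necessary, so Box2D's edge normals point away from the shape.
    static float[][] ensureCounterClockwise(float[][] pts) {
        if (signedArea(pts) < 0) {
            float[][] rev = new float[pts.length][];
            for (int i = 0; i < pts.length; i++) {
                rev[i] = pts[pts.length - 1 - i];
            }
            return rev;
        }
        return pts;
    }

    public static void main(String[] args) {
        // A unit square listed in clockwise order (y-up coordinates)
        float[][] square = {{0, 0}, {0, 1}, {1, 1}, {1, 0}};
        System.out.println(signedArea(square));                         // negative: clockwise
        System.out.println(signedArea(ensureCounterClockwise(square))); // positive: fixed
    }
}
```

Running the detected blob outline through something like `ensureCounterClockwise` before handing it to the surface object should prevent the "drawn in the wrong direction" failures.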
Another issue I have is the correct recognition of the shapes. This problem is caused by two challenges. The first challenge is ordering the edge-point array correctly. I only get edge points, and I don't know whether a point belongs to the right or the left side of the object. My sorting algorithm is not implemented very well, so some of my recognitions fail. But it was enough for a quick check on simple objects to get an idea. However, this paper about edge detection could solve my problem, or I just have to implement a "find the shortest distance" algorithm. If you have any better advice, please leave a comment. Thx! The second challenge is minimizing the size of the edge-point array. For this I found a nice article: Line Generalization (Smoothing, Simplifying). I ported the ActionScript code to Processing and it seemed to work. Though a better approach could be to vectorize the camera data. Nicolas Barradeau wrote two nice blog posts about vectorization: v0 and v1. I definitely have to check out his code; I guess it holds some hidden solutions for my problems.
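For both challenges, here is a rough sketch in plain Java (the `EdgeTools` class and method names are my own, and this is the simplest variant of each idea, not the exact code I ported): a greedy nearest-neighbor pass that orders unordered edge points by always jumping to the closest remaining one, and a radial-distance pass that shrinks the array by dropping points that sit too close to the last kept one.

```java
import java.util.ArrayList;
import java.util.List;

public class EdgeTools {

    // Greedy "find the shortest distance" ordering: start at the first edge
    // point and repeatedly move to the nearest remaining point. Works for
    // simple blob outlines; noisy or self-intersecting outlines can still
    // trip it up.
    static List<float[]> orderByNearestNeighbor(List<float[]> points) {
        List<float[]> remaining = new ArrayList<>(points);
        List<float[]> ordered = new ArrayList<>();
        float[] current = remaining.remove(0);
        ordered.add(current);
        while (!remaining.isEmpty()) {
            int best = 0;
            float bestD = Float.MAX_VALUE;
            for (int i = 0; i < remaining.size(); i++) {
                float dx = remaining.get(i)[0] - current[0];
                float dy = remaining.get(i)[1] - current[1];
                float d = dx * dx + dy * dy; // squared distance is enough
                if (d < bestD) { bestD = d; best = i; }
            }
            current = remaining.remove(best);
            ordered.add(current);
        }
        return ordered;
    }

    // Radial-distance simplification: keep a point only if it is at least
    // `tolerance` away from the last kept point. This is the simplest form
    // of the line-generalization idea from the article mentioned above.
    static List<float[]> simplify(List<float[]> points, float tolerance) {
        List<float[]> kept = new ArrayList<>();
        float[] last = points.get(0);
        kept.add(last);
        for (float[] p : points) {
            float dx = p[0] - last[0], dy = p[1] - last[1];
            if (dx * dx + dy * dy >= tolerance * tolerance) {
                kept.add(p);
                last = p;
            }
        }
        return kept;
    }
}
```

The nearest-neighbor pass is O(n²), which is fine for a few hundred edge points per blob; a Douglas-Peucker simplifier would preserve corners better than the radial pass, at the cost of a little more code.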
So much for my experiment. My code is online on my Google project site, or you can download it directly. Please keep in mind that my code is far, far away from perfect. Big thanks to the great tutorial writers. I seriously love the ActionScript community and the Processing community 😉 Knowledge sharing, ahoi!!