
Muscatupa

Multi-scale Turing pattern generator

A multi-scale Turing pattern generator in C, based on Jonathan McCabe's algorithm.

There are many visual patterns that occur naturally: zebra stripes, leopard spots, tribal tattoos... Ah well, maybe not the last one. Anyway, it turns out that some of these patterns can be described mathematically and computer-simulated. Starting with a grid of pixels of random values and applying a simple set of rules over and over again, you see unexpected patterns emerge. Maybe you know Conway's Game of Life; it's the same kind of idea. Right now we're interested in Turing patterns, or reaction-diffusion patterns, which were first described by Alan Turing. We owe him a lot, don't we?

Jonathan McCabe published an algorithm to generate "Cyclic Symmetric Multi-Scale Turing Patterns". Multi-scale because the patterns exist and evolve at different levels and at different timesteps (e.g. small wobbly mazes wobbling along with big wobbly mazes), and cyclic-symmetric because there's some circular averaging involved in the process, which gives a kind of mandala feeling to the generated pictures.


His published pictures and the videos that can be created with his algorithm (see below) are amazing. Think of a Windows Media Player visualization who quit her job and started doing LSD and listening to aggrotech. McCabe was nice enough to give lots of additional details about his algorithm to Jason Rampe, who published them in this blog post. Originally, I became aware of McCabe's work thanks to that post by Frederik Vanhoutte.
Some developers have already put their implementations online; see for instance this and that. I wanted to write my own so I could really understand the algorithm and play with it to (hopefully) discover different kinds of patterns. At first I tried Python, because I'm learning it and thought it would be a good exercise. It was, but the generation was also really, really slow. Now I'm working in C, because it's my favorite language and it's much faster for this application.

The source code is on GitHub. It uses the gifsave89 library to generate GIFs, static or animated. I picked that one because the API is really simple, but I might switch to another library later. The documentation is generated with Doxygen. To understand the algorithm, everything you need to know is explained in the above-mentioned blog post.


As of today, the program runs and it's quite fast, so I got that going for me, which is nice. The generated animated GIFs can get huge, though. And it only works with square images. See the last log for the current limitations/unresolved bugs. Here's a video that I generated.


  • Done, more or less

    Peter Dell, 12/06/2016 at 07:50

    I decided to put this project on hold, even though it's a bit buggy and not as good as I'd like. It works decently, I've already learned a lot along the way, and I want to move on to something else, so this is a good time to consider it somewhat done. And I'm not selling it or anything, so you have no right to complain :p


    Remaining issues

    So far, all the image parameters (e.g. the size or the number of patterns) are defined as constants, so all the arrays (e.g. the 2D array that stores the image) are statically allocated. But I wanted to be able to call the program from a script and pass the parameters via the command line or a configuration file. In that case, the parameters wouldn't be known at compile time, and all the arrays that depend on them would have to be allocated dynamically.
    I tried to implement this and there are two problems:

    1. It makes the program less clear and a lot less safe, what with the mallocs and frees everywhere. It's totally doable, but I want to focus on getting the principles of multi-scale Turing patterns right, not spend more time dealing with pointers to pointers to pointers.
    2. For some of the arrays, it's easy to switch to dynamic allocation. But for the image, it's more complicated. The image goes through several functions (blurring, applying the symmetries, normalizing...) that would all have to be refactored to take pointers instead of 2D arrays. Again, it's no big deal in itself, but my design makes it hard to change one function at a time and test it: I'd have to refactor the whole chain in one go. I tried to do that, and I tried a couple of more subtle approaches, but it didn't work. I think my software design is to blame on that one. (A sketch of the approach I'd try next time follows this list.)
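
    For reference, here's the kind of refactoring that would make the dynamic allocation manageable: allocate the image as a single contiguous block and index it through a macro, so there's exactly one malloc/free pair per image and no pointers to pointers. A minimal sketch of the approach, with hypothetical names, not the actual Muscatupa code:

```c
#include <stdlib.h>

/* Index into the flat buffer as if it were 2D. */
#define IDX(img, x, y) ((img)->data[(size_t)(y) * (img)->w + (x)])

struct image {
    int w, h;
    double *data;   /* one contiguous w*h block */
};

/* Allocate a w x h image initialized to zero; returns NULL on failure. */
struct image *image_alloc(int w, int h)
{
    struct image *img = malloc(sizeof *img);
    if (!img)
        return NULL;
    img->w = w;
    img->h = h;
    img->data = calloc((size_t)w * h, sizeof *img->data);
    if (!img->data) {
        free(img);
        return NULL;
    }
    return img;
}

void image_free(struct image *img)
{
    if (img) {
        free(img->data);
        free(img);
    }
}
```

    The price is writing IDX(img, x, y) instead of img[x][y] in the processing functions, but every function then takes a single struct image * and the chain could be refactored one function at a time.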

    Concretely, the problems that remain are the following:

    • Only square images can be generated (see the last log). I know the problem comes from the blurring function, but I wasn't able to correct it. The symmetry functions are also written for square images only, because I was lazy.
    • Symmetry orders other than 2 or 4 (i.e. all the complicated ones) kind of work, but with artifacts. I have no clue where they come from.
    • On some computers, with some parameters (picture too large?), the program crashes with a segfault.

    What I learned

    I made quite a lot of design mistakes at the beginning, and I realized the consequences when it was too late. I see that as a success because next time, I'll be more careful in the design phase. Incidentally, I just started reading "Code Complete" by Steve McConnell and he describes software design as a wicked problem, i.e. a problem that you have to solve once, mess up, and learn from your errors to solve it right the second time. That resonates with my experience here. Specifically, the mistakes I made:

    • The functions that do the actual pattern generation are too dependent on one another. It's hard to modify one without having to change the others as well.
    • I should have written unit tests for the important modules from the beginning.
    • Related to the last point: from the beginning, I tested all my functions with square images of sizes between 100x100 and 800x800 pixels. If I hadn't restricted myself to square images, I would have quickly seen that the blurring wasn't correct, back when it was easier to debug. And if I had tried to generate larger images, I would have realized that they caused segfaults. So I should have tested more liberally, and not only special cases.

    I also learned about Turing patterns, obviously, as well as Doxygen and makefiles. And I had to think a lot about pointers, which is always good.


    Like I said earlier, I want to move on even if the source is still buggy. On the other hand, I'm not going to let something half-baked run around on the interwebz without ever correcting it -- I have a reputation to maintain. (No? I don't? Damn.) I'm definitely going to use what I learned here in another project featuring Turing patterns. Something über-fancy. In the meantime, I generated a cool single-scale pattern to decorate my guitar, so I'll leave you...


  • Improving and debugging

    Peter Dell, 09/15/2016 at 10:18

      There are three things I want to improve or correct:

      1. All the parameters (number of patterns, picture dimensions, etc.) should be passed as command-line arguments. If no arguments are provided, assume some default values. My plan is to make a script that repeatedly calls the pattern generator, automatically changing the arguments, and stores the results (animations and/or pictures). This way, I could generate tons of results and see the influence of the parameters. This feature won't be hard to implement; a sketch follows this list.
      2. Correct the symmetry, which works OK, but the results are only perfect with symmetry orders 2 and 4. These are special cases because they split the image into square zones. For other symmetry orders, some artifact-ish patterns emerge here and there, and that's not good. So far I haven't been able to narrow the problem down.
      3. Generate rectangular images. Let's develop that one below.
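
      For point 1, here's roughly what I have in mind: parse the arguments with getopt and fall back to defaults when they're absent. A minimal sketch; the option letters and default values are hypothetical:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    /* Hypothetical defaults, used when no arguments are provided. */
    int width = 300, height = 300, n_patterns = 3;
    int opt;

    while ((opt = getopt(argc, argv, "w:h:n:")) != -1) {
        switch (opt) {
        case 'w': width      = atoi(optarg); break;
        case 'h': height     = atoi(optarg); break;
        case 'n': n_patterns = atoi(optarg); break;
        default:
            fprintf(stderr, "usage: %s [-w width] [-h height] [-n patterns]\n",
                    argv[0]);
            return EXIT_FAILURE;
        }
    }

    printf("generating %dx%d with %d patterns\n", width, height, n_patterns);
    /* ... run the generator with these parameters ... */
    return EXIT_SUCCESS;
}
```

      A shell script can then call the program in a loop, sweeping the arguments and collecting the outputs. Now, let's develop point 3.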

      I've been aware for a while that the generator doesn't work if the width and height of the picture aren't equal, so I've limited the tests to square images. But now I want to know what really happens. So I'm running some tests. Here's a 300x300 picture with 3 patterns and no symmetry:

      Everything's fine. Now let's try with 400x300:

      Wooow, that's a mess. There seems to be stuff repeating and overlapping... It's hard to understand what's happening. But with a small difference between the width and height, we can see a bit more clearly. The picture below on the left is 301x300, the right one 302x300.

      It's like the left picture is split into two zones, and the right one into three. I highlighted the splits in red to show them better:

      Turns out an image of 301x300 gets split into two zones, 302x300 into three, 303x300 into four... For 400x300, there are so many zones that it's hard to recognize anything.

      At first I thought that the zones were completely separate, with the patterns evolving independently within each one. I also noticed that each zone looked like a good picture, but distorted. And I kept looking at those pictures, and oh-em-gee, something clicked! Take the 301x300 picture and separate the two zones:

      Put the right zone on the left and vice versa:

      See where this is going? Now apply some shear to de-distort the picture:

      The zones are not independent; it's really a single picture, but distorted and somehow split and wrapped around. Or is it some kind of aliasing? I still don't know what's happening there, but hopefully I'm getting closer.
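
      For what it's worth, one classic bug that produces exactly this split-and-sheared look is a stride mismatch: filling the buffer with one row length and reading it back with another. A toy illustration of the symptom, not the actual Muscatupa code:

```c
#define W 301   /* actual image width  */
#define H 300   /* actual image height */

double buf[W * H];

/* Correct indexing: row y starts at offset y * W. */
double read_ok(int x, int y)    { return buf[y * W + x]; }

/* Buggy indexing: the row length is assumed to be 300, so row y is
 * read y pixels too early. The image appears sheared, and it wraps
 * into a new "zone" each time the accumulated shift exceeds W. */
double read_buggy(int x, int y) { return buf[y * 300 + x]; }
```

      With a width of 301 and a stride error of 1 per row, the shift accumulates to one full width over the 300 rows, giving two zones; with 302, two widths and three zones, which matches the observations above. Whether this is actually what happens in my blur function remains to be checked.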

  • Rotational symmetry

    Peter Dell, 09/10/2016 at 11:35

      One feature of the algorithm that's been missing so far is proper rotational symmetry. I've already implemented a point symmetry that gives the same result as a 2-fold rotation; it's fast and the code is simple, but it's not generalizable to n-fold rotations. What I find especially cool is 3-fold symmetry. Here I'll explain how I coded the general symmetry and present some results.

      According to the original algorithm, the rotational symmetry is applied to the activator and inhibitor arrays of each scale, before looking for the best variations. Each pixel/array element is averaged with its symmetrical points around the origin of the image; the number of averaged pixels depends on the symmetry order. First we'll see how the symmetry works in practice, then to which part of the image it should be applied, and finally an idea for organizing all this data.

      Rotational symmetry

      For the moment, let's forget the Turing patterns, forget that we'll be working with data arrays, and just focus on point symmetry in a 2D coordinate system. We have a picture, assumed square, with a width of 2r and the origin at coordinates (0, 0).

      Let's also assume that we want the picture to be point-symmetrical around the origin, with an order of 4. That means the image will be the same sub-image repeated four times, rotated around the origin.

      There's a point at coordinates (x0, y0) whose symmetrical points we want to find. If we draw a circle centered on the origin and passing through (x0, y0), the first symmetrical point is also on this circle, but rotated by one fourth of a full circle, that is 2π/4. This gives the coordinates (x1, y1). The second symmetrical point (x2, y2) is found by the same method with a rotation of 2·2π/4, and the last point (x3, y3) with a rotation of 3·2π/4. In the end, we have four points that are symmetrical around the origin.

      In general, for a symmetry order s, the rotations of the point (x0, y0) around the origin are the angles

      θ_i = i · 2π/s

      where

      i = 1, 2, ..., s − 1

      Obviously, s = 1 is a special case with no symmetry at all.

      Concretely, how do we compute the coordinates of the points symmetrical to (x0, y0)? Knowing the angles θ_i, the rotation is done with

      x_i = x0 · cos(θ_i) − y0 · sin(θ_i)
      y_i = x0 · sin(θ_i) + y0 · cos(θ_i)

      Looping through the values of i, it's easy to make a list of all the points symmetrical to (x0, y0). And if these two formulas are applied to all the points in the appropriate portion of the image (for a symmetry order s = 4, that's the top-right quarter), we can list all the symmetries in the image.
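
      As a C sketch (a hypothetical helper, assuming the origin has already been translated to the center of the image):

```c
#include <math.h>

/* Fill xs[i] and ys[i] with the i-th rotation of (x0, y0) around the
 * origin, for i = 0 .. s-1 (i = 0 is the original point itself). */
void symmetric_points(double x0, double y0, int s, double xs[], double ys[])
{
    const double pi = acos(-1.0);

    for (int i = 0; i < s; i++) {
        double theta = i * 2.0 * pi / s;
        xs[i] = x0 * cos(theta) - y0 * sin(theta);
        ys[i] = x0 * sin(theta) + y0 * cos(theta);
    }
}
```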


      Application

      For the project, we have to apply these rotations to a 2D array (activator or inhibitor) and average the values of the symmetrical points. For example, let's work on the activator array act[x][y], again with s = 4:

      1. Find the coordinates (x0, y0), (x1, y1), (x2, y2), (x3, y3) thanks to the formulas above.
      2. Compute the average avg = (act[x0][y0] + act[x1][y1] + act[x2][y2] + act[x3][y3]) / 4.
      3. Replace the values in the array, i.e. act[x0][y0] = avg, act[x1][y1] = avg, act[x2][y2] = avg and act[x3][y3] = avg.
      4. Loop steps 1 to 3 over all act[x][y] with (x, y) belonging to the top-right quarter.

      Here, step 1 is done on the fly, which requires computing rotations each time we want to symmetrize an array. But if the symmetry order is fixed, it's more elegant to build a list of symmetrical coordinates at the beginning and just go through the list when repeating steps 2 and 3. Besides, it avoids computing cos and sin all the time, which could be computationally expensive (thanks to Jason Rampe for the idea).
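
      A sketch of that lookup-table idea; the types and constants are hypothetical, not the actual Muscatupa code:

```c
#define SIZE      300   /* hypothetical image size */
#define MAX_ORDER 12    /* hypothetical maximum symmetry order */

/* One entry per point of the reference sector, holding the s
 * symmetrical coordinates, already rotated and rounded to integers. */
struct sym_group {
    int x[MAX_ORDER];
    int y[MAX_ORDER];
};

/* groups[0..n_groups-1] is built once at startup from the rotation
 * formulas above; act is an activator (or inhibitor) array. */
void symmetrize(double act[SIZE][SIZE], const struct sym_group groups[],
                int n_groups, int s)
{
    for (int g = 0; g < n_groups; g++) {
        double avg = 0.0;
        for (int i = 0; i < s; i++)     /* step 2: average the group */
            avg += act[groups[g].x[i]][groups[g].y[i]];
        avg /= s;
        for (int i = 0; i < s; i++)     /* step 3: write the average back */
            act[groups[g].x[i]][groups[g].y[i]] = avg;
    }
}
```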

      A first caveat about the rotation applied to a data array: the (x, y) values are integers, but we compute them with cos and sin functions, which give floating-point values. So there are lossy conversions happening, and the rotations/symmetries are flawed. This isn't too worrisome if the resolution is large enough, because the couple of pixels that are badly symmetrized will be lost in the sea of correct ones. But it's still important to remember that the integer operations are not perfect. In addition, we're...


  • It's alive! It's alive!

    Peter Dell, 08/31/2016 at 05:14

      I finally understood why the patterns stabilized quickly and why the generated pictures didn't look like J. McCabe's... It turns out I made a mistake in writing the algorithm. The program used to do this:

      1. Generate activator and inhibitor arrays.
      2. Compute variation arrays for each scale.
      3. Find the smallest variation over all scales and store the corresponding scale.
      4. Update all pixels based on this one scale.

      At any given time step, all the pixels of the image were updated on one scale only, and that's not the intended behavior. Instead, the algorithm should work like this:

      1. Generate activator and inhibitor arrays.
      2. Compute variation arrays for each scale.
      3. For each pixel individually, find which scale has the smallest variation.
      4. Update each pixel based on its own local best scale (see the sketch below).
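
      In code, the fix boils down to moving the scale selection inside the pixel loop. A sketch with hypothetical names and constants, following the update rule described in the blog post mentioned in the project description (nudge the pixel up if the activator beats the inhibitor at the chosen scale, down otherwise):

```c
#define SIZE     300    /* hypothetical image size */
#define N_SCALES 3      /* hypothetical number of scales */

/* act, inh, var: activator, inhibitor and variation arrays per scale;
 * step[n]: the small update amount associated with scale n. */
void update_image(double im[SIZE][SIZE],
                  double act[N_SCALES][SIZE][SIZE],
                  double inh[N_SCALES][SIZE][SIZE],
                  double var[N_SCALES][SIZE][SIZE],
                  const double step[N_SCALES])
{
    for (int x = 0; x < SIZE; x++) {
        for (int y = 0; y < SIZE; y++) {
            /* Step 3: find the scale with the smallest variation
             * at this particular pixel. */
            int best = 0;
            for (int n = 1; n < N_SCALES; n++)
                if (var[n][x][y] < var[best][x][y])
                    best = n;
            /* Step 4: update this pixel using its own best scale. */
            if (act[best][x][y] > inh[best][x][y])
                im[x][y] += step[best];
            else
                im[x][y] -= step[best];
        }
    }
}
```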

      This way, we really get multi-scale patterns. The pictures are awesome now, and they keep evolving all the time. I also implemented a simple point symmetry around the origin; I'll improve it later. And finally, I started playing around with color maps to get something more fun than black-and-white pictures. A three-color gradient looks pretty nice (black to dark red to white, for example).
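
      The gradient itself is just piecewise linear interpolation between three color stops. A minimal sketch, assuming pixel values normalized to [0, 1]; the exact colors are placeholders:

```c
/* Map v in [0, 1] onto a black -> dark red -> white gradient. */
void colormap(double v, unsigned char rgb[3])
{
    const unsigned char stops[3][3] = {
        {0,   0,   0},      /* black    at v = 0.0 */
        {139, 0,   0},      /* dark red at v = 0.5 */
        {255, 255, 255}     /* white    at v = 1.0 */
    };
    int seg = (v < 0.5) ? 0 : 1;        /* which half of the gradient */
    double t = (v - seg * 0.5) / 0.5;   /* position within that half */

    for (int c = 0; c < 3; c++)
        rgb[c] = (unsigned char)((1.0 - t) * stops[seg][c]
                                 + t * stops[seg + 1][c] + 0.5);
}
```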

