PROGRAMMING PARADIGMS

Complex Systems, Fractals, and Chaos

Michael Swaine

Too bad complexity isn't a little simpler. Last month I presented several of the current views on how to manage complex systems. These were all software engineering strategies for getting organized, the idea being that we need lots of order if we hope to cope with complexity in unpredictable environments.

This month chaos gets its turn.

There is mounting evidence that the management of a complex system in an uncertain environment requires a healthy dose of chaos. Some of that evidence was presented in a recent Scientific American article, "Chaos and Fractals in Human Physiology," by Ary L. Goldberger, David R. Rigney, and Bruce J. West (February 1990). Goldberger et al. focus on the management of one of the most complex of systems, the human body, but their conclusions are interesting for what they say about complex systems in general. There is nothing inherently biological in their mathematics.

Every operating system designer knows you need a little randomness to avoid certain problems. The Dining Philosophers problem, discussed here and more fully in David Harel's book, Algorithmics (Addison-Wesley, 1987), is a classic case of deadlock. In its fully symmetric, distributed form it cannot be resolved without introducing an element of randomness: any strictly deterministic solution in which every philosopher follows the same rule is guaranteed to fail. While this sounds like what Goldberger et al. are saying about chaos in human physiology, there is an important difference. Goldberger et al. are not merely describing a useful random tweaking of an existing model, but throwing out an existing mathematical model and replacing it with another.
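To make the point concrete, here is a toy round-based simulation in Python (my own sketch, not from Harel; the function name, the round structure, and the 50 percent back-off probability are all my assumptions). Every philosopher follows the same grab-left-then-right rule; without the random back-off the table deadlocks after the first round, while with it everyone eventually gets to eat.

```python
import random

def dine(n, rounds, randomized, seed=1):
    """Toy round-based Dining Philosophers. Philosopher p needs forks
    p (left) and (p + 1) % n (right); forks[f] holds the owner or None."""
    rng = random.Random(seed)
    forks = [None] * n                     # who holds each fork
    held = [set() for _ in range(n)]       # forks each philosopher holds
    meals = [0] * n
    for _ in range(rounds):
        for p in range(n):
            left, right = p, (p + 1) % n
            if randomized and rng.random() < 0.5:
                for f in held[p]:          # back off: drop any forks held
                    forks[f] = None
                held[p].clear()
                continue
            for f in (left, right):        # same rule for everyone:
                if f not in held[p] and forks[f] is None:
                    forks[f] = p           # grab left first, then right
                    held[p].add(f)
                    break
            if held[p] == {left, right}:   # both forks: eat, then release
                meals[p] += 1
                forks[left] = forks[right] = None
                held[p].clear()
    return meals
```

In the deterministic run every philosopher picks up a left fork in round one and then waits forever for a right fork; the randomized run breaks the symmetry.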

That new model is deterministic chaos, and that phrase is not, apparently, an oxymoron. (My favorite recent additions to the oxymoron lexicon are "Justice Rehnquist," thanks to John Perry Barlow, retired cattle rancher, Grateful Dead lyricist, and computer book author; and "reputable astrologer," an oxymoronic self-characterization by a notorious Reagan family advisor.)

Deterministic chaos arises from the discipline of nonlinear dynamics, which is the study of systems that respond disproportionately to stimuli. In certain situations, nonlinear systems that are strictly deterministic nevertheless behave in seemingly random ways. This is not true randomness, but rather a constrained, erratic behavior called "chaos." It is the chaos of the heart.
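A minimal illustration, using the textbook example rather than anything from the article: the logistic map. It is completely deterministic, yet at its chaotic setting two runs that start a billionth apart diverge until they are unrelated, while both stay confined to the unit interval -- constrained but erratic.

```python
# The logistic map x -> r*x*(1 - x) is fully deterministic, yet for
# r = 4 nearby starting states diverge exponentially.
r = 4.0
a, b = 0.3, 0.3 + 1e-9        # two almost-identical initial states
separation = []
for _ in range(80):
    a, b = r * a * (1 - a), r * b * (1 - b)
    separation.append(abs(a - b))
# separation starts microscopic and grows until the two trajectories
# are effectively uncorrelated, though each remains inside [0, 1]
```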

Times of the Heart

The conventional physiological model that covers health, disease, and aging is homeostasis. According to this model, physiological systems act to reduce variability and to maintain constant or regularly periodic internal states. Fluctuations in heart rate have been viewed as the response to external stresses, and it has been assumed that the physiological system functions to return the heart rate to its normal level. A perfectly normal heart in a perfectly stressless environment would, under this model, tick like a metronome.

The authors argue that the conventional model is wrong. Within the past five years evidence has been accumulating that chaotic behavior in physiological systems may be the product of healthy functioning, while regular, periodic behavior or steady-state behavior is symptomatic of some problem. Healthy young hearts often exhibit the greatest irregularity, while regularity of heartbeat is sometimes a precursor of heart failure.

One study the authors cite depends on examining heart rate data plotted as Fourier spectra and phase-space plots. Fourier analysis displays periodicity clearly as spikes; steady-state behavior as low, flat lines; and chaotic behavior as a broad spectrum. Phase-space plots give a different picture, but are even better at identifying chaotic behavior. In phase-space plots, periodicity shows up as clearly cyclic figures called "limit cycles," steady-state behavior maps into a point, and chaotic behavior produces something called a "strange attractor," which looks clearly chaotic.
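The Fourier half of that distinction can be sketched with toy stand-ins rather than real heart-rate data: a pure sine wave for the metronome-like heart and a chaotic logistic-map series for the healthy one, analyzed with a naive DFT (the function names are mine, and the signals are illustrative, not physiological).

```python
import cmath, math

def power_spectrum(x):
    # naive DFT; fine for a short illustrative series
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2
            for k in range(1, n // 2)]     # skip the DC term

def spike(p):
    # fraction of total power concentrated in the single biggest bin
    return max(p) / sum(p)

n = 128
# a metronome-like "heartbeat": all energy lands in one spectral spike
periodic = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]
# a chaotic series (logistic map): energy spreads across a broad band
x, chaotic = 0.4, []
for _ in range(n):
    x = 3.99 * x * (1 - x)
    chaotic.append(x)
```

Running `spike` on the two spectra shows the periodic signal's power piled into one bin and the chaotic signal's power spread broadly.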

The results they cite are impressive. A healthy heart produced a broad Fourier spectrum and a strange attractor. Unhealthy hearts showed either a Fourier spike and a limit cycle, or a nearly flat Fourier spectrum and a point attractor. Chaos, apparently, is healthy.

This chaos seems to be directed from the nervous system, where researchers are finding evidence of further chaos. Significantly, heart-rate variability decreases after a heart transplant, which requires severing a connection between the heart and the nervous system. Chaos also seems to be present in the nervous control of hormonal secretion and in other areas in the nervous system. One model shows how chaos could be produced in the nervous system: It involves feedback loops among neurons.

There's another chaotic clue in the nervous system. The branching structure of neurons seems to have fractal dimension, which is significant because of the connection between fractals and chaos.

River's Edge

Just what the relationship is between chaos and fractals is not entirely clear. But there is a connection: Fractals are often the remnant of chaotic nonlinear dynamics. The picture seems to be this: A chaotic process shapes an environment, and the trace left behind is a fractal.

A fractal, then, is a geometric form with the following distinctive characteristics: Infinite detail and self-similarity at any scale. No matter how closely you examine it, you find more detail, and it looks more or less the same at any level of magnification. A fractal can be an infinitely branching line, an infinitely lumpy surface, or any similarly hairy object of higher dimensionality.

The dimensionality of a fractal is more complicated than this, though. Because of its infinite detail, a fractal does not really have a dimension in the conventional sense. An infinitely branching line has no single measure of length, and is not a one-dimensional object. A fractal's dimension is defined to be a function of the probability that it touches any given point in the space containing the fractal. The dimension of an infinitely branching line fractal is a number between 1 and 2. This fractional dimensionality is where fractals get their name. There are some dizzying consequences of this fractional dimensionality, such as the ideas that a coastline doesn't have any definable length, and that a river has no edge.
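For a concrete instance, take the Koch curve, a standard example not mentioned in the column: each refinement replaces every segment with four segments one-third as long, so the curve's measured "length" grows without bound while the figure stays within a fixed region. Counting pieces against scale gives the fractional dimension directly.

```python
import math

# At stage n the Koch curve contains N = 4**n pieces, each at scale
# s = (1/3)**n.  The dimension, log N / log(1/s), is the same at every
# stage: log 4 / log 3, a number strictly between 1 and 2.
dims = []
for n in (1, 4, 8):
    pieces, scale = 4 ** n, (1 / 3) ** n
    dims.append(math.log(pieces) / math.log(1 / scale))
```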

Fractals have received a lot of attention in computer science since their discovery by Benoit Mandelbrot. But they are not just of academic interest. In cinematic computer graphics and elsewhere, fractals are proving to be powerful. If you want to model branching anatomical structures such as lungs and nerves, flora-like trees or shrubbery, or coastlines or river meanders or mountain chains or any sort of terrain, you will do well to examine fractals.

There have been a number of articles and books about fractals in nature, reminiscent of past articles and books on mathematics in nature. Mathematical functions and forms such as the spiral and helix, and the phi function and Fibonacci series, crop up with amazing frequency in natural forms. Books such as H.E. Huntley's The Divine Proportion and Theodore A. Cook's The Curves of Life, popular treatments both available in Dover paperback editions, describe how such mathematical entities appear in odd places in nature.

The Fibonacci sequence is particularly common in nature. The sequence begins 1, 1, and each succeeding term is the sum of the two that precede it. The pattern of interlocking spirals on the face of a sunflower is particularly impressive: Fib(n) spirals twist clockwise and Fib(n+1) spirals twist counterclockwise, the two sets of spirals defined on the same set of florets. For sunflowers, the value of n is 8, producing interlocking spirals of 21 and 34 florets. The same mathematical structure appears in other plants: Pinecones and pineapples show the same spirals, but with a different value for n.
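The generating rule is simple enough to state in a few lines (a sketch; the function name is mine):

```python
def fib(n):
    # Fib(1) = Fib(2) = 1; each later term is the sum of the previous two
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

# the sunflower's interlocking spiral counts, with n = 8 as in the text
clockwise, counterclockwise = fib(8), fib(9)
```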

Intriguing as these examples of mathematics in nature are, they don't seem to offer any deep insights into nature. Apparently the number of florets in each row outward constrains the number of florets in the next concentric row in a way consistent with the simple rule for generating Fibonacci numbers. Something like that. In any case, the math may be nifty, but the underlying natural process generally turns out on examination not to be profound.

With fractals, there seems to be something deeper at work.

My Science Project

Since reading The Science of Fractal Images by Heinz-Otto Peitgen and Dietmar Saupe (Springer-Verlag, 1988), I've been playing with one particular algorithm for producing graphic forms that branch like plants. A program that I've written uses simple transformational rules to produce branching structures that look more like trees than anything I could ever produce by hand.

What I find more interesting, and what has kept me fiddling with the algorithm off and on for two years, is its apparent relevance to the process of growth. It's fascinating to watch the process of development of these fractal flora. Starting from a single shoot, the graphic develops into a twig with a couple of offshoots, then into something that looks like a b-tree in a wind-storm, finally turning into a credible sketch of a bare tree. While there exist algorithms for putting leaves on the branches, it's not the verisimilitude of the static image that impresses me, but the accuracy of the simulation of the development process.

Note: The development of the fractal, the series of stages it goes through as it increases in complexity, is strictly an artifact of the way fractals are defined. Although it is possible to pick parameters so as to create a final image resembling one plant or another, it isn't possible, so far as I can tell, to control the process of development by the choice of parameters. I've made no attempt, in any case, to mirror any kind of natural process; the program does it -- er, naturally.

Something is going on in the development of the fractal that has something deep in common with what goes on in the growth of a plant. Fractals know something about biological growth.

Perhaps it has something to do with what the Scientific American authors say about fractals: That they are often the remnant of chaotic nonlinear dynamics. Apparently the presence in an object of a static structure well modeled by a fractal is some evidence of chaotic nonlinear dynamic processes at work in the development of the object. If that's what's happening in my fractal flora, then the algorithm is doing more than drawing nice tree-like pictures. It is a fairly deep simulation of the process of growth in organisms.

I don't want to overstate the point. I don't think we're on the verge of algorithms for simulating human development that will acquire and lose their gill slits as ontogeny recapitulates phylogeny. But it is intriguing that the fractal flora simulate stages of natural growth with no prompting in the form of rigged parameters. The fact that fractals have infinite detail and self-similarity implies a lot about how they develop, and in fact allows a very simple initial rule to apply at successively more complex stages of development, just as the rules for organic development must apply to the early stages of organ development and also to the later stages in which organs interact in complex systems.

Fractals and DNA appear to have similar problems to solve. Is it possible that they solve them in similar ways?

Field of Dreams

One of the dreams of science fiction, and consequently of the artificial intelligence community that reads science fiction stories for research topics, has always been the system that programs itself. The machine that actively seeks out knowledge in order to grow more wise. The vague notions of how this might come about seem always to rest on faith in critical mass. Even Douglas Hofstadter's vision of artificial consciousness assumes that sufficient complexity somehow magically transforms a system into an intelligence.

Critical mass probably isn't enough; natural systems need a plan of development, the genetic blueprint. It seems reasonable to expect that highly complex and adaptive artificial systems would need some plan, too. The current Most Likely to Succeed paradigm for machine learning, or adaptive systems, is neural nets. Currently, neural nets are designed with as much naivete regarding neurophysiology as regarding neuroanatomy. There is little reason to think that exposing a blank slate neural net to unpredictable events will lead it somehow to cope with its environment. If neural nets are to grow more complex in useful ways, don't we need to build in a plan for recognizing what is useful, and shouldn't it be a plan that can admit of more sophisticated interpretation as the system gets more sophisticated?

Do artificial neural nets, like real networks of neurons, need a dose of chaos?

More Details

Fractal Flora

Here's a sketch of the fractal program I've written. It doesn't merit a pseudocode description, because the underlying algorithm is not efficient. My purposes had as much to do with teaching HyperTalk as with exploring fractals, so I implemented it so as to keep the concepts visible, employ simple user-comprehensible graphic tools, and use only HyperTalk code. A serious exploration of fractals would have to abandon all of these constraints.

The program uses turtle graphics, which is to say that the user describes the figures to be drawn in terms of strings of one-character commands, which specify the direction and movement of an imaginary drawing turtle. (When you've been editor of a magazine originally called Dr. Dobb's Journal of Tiny BASIC Calisthenics and Orthodontia, you learn how to write things like that with a straight face.) The program draws the fractals by passing the turtle graphic commands to a simple turtle graphic engine.

The program recognizes these turtle graphic commands:

Drawing:     F  Forward 1 unit, pen down
             U  Forward 1 unit, pen up

Orientation: <  Turn one unit left
             >  Turn one unit right

Context:     [  Store current turtle position & direction
             ]  Reset turtle to previous position & direction
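The column's program is in HyperTalk, so as a sketch of how such an engine might work, here is a minimal Python stand-in that interprets the same command set and returns the line segments that would be drawn (the function name and the coordinate conventions, such as starting at the origin pointing up, are my assumptions):

```python
import math

def run_turtle(cmds, unit_angle, step=1.0):
    """Interpret the turtle command set above; return segments drawn."""
    x, y, heading = 0.0, 0.0, 90.0          # origin, pointing "up"
    stack, segments = [], []
    for c in cmds:
        if c in "FU":                       # move forward; F draws, U doesn't
            nx = x + step * math.cos(math.radians(heading))
            ny = y + step * math.sin(math.radians(heading))
            if c == "F":
                segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
        elif c == "<":                      # turn one unit left
            heading += unit_angle
        elif c == ">":                      # turn one unit right
            heading -= unit_angle
        elif c == "[":                      # store position & direction
            stack.append((x, y, heading))
        elif c == "]":                      # reset to stored state
            x, y, heading = stack.pop()
    return segments
```

Feeding it `"F[>F][<F]"` with a 45-degree unit angle draws a trunk with two branches forking from its tip.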

Fractals start from a simple base figure, and are transformed to new levels of complexity via transformation rules. The program steps through these levels, transforming the current string of turtle graphic characters into a new string. It draws the current version of the fractal from the turtle graphic string at each level. The "true" fractal requires infinite levels of detail, so the program is only drawing successive approximations. Figure 1 shows a typical image generated by the program.

The program starts by prompting the user for parameters. The user must give the fractal a title, a base string of turtle graphic commands (the single command F is typical), a repeatMode, a unitAngle, and a set of rules for transforming the base string to produce higher levels of the fractal. RepeatMode controls how the program steps through the levels, and unitAngle (0-360) controls how sharply the turtle turns.

A typical transformation rule is F -> F[>F][<F]. This turns each straight line at level n into a fork consisting of one step forward and branches to the left and right. The user enters these transformation rules in response to specific prompts. Each such prompt shows a turtle graphic character and asks what it is to be transformed into. Initially, these characters are just the characters in the base string, but new rules may introduce new characters, requiring further rules. Rules that don't use the commands [ and ] will produce figures such as coastlines and mountain ridges; rules that use [ and ] will produce branching structures such as trees and blood vessels.
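The level-stepping itself can be sketched in a few lines of Python (a stand-in for the HyperTalk original; the names are mine). Applying the example rule to the base string F shows how quickly the command string grows:

```python
def expand(s, rules):
    # one transformation step: each character that has a rule is
    # replaced by its expansion; all other characters pass through
    return "".join(rules.get(c, c) for c in s)

rules = {"F": "F[>F][<F]"}   # the example rule from the text
level = "F"                  # the base string
history = [level]
for _ in range(3):           # step through three levels of the fractal
    level = expand(level, rules)
    history.append(level)
# each level's string would be handed to the turtle engine for drawing
```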

Perhaps the main point of describing this process is to show how inefficient fractals are to generate this way. Because each transformation is a function, one could, I suppose, decide how many levels deep to go, compute the composition of the functions needed, and apply that composite function once. For me, though, computing and drawing each successive level is the point: it is the process of fractal growth that I'm interested in. -- M.S.