American Institute of Physics
Physics News Update
Number 574 #2, January 23, 2002 by Phil Schewe, James Riordon, and Ben Stein

The Science of Roughness

The science of roughness is how Benoit Mandelbrot describes the use of fractal mathematics to understand objects in the real world. Euclid and the ancient Greeks may have assumed that lines are smooth and one-dimensional, but many typical curves in nature are tortuously indented. Their roughness can nevertheless be expressed as a fractal dimensionality, one greater than one but less than two.

In a trailblazing 1967 paper Mandelbrot showed, for instance, that the coastline length of Britain was anything but an "objective" constant. Instead it grew as one shrank the size of the ruler used to lay out the coast. In fact the number of ruler lengths needed to cover the coastline, and many other perimeters, grows as the inverse of the ruler size raised to a power: N = (1/R)^D, where R is the ruler size and D is the fractal "dimensionality." The measured length, N x R, therefore grows as R^(1-D) as the ruler shrinks, whenever D exceeds 1.
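This ruler-shrinking measurement can be sketched numerically. As an illustration (not from the article), the snippet below applies the divider method to the Koch curve, a mathematical coastline whose exact fractal dimension is log 4 / log 3 (about 1.26), and recovers D from the slope of a log-log "Richardson" plot of measured length versus inverse ruler size.

```python
import math

# Divider measurement of the Koch curve (a minimal sketch):
# at refinement level n the ruler length is R = 3**-n and the curve
# is covered by 4**n ruler steps, so the measured length is
# L(R) = 4**n * 3**-n = (4/3)**n, which grows as the ruler shrinks.
levels = range(1, 8)
rulers = [3.0 ** -n for n in levels]
lengths = [4.0 ** n * 3.0 ** -n for n in levels]

# Richardson plot: log L versus log(1/R) is a straight line
# whose slope is D - 1, where D is the fractal dimension.
xs = [math.log(1.0 / r) for r in rulers]
ys = [math.log(length) for length in lengths]
slope = (ys[-1] - ys[0]) / (xs[-1] - xs[0])
D = 1 + slope
print(round(D, 4))  # 1.2619, i.e. log 4 / log 3
```

Because the Koch curve is exactly self-similar, the fitted slope here reproduces the theoretical dimension to machine precision; for a real coastline one would fit a least-squares line through noisy measurements instead.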

This power-law relationship can also typify the "time series" behavior of many phenomena, such as volcanoes, earthquakes, and hurricanes: a plot of the likelihood of an event of a certain magnitude (an earthquake's energy, say) versus the magnitude has a power-law shape. Formulating the chances of large-but-rare floods or earthquakes is obviously not merely of academic interest. Understanding the math behind large systems like a forest or a geologic fault has enormous implications for public safety and insurance underwriting.
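As a hedged illustration of such power-law statistics (not anything presented at the meeting), the sketch below draws synthetic "event sizes" from a Pareto distribution and checks that the number of events exceeding a size s falls off as a power of s; the exponent b = 1 is an arbitrary choice, not a fitted geophysical value.

```python
import math
import random

# Synthetic power-law "catalog" (illustrative assumption, exponent b = 1):
# inverse-transform sampling of a Pareto distribution with minimum size 1.
random.seed(0)
b = 1.0
sizes = [(1.0 - random.random()) ** (-1.0 / b) for _ in range(100_000)]

def exceedance(s):
    """Number of events with size greater than s."""
    return sum(1 for x in sizes if x > s)

# On a log-log plot, N(>s) versus s should be a straight line
# of slope about -b: tenfold bigger events are tenfold rarer here.
n1, n10 = exceedance(1.0), exceedance(10.0)
slope = math.log(n10 / n1) / math.log(10.0)
print(n10 < n1 and abs(slope + b) < 0.05)  # True
```

The heavy tail is the practical point: unlike a thin-tailed (e.g. Gaussian) model, a power law assigns non-negligible probability to very large events, which is why the choice of model matters for hazard estimates.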

These are some of the issues that arose at a series of talks at the recent American Geophysical Union meeting in San Francisco, where the recently formed nonlinear geophysics committee sponsored a variety of sessions on things that are "rough" in the fractal sense.

Here is a sampling of results. Bruce Malamud (King's College, London, bruce@malamud.com) and Donald Turcotte (Cornell University, turcotte@geology.cornell.edu) argued that "fractal" assessments of natural hazards are often more realistic than older statistical models in predicting rare but large disasters. They cited as an example the great Mississippi flood of 1993; a fractal-based calculation predicts a flood of this magnitude about once every 100 years, while the more-often-used "log-Pearson" model predicts a recurrence interval of about 1500 years.

In the realm of earthquakes, John Rundle (who heads the Colorado Center for Chaos and Complexity at the University of Colorado, rundle@cires.colorado.edu, 303-492-1149) described a model in which the customary spring-loaded sliding blocks used to approximate individual faults have a more realistic built-in leeway (or "leaky thresholds," not unlike the "integrate-and-fire" provisions used in the study of neural networks) for simulating the way in which faults jerk past each other. Applying these ideas to seismically active southern California, Rundle defines 3000 coarse-grained regions, each 10 km by 10 km (the typical size for a magnitude-6 quake). A coarse-grained wave function, analogous to those used in quantum field theory, is then worked out for the region, and probabilities for when and where large quakes would occur are determined. Rundle claims good success in retroactively predicting the likelihood of southern-California earthquakes over the past decade and makes comparable prognostications for the coming decade. (See also Rundle et al., Physical Review Letters, 1 October 2001; and Rundle et al., PNAS, in press.)
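Rundle's actual model is far more sophisticated than can be shown here, but the general "leaky threshold" idea admits a minimal sketch: lattice patches slowly accumulate stress, fail when they cross a threshold, and pass only part of their stress to neighbors, producing failure avalanches of many sizes. Everything below (grid size, threshold, transfer fraction) is an invented illustration, not Rundle's code or parameters.

```python
import random

# Minimal leaky-threshold fault lattice (illustrative parameters only).
N = 20          # 20 x 20 grid of coarse-grained fault patches
THRESH = 1.0    # failure threshold
ALPHA = 0.2     # stress fraction passed to each neighbor ("leaky": 4*ALPHA < 1)

random.seed(1)
stress = [[random.uniform(0, THRESH) for _ in range(N)] for _ in range(N)]

def drive_and_relax():
    """Load every patch slightly, then let failures cascade until the
    lattice is quiet again.  Returns the avalanche size ("quake size")."""
    for i in range(N):
        for j in range(N):
            stress[i][j] += 0.01                    # slow tectonic loading
    failures = 0
    unstable = [(i, j) for i in range(N) for j in range(N)
                if stress[i][j] >= THRESH]
    while unstable:
        i, j = unstable.pop()
        if stress[i][j] < THRESH:
            continue                                # already relaxed
        s = stress[i][j]
        stress[i][j] = 0.0                          # patch fails
        failures += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:
                stress[ni][nj] += ALPHA * s         # partial (leaky) transfer
                if stress[ni][nj] >= THRESH:
                    unstable.append((ni, nj))
    return failures

sizes = [drive_and_relax() for _ in range(500)]
print(max(sizes) >= 1)  # True: avalanches occur under steady loading
```

Because each failure dissipates part of its stress (the "leak"), every cascade terminates, yet steady loading keeps producing events of widely varying size, which is the qualitative behavior the slider-block picture is meant to capture.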

At the AGU meeting Mandelbrot himself delivered the first Lorenz Lecture, named for chaos pioneer Edward Lorenz. Mandelbrot discussed, among other things, how the process of diffusion limited aggregation (DLA) is characterized by not one but two fractal dimensions. DLA plays a key role in many natural phenomena, such as the fingering that occurs when two fluids interpenetrate. In a DLA simulation, one begins with a single seed particle. Other particles then undergo "random walks" and attach themselves to the cluster on contact. This results in a branching, dendritic structure in which the placement of new particles is subject to the blockage of existing limbs. One can study the dimensionality of this structure by drawing a circle of a given radius around the original seed particle, counting the number of particles lying on that circle, and measuring the angular gaps between branches at that radius.
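A DLA simulation of the kind described can be sketched in a few lines. The toy version below (an illustration, not the authors' code) grows a tiny 80-particle cluster on a square lattice: each walker is launched on a circle just outside the cluster, wanders randomly, and sticks the moment it touches an occupied site.

```python
import math
import random

# Toy on-lattice DLA (tiny compared with the 30-million-particle clusters
# in the study described below).
random.seed(42)
N_PARTICLES = 80
occupied = {(0, 0)}        # the seed particle, at the origin
radius = 1                 # current cluster radius
STEPS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def grow_one():
    """Random-walk one particle until it sticks to the cluster."""
    global radius
    while True:
        # launch on a circle just outside the cluster
        a = random.uniform(0, 2 * math.pi)
        x = round((radius + 2) * math.cos(a))
        y = round((radius + 2) * math.sin(a))
        while True:
            if x * x + y * y > (radius + 20) ** 2:
                break                              # wandered off: relaunch
            dx, dy = random.choice(STEPS)
            x, y = x + dx, y + dy
            if any((x + ex, y + ey) in occupied for ex, ey in STEPS):
                occupied.add((x, y))               # sticks beside the cluster
                radius = max(radius, int(math.hypot(x, y)) + 1)
                return

for _ in range(N_PARTICLES - 1):
    grow_one()

print(len(occupied))  # 80
```

Even at this scale the blockage effect described above is visible: once early arrivals form limbs, later walkers rarely penetrate the gaps between them, so the cluster grows outward at its tips rather than filling in.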

For many years studies of DLA have been confused by conflicting reports as to the underlying fractal dimensionality. Now Mandelbrot (at both IBM, fractal@watson.ibm.com, 914-945-1712, and at Yale, mel@math.yale.edu), Boaz Kol, and Amnon Aharony (aharony@post.tau.ac.il, 972-3-640-8558, at the University of Tel Aviv) have shown, by employing a massive simulation involving 1000 clusters, each of 30 million particles (previous efforts had used no more than tens of thousands of particles), that two different dimensionalities are always present, but this only becomes apparent in huge simulations. Comparing a modest (10^5 particles) and a large (10^8 particles) simulation shows that the larger cluster is not merely a scaled-up version of the smaller (see figures). These results (Mandelbrot et al., 4 February 2002 issue of Physical Review Letters) are the first quantitative evidence for this type of nonlinear self-similarity.