Canopy of Catastrophes
Random Walks with the Unhuman
Bhob Rainey
The following concerns the technical and conceptual backbone of a piece of music that appears on From null lands led, starrily, an album released in October 2017 on Anòmia. Much of the technical discussion revolves around sonification, which many readers will already know is a technique of mapping “unsounded” data (changes in temperature, the relative income of adjacent neighborhoods, mortality rates, etc.) onto sonic properties (pitch, volume, timbre, etc.). A subset of readers will likely hold one of two opposing views on sonification: 1) that it is a near-magical insight into the subject being sonified; or, 2) that it is a gimmick employed by lazy artists and researchers, usually of the grant-seeking variety. Headlines in the form of “Listen to the Sound of the S&P 500!” exemplify the first view while doing little to assuage the second. Recognizing that both views overstate the scope of either sonification itself or its purveyors’ intentions, a third voice often appears, telling us, “Sonification is just a tool among others. It’s all in how you use it.” This reasonable voice would prefer that the subject of tools and their effect on outcomes not emerge to further derail whatever discussion it has sought to put to rest.
A tool as potentially rich as sonification tends to leave its traces in the work it is attached to, especially if the subject being sonified is particularly beguiling and idiosyncratic. Yet, as with other tools and techniques involved in art-making (improvisation, serialism, personal confession, etc.), it is insufficient on its own for generating something as charged and long-lasting as an aesthetic effect. What follows is an attempt to give the tool its due while framing it in a broader conceptual field, acknowledging that the ideas and actions described were at the service of an aesthetic outcome that is more implicit than explicit in the text.
Undoing
Canopy of Catastrophes is part of a series of pieces exploring a thought space that you might call unhuman (as Dylan Trigg does when he attempts to rescue phenomenology from speculative realism). Not to be confused with “inhuman”, the unhuman is neither (intentionally) cruel nor completely distinct from the human. A thorough definition of the concept is better sought in the work of Trigg and Eugene Thacker, but it could be summed up as something like the repressed knowledge of the non-human history of the matter that both comprises and precedes us — a history that appears more alien and jarringly present the more you explore it. A human, the thinking goes, is a subset of the unhuman — the part that can be structured and absorbed by consciousness. Numerous effects emerge when we try to think into or through the part that hasn’t been absorbed, among them a suspicion that perception, experience, and even thought itself are neither possessed by nor intended for us. Give that suspicion some leeway, and we might undo some semi-functional fictions that currently present as fact. Maybe that undoing is revelation, maybe it is horror.
With these maybes in mind, you could say that Canopy of Catastrophes is in the genre of “How I stopped worrying and learned to love the unfathomable terror at the heart of things”. It is built around cosmic precarity and celestial turmoil, countered by an unreasoned optimism that is rooted in randomness. Topics like these, even as they tend towards extravagance, have an uncanny way of appearing in numerical data and mathematical models, and the bulk of this writing will describe the use of certain data sets and models in constructing this piece. I would like to caution, however, that one probably doesn’t “hear” the data underlying the music, nor is the music entirely reliant on that data. Sonification has many possible applications, and here it was a way for me to explore the terrain of certain events that happen in spite of our existence, yet still potentially affect that existence in significant ways. The attitude of that exploration (modulated by the process of exploring) is probably more audible than the underlying data.
Close Approaches
NASA maintains a website that allows us to view data from its Near Earth Object Program. Near-earth objects are “comets and asteroids that have been nudged by the gravitational attraction of nearby planets into orbits that allow them to enter the Earth’s neighborhood.” Having some asteroids nearby is great for studying the formation of the solar system, but, recalling the fate of the dinosaurs, some people might not be too comfortable having them in our “neighborhood”. To amplify that discomfort, NASA provides a subset of data under the heading “Close Approaches”. And if, under this heading, one expects to find solace in a short list of remotely historical near misses, one will be disappointed to see tens of thousands of entries from only the past hundred years.
Personally, I can’t be reminded enough that we are living on a cosmic object among infinite others that happens to have miraculously had the ideal conditions to generate and house our bizarre species. And I take comfort in the idea that the terrestrial and cosmological environments that nurture us do so without care for our existence; that they will erase us with the same dispassion when things really go to shit. So, the idea that epoch-ending rocks and ice balls are hurtling around the neighborhood with a startling regularity is appealing to me — they continually fail to hit us (miraculous conditions), but if one happens to succeed splendidly, nobody will be around to care (cosmic dispassion).
And so there exists a data set, largely numerical, that speaks to one of my metaphysical fetishes. Numerical data is easily cast to sonic parameters — a velocity becomes volume, a distance becomes pitch, a size becomes modulation rate, etc. Computation makes it relatively simple to experiment with these relationships on large data sets (the primary language I use for this is SuperCollider, though there are cases here where Javascript was useful for converting web-based data into a usable format). So, given that the Close Approaches data is conceptually appealing and sufficiently rich, it made sense for me to test its musical potential with a few algorithmic arrangements.
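This kind of casting is easy to sketch in JavaScript (the piece itself used SuperCollider; the field names and ranges below are illustrative, not NASA’s actual schema):

```javascript
// Linear mapping from a data range to a sonic parameter range.
function linmap(value, inLo, inHi, outLo, outHi) {
  const t = (value - inLo) / (inHi - inLo);
  return outLo + t * (outHi - outLo);
}

// A hypothetical close-approach entry: velocity in km/s, distance in
// lunar distances, absolute magnitude.
const approach = { velocity: 15.2, distance: 4.7, magnitude: 22.1 };

// Cast each numeric field to a sonic parameter.
const params = {
  volume: linmap(approach.velocity, 0, 40, 0, 1),       // faster => louder
  pitch: linmap(approach.distance, 0, 20, 2000, 100),   // closer => higher
  modRate: linmap(approach.magnitude, 10, 30, 0.1, 20), // dimmer => faster
};
```

The mapping directions (faster means louder, and so on) are arbitrary choices; inverting or warping them is exactly the sort of experimentation that computation makes cheap.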
Here is a sample of Close Approaches data:
Every row is a close approach. The time of the approach is logged, as well as various qualities of the object: distance, velocity, magnitude, and N-sigma (“The factor by which the target-plane error ellipse must be scaled to allow for an Earth grazing impact”) [Note: NASA no longer provides N-sigma data in their web interface, hence its absence from the example data]. Time, being generally applicable to rhythm and form, can be a useful bit of data in a musical context, especially in this case, where the frequency of close approaches and their distribution over time is of primary interest. A reasonable first step emerges: converting the time between approaches into periods short enough to explore a large number of objects in a relatively brief span of time.
A date, in the world of computers, is often expressed as a timestamp, which is an integer representation of that date, including hours, minutes, and seconds, relative to the “Unix epoch” (January 1st, 1970). Thus, August 30th, 2016, 6:36pm, becomes 1472582171. Dates before the Unix epoch are represented by negative numbers. Converting dates to timestamps makes it easy to calculate the number of seconds between two events — subtract one timestamp from the other — and therefore to create a rhythmic representation of data. Of course, while the time between close approaches may be alarmingly short from the perspective of an earthling, it is still quite long for a musical piece that attempts to encompass a large number of events. The idea, then, is to preserve the relative gap between events by dividing each gap by a fixed number — say, 10,000 — so that a gap of 1000 seconds is reduced to 0.1 second. Emitting sounds separated by these gaps might generate clusters with varying density and, potentially, an occasional silence.
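The scaling step can be sketched in JavaScript (the example dates are invented; real entries come from NASA’s data):

```javascript
// Reduce the gaps between close-approach dates to musical time.
const DIVISOR = 10000; // 1000 s between events -> 0.1 s between sounds

// Date.parse returns milliseconds since the Unix epoch; divide by
// 1000 to get the familiar integer timestamp in seconds.
function toTimestamp(dateString) {
  return Date.parse(dateString) / 1000;
}

// Convert a list of event dates into a list of waiting times (seconds).
function scaledGaps(dates, divisor = DIVISOR) {
  const stamps = dates.map(toTimestamp);
  const gaps = [];
  for (let i = 1; i < stamps.length; i++) {
    gaps.push((stamps[i] - stamps[i - 1]) / divisor);
  }
  return gaps;
}

// Three events 1000 seconds apart collapse to 0.1 seconds apart.
scaledGaps([
  "2016-08-30T18:00:00Z",
  "2016-08-30T18:16:40Z",
  "2016-08-30T18:33:20Z",
]); // -> [0.1, 0.1]
```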
To test the rhythmic interest of Close Approaches, I wrote a program to go through the data one entry at a time, playing an arbitrary sound after waiting the calculated number of seconds. Sounds piled on top of one another, opened into silence, suddenly collapsed, and exhibited periods of extreme density followed by relative calm. A signature emerged of near-unpredictability tempered by a feeling that there was a deliberate, expressive force at work. The sense that a thing not quite in reach is expressing something basically inscrutable is a sign that the unhuman might be whispering nearby, and I was listening.
Having uncovered a compelling architecture of events, I began looking for sounds that suited and enhanced those architectural qualities. The process of developing those sounds, involving additional layers of sonification, is beyond the scope of this writing. In the end, I proceeded with a family of eighteen complex sounds created on the Serge synthesizer at EMS in Stockholm — sounds which shared certain characteristics with each other while varying in frequency and degree of “aggression”. Using data points in the Close Approaches set, I altered duration, playback speed, and other expressive components. Moving through subsets of the sound collection, I was able to create a sense of passage through smooth and rough timbres, placid and forceful intensities.
All of these actions brought musical properties into the picture, but weaknesses emerged. Despite short-term unpredictability, the rhythm began to feel too regular over time — occasional tempo changes seemed necessary. Also, there wasn’t a clearly defined reason to change from one subset of sounds to another. I was uncomfortable with this level of arbitrariness. I found solutions by introducing another data set.
Disasters
Although, at the time of this writing, none of us have experienced a globally catastrophic asteroid collision, we have witnessed, first- or second-hand, numerous environmental events that can be categorized as “natural disasters”. Some organizations keep databases of these events, logging time and type and other statistics, and one of those organizations is the U.S. agency FEMA. While it suffers from being limited to one national territory, FEMA’s database has the advantages of being easy to access and to parse into usable data for sonification. So, although it is a small representation of natural disasters world-wide, it is a useful abstraction of contemporary natural disasters in general.
With FEMA’s database, in this case most useful for its timestamps, I wanted to see what might happen, musically, when two close approaches cross an actual natural disaster in time. Being relatively infrequent, these crossing points would constitute some kind of special event that would not only issue a new sound but also change parameters for the piece as a whole.
The sounds I ultimately associated with natural disasters were formed, like Close Approaches, through a convoluted process that is too complex to discuss here. The salient features of these sounds are that they are longer and more eventful than Close Approaches (after all, there are lingering and unforeseen repercussions to natural disasters), and they often have a distant vocal quality not unlike a cry or a howl. These sounds do not develop on a larger scale (unlike Close Approaches that pass through timbral and intensity “movements”); they only twist and heave until they stop, usually after another one has already begun.
Because crossing events are deemed “special”, they warrant more than unique sounds. The whole system should be affected. This is where I confronted the weaknesses in the Close Approaches algorithm. I set up an indeterminate pattern (here, a Markov chain) of sound subsets and tempo changes that would be triggered by crossing events. I’ll give an example to illustrate: Say that there are seven subsets of the eighteen sounds used in Close Approaches. Each of those subsets has a rhythmic divisor (the number that reduces the gap between close approaches) associated with it — subset 1’s might be 10,000, subset 2’s 12,000, subset 3’s 20,000, etc. Now, say that subset 1 is currently active, and a crossing event has occurred. There will be a 40% chance that we will switch to subset 2, a 15% chance for subsets 3 and 4, and a 30% chance that no change will occur. If the subset changes, the tempo changes (larger divisors lead to faster tempos), and we have a different set of probabilities for moving to other subsets. In this manner, the piece crawls through different timbres, tempos, and intensities, using forking paths that have deliberate structures, while leaving the actual journey undetermined.
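The forking-path mechanism can be sketched as follows (a hypothetical four-subset table stands in for the actual seven; only subset 1’s probabilities come from the example above):

```javascript
// A Markov chain over sound subsets, advanced only when a crossing
// event occurs. Each subset carries the divisor that sets its tempo.
const subsets = {
  1: { divisor: 10000, transitions: [[2, 0.40], [3, 0.15], [4, 0.15], [1, 0.30]] },
  2: { divisor: 12000, transitions: [[3, 0.50], [1, 0.20], [2, 0.30]] },
  3: { divisor: 20000, transitions: [[4, 0.60], [2, 0.40]] },
  4: { divisor: 15000, transitions: [[1, 0.70], [4, 0.30]] },
};

// Pick the next subset by sampling the current subset's transition row.
function nextSubset(current, rand = Math.random) {
  let r = rand();
  for (const [target, p] of subsets[current].transitions) {
    if (r < p) return target;
    r -= p;
  }
  return current; // numerical safety: stay put
}
```

Injecting the random source as a parameter keeps the walk reproducible for testing; in performance, `Math.random` does the walking.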
With these additions and modifications, more breath and counterpoint emerged, but I soon felt that there was a need for another voice — one that added some “swing”. And I wanted some conceptual counterpoint, as well — human consciousness, armed with everything it knows about how bad things can get, tends to maintain a persistent (if not entirely justified) optimism. This optimism is expressed by countless endeavors, perhaps by endeavors in general, but it gets a wild numerical expression in The Market.
Stochastic Volatility Jump-Diffusion
It is generally accepted that a stock price exhibits some form of Brownian motion (it’s more complicated than that, but let’s start there). Brownian motion is often illustrated by a “random walk”: picture yourself standing in one spot and taking a step in a random direction. Now you’re in a new spot. Take another step in a random direction. Another new spot. Keep doing this, each random step having no relation to previous steps. You’ll wind up wiggling around a space, perhaps ending somewhere far from where you started, perhaps not.
Now think about the length of your stride with each step. You could take short steps or long steps. If you take all short steps, it will probably take a long time for you to move very far; vice versa for long steps. We could call the range of possible step lengths volatility. Now we’re starting to sound like the market — prices move randomly up and down, and the range of expected up and down movement is the volatility. But what if, every now and then, you didn’t just step but jumped to a new position, further than a step could take you? Well, it seems that this more thoroughly expresses the behavior of stock prices over time: randomly moving up and down within a certain volatility, jumping beyond that volatility now and then, and, after each jump, getting a new, randomly determined volatility.
The mathematical model for this behavior is called Stochastic Volatility Jump-Diffusion, and it is used for things like setting the price of options. It’s the kind of neck-crimping math that induces instant eye-glazing in a large portion of the population, so I’ll spare you the details. Luckily, it has some significant similarities with Iannis Xenakis’s Gendyn waveform generator, a form of digital synthesis that relies on Brownian motion and has, among other things, variable step lengths. Since I already knew the algorithm and the sound world of Gendyn, it was reasonably simple to modify it to model Stochastic Volatility Jump-Diffusion.
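The behavior can be sketched as a toy random walk (an illustration of jump-diffusion with stochastic volatility, not Xenakis’s Gendyn algorithm or the synthesis code used in the piece; all constants are illustrative):

```javascript
// A random walk with stochastic volatility and occasional jumps.
function jumpDiffusionWalk(steps, rand = Math.random) {
  let position = 0;
  let volatility = 0.1;         // current maximum step length
  const jumpProbability = 0.02; // chance per step of a jump
  const path = [];
  for (let i = 0; i < steps; i++) {
    if (rand() < jumpProbability) {
      // Jump beyond the current volatility, then draw a new volatility.
      position += (rand() * 2 - 1) * volatility * 10;
      volatility = 0.01 + rand() * 0.5;
    } else {
      // Ordinary Brownian step within the current volatility.
      position += (rand() * 2 - 1) * volatility;
    }
    path.push(position);
  }
  return path;
}
```

Run at audio rate, with `position` read as a sample value, a process like this starts to resemble Gendyn’s sliding, noisy waveforms.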
Unlike the previous two sonification strategies, where a data set is used to launch and modify sounds that were created by other means, this modeling strategy produces a waveform that will constitute the sound itself. Because this waveform is dynamically generated by random means, it is possible, given the right parameters, that it will output long phrases with bursts of colored noise, sliding pitches, occasional silences, etc. Although the configuration used in Canopy of Catastrophes has a few more layers of complexity, it is reasonable to say that I supplied the necessary parameters to elicit these long phrases and found them to be highly complementary to the sound world as it stood thus far.
Shadow upon Shadow
At this point, the major elements of the piece were in place. The story of what happened next and why is less precise, largely because my concern shifted more squarely to the experience of listening to the music. Moments were cut, extended, and transformed for the sake of eliciting whatever aspect of the unhuman seemed to be lurking there. A shadow-piece, built from interludes (waves of noise with vocal interjections), was inserted for reasons that I hope are apparent if not easily defined. An entire coda, with its own strange story, was brought in to function as an ending and a slight recasting of the preceding material. Basically, I did what people who make these sorts of things do: I lived with the piece and followed intuitions in the hope of bringing everything into focus.
Of course, that’s a rather understated way of saying that I stumbled in the dark until the darkness was complete. In an account written after the fact, it is easy to be fooled into thinking that the process described was well-defined and deliberate, when, in fact, it was often a persistent and humbling erasure of plans, thoughts, control. As the mystic says, “It is my wish to leave everything that I can think of and choose for my love the thing that I cannot think.”