I'm writing all this without a clear idea why. Some days my music-making is minimalist and entirely therapeutic (if it happens at all); today was one such day, and this post is part of the process. Amid the useless text, maybe the tools I used could prove a useful reference to someone else. I realise this is more suitable for a video but, in the spirit of "not in the library", text it shall remain.
First things first: this is what it sounds like, in this Instagram post
And the setup looks like this:
In the past few years, when working on complex or otherwise challenging problems, I've reached many times for this mind-hack: an endless, unchanging chord with a pleasing texture, which I leave playing at low volume, literally for hours.
And even though today there wasn't anything specific, my brain seemed to demand to be left alone (if that makes any sense). I stared at the screen for quite some time in the morning, then pulled up a WebSDR tab and clicked around for some interesting sounds.
One thing I find meditative lately is weather forecasts. I've developed a habit of listening to the Shipping Forecast on BBC Radio 4. The other day, to my mild shock, I discovered that the first time I ever heard it must have been the year The Prodigy's first album — Experience — came out, as a sample of it is buried in one of their tracks. But my relationship with meteorological data goes back to my childhood: listening to river-level reports on Bulgarian National Radio every afternoon, till the day we got a home stereo.
These disengaged voices, monotonically reading through heaps of data at what today seems a vastly suboptimal speed, in a rigid format that I guess makes sense only to very few people (most of whom are probably equipped with devices that consume the data in machine-readable form anyway), have a quality that is hard to describe.
But, like an average nerd, I also like number stations, which also amplifies my interest in shortwave radio. Incidentally, the sound of shortwave radio noise, even when transmitted over WebSDR (wrecked by lossy compression), is like catnip to me. My ears just seem to love it, especially the way voice is saturated over AM broadcasts. Not bad for someone who grew up learning never to overload analog systems.
So naturally this all brings me to a broadcast like Shannon VOLMET. It is part of "a worldwide network of radio stations that broadcast TAF, SIGMET and METAR reports on shortwave frequencies". It is basically an automatic voice that reads, over and over, short weather reports for various locations, across Europe in Shannon's case. It sounds hypnotic.
So this morning I put that on.
As it's on shortwave, I worried that Twente, in the Netherlands, would be too far from Shannon to pick it up loud and clear during the day. Turns out the signal was actually stronger before it got dark (learn something every day, I guess), but I wouldn't find that out — the hard way — until later.
I like to "worldise" voices like this, and commonly run speech sources through effect chains of delay, reverb, sometimes distortion, certainly EQ and compression, to change their character and make them generally sound different.
My RME soundcard comes with a powerful mixer/DSP suite that has built-in EQ, compression, reverb, and delay. So after setting up the upper-sideband SDR tuning, I added some EQ to notch out the resonance peak of the stream's filter, and ran the voice through a reverb with a large predelay and a fairly short decay time. Then a long, but quiet, delay.
I'd have done this processing "right", with proper routing into the DAW and effect plugins, but that felt like work, and I guess I was just playing.
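Out of curiosity, a chain in this spirit could be sketched with FFmpeg too. It has no real reverb, but a notch EQ followed by a long, quiet delay gets part of the way there. Everything below — the test tone, the 1.2 kHz notch, the 900 ms delay — is made up for illustration, not what I actually ran:

```shell
# Illustrative only: synthesize a test tone, notch out a hypothetical
# resonance around 1.2 kHz, then add a single long, quiet echo (900 ms).
ffmpeg -y -f lavfi -i "sine=frequency=440:duration=2" \
  -af "equalizer=f=1200:t=q:w=4:g=-15,aecho=0.8:0.3:900:0.4" \
  worldised.wav
```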
The repetition of certain words like "becoming", "clouds", "wind", and "tempo" made the (automatic) voice click in some familiar and, I guess, slightly nostalgic way. It all sounded like the PA system at some larger-than-life train station or airport of undefined character.
I sat there, listening for a few minutes, then opened Logic.
Normally I'd patch a synth drone like this in modular, but I wanted textures my ear hadn't heard recently, even simple ones. In Alchemy I keep a list of favourite sounds, so I propped up an infinite A major chord, extended until it lost some of its identity and became more ambiguous as it spread over the octaves. Then I left that playing.
For that to stay on for hours, I had to make it move without moving, so I split the voices between three separate tracks, looped the notes at different lengths, and added tremolo modulation to each at a very low rate: 0.04 Hz, 0.05 Hz, and 0.06 Hz. Then I fed all three tracks into a common overdrive on the mix buss, so that as the voices went up and down in amplitude, they'd take turns commanding the "colour" of the drone.
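For the curious, the three-LFO trick can be sketched entirely in FFmpeg, with the `volume` filter standing in for the tremolos and `asoftclip` for the shared overdrive. The chord tones here are my own stand-ins, not the actual Alchemy patch:

```shell
# Three drone voices (stand-in pitches), each amplitude-modulated by a
# very slow LFO (0.04, 0.05, 0.06 Hz), then mixed and soft-clipped
# together so the loudest voice at any moment dominates the "colour".
ffmpeg -y \
  -f lavfi -i "sine=frequency=110:duration=60" \
  -f lavfi -i "sine=frequency=277.18:duration=60" \
  -f lavfi -i "sine=frequency=659.25:duration=60" \
  -filter_complex "[0]volume='0.5*(1+sin(2*PI*0.04*t))':eval=frame[a];[1]volume='0.5*(1+sin(2*PI*0.05*t))':eval=frame[b];[2]volume='0.5*(1+sin(2*PI*0.06*t))':eval=frame[c];[a][b][c]amix=inputs=3,asoftclip=type=tanh" \
  drone.wav
```

At those rates the three tremolos only realign after many minutes, which is what keeps the chord from sounding static.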
As Logic was playing this chord, it would go through the same effect chain of the interface mixer as the web browser, getting the same "space" treatment as the Volmet broadcast.
Things like this I largely see as ephemeral - I construct them, let them last a while, then give them away to the aether.
At first I set out just to leave something playing in my room. But then, as occasionally happens, I wanted to record some of it, and this time "some" meant hours.
Turns out that presented some challenges. My recorder's batteries wouldn't last that long, and hunting down fresh ones looked like a lot of work.
So I looped back the RME's master output. This is one reason I love the Fireface UCX: it lets me record sounds as they happen inside the computer. Frankly, every audio interface should ship with a feature like this.
Except, my master output is channels 11 and 12, because it stays in digital for my spectrum analyser and room EQ. I could loop it back via 1 and 2 but I'd have to duplicate all the routing I had already done, which also looked like work.
Then I had to find a recorder that would record directly to a file of my choice, not some random temp file that might get lost if something crashed (you develop hunches with time). To record on the command line in macOS, Stack Overflow suggested sox. That crashed, and recorded weirdness anyway, and I had no patience to study its parameter switches.
So I reached for a tool I should value more highly but don't: FFmpeg. By default it captured all 18 inputs; getting just two took a little digging. Here's how to record a specific pair of input channels (10 and 11 in my case; it counts from 0):
ffmpeg -f avfoundation -i ":0" -acodec pcm_s24le -ac 2 -ar 48000 -af "pan=2c|c0=c10|c1=c11" rec.wav
...which roughly translates as: capture no video and the first audio device, encode to 24-bit signed little-endian PCM, two channels, at 48 kHz, use the 11th input as the first channel and the 12th as the second, and write to "rec.wav".
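If you want to sanity-check the pan mapping away from the hardware, the same syntax can be tried on a synthetic multichannel file. The four test tones below are, of course, made up for the purpose:

```shell
# Make a 1-second, 4-channel file with a different sine per channel...
ffmpeg -y -f lavfi \
  -i "aevalsrc=sin(2*PI*440*t)|sin(2*PI*550*t)|sin(2*PI*660*t)|sin(2*PI*770*t):c=4c:d=1" \
  four.wav

# ...then keep only channels 2 and 3 (0-indexed), same pan syntax as above.
ffmpeg -y -i four.wav -af "pan=2c|c0=c2|c1=c3" pair.wav
```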
2h 45min later
By the time it was dark, the Shannon signal was drowning in noise, too much for the recording to be entirely pleasant. The voice was still audible, but the SDR's auto-gain was pulling up too much noise along with it. Not that bad, but not great either.
I ended up with a file that is more than two and a half hours of the same stuff as the one-minute Instagram post above.
It made me think about how our "music needs" have changed, and how easy it has become to produce this kind of "automatic music": sound available out there in the aether, streamed to your home, shaped, and blended with home-made elements.
It's not a special technique; it's well within reach of everyday computers and perfectly accessible (as in cost) music toys. And I doubt there's anything new to the genre. You Are Listening To, which mixes scanner radio with synth drones, is something like ten years old by now.
So this pushes no boundaries.
And yet, for all the cliché, the formula brought me comfort, as it has done for me in the past. Today, "Shannon Saturday", was a completely unproductive day. Except for this.