Category: Performance

  • Spring Elegy: RadioSpiral Spring Equinox Performance

    TL;DR: Giving myself a C on setup, an A on visuals, and an A- on the overall performance.

    As usual with a complicated setup, I had a major glitch that cost me about 15 minutes of performance time, even though I had worked hard to make this setup intentionally less complex. On the upside, I have figured out some things that will keep me from losing the tools I’d need to repeat this performance, so that’s something. Anyway. Onward to the rest of this post.

    The setup

    I decided to minimize the possible sources of problems by doing everything on the computer this time. I had problems last time with the interface (mostly because it is TOO BLOODY COMPLICATED), so I eliminated it: no hardware synths. I also had problems with the iPad staying connected, so I pre-recorded one part of the set (with an error, it turned out; more on that later) so that I could trigger playback exactly when I wanted it and time the set to leave five minutes to hand off to the next artist.

    So first, I recorded the audio from the iPad portion of the performance. I rushed a bit on this and didn’t realize that I’d set up GarageBand to record it in mono. It’s not terrible, just not as good as the stereo original. I’ve made a note to re-record it later in stereo, as a separate track in GarageBand so I don’t lose the original.

    I then moved it forward in time so that it would end at 55:00; this lets me simply hit play in GB when I start and have the recording start and stop exactly when I want it…if all goes well.
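    The placement is simple arithmetic; here’s a quick sketch (the 12-minute track length is a made-up example, not the actual recording):

```python
# Where to place a pre-recorded track on the timeline so it ends exactly
# at a target time. The 12-minute length below is a hypothetical example.

def region_start(target_end_s: float, track_len_s: float) -> float:
    """Offset in seconds at which the track must begin."""
    start = target_end_s - track_len_s
    if start < 0:
        raise ValueError("track is longer than the available window")
    return start

# A 12-minute recording that must end at 55:00 into the set:
print(region_start(55 * 60, 12 * 60))  # → 2580.0, i.e. start at 43:00
```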

    The rest of the performance was in three parts:

    • A Live session with the base thunderstorm I was using as a continuum through the piece, with added birdsong, bells, and gongs played back as clips.
    • A miRack session (more on why that instead of VCVRack in a sec) that let me fade in and out continuously-running harmonizing lines
    • A second Live set continuing the thunderstorm, but using shortwave radio samples, and bringing back one birdsong sample from the other set.

    Everything used the same harmonic basis (accidental, but I’ll take it), which let me establish a mood with the first set, fade in the miRack performance, build it up, and then gradually fade it in and out while I performed the clips in the third set. Partway through the miRack session, the pre-recorded GarageBand track starts, also in the same key, stitching it all together as a coherent whole.

    Visuals

    I decided to use my standard OBS setup for this performance, and it mostly went okay. As a matter of fact, it streamed the audio even though the stream to the station did not work initially (see below). The greenscreen plugin, with black tweaked to transparent, allowed me to overlay the visualizer on the various apps and combinations of them — this worked really well! — and switch things around as I performed.

    I used Ferromagnetic for the visuals during my set. A composite audio device was visible, so I tried it as the input; Ferromagnetic “listened” to the music much better that way. The device appears to be created dynamically by Audio Hijack.

    After my set, I was able to hook up an Audio Hijack setup that just took the streaming audio from the station (via Music.app) and ran it to the standard output, which allowed me to use both the standard Music.app visualizer (the old-school one; it’s much more visually appealing to me) and Ferromagnetic. I set up “studio mode” so I could watch both sources and crossfade when the visuals were particularly striking in one or the other.

    This worked really well, and I will probably do this again (or Rebekkah will) so that we always have Twitch visuals during all of the performances.

    Issues

    First, my mono (rather than stereo) recording of the iPad performance meant that the soundstage was overly dense; it still sounded okay, just not as good as it could have.

    Second, I set up ahead of time, and Audio Hijack, which I was using as my funnel for the sound, stopped passing audio down the path! I tried pulling blocks out of the path to get it working, but in the end I was forced to reboot the machine in the hope that it would start working again. Luckily, it did, but the reboot took OBS offline, took the music offline, and logged me out of Second Life. It took me a significant amount of time to realize, once I had the music and OBS running again, that I hadn’t made it back to the concert venue in Second Life.

    Third, I didn’t watch my levels, and the overall signal was very hot. I think the final recording doesn’t quite clip, but it’s a close thing. Next time I’ll add a limiter in Live and check the levels more closely beforehand, with everything running in a test Audio Hijack session, so I can crank the sliders while playing without needing to monitor the overall levels.

    For next time

    • Set the level limits ahead of time so I don’t go quite so loud.
    • Use at least one more machine to offload some of the work. This does move me back toward a more complex setup, but it removes the single point of failure I had this time. It will need some experimentation, but I think the visualizers, OBS, Audio Hijack, and probably the performance software have to be on one machine, with Discord and Second Life on another.
    • Have a better checklist. The one I had worked to get me through the performance, but it didn’t have a disaster recovery path. That needs to be thought out and ready as well.
    • Have something ready that can take over if the whole shebang is screwed. No ideas on this yet, but I want to have a “panic button” to switch to a dependable stream from somewhere else if my local setup goes south. I think I can set up a “just for this performance” playlist on Azuracast that I can have ready to trigger if the performance setup dies.
    • Set an alarm to reboot and verify the setup half an hour or 45 minutes (do it and time it) prior to showtime, so that I arrive at my slot with everything ready to go and configured to hit “stream” and have it work.
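
    One low-tech way to automate that alarm (a sketch, assuming cron on the streaming Mac; the day, time, and message are placeholders, not my actual slot):

```shell
# Hypothetical crontab entry (edit with `crontab -e`): 45 minutes before
# an 8 PM Saturday slot, pop a macOS notification as the reboot-and-verify
# reminder. Day, time, and message text are placeholders.
15 19 * * 6 /usr/bin/osascript -e 'display notification "Reboot, relaunch the stream setup, and test it" with title "Showtime in 45 minutes"'
```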

    Things I did figure out to fix problems from previous sets

    I’ve saved the Live sets this time with all their clips and setup. I still wish I had the setup for The Tree, 1964, but it was lost completely. This time I’ve definitely got all the samples, all the patches, all the clips, and all the setup, so I won’t misplace any of it and I can re-perform this piece.

    I’ve also saved the GarageBand session and the miRack patch in the same folder, along with my performance notes, so that I can easily re-run everything straight from that folder without a hitch.

    This is all saved on the external disk which is backed up by Backblaze, so it’s as safe as I can make it. I plan to keep doing this for future work so that I am always able to pull up a previous performance and do it again if I want to.

  • Show report: 2020-10-31 “Pharoah Nuff” at radiospiral.net

    My last performance was not as smooth as I hoped, so this time I decided that I would find a way to streamline it even further.

    I decided to go further in the direction I’d taken with the Wizard of Hz show, and strip down even more. I decided to try to perform as much as possible of the set on the iPad, and use the laptop solely for streaming and Second Life. This freed me from hassles in switching setups in VCVRack, Live, and the other software I’d been using, but it also meant that I wouldn’t be using either of my favorite synths for this performance (the Arturia 2600 and Music Easel).

    Having had some time between performances to really experiment with AUM, I felt comfortable using it to lay out my performance. I decided that I wanted to keep Scape as my background/comping program, and that I’d set up a series of light-handed scapes to give me a through-line. I then sat down with miRack and Ripplemaker to create multiple Krell textures that I could bring in and out, and also discovered a couple of lovely lead patches for Ripplemaker that I paired with a Kosmonaut looper. I also brought in a couple of public-domain samples from old sci-fi movies, heavily processed (with Kosmonaut again), and felt like I had enough material to do an hour’s performance.

    I used the iConnect Audio4+, which I now finally have the hang of, and set it up so that I had two stereo channels from the iPad and one mono channel routed to the iPad through Kosmonaut (again!) for some subtle reverb when I was doing my intro and outro. With the setup I used, the iConnect kept the iPad fully charged through the whole set.

    I used Loopback to connect the multiple outs from the iConnect to the stereo ins on my Mac, and monitored on headphones. I pulled up Audio Hijack, entered the stream setup, and was ready to broadcast.

    I got up early on the day, started up AUM, and ran a soundcheck to make sure everything was working. All sounded good, and I was good to go.

    Mostly.

    I didn’t stop AUM after the soundcheck, and as a result it ran for several hours before I tried to start using it. This apparently triggered some kind of memory shortage, and when I started streaming, I was completely mute. Fortunately, I’d cued up a pre-recorded VCVRack texture, and started that while I tried to figure out what was wrong. I gave up and restarted the iPad, and AUM came up like a champ.

    After that it was pretty smooth. I was able to fade the various patches in and out, play the sci-fi samples, and improvise over the Scape-provided background. Once it was off the ground, the performance was very easy to do. I did forget and leave the audio feed from Second Life enabled; as a result this was a very sparse performance, but the sparseness worked out very well.

    Overall this was a great way to do a performance and I plan to refine this further. Of particular note is that AUM saves things so well that it will be trivially easy to do this performance again, should I decide to; this is probably the first time I’ve had a performance setup I felt was robust enough to say that!

  • RadioSpiral Wizard of Hz Performance Notes

    Last time I did a live streaming performance for an audience, it did not go well. I had long pauses, the mic didn’t work, and miscommunication over Slack to the remote venue resulted in my getting cut off before my set was finished. And this was even after a good bit of practice.

    So when I signed up for the Wizard of Hz concert on RadioSpiral, I decided that I needed as much backstop as possible in place, so that no matter how tangled up I got mentally, I’d have a fallback to something that sounded good and a nice navigable arc from point A to point B. Ideally, I’d have something that would sound great even if I got called away for the entire set!

    My go-to for this is Scape. I’ve had it since it first came out, and it meshes very well with what I enjoy hearing and enjoy playing. I started off with the Scape playlist I often use to relax and get to sleep: a seven-scene playlist with the transition time at max and the per-scene time adjusted so the whole thing runs just a bit over an hour. This gives me a fallback for the whole hour; I can pull everything else back and lean on Scape while I decide what the next section should be.
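    The per-scene math is trivial but worth writing down; a sketch with made-up numbers (and ignoring however Scape overlaps its transitions):

```python
# Given a target playlist length and a scene count, the per-scene time to
# dial in. Ignores transition overlap; 63 minutes is a guess at "just a
# bit over an hour", not the actual setting.

def per_scene_minutes(target_minutes: float, n_scenes: int) -> float:
    return target_minutes / n_scenes

# Seven scenes meant to run just over an hour:
print(per_scene_minutes(63, 7))  # → 9.0 minutes per scene
```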

    In addition, Scape provides a very nice backdrop to improvise over, so I can be playing something while Scape gives me a framework.

    I then put together a couple of Ableton Live sets: one built on the Arturia ARP 2600 and Buchla Music Easel emulations, and another built on Live’s really nice grand piano and the open-source OB-Xa emulator, the OB-Xd. I finally figured out how to change patches on the OB-Xd about 20 minutes before showtime.

    I had set up a piano with a nice looping effect from Valhalla Supermassive (Supermassive and Eventide Blackhole figured heavily in the effects), but ended up not using it, and doing a small Launchpad set instead using the Neon Lights soundpack.

    I was also able to open and close with the large singing bowl, played live and processed through the Vortex, which was a nice real analog performance touch.

    Overall, I strove for a set that sounded played-through, but that had enough breathing room that I could fall back on Scape while making changes (switching Live sets, etc.), and I think I achieved that.

    I did have Audio Hijack recording the set, so if it sounds OK, I’ll be releasing it on Bandcamp. (Followup: it came out pretty well! Definitely at least an EP.)

    The only real issue was a partially shorted cable between my iPhone and the mixer, which I didn’t figure out until most of the way through the set.