Category: Music

  • sd1diskutil: Resurrecting a disk format from the dead for a brand-new emulator

    A little history

    The Ensoniq SD-1 is a synthesizer from the 1990s — a ROMpler with a Motorola 68000 at its heart. Like many synthesizers of the era, it uses the cheap, easy, and simple storage medium of the day: 800K floppy disks, which hold everything: factory programs, user programs, presets (small organized sets of programs), full-up MIDI sequences, and its own operating system.

    The format is proprietary and somewhat peculiar: a custom FAT, 10 sectors per track numbered 0 through 9 (not 1 through 10 like a PC), big-endian multi-byte fields throughout, and a handful of file types that the rest of the computing world has never heard of. To retrieve your data on a modern computer, or to get sounds back onto the synth, you need something that can speak this format.

    Few if any USB disk drives can handle this format; the extant programs which can read Ensoniq disks all run under MS-DOS (or Windows DOS emulation) and need a real, wired-in diskette drive to handle reading and writing disks. Forget about doing this on a Mac.

    Fortunately, the SD-1 has a reasonably robust MIDI system-exclusive, or “SysEx”, implementation, capable of dumping and receiving pretty much everything except the actual sequencer OS that can record sequences to the SD-1’s internal memory and play them back. Those of us who saw the handwriting on the wall (and who didn’t want to keep a 486 tower lying around just to write floppy disks that were becoming harder and harder to find anyway) took the earliest possible opportunity to dump everything out over SysEx and save it elsewhere.

    Getting the sequencer OS back into the thing still needs a diskette, which is an issue (solved by third-party add-ons that could store hundreds of floppy images on a USB stick).

    The renaissance

    But there was some big news in March 2026 that made the question of accessing the SD-1’s disks and data an interesting topic again.

    The folks at Sojus Records announced a wrapper around the previously-created SD-1 MAME emulator that allowed the SD-1 to be loaded as a VST3 plugin.

    For all of us who had SD-1s (or who still have them, but have shifted to much-more-convenient computer-based sequencing), this was a sit-up-and-take-notice moment. Our baby was now a plugin! And all that work we’d done previously was now usable again.

    However! The first release of the plugin was only able to read .IMG files — a file format created by Gary Giebler to store floppy images on disks other than floppies. This meant that there needed to be a way to get .syx SysEx files back onto .IMG images so they could be used once more.

    Sure, the Giebler and Rubber Chicken utilities were still out there, but I’m a Mac guy, and attempts to get those running properly on emulated MS-DOS were pretty much a failure. What I needed was a utility that could read and write disk images on my Mac.

    A year ago I would have looked at that and said, “man, I do not have the time or the patience to read all those Transoniq Hacker articles and try to piece this together.” This year, I didn’t have to have that patience: I had Claude, and $20 worth of tokens a month to spend, so I thought, why not? This is actually a fairly well-defined problem:

    • Documentation for the disk organization and file formats exists in this PDF archive of the Transoniq Hacker
    • We have some disk images that we know work with the emulator, including a sequencer OS disk
    • The emulator seems to be able to read .IMG files fine, so if I can figure out how to write disks, I should be able to read them on the emulator.

    This is a pretty solidly mapped-out basis to start from, and I figured that with good documentation, sample data, and a working system to test against, I stood a pretty good chance of being able to carefully steer Claude to a solution.

    Getting started

    I decided that I wasn’t going to be fancy here. This is going to be called sd1diskutil because it’s just going to be a wrapper around a library that knows how to do the job.

    So on March 26th, I sat down with Claude in the terminal, loaded obra/superpowers, and started brainstorming. I decided that, contrary to my more recent utility projects, I’d try to produce something that could be embedded into a prettier interface than just the command line.

    That meant the first decision was what language to implement this in, and after some discussion with Claude, we settled on Rust, which would let me take a functional programming approach, using very tight types and operations on them. This was, in hindsight, probably colored by my experiences with Scala and really tight types, and how that made it so much easier to write correct code.

    To go with that and make it usable, we came up with a very thin CLI binary (sd1cli) that could convert MIDI SysEx dumps to and from the SD-1’s on-disk binary format and provide full disk management — list, inspect, write, extract, delete, create.

    Because I knew that eventually I wanted to wrap this up in a pretty UI, I asked Claude how to build Swift bridging in, and it included a clean UniFFI surface for SwiftUI, allowing the same library to eventually power a macOS application.

    I knew that I was going to have to be very careful to implement this correctly. File systems and custom file formats are not forgiving, and the SD-1’s OS, though quite capable, is not what you would call robust.

    The architecture therefore mirrored the data as closely as possible and tried to ensure that everything was as safe and stable as possible:
    • DiskImage owns the raw 819,200-byte image.
    • FileAllocationTable is a stateless handle that operates on a &mut DiskImage to avoid Rust borrow conflicts.
    • SubDirectory follows the same pattern.
    • SysExPacket is the only place nybble encoding and decoding happens — every layer above it works in plain bytes.
    • Atomic writes everywhere: save to a temp file, then rename.
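    Since SysEx data bytes must stay below 0x80, the nybble layer splits each 8-bit byte into two 7-bit-safe halves. A minimal Python sketch of the idea (the high-nybble-first ordering is my assumption, not confirmed against the SD-1’s actual SysEx spec):

```python
def nybble_encode(data: bytes) -> bytes:
    """Split each 8-bit byte into two nybbles so every output byte is < 0x80.
    High nybble first (an assumption, not the confirmed SD-1 ordering)."""
    out = bytearray()
    for b in data:
        out.append(b >> 4)
        out.append(b & 0x0F)
    return bytes(out)

def nybble_decode(data: bytes) -> bytes:
    """Reassemble nybble pairs back into plain bytes."""
    out = bytearray()
    for hi, lo in zip(data[0::2], data[1::2]):
        out.append((hi << 4) | (lo & 0x0F))
    return bytes(out)
```

    Keeping this in exactly one module is what let every layer above work in plain bytes.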

    I started out with a blank disk template created by the SD-1 emulator. What better source for a good disk than the emulator itself? (Oh, you sweet summer child. We’ll spend about three days beating our heads against this disk image.)

    The first working implementation was achieved in a single burst: we planned out the types and operations, reviewed the design, and then built an implementation plan: workspace scaffold, error types, disk image, FAT, directory, SysEx parser, domain types, the full CLI. Integration tests for every command. The code compiled. The tests passed.

    Then came the reality checks.

    Block 4? Or block 5?

    The Giebler articles said that the FAT should be at block 5. But our empty disk image from the emulator said it was at block 4. Everything else matched up: ten blocks long, 170 three-byte entries each. What was the problem? We could write files with our code to the blank image, and the SD-1 emulator could read them.
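    Those FAT numbers are easy to model: 170 three-byte entries fill 510 of a 512-byte block, and the multi-byte fields are big-endian like everything else on the disk. A Python sketch (treating each entry as a plain 24-bit value; the meaning of particular values, such as next-block pointers or free/end markers, is my assumption):

```python
def parse_fat_block(block: bytes) -> list:
    """Split one 512-byte FAT block into 170 big-endian 24-bit entries.
    The final two bytes of the block are left over (assumed padding)."""
    assert len(block) == 512
    return [int.from_bytes(block[i * 3 : i * 3 + 3], "big") for i in range(170)]
```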

    Figuring this out took way longer than it should have, for a reason that only became clear in retrospect.

    The blank disk template used during early development, as mentioned above, had been written by the Sojus VST3 plugin, which it turned out was concealing a sector-shifting bug that was actually happening at the underlying MAME emulator level!

    See, normal DOS disks have ten sectors per track, numbered 1 through 10. The SD-1 disks have ten, numbered 0 through 9! MAME does handle the ten-sectors-per-track part fine…but it keeps only SD-1 sectors 1 through 9, dropping sector 0, shifting the remaining sectors down by one, and putting an empty all-zeroes sector in the last slot. So a fresh emulator-written disk has its FAT at block 4 instead of block 5. And a freshly-written emulator disk also immediately produces a DISK ERROR – BAD FORMAT if you try to read it…but we were only trying to write it, thereby breaking it, and then read it with our Rust code!
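    A tiny Python model of the damage (my reconstruction of the behavior described, not MAME’s actual code) makes the block-4/block-5 confusion obvious:

```python
SECTOR_SIZE = 512  # bytes per sector on these disks

def mame_broken_rewrite(track: list) -> list:
    """Model the bug: a ten-sector Ensoniq track (sectors 0-9) loses
    sector 0, the rest shift down one slot, and the last slot is zeroed."""
    assert len(track) == 10
    return track[1:] + [bytes(SECTOR_SIZE)]
```

    On track 0, side 0, whatever was in sector 5 (the FAT) lands in sector 4, which is exactly the shift we kept seeing.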

    The Giebler article in Transoniq Hacker said block 5, and our write code was written to put it at block 5, which was correct. The disk from the emulator said block 4, because the emulator dropped block 0. The emulator could read the test disks written by our code fine — because sd1diskutil’s own writes were correct.

    But whenever the emulator saved a file back, it would quietly apply the shift again, moving the FAT back to block 4, exactly matching the initial (broken, but we didn’t know it) blank disk. So we went round and round trying to resolve this: the article says 5, the disk says 4, the emulator reads 5 and writes 4. Maybe both were valid and the article just didn’t mention it, so we should support either, or…?

    I repeatedly tried to add files to the disks written by the emulator, and they always got DISK ERROR – INVALID FORMAT. How was I screwing this up?

    I finally figured it out when I wrote a file to a good (block-5) disk and immediately tried to read it back (from the now block-4 disk). The emulator immediately threw a BAD DISK error…on its own output! So the emulator was wrong (though at the time we didn’t know why — see below!), and the article was right.

    We created an empty disk by taking a copy of the known-good SEQUENCER-OS disk and deleting all the files from it using our code. We then wrote a single file to it and tried it on the emulator…and the disk was readable and the file was there.

    Other early problems and fixes followed the same discipline of checking what we wrote against the emulator. Local filenames needed to be forced to uppercase because the SD-1’s LCD doesn’t render lowercase. We had to analyze the AllPrograms and AllPresets files on the SD-1 SEQUENCER-OS disk to figure out how to write them, and work out how these were encoded into SysEx types (the SD-1 MIDI implementation helped some, but a lot had to be figured out by experiment). The free block count management logic needed a rewrite. Each fix came from trying it and checking it against what the emulator would accept, not just blindly accepting the documentation, useful though it was.

    The Program Interleave Bug

    Of all the bugs, the program interleaving bug was the most insidious and hardest to fix, because it was wrong in a way that “worked”. Nothing broke, the OS didn’t complain, and at first glance everything seemed completely reasonable and perfectly correct…but sequences played back with all the wrong programs if they were being loaded from program memory. (ROM patches were fine.)

    Programs stored in a SixtyPrograms file are byte-interleaved on disk: there are two independent 15,900-byte streams packed together, with the even byte positions carrying programs 0 through 29, and the odd byte positions carrying programs 30 through 59.
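    In Python terms, that layout de-interleaves with two slices (sizes taken from the description above):

```python
STREAM_LEN = 15_900  # each of the two packed program streams

def deinterleave_sixty(data: bytes):
    """Split a SixtyPrograms payload into its two byte-interleaved streams:
    even offsets hold programs 0-29, odd offsets hold programs 30-59."""
    assert len(data) == 2 * STREAM_LEN
    return data[0::2], data[1::2]
```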

    The original SysEx-to-program-bank implementation had picked them out as alternating pairs — taking a program from the first half for bank 1, program 1, then a program from the second half for bank 1, program 2, and so on. The result was that programs were extracted perfectly, but landed at the wrong bank and patch positions.

    I was able to figure this out by loading a custom 60-sequence file with 60 embedded programs, playing it back, and seeing that the expected patches were there but in the wrong places. Unfortunately, the last time I’d actually looked at these sequences was 2010 or so, so I didn’t remember where the right positions were! I knew they were wrong because they sounded wrong, but not where they were supposed to be.

    It wasn’t until I found a SysEx dump of one of the factory sample sequences that we could pin this down. We wrote that dump to a .IMG and loaded it, then compared where the programs went when our file was loaded against where they ended up when loaded from the SEQUENCER-OS disk. I wrote the programs down by bank and slot both ways and let Claude figure out the mapping, which it did quite nicely.

    Extracting the even and odd byte streams in both the file we wrote and the “good” on-disk one, and then searching for known program names within each stream, allowed Claude to find where the one patch I definitely knew the sequence used was in both sets of interleaved data and then derive the correct mapping: a first-half/second-half split rather than alternating pairs.

    In the process of figuring all this out, we created a Python analysis tool, dump_programs.py, which could extract and list individual programs from multi-program SysEx dumps and disk files. Once we verified the extraction algorithm in the Python code, we were easily able to replicate it in Rust, and test it with two other sequence-and-program dumps by verifying they played back correctly on the emulator after being written to disk from a SysEx file.

    Sequences and a Deeper Problem

    Extracting sequences revealed a gap in the implementation: we could write SysEx files containing sequences and patches, but we couldn’t read them. There was a function that converted SysEx to the on-disk representation, but nothing that went the other direction.

    We discovered this when we realized that extraction was wrapping raw disk bytes in a SingleSequence SysEx header and producing output twice the expected size. A proper test-first setup made the fix straightforward: write a 60-sequence-and-program file to disk, verify it sounded correct on the emulator, then read it back out to .syx and verify that the new file was a byte-for-byte match against the original.

    At this point, I wrote up the block-4/block-5 bug on GitHub for the Sojus folks to take a look at. (I knew that there WAS a bug, but not WHY there was a bug.) They got back to me very quickly, and confirmed that yep, the emulator was screwing up the disk, and why.

    The plugin routes all floppy writes through MAME’s get_track_data_mfm_pc, a function that expects PC-standard sector numbering (1–10). As mentioned, the Ensoniq format uses 0–9. MAME silently discards sector 0 of every Ensoniq track, shifts the remaining sectors down by one, and zeros the last slot! Once the emulator rewrites the track, every block on the track contains wrong data, and every tenth block is zeroed. This was the same bug that had broken the early blank disk template and sent us chasing the wobbly FAT location for days — now fully understood, and confirmed with the Sojus developers, who identified the affected code as esq16_dsk.cpp in MAME — the DOS-to-Ensoniq-and-back block mapping.

    They’re busy working on that as of 3/28, but in the meantime, they found a workaround: the emulation code can also read HFE format files. HFE stores the raw MFM flux data and bypasses MAME’s sector enumeration and extraction entirely. Which is totally awesome…but the sd1diskutil code did not speak HFE, and I’d never even heard of HFE. Watching the wonderful Usagi Electric suss out data encoding has educated me a little bit on data transitions and the like, but it wasn’t something I was ready to work on myself at all!

    Archaeology Before Engineering

    Fortunately, the Sojus folks had an HFE image with some data on it to test with: two single patches (OMNIVERSE, SOPRANO-SAX) and a 60-program file. Claude and I embarked on trying to make sense of the data in this file, and this is where Claude seriously impressed me.

    We started off with this HFE file. We knew basically that it should have MFM data in it, and nothing else. Claude bootstrapped up from knowing what MFM data should look like to actually finding it in the file and making sense of this otherwise opaque stream of bits!

    Claude’s first attempt at locating sector headers found nothing. The standard MFM A1* sync marker should have been there ([0x44, 0x89]) but did not appear anywhere in the file. Claude figured out that this was because HFE stores bits LSB-first per byte, in the order the read head encounters them. The standard representation is MSB-first, so at first glance the data made no sense. Claude tried a bit-reversed version of the data, then a bit-reversed-per-byte version, and found the sync marker! [0x22, 0x91], with each byte bit-reversed.
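    The per-byte bit reversal is tiny but essential; a Python sketch of the check Claude effectively performed:

```python
def reverse_bits(b: int) -> int:
    """Reverse the bit order within a single byte (MSB-first <-> LSB-first)."""
    return int(f"{b:08b}"[::-1], 2)

# The MSB-first MFM sync bytes 0x44, 0x89 appear as 0x22, 0x91
# in HFE's LSB-first bitstream, which is what Claude found.
```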

    Once that hurdle was crossed, it was simple for Claude to find the markers and decode all 1600 sectors. The FAT free count in the decoded image matched the hardware OS block count: 1510. The block-to-sector mapping was confirmed against blank_image.img:

    block = track × 20 + side × 10 + sector

    Track geometry was pinned down: each side of a track is exactly 12,522 encoded bytes. The fixed preamble (Gap4a, sync, Gap1) consumes 284 bytes. Each of the 10 sectors is 1,148 bytes with a fixed structure. The remaining 758 bytes are inter-sector gaps — 75 bytes for sectors 0 through 8, and the rest absorbed by sector 9.
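    Those numbers all cross-check, which is worth doing explicitly. A Python sketch (the 80-track count is my inference from the 819,200-byte image and 1,600-sector totals elsewhere in this post):

```python
TRACKS, SIDES, SECTORS, SECTOR_BYTES = 80, 2, 10, 512

def block_number(track: int, side: int, sector: int) -> int:
    """block = track * 20 + side * 10 + sector"""
    return track * (SIDES * SECTORS) + side * SECTORS + sector

# Per-side encoded track budget: fixed preamble + 10 sectors + gaps.
PREAMBLE, SECTOR_ENC, GAPS = 284, 1_148, 758
assert PREAMBLE + SECTORS * SECTOR_ENC + GAPS == 12_522
assert TRACKS * SIDES * SECTORS * SECTOR_BYTES == 819_200  # the .img size
assert TRACKS * SIDES * SECTORS == 1_600                   # sectors decoded
```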

    It would have taken me quite a long time, a lot of poring over the MFM spec, and a lot of trial and error to figure this out, if I ever did, and Claude had it all doped out in half an hour or so.

    At this point Claude knew how to read an HFE file but not what we should do with it.

    We invoked the brainstorm/design spec/implementation plan path again. I proposed that what we needed was a translation layer: just get the HFE image to an IMG image, and all of our tools could easily handle it. To use it on the emulator, we’d just convert the IMG image back to HFE, which the emulator should safely be able to read and write.

    Superpowers wrote a complete design spec before any Rust code was touched, pulling in all the information Claude already had at hand about how the HFE files work, writing Python code to cross-check assumptions made in the spec: exact constants, the complete MFM encoding rules, CRC16-CCITT coverage, the interleaved side storage layout, error variants, test cases, the works.

    Then superpowers wrote the implementation plan, with concrete function signatures, and specific expected outputs, all properly built as functions on types and easily testable.

    HFE Implementation, going full vibe

    At this point it was Claude’s party. I had read the spec and the plan, and everything looked reasonable, but I didn’t really know the HFE spec solidly enough to critique the code.

    Claude created three new error variants: InvalidHfe, HfeCrcMismatch, HfeMissingSector, each carrying track, side, and sector context so errors are never ambiguous. Then hfe.rs itself: 771 lines creating the full encode/decode pipeline, and new CLI subcommands hfe-to-img and img-to-hfe to encode and decode HFE images.

    The superpowers code review caught one bug: header offset 17 — the “do not use” field in the HFE v1 spec — was being written as 0x00. The spec requires 0xFF. A strict HFE reader would reject the file, and we knew the right answer, so…easy fix.

    After implementation, the code passed all the low-level tests, and read_hfe on the sample HFE file properly decoded all 1600 sectors, returning a DiskImage whose directory listed OMNIVERSE, SOPRANO-SAX, and 60-PRG-FILE with the correct free block count. A complete round-trip from .img to HFE and back produced a byte-for-byte identical result.

    The Acid Test

    The final test of the HFE pipeline started with the Sequencer OS disk: an 800K image containing every file type the SD-1 supports. Thirteen OneProgram files, eleven SixPrograms banks, a ThirtyPrograms bank, eight SixtyPrograms banks, four TwentyPresets banks, eight sequence files of various sizes, and the sequencer OS binary itself (656,384 bytes!), totaling forty-nine files and leaving just five free blocks.

    The disk was encoded to HFE and loaded into the emulator. Success! The emulator accepted the disk, and everything was present. I selected the sequencer OS, hit load, and it loaded successfully. A previously-loaded sound bank in emulator memory contained a program named GREASE-PLUS, definitely not one already on the disk. I saved it to the HFE disk, and it wrote successfully.

    We decoded the modified HFE file to an IMG and listed the contents: fifty files. Three free blocks. GREASE-PLUS in disk slot 13, a OneProgram file, two blocks. Complete success!

    Future Plans

    Now that this is done, I plan to release it on GitHub as a library. If I get around to writing the pretty GUI, I will probably see if I can sell that, because why not? The Giebler disk utilities still sell for $60!

    At any rate, the CLI will be out there and should work, for anyone who wants to build it themselves.

  • Spring Elegy: RadioSpiral Spring Equinox Performance

    TL;DR: Giving myself a C on setup, an A on visuals, and an A- on the overall performance.

    As usual with a complicated setup, I had a major glitch that cost me about 15 minutes of performance time, even though this setup was intentionally less complex than last time. I have figured out some things that will keep me from losing the tools I’d need to repeat this performance, so that’s something. Anyway. Onward to the rest of this post.

    The setup

    I decided to minimize the possible sources of problems by doing everything on the computer this time. I had problems last time with the interface (mostly because it is TOO BLOODY COMPLICATED) and I decided to eliminate it, so no hardware synths. I also had problems with the iPad staying connected, so I pre-performed one part of the set (erroneously, it turned out; more on that later), letting me trigger playback exactly when I wanted it so that the timing of the set would leave me five minutes to hand off to the next artist.

    So first, I recorded the audio from the iPad portion of the performance. I rushed a bit on this, and didn’t realize that I’d set up Garageband to record it in mono. It’s not terrible, just not as good as the stereo original. I’ve made a note to re-record that later in stereo, but I’ll record it as a separate track in GB so I don’t lose the original.

    I then moved it forward in time so that it would end at 55:00; this lets me simply hit play in GB when I start and have the recording start and stop exactly when I want it…if all goes well.

    The rest of the performance was in three parts:

    • A Live session with the base thunderstorm I was using as a continuum through the piece, with added birdsong, bells, and gongs played back as clips.
    • A miRack session (more on why that instead of VCVRack in a sec) that let me fade in and out continuously-running harmonizing lines
    • A second Live set continuing the thunderstorm, but using shortwave radio samples, and bringing back one birdsong sample from the other set.

    Everything used the same harmonic basis (this was accidental, but I’ll take it), which let me establish a mood with the first set, fade in the miRack performance, build it up, and then gradually fade it in and out while I performed the clips in the third set. Partway through the miRack session, the pre-recorded GB track starts, also in the same key, allowing it all to stitch together as a coherent whole.

    Visuals

    I decided to use my standard OBS setup for this performance, and it mostly went okay. As a matter of fact, it streamed the audio even though the stream to the station did not work initially (see below). The greenscreen plugin, with black tweaked to transparent, allowed me to overlay the visualizer on the various apps and combinations of them — this went really well! — and switch things around as I performed.

    I used Ferromagnetic for the visuals during my set. A composite audio device (apparently created dynamically by Audio Hijack) was visible, and I tried that; it worked much better in terms of Ferromagnetic “listening” to the music.

    After my set, I was able to hook up an Audio Hijack setup that just took the streaming audio from the station (via Music.app) and ran it to the standard output, which allowed me to use both the standard Music.app visualizer (the old-school one; it’s much more visually appealing to me) and Ferromagnetic. I set up “studio mode” so I could watch both sources and crossfade when the visuals were particularly striking in one or the other.

    This worked really well, and I will probably do this again (or Rebekkah will) so that we always have Twitch visuals during all of the performances.

    Issues

    First, my incorrect recording of the iPad performance meant that the soundstage was overly dense, but it still sounded okay, just not as good as it could have.

    Second, I set up ahead of time, and Audio Hijack, which I was using as my funnel for the sound, stopped passing the audio down the path! I struggled with pulling blocks out of the path to get it working, but in the end I was forced to reboot the machine in the hope that it would resume working. Luckily, it did, but this meant that OBS went offline, the music went offline, and I got logged out of Second Life. It took me a significant amount of time to realize I hadn’t gotten back to the concert venue in Second Life after I got the music and OBS running again.

    Third, I didn’t watch my levels, and the overall signal was very hot. I think the final recording doesn’t quite clip, but it’s a close thing. Next time I’ll add a limiter in Live, and check the levels more closely with everything running in a test Audio Hijack session beforehand, so I can crank the sliders while playing and not need to monitor the overall levels.

    For next time

    • Set the level limits ahead of time so I don’t go quite so loud.
    • Use at least one more machine to offload some of the work. This does move me back toward more complex, but it removes the single point of failure I had this time. This will need some experimentation, but I think visualizers, OBS, Audio Hijack, and probably the performance software have to be on one machine, and Discord and Second Life on another.
    • Have a better checklist. The one I had worked to get me through the performance, but it didn’t have a disaster recovery path. That needs to be thought out and ready as well.
    • Have something ready that can take over if the whole shebang is screwed. No ideas on this yet, but I want to have a “panic button” to switch to a dependable stream from somewhere else if my local setup goes south. I think I can set up a “just for this performance” playlist on Azuracast that I can have ready to trigger if the performance setup dies.
    • Set an alarm to reboot and verify the setup half an hour or 45 minutes (do it and time it) prior to showtime, so that I arrive at my slot with everything ready to go and configured to hit “stream” and have it work.

    Things I did figure out to fix problems from previous sets

    I’ve saved the Live sets this time with all their clips and setup. I still wish I had the setup for The Tree, 1964, but it got lost completely. This time, I’ve definitely got all the samples, all the patches, all the clips, and all the setup, so I won’t misplace any of it and can re-perform this piece.

    I’ve also saved the Garageband session and the miRack patch in the same folder, along with my performance notes, so that I can easily re-run everything straight from that folder without a hitch.

    This is all saved on the external disk which is backed up by Backblaze, so it’s as safe as I can make it. I plan to keep doing this for future work so that I am always able to pull up a previous performance and do it again if I want to.

  • Using Perl to simulate a numbers station

    On the Disquiet Junto Slack, one of our members posted that they’d had a dream:

    I had a dream about a piece of gear last night. I wouldn’t say that it was “dream gear,” though it was still cool. It was a small black metal box, about the size of three DVD cases stacked on top of each other. There were a few knobs and sliders, a small 3-inch speaker, headphone out, and a telescoping antenna, so it kinda looked like a little radio at first. The antenna was there for radio reception but there was other stuff going on. It was intended to be used as a meditation/sleep aid/ASMR machine. There were sliders for a four-band EQ and a tuning knob for the radio. The tuning knob had a secondary function that tuned a drone sound (kinda sounded like a triangle wave fed through a wavefolder/resonance thinger). The other feature of this box was something like a numbers stations generator. Another slider was for the mix between the drone and a woman’s voice speaking random numbers and letters from the NATO alphabet in a Google Assistant-/Alexa-/Siri-type voice but with far less inflection. The four-band EQ was to be used like a mixer as well in that it was how a person could adjust how much of the radio signal was audible over the drone/numbers by using the output gain of the EQ. There was also a switch that fed the drone/numbers signal into the EQ as well. The EQ was intentionally low-quality so that when you took it above 0dB, it would distort.

    The Disquiet Junto Slack, #gear channel

    Now what was weird was that I’d been doing something like this in AUM; I had a quiet ambient Dorian sequence driven by ZOA on several instances of KQ Dixie (a DX7 emulator), and was using Radio Unit (a radio streaming AU) to layer in some birdsong. I realized I could mostly emulate the dream box if I added another Radio Unit to pull in some random stations, but generating the “numbers station” audio was more of a challenge – until I remembered that OS X has the say command, which lets you use the built-in speech synthesizers to pronounce text from the command line.

    I sat down, and after some fiddling (and looking up “how to add arbitrary pauses” so the rhythm was right), I created NATO::Synth to create the strings I wanted and pass them to say. It has a few nice little tweaks, like caching the strings created so it can decide to repeat itself, and properly inflecting the start and end of each “sentence”.
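    The original is Perl, but the core idea fits in a few lines of any language. A hypothetical Python sketch (the group sizes, pause length, and word list here are my guesses, not what NATO::Synth actually uses; [[slnc 400]] is say’s embedded command for a 400 ms pause):

```python
import random
import subprocess

NATO = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot",
        "golf", "hotel", "india", "juliett", "kilo", "lima"]

def numbers_sentence(groups: int = 5, rng=random) -> str:
    """Build one 'numbers station' sentence: groups of a NATO word plus
    digits, separated by say's [[slnc 400]] pause command for rhythm."""
    parts = []
    for _ in range(groups):
        word = rng.choice(NATO)
        digits = " ".join(str(rng.randint(0, 9)) for _ in range(3))
        parts.append(f"{word} {digits}")
    return " [[slnc 400]] ".join(parts)

def speak(text: str) -> None:
    # macOS only: hand the text to the built-in speech synthesizer.
    subprocess.run(["say", text], check=True)
```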

    I saved the generated audio (recorded with Audio Hijack) to iCloud, loaded it into AUM, and then recorded the results. Very pleased with it!

  • iTunes Swedish Death Cleaning

    If you haven’t heard of “Swedish Death Cleaning”, the idea is that when you finally do drop dead, it’d be polite not to saddle whoever is taking care of your stuff with a big job of “is this important? should I keep it? should I just give all this away, or throw it away, because it’s just too much?”. It’s also about living with just the stuff that actually means something to you on a daily basis, as opposed to “I may want this someday, so I’ll keep it in my life, gathering dust and generating clutter.”

    I definitely need to do more of that in my physical life, but this weekend I embarked on it in my digital one. Like most people, when I finally had iTunes and no longer had an actual “how full are my shelves?” physical limit, I started hoarding music. I had a lot of stuff from my old CD collection, music I’d bought from iTunes, the StillStream library from when I was maintaining the music library for that station’s ambient robot, music from friends who’d lent me CDs, stuff I’d borrowed from the library and timeshifted into iTunes to listen to “later”, free releases from Amazon…basically a huge pile of stuff. Worse, I’d put all this in iTunes Match, so even if I cleaned out my library, turning iTunes Match on again would just put all the crud back.

    In addition, my partner didn’t have a music library at all, because the internal disk on her laptop was too small to hold all of her work and optional stuff as well. There was an offline copy of her old music library, and it, too, had grown over the years from music lent to her, music I thought she might like, and so on. She wanted to be able to pack up her CD collection and put it into storage, and maybe get rid of some of it as well. So we needed to take our old libraries and clean out anything we didn’t want, and then see what each of us might have that the other might want afterward.

    I spent a couple evenings last week ripping the CDs she didn’t have online yet into a separate library, so they wouldn’t be part of the existing mess, and then went through and did the following in a brand new library:

    • Anything she actually owned got copied in. iPhoto’s ability to let me photograph the discs on the shelf and copy the text off of them came in very handy to make sure I got them all.
    • Anything I didn’t find in the library on that pass got ripped into this new library.
    • The not-previously ripped CDs in the secondary library were copied in.

    At this point, she had a clean “definitely mine” library. Now it was time to clean mine up. I had done one pass already to strip it down, but I wanted to make sure that I both cleaned out my iTunes Match library and made a conscious keep-or-not decision about anything in there that I didn’t already have in the stripped-down library.

    The easiest way to do this was to create a brand new, empty library, and connect that to iTunes Match, after turning on the “I want lossless copies” option — this is apparently new in Ventura, and is very welcome. Once this synced up, I could download and copy in only things I knew I wanted to keep. This meant I would actually have to look at the music and say, “do I really want to listen to this again?”, but not having to pull it out of an existing library would help.

    In addition, my partner had asked me to give her a copy of music of mine that I know she likes; we share a liking for world music, and for several other artists. After a little thought, I came up with the following:

    • There’s probably music in iTunes Match that we both want, and there’s definitely music I want. So let’s do this:
      • Create a new folder on a scratch disk that will contain music to add to her library.
      • Do the same for music I want to add to mine.
      • Drag those into the Favorites in the Finder.
      • Drag the Media folder from my target library to the sidebar as well. This will let me quickly check whether a given release is already in my library, and if it is I can skip downloading it altogether, unless I want to give my partner a copy.
      • As I process each release in the Match library, I do the following:
        • If my partner would like it, download it.
        • If I want to keep it myself, open a Finder window using the Media folder shortcut and check if I have it.
          • If I do, simply delete it from the iTunes Match library (which also takes it out of iTunes Match).
          • If I don’t, download it.
        • If I downloaded it, right-click on one track in the iTunes list, and “Show in Finder”. This pops up a new Finder window with all the tracks for the release in it.
        • Command-click on the folder name in the top bar of the window and go up one level to see the release in its enclosing folder.
        • Drag the release folder to the sidebar aliases for the “music to add” folders as appropriate.
        • Delete the tracks in iTunes. This removes them from the iTunes Match library, and iTunes Match as well.

    This took the better part of two days to finish, but I now have two cleaned-up music libraries, and an empty iTunes Match. I am considering whether to retain iTunes Match, mostly because it’s not a “backup” — it’s just a convenient way to share music across my devices, and doesn’t guarantee I’ll get the original file back.

    I’ve probably lost fidelity on some of the tracks I added to Match, and it’s possible some of them now have DRM. I will do another pass at some point and see; I’m not sure if it really makes a lot of difference to me right now, but I can always play them through Audio Hijack and re-record them to remove the DRM if I decide I want to.

    We also wanted a list of “what you have that I don’t” for both the final libraries; I was able to do that with Google Sheets, but I’ll post that as a separate article.

  • Show report: 2020-10-31 “Pharoah Nuff” at radiospiral.net

    My last performance was not as smooth as I hoped, so this time I decided that I would find a way to streamline it even further.

    I decided to go further in the direction I’d taken with the Wizard of Hz show, and strip down even more. I decided to try to perform as much as possible of the set on the iPad, and use the laptop solely for streaming and Second Life. This freed me from hassles in switching setups in VCVRack, Live, and the other software I’d been using, but it also meant that I wouldn’t be using either of my favorite synths for this performance (the Arturia 2600 and Music Easel).

    Having had some time between performances to really experiment with AUM, I felt comfortable using it to lay out my performance. I decided that I wanted to keep Scape as my background/comping program, and that I’d set up a series of light-handed scapes to give me a through-line. I then sat down with MIRack and Ripplemaker to create multiple Krell textures that I could bring in and out, and also discovered a couple of lovely lead patches for Ripplemaker that I paired with a Kosmonaut looper. I also brought in a couple public-domain samples from old sci-fi movies, heavily processed with Kosmonaut again, and felt like I had enough material to do an hour’s performance.

    I used the iConnect Audio4+, which I now finally have the hang of, and set it up so that I had two stereo channels from the iPad and one mono channel routed to the iPad through Kosmonaut (again!) for some subtle reverb when I was doing my intro and outro. With the setup I used, the iConnect kept the iPad fully charged through the whole set.

    I used Loopback to connect the multiple outs from the iConnect to the stereo ins on my Mac, and monitored on headphones. I pulled up Audio Hijack, entered the stream setup, and was ready to broadcast.

    I got up early on the day, started up AUM, and ran a soundcheck to make sure everything was working. All sounded good, and I was good to go.

    Mostly.

    I didn’t stop AUM, and as a result, it ran for several hours before I tried to start using it. This apparently triggered some kind of a memory shortage, and when I started streaming, I was completely mute. Fortunately, I’d cued up a prerecorded VCVRack texture, and started that while I was trying to figure out what was wrong. I gave up and restarted the iPad, and AUM came up like a champ.

    After that it was pretty smooth. I was able to fade the various patches in and out, play the sci-fi samples, and improvise over the Scape-provided background. Once it was off the ground, the performance was very easy to do. I did forget and leave the audio feed from Second Life enabled, so as a result this was a very sparse performance, but the sparseness worked out very well.

    Overall this was a great way to do a performance and I plan to refine this further. Of particular note is that AUM saves things so well that it will be trivially easy to do this performance again, should I decide to; this is probably the first time I’ve had a performance setup I felt was robust enough to say that!

  • RadioSpiral Wizard of Hz Performance Notes

    Last time I did a live streaming performance for an audience, it did not go well. I had long pauses, the mic didn’t work, and miscommunication over Slack to the remote venue resulted in my getting cut off before my set was finished. And this was even after a good bit of practice.

    So when I signed up for the Wizard of Hz concert on RadioSpiral, I decided that I needed to have as much backstop as possible in place so that no matter how tangled up I got mentally, I’d have a fallback to something that sounded good and a nice, navigable arc from point A to point B. Ideally, I should have something that would sound great even if I got called away for the entire set!

    My go-to process for this is Scape. I’ve had it since it first came out, and it meshes very well with what I enjoy hearing and enjoy playing. I started off with the Scape playlist that I often use to relax and get to sleep; this is a seven-scene playlist, with the transition time at max, with the per-scene time adjusted to be just a bit over an hour. This gives me a fallback for the whole hour; I can pull everything else back and lean on Scape while I decide what the next section should be.

    In addition, Scape provides a very nice backdrop to improvise over, so I can be playing something while Scape gives me a framework.

    I then put together a couple of Ableton Live sets: one built on the Arturia ARP 2600 and Buchla Music Easel emulations, and another built on Live’s really nice grand piano and the open-source OB-Xa emulator, the OB-Xd. I finally figured out how to change patches on the OB-Xd about 20 minutes before showtime.

    I had set up a piano with a nice looping effect from Valhalla Supermassive (Supermassive and Eventide Blackhole figured heavily in the effects), but ended up not using it, and doing a small Launchpad set instead using the Neon Lights soundpack.

    I was also able to open and close with the large singing bowl, played live and processed through the Vortex, which was a nice real analog performance touch.

    Overall, I strove for a set that sounded played-through, but that had enough breathing room that I could fall back on Scape while making changes (switching Live sets, etc.), and I think I achieved that.

    I did have Audio Hijack recording the set, so if it sounds OK, I’ll be releasing it on Bandcamp. (Followup: it came out pretty well! Definitely at least an EP.)

    The only real issue was a partially shorted cable between my iPhone and the mixer that I didn’t figure out until most of the way through the set.

  • The Harp of New Albion’s Tuning for Logic

    The Disquiet Junto is doing an alternate tunings prompt for week 0440 (very apropos!).

    I’ve done several pieces before using Balinese slendro and pelog tuning, most notably Pemungkah, for which this site is named. I wanted to do something different this time, using Terry Riley’s tuning from The Harp of New Albion, using Logic Pro’s project tuning option.

    The original version was a retuning of a Bösendorfer grand to a modified 5-limit tuning:

    However, Logic’s tuning feature needs two things to use a tuning with it:

    • Logic’s tuning needs to be based on C, not C#
    • The tuning has to be expressed as cents of detuning from the equal-tempered equivalent note.

    This means doing quite a number of calculations to put the tuning into a format that Logic will accept.
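
    The arithmetic behind those calculations is simple: a just ratio sits 1200 × log₂(ratio) cents above the tonic, and subtracting the equal-tempered pitch (100 cents per semitone) gives the detuning value. Here’s a minimal Python sketch, using a just major third as an illustrative ratio rather than the actual Harp of New Albion table:

```python
import math

def cents_from_ratio(num, den):
    """Cents above the tonic for the just ratio num/den."""
    return 1200 * math.log2(num / den)

def detune_cents(num, den, semitones):
    """Deviation in cents from the equal-tempered note `semitones`
    above the tonic -- the number a tuning table like Logic's wants."""
    return cents_from_ratio(num, den) - 100 * semitones

# Example: a just major third (5/4) is about 13.7 cents flat of the
# equal-tempered major third (400 cents).
print(round(detune_cents(5, 4, 4), 2))  # → -13.69
```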

    (more…)

  • A Variation on “Clouds of Europa”

    I’m still learning the ins and outs of VCVRack; there are so many interesting modules available, and so many different possible directions to go in!

    I’m starting to lean toward something in the middle ground between Berlin School sequencing and complete wacked-out crazy, searching for an ambient location somewhere in that space. Jim Frye posted a video of his beautiful “Clouds of Europa” patch on the VCVRack forums yesterday, and I transcribed it from the video to see how it works. After some experimentation, I tweaked the settings of the macro oscillators and added a fourth one, put envelopes on them to add some more air, added some LFO action to vary the sound a bit, and lengthened the delay time to add some more texture to the bass.

    I will probably revisit this patch and change over the Caudal to the Turing Machine and see what I can do with that as the source of randomness to feed Riemann, but I’m very happy with the result so far.

  • Recovering my old Scape files

    My original iPad finally bit the dust in August, just before I could get a final good backup of it. Most of the stuff on it was already backed up elsewhere (GMail, Dropbox, iCloud), but Scape was the exception.

    Scape isn’t (at least not yet) able to back up its files to the cloud, so there wasn’t anyplace else to restore from — except that I had taken advantage of the fact that under iOS 5, the files in the app were still directly readable using Macroplant’s iExplorer, so I had actually grabbed all the raw Scape files and even the Scape internal resources. Sometime I’ll write up what I’ve figured out about Scape from those files…

    The Scape files themselves are just text files that tell Scape what to put on the screen and play, so they were no problem; they don’t include checksums or anything else that would make them hard to work with.


    Version:0.20
    Mood:7
    Date:20121113025954
    Separation:0.50
    HarmonicComplexity:0.50
    Mystery:0.50
    Title:Scape 117
    Steam Factory,0.50,0.50,1.0000
    Spirit Sine Dry,0.23,0.31,3.1529
    Spirit Sine Dry,0.40,0.36,3.4062
    Spirit Sine Dry,0.64,0.19,3.9375
    Spirit Sine Dry,0.55,0.49,1.0065
    Spirit Sine Dry,0.26,0.67,3.5039
    Spirit Sine Dry,0.76,0.54,3.1211
    Spirit Sine Dry,0.49,0.79,3.8789
    Spirit Sine Dry,0.46,0.17,3.9766
    Spirit Sine Dry,0.85,0.27,2.0732
    Spirit Sine Dry,0.90,0.53,1.5154
    Spirit Sine Dry,0.66,0.72,3.6680
    Spirit Sine Dry,0.15,0.55,2.2527
    Spirit Sine Dry,0.11,0.80,1.9320
    Spirit Sine Dry,0.32,0.88,4.1289
    Spirit Sine Dry,0.18,0.14,3.2779
    Spirit Sine Dry,0.81,0.11,3.0752
    Spirit Sine Dry,0.49,0.56,1.7528
    Spirit Sine Dry,0.82,0.80,3.3783
    Bass Pum,0.53,0.46,1.8761
    Thirds Organ Pulsar Rhythm,0.50,0.50,1.0000
    End
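
    The format above is simple enough to parse mechanically. Here’s a minimal Python sketch, based solely on that one sample — in particular, I’m assuming header values and object names never contain commas:

```python
def parse_scape(text):
    """Parse a Scape save file: 'Key:Value' header lines, then
    'Name,x,y,value' object lines, terminated by an 'End' line.
    (Format inferred from a single sample file.)"""
    header, objects = {}, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line == "End":
            continue
        name, sep, rest = line.partition(",")
        if sep:  # object line: a name plus three numeric fields
            x, y, value = (float(v) for v in rest.split(","))
            objects.append((name, x, y, value))
        else:    # header line
            key, _, value = line.partition(":")
            header[key] = value
    return header, objects

sample = """Version:0.20
Title:Scape 117
Steam Factory,0.50,0.50,1.0000
Spirit Sine Dry,0.23,0.31,3.1529
End"""
header, objects = parse_scape(sample)
print(header["Title"], len(objects))  # → Scape 117 2
```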

    I wrote to Peter Chilvers, who is a mensch, and asked if there was any way to just import these text files. He replied that there unfortunately wasn’t, but suggested that if I still had access to a device that had the scapes on it, I could use the share feature and mail them one by one to my new iPad, where I could tap them in Mail to open them in Scape and then save them.

    At first I thought I was seriously out of luck, but then I figured, why not share one from the new iPad and see what was in the mail? I did, and found it was just an attachment of the text file, with a few hints to iOS as to what app wanted to consume them:


    Content-Type: application/scape; name="Scape 10";x-apple-part-url=Scape 10ar; name="Scape 10ar.scape"
    Content-Disposition: inline; filename="Scape 10ar.scape"
    Content-Transfer-Encoding: base64

    Fab, so all I have to do is look through five or six folders containing bunches of scape files that may or may not be duplicates, build emails, and…this sounds like work. Time to write some scripts. First, I used this script to ferret through the directories, find the scapes, and bring them together.


    use strict;
    use warnings;
    use File::Find::Rule;

    # Find every *_scape.txt file, skipping the Scape.app bundle itself.
    my $finder = File::Find::Rule->new;
    my $scapes = $finder->or(
        $finder->new
               ->directory
               ->name('Scape.app')
               ->prune
               ->discard,
        $finder->new
               ->name('*_scape.txt'),
    );

    my $seq = 'a';
    for my $scape ($scapes->in('.')) {
        (my $base = $scape) =~ s/_scape\.txt$//;

        # Use the scape's own title as its new filename.
        my $title;
        open my $fh, '<', $scape or die "can't open $scape: $!";
        while (<$fh>) {
            chomp;
            next unless /Title:(.*)$/;
            $title = $1;
            last;
        }
        close $fh;

        # Quote slashes in titles, which mv would treat as path separators.
        $title =~ s[/][\\/]g;
        if (-e "$title.scape") {
            # Disambiguate duplicate titles with a one-letter suffix.
            $title = "$title$seq";
            $seq++;
            die "too many duplicate titles" if $seq gt 'z';
        }
        system qq(mv "$scape" "$title.scape");
        system qq(mv "$base.jpg" "$title.jpg");
    }

    I decided it was easier to do a visual sort using the .jpg thumbnails to spot the duplicates and filter them out; I probably could have more easily done it by checksumming the files and eliminating all the duplicates, but I wanted to cull a bit as well.
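
    For the record, the checksum approach I passed on would only be a few lines of Python — a sketch, assuming the duplicates are exact byte-for-byte copies:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def duplicate_groups(folder, pattern="*.scape"):
    """Group files under `folder` by the SHA-256 of their contents;
    any group with more than one path is a set of exact duplicates."""
    by_hash = defaultdict(list)
    for path in Path(folder).rglob(pattern):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        by_hash[digest].append(path)
    return [paths for paths in by_hash.values() if len(paths) > 1]
```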

    So now I’ve got these, and I need to get them to my iPad. Time for another script to build me the mail I need:

    #!/usr/bin/env perl

    =head1 NAME

    bulk_scapes.pl - recover scape files in bulk

    =head1 SYNOPSIS

      MAIL_USER=gmail.sendername@gmail.com \
      MAIL_PASSWORD='seekrit' \
      RECIPIENT='icloud_user@me.com' \
      bulk_scapes

    =head1 DESCRIPTION

    C<bulk_scapes.pl> will collect up all the C<.scape> files in a directory
    and mail them to an iCloud user. That user can then open the mail on their
    iPad and tap the attachments to restore them to Scape.

    This script assumes you'll be using GMail to send the files; create an app
    password in your Google account to use this script to send the mail.

    =cut

    use strict;
    use warnings;
    use Email::Sender::Simple qw(sendmail);
    use Email::Sender::Transport::SMTP;
    use MIME::Entity;

    my $top = MIME::Entity->build(
        Type    => 'multipart/mixed',
        From    => $ENV{MAIL_USER},
        To      => $ENV{RECIPIENT},
        Subject => 'recovered scapes',
    );

    # Loop over the files and attach each one. The MIME type is
    # 'application/scape' so Mail on the iPad hands them to Scape.
    my $n = 1;
    for my $file (glob '*.scape *.playlist') {
        my ($part, undef) = split /\./, $file;

        # Dig the title out of the file to use as the attachment name.
        my $name;
        open my $fh, '<', $file or die "Can't open $file: $!\n";
        while (<$fh>) {
            chomp;
            next unless /Title/;
            (undef, $name) = split /:/;
            last;
        }
        close $fh;
        unless ($name) {
            $name = "Untitled $n";
            $n++;
        }
        $top->attach(
            Path => $file,
            Type => qq(application/scape; name="$name";x-apple-part-url=$part),
        );
    }

    my $transport = Email::Sender::Transport::SMTP->new(
        host          => 'smtp.gmail.com',
        port          => 587,
        ssl           => 'starttls',
        sasl_username => $ENV{MAIL_USER},
        sasl_password => $ENV{MAIL_PASSWORD},
    );

    sendmail($top, { transport => $transport });

    I was able to receive this on my iPad, tap on the attachments, and have them open in Scape. Since there were a lot of these, it took several sessions over a week to get them all loaded, listened to, saved, and renamed using Scape’s edit function (the titles did not transfer, unfortunately).

    So now I have all my Scapes back, and I’m working through the program, trying to get to the point where I have all the objects enabled again. I haven’t played with it in a while, and I’m glad to be rediscovering what a gem this app is.

  • New album released: Radio Free Krakatau

    Radio Free Krakatau
    Composed and performed entirely in VCVRack.

    Based on a picture of a VCVRack setup I saw on Facebook; I was able to figure out some of the connections and settings, but not all of them. This is a record of my explorations of that set of modules as I increased the complexity of the interconnections.

    Sadly, the VCVRack savefiles were lost, so this is the only record of this performance.