Blog

  • Azuracast high-frequency updates, SSE, and iOS background processes

    A big set of learning since the last update.

    I’ve been working on getting the RadioSpiral infrastructure back up to snuff after our Azuracast streaming server upgrade. We really, really did need to do that — it just provides 90% of everything we need to run the station easily. Not having to regenerate the playlists every few weeks is definitely a win, and we’re now able to do stuff like “long-play Sunday”, where all of the tracks are long-players of a half-hour or more.

    But there were some hitches, mostly in my stuff: the iOS app and the now-playing Discord bot. Because of reasons (read: I’m not sure why), the Icecast metadata isn’t reliably available from the Azuracast streaming server, especially when you’re using TLS. This breaks the display of artist and track on the iOS app, and partially breaks the icecast-monitor Node library I was using to do the now-playing bot in Discord.

    (Side note: this was all my bright idea, and I should have tested the app and bot against Azuracast before I proposed cutting over in production, but I didn’t. I’ll run any new thing in Docker first and test it better next time.)

    Azuracast to the rescue

    Fortunately, Azuracast provides excellent now-playing APIs. There’s a straight-up GET endpoint that returns the data, and two event-driven ones (websockets and SSE). The GET option depends on polling the server for updates, and I didn’t like that on principle: the server is quite powerful, but I don’t want multiple copies of the app hammering it for updates, and polling was inherently never going to be close to real-time unless I really did hammer the server.

    So that was off the table, leaving websockets and SSE, neither of which I had ever used. Woo, learning experience. I initially tried SSE in Node and didn’t have a lot of success with it, so I decided to go with websockets and see how that went.

    Pretty well, actually! I was able to get a websocket client running pretty easily, so I went with that. After some conferring with ChatGPT, I put together a library that would let me start up a websocket client and have it run happily, waiting for updates to come in and updating the UI as they arrived. (I’ll talk about the adventures of parsing Azuracast metadata JSON in another post.)

    I chose to use a technique that I found in the FRadioPlayer source code, of declaring a public static variable containing an instance of the class; this let me do

    import Kingfisher
    import ACWebSocketClient
    
    client = ACWebSocketClient.shared   // shared singleton instance of the client
    ...
    // update the UI from the latest status pushed over the websocket
    tracklabel.text = client.status.track
    artistlabel.text = client.status.artist
    coverImageView.kf.setImage(with: client.status.artURL)

    (Kingfisher is fantastic! Coupled with Azuracast automatically extracting the artwork from tracks and providing a URL to it, showing the right covers was trivial. FRadioPlayer uses the Apple Music cover art API to get covers, and given the, shall we say, obscure artists we play, some of the cover guesses it made were pretty funny.)

    Right. So we have metadata! Fantastic. Unfortunately, the websocket client uses URLSessionWebSocketTask to manage the connection, and that class has extremely poor error handling: it’s next to impossible to detect that you’ve lost the connection, let alone re-establish it. So it would work for a while, then a disconnect would happen, and the metadata would stop updating.
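
    To make the problem concrete, here’s roughly the shape of the receive loop (a sketch, not the actual library code; the function name is mine):

    import Foundation

    // Minimal sketch of the receive loop. The only place a dropped connection
    // shows up is the .failure case of the *next* receive call; there's no
    // reliable "connection lost" callback you can hook for reconnect logic.
    func listen(on task: URLSessionWebSocketTask) {
        task.receive { result in
            switch result {
            case .success(let message):
                if case .string(let text) = message {
                    print("metadata frame: \(text)")   // hand off to the JSON parser
                }
                listen(on: task)                       // queue up the next receive
            case .failure(let error):
                // By the time this fires the socket is already gone and the task
                // can't be resumed; the only option is to build a new one.
                print("socket died: \(error)")
            }
        }
    }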

    Back to the drawing board. Maybe SSE will work better in Swift? I’ve written one client, maybe I can leverage the code. And yes, I could. After some searching on GitHub and trying a couple of different things, I created a new library that could do Azuracast SSE. (Thank you to LaunchDarkly and LDSwiftEventSource for making the basic implementation dead easy.)
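
    The basic shape of the SSE client, for the curious, looks something like this. (This is a sketch from memory rather than the shipping library: the handler methods are how I remember LDSwiftEventSource’s EventHandler protocol, and the URL and class names are illustrative.)

    import Foundation
    import LDSwiftEventSource

    // Handler that receives the now-playing events pushed by Azuracast.
    class NowPlayingHandler: EventHandler {
        func onOpened() { print("SSE connection opened") }
        func onClosed() { print("SSE connection closed") }

        func onMessage(eventType: String, messageEvent: MessageEvent) {
            // messageEvent.data is the raw JSON payload with the now-playing info
            print("event \(eventType): \(messageEvent.data)")
        }

        func onComment(comment: String) { }
        func onError(error: Error) { print("SSE error: \(error)") }
    }

    // The station subscription rides along as an encoded query parameter;
    // see the Python version further down for how that JSON gets built.
    let url = URL(string: "https://radio.example.com/api/live/nowplaying/sse")!
    let config = EventSource.Config(handler: NowPlayingHandler(), url: url)
    let eventSource = EventSource(config: config)
    eventSource.start()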

    So close, but so far

    Unfortunately, I now hit iOS architecture issues.

    iOS really, really does not want you to run long-term background tasks, especially with the screen locked. When the screen was unlocked, the metadata updates went okay, but as soon as the screen locked, iOS started a 30-second “and what do you think you’re doing” timer, and killed the metadata monitor process.

    I tried a number of gyrations to keep it running, scheduling and rescheduling a background task, but if I let it run continuously, even with all the “please just let this run, I swear I know what I need here” code, iOS would axe it within a minute or so.
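
    (For the record, the “please just let this run” code is basically the standard background-task dance, which only buys you that same 30-ish seconds. A sketch, with a made-up stand-in for the real client:)

    import UIKit

    // Stand-in for the real SSE/metadata client; only start/stop matter here.
    final class MetadataClient {
        func start() { /* open the SSE connection */ }
        func stop()  { /* tear it down */ }
    }

    let sseClient = MetadataClient()
    var backgroundTask: UIBackgroundTaskIdentifier = .invalid

    // Ask iOS for background time, do the work, and give the time back in the
    // expiration handler. Past roughly 30 seconds iOS expires the task no
    // matter how politely you ask.
    func keepMonitorAlive() {
        backgroundTask = UIApplication.shared.beginBackgroundTask(withName: "SSEMonitor") {
            sseClient.stop()
            UIApplication.shared.endBackgroundTask(backgroundTask)
            backgroundTask = .invalid
        }
        sseClient.start()
    }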

    So I’ve fallen back to a solution not a lot better than polling the endpoint: when the audio starts, I start up the SSE client, shut it down after 3 seconds, wait 15 seconds, and then run it again. When the audio stops, I shut the client off and leave it off. This has so far kept iOS from nuking the app, but again, I’m effectively polling. Yuck.
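
    The stopgap duty cycle looks roughly like this (again a sketch, reusing the hypothetical MetadataClient from above and Swift concurrency for the timing):

    // Run the client for ~3 seconds to grab an update, shut it down, wait 15
    // seconds, repeat; cancel the task entirely when the audio stops.
    func startDutyCycle(client: MetadataClient) -> Task<Void, Never> {
        Task {
            while !Task.isCancelled {
                client.start()
                try? await Task.sleep(nanoseconds: 3_000_000_000)   // listen for 3s
                client.stop()
                try? await Task.sleep(nanoseconds: 15_000_000_000)  // idle for 15s
            }
            client.stop()
        }
    }

    // When playback starts:
    // let monitorTask = startDutyCycle(client: sseClient)
    // When playback stops:
    // monitorTask.cancel()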

    However, we now do have metadata, and that’s better than none.

    On the other hand…

    On the Discord front, however, I was much more successful. I tried SSE in Node, and found the libraries wanting, so I switched over to Python and was able to use sseclient to do the heavy lifting for the SSE connection. It essentially takes an SSE URL, hooks up to the server, and then calls a callback whenever an event arrives. That was straightforward enough, and I boned up on my Python for traversing arbitrary structures — json.loads() did a nice job for me of turning the complicated JSON into nested Python data structures.

    The only hard bit was persuading Python to turn the JSON struct I needed to send into a proper query parameter. Eventually this worked:

    import json
    import urllib.parse

    # Subscription payload naming the station channel we want updates for
    subs = {
        "subs": {
            f"station:{shortcode}": {"recover": True}
        }
    }

    # Compact JSON, then URL-encode it so it can ride along as a query parameter
    json_subs = json.dumps(subs, separators=(',', ':'))
    json_subs = json_subs.replace("True", "true").replace("False", "false")
    encoded_query = urllib.parse.quote(json_subs)

    I pretty quickly got the events arriving and parsed, and I was able to dump out the metadata in a print. Fab! I must almost be done!

    But no. I did have to learn yet another new thing: nonlocal in Python.

    Once I’d gotten the event and parsed it and stashed the data in an object, I needed to be able to do something with it, and the easiest way to do that was set up another callback mechanism. That looked something like this:

    client = build_sse_client(server, shortcode)
    run(client, send_embed_with_image)

    The send_embed_with_image callback puts together a Discord embed (a fancy message) and posts it to our Discord via a webhook, so I don’t have to write any async code. The SSE client updates every fifteen seconds or so, but I don’t want to just spam the channel with the updates; I want to compare the new update to the last one, and not post if the track hasn’t changed.

    I added a method to the metadata object to compare two objects:

    def __eq__(self, other) -> bool:
        if not isinstance(other, NowPlayingResponse):
            return False
        if other is None:
            return False
        return (self.dj == other.dj and
                self.artist == other.artist and
                self.track == other.track and
                self.album == other.album)

    …but I ran into a difficulty trying to store the old object: the callback I handed to sseclient couldn’t see the variables in the main script. I knew I’d need a closure to put them in the enclosing function’s scope, and I was able to write that fairly easily after a little poking about, but even with them there, the inner function I was returning still couldn’t rebind the closed-over variables.

    The fix was something I’d never heard of before in Python: nonlocal.

    def wrapper(startup, last_response):
        def sender(response: NowPlayingResponse):
            nonlocal startup, last_response
            if response == last_response:
                return
    
            # Prepare the embed data
            local_tz = get_localzone()
            start = response.start.replace(tzinfo=local_tz)
            embed_data = {
                "title": f"{response.track}",
                "description": f"from _{response.album}_ by {response.artist} ({response.duration})",
                "timestamp": start,
                "thumbnail_url": response.artURL,
            }
    
            # Send to webhook
            send_webhook(embed_data)
    
            startup = False
            last_response = response
    
        return sender

    Normally, all I’d need to do would be to have startup and last_response in the outer function’s argument list to have them visible in the inner function’s namespace, but I didn’t want them to just be visible: I wanted them to be mutable. Adding the nonlocal declaration of those variables does that. (If you want to learn more about nonlocal, this is a good tutorial.)

    The Discord monitor main code now looks like this:

    startup = True
    last_response = None
    
    # Build the SSE client
    client = build_sse_client(server, shortcode)
    
    # Create the sender function and start listening
    send_embed_with_image = wrapper(startup, last_response)
    run(client, send_embed_with_image)

    Now send_embed_with_image will successfully be able to check for changes and only send a new embed when there is one.

    One last notable thing here: Discord sets the timestamp of the embed relative to the timezone of the Discord user. If a timezone is supplied, Discord does the necessary computations to figure out what the local time is for the supplied timestamp. If no zone info is there, it assumes UTC, which can lead to funny-looking timestamps. This code finds the timezone where the monitor code is running and sets the timestamp accordingly.

    from tzlocal import get_localzone
    
    local_tz = get_localzone()
    start = response.start.replace(tzinfo=local_tz)

    And now we get nice-looking now-playing info in Discord:

    [Image: two entries in a Discord channel, each listing the track title in bold, the album name in italics, and the artist name, with a start-time timestamp and a thumbnail of the album cover.]

    Building on this

    Now that we have a working Python monitor, we can come up with a better solution for (close to) real-time updates in the iOS app.

    Instead of running the monitor itself, the app will register with the Python monitor for silent push updates. This lets us offload the CPU (and battery) intensive operations to the Python code, and only do something when the notification is pushed to the app.

    But that’s code for next week; this week I need to get the iOS stopgap app out, and get the Python server dockerized.

  • Swift Dependency Management Adventures

    I’m in the process of (somewhat belatedly) upgrading the RadioSpiral app to work properly with Azuracast.

    The Apple-recommended way of accessing the stream metadata just does not work with Azuracast’s Icecast server – the stream works fine, but the metadata never updates, so the app streams the music but never updates the UI with anything.

    Because it could still stream (heh, StillStream) the music, we decided to go ahead and deploy. There were so many other things that Azuracast fixed for us that there was no question that decreasing the toil for everyone (especially our admin!) was going to make a huge difference.

    Addressing the problem

    Azuracast supplies an excellent now-playing API in four different flavors:

    • A file on the server that has now-playing data, accessible by simply getting the contents of the URL. This is only updated every 30 seconds or so, which isn’t really good enough resolution, and requires the endpoint be polled.
    • An API that returns the now-playing data as of the time of the request via a plain old GET to the endpoint. This is better but still requires polling to stay up to date, and will still not necessarily catch a track change unless the app polls aggressively, which doesn’t scale well.
    • Real-time push updates, either via SSE over https or websocket connection. The push updates are less load on the server, as we don’t have to go through session establishment every time; we can just use the open connection and write to it. Bonus, the pushes can happen at the time the events occur on the server, so updates are sent exactly when the track change occurs.

    I decided that the websocket API was a little easier to implement. With a little help from ChatGPT to get an initial chunk of code (and a fair amount of struggling to figure out the proper parameters to send for the connection request), I used a super low-rent SwiftUI app to wrap AVAudioSession and start up a websocket client separately to manage the metadata; that basically worked, and let me verify that the code monitoring the websocket was doing its job.
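
    For reference, the connection setup itself is only a few lines; the fiddly part was the subscription message. (This is a sketch: the endpoint path and the station shortcode here are illustrative, not copied from the real app.)

    import Foundation

    // Open the websocket and ask Azuracast to start pushing now-playing events
    // for one station.
    let url = URL(string: "wss://radio.example.com/api/live/nowplaying/websocket")!
    let task = URLSession.shared.webSocketTask(with: url)
    task.resume()

    // Azuracast expects a subscription message naming the station channel
    // before it will send anything.
    let subscribe = #"{"subs": {"station:radiospiral": {"recover": true}}}"#
    task.send(.string(subscribe)) { error in
        if let error = error {
            print("subscribe failed: \(error)")
        }
    }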

    I was able to copy that code inside of FRadioPlayer, the engine that RadioSpiral uses to do the streaming, but then I started running into complications.

    Xcode, Xcode, whatcha gonna do?

    I didn’t want to create an incompatible fork of FRadioPlayer, and I felt that the code was special-purpose enough that it wasn’t a reasonable PR to make. In addition, it was the holidays, and I didn’t want to force folks to have to work just because I was.

    So I decided to go a step further and create a whole new version of the FRadioPlayer library, ACRadioPlayer, that would be specifically designed to be used only with Azuracast stations.

    Initially, this went pretty well. The rename took a little extra effort to get all the FRadio references switched over to ACRadio ones, but it was fairly easy to get to a version of the library that worked just like FRadioPlayer, but renamed.

    Then my troubles began

    I decided that I was going to just include the code directly in ACRadioPlayer and then switch RadioSpiral to the new engine, so I did that, and then started trying to integrate the new code into ACRadioPlayer. Xcode started getting weird. I kept trying to go forward a bit at a time — add the library, start trying to include it into the app, get the fetch working…and every time, I’d get to a certain point (one sample app working, or two) and then I’d start getting strange errors: the class definition I had right there would no longer be found. The build process suddenly couldn’t write to the DerivedData directory anymore. I’d git reset back one commit, another, until I’d undone everything. Sometimes that didn’t work, and I had to throw away the checkout and start over. The capper was “Unexpected error”, with absolutely nothing to go on to fix it.

    Backing off and trying a different path

    So I backed all the way out, and started trying to build up step-by-step. I decided to try building the streaming part of the code as a separate library to be integrated with ACRadioPlayer, so I created a new project, ACWebSocketClient, and pulled the code in. I could easily get that to build (no surprise; it had been building) and I could get the tests of the JSON parse to pass, but when I tried to integrate it into ACRadioPlayer using Swift Package Manager, I was back to the weird errors again. I tried for most of a day to sort that out, and had zero success.

    The next day, I decided that maybe I should follow Fatih’s example for FRadioPlayer and use Cocoapods to handle it. This went much better.

    Because of the way Cocoapods is put together, just building the project skeleton actually gave me some place to put a test app, which was much better, and gave me a stepping stone along the way to building out the library. I added the code, and the process of building the demo showed me that I needed to do a few things: be more explicit about what was public and what was private, and be a little more thoughtful about the public class names.
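
    (Concretely, “more explicit about what was public and private” means things like the following; this is a made-up illustration of the kind of change, not the actual library source. Swift’s default internal access is invisible to a demo app living outside the pod, so anything the host app touches, including the initializer, has to be marked public.)

    import Foundation

    // Hypothetical status type exposed by the library to a host app.
    public struct ACStreamStatus {
        public var artist = ""
        public var track = ""
        public var artURL: URL?

        public init() {}
    }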

    A couple of hours’ work got me a working demo app that could connect to the Azuracast test station and monitor the metadata in real time. I elected to just show the URL for the artwork as text, because actually fetching the image wasn’t a key part of the API.

    I did then hit the problem that the demo app was iOS-only. I could run it on MacOS in emulation mode, but I didn’t have a fully-fledged Mac app to test with. (Nor did I have a tvOS one.) I tried a couple of variations on adding a new target to build the Mac app, but mostly I ended up breaking the parts I already had working, so I eventually abandoned that.

    I then started working step by step to include the library in ACRadioPlayer. FRadioPlayer came with iOS apps (UIKit and SwiftUI), a native Mac app, and a tvOS app. I carefully worked through getting the required OS versions to match in the ACWebSocketClient podspec, the ACRadioPlayer Podfile, and the ACRadioPlayer Xcode project. That was tedious but eventually successful.

    Current status

    I’ve now got the code properly pulled in, compatible with the apps, and visible to each of the apps. I’ll now need to pull in the actual code that uses it from the broken repo (the code was fine, it was just the support structures around it that weren’t) and get all the apps working. At that point I can get both of the libraries out on Cocoapods, and then start integrating with RadioSpiral.

    In general, this has been similar to a lot of projects I’ve worked on in languages complex enough to need an IDE (Java, Scala, and now Swift): the infrastructure involved in just getting the code to build was far more trouble to work with and maintain, and consumed far more time, than writing the code itself.

    Writing code in Perl or Python was perhaps less flashy, but it was a lot simpler: you wrote the code and ran it, and it ran or it didn’t, and if it didn’t, you ran it under the debugger (or used the tests, or worst case, added print statements) and fixed it. You didn’t have to worry about whether the package management system was working, or if something in the mysterious infrastructure underlying the applications was misconfigured or broken. Either you’d installed a module and told your code to include it, or you hadn’t. Even Go was a bit of a problem in this way; you had to be very careful about how you got all the code in place, and about verifying that it actually was in place.

    Overall, though, I’m pretty happy with Cocoapods and the support it has built in. Because FRadioPlayer was built using Cocoapods as its package management, I’m hoping that the process of integrating it into RadioSpiral won’t be too tough.

  • So what am I doing now? 2024 edition

    After my sudden layoff from ZipRecruiter in 2023, I decided that I needed to step back and think about things. The job market was (and, at the end of 2024, remains) abysmal. I did a couple of interviews, but Leetcode and I don’t get along, and I’m honestly not convinced that watching me attempt to code under utterly unrealistic time constraints is a good way to see if I can write good, maintainable code on a schedule.

    So after about 3 months of that, I decided that I would look at my options and see what I could do that wasn’t necessarily just another programming job.

    I’m currently doing a number of things, some of which are bringing in income, though not lots of it, and others which are moving other parts of my life ahead.

    • I auditioned for, and got, a job as one of the editors for the Miskatonic University Podcast. I’ve certainly been doing audio editing for a long time; seemed only reasonable to get paid for it. Podcast editing is a detail-oriented task, and those are the kind I enjoy. It’s a real pleasure to take the raw audio and produce a professional result. Dave and Bridgett are, of course, very professional themselves and make the job considerably easier than it could be, but the audio still needs that attention that cleans up the dead space, removes the pauses and um’s and er’s, tidily clips out those small flubs, and turns out something that is a pleasure to listen to. And I get to use my cartoon sound effects library!
    • I’ve edited a Call of Cthulhu scenario and from that have a repeat customer for whom I’m now editing a full game manual. This is exceptionally pleasant though intense work. I’ve been able to help with making the prose sing, clarifying, and prompting for how the author can make the product better. I think this is developmental editing plus line edits and maybe collaboration, and honestly I think I may be undercharging significantly, but I want to get a few successful edits into my portfolio before I start asking for more money.
    • I’m learning Swift 5 and SwiftUI. I had an all-hands-on-deck (okay, all-me-on-deck, I’m the only one working on it) moment last year with the RadioSpiral app – it had been working beautifully, and I had benignly neglected it for about 3 years…only to have Apple drop me a “hey, you quit updating this, so we’re gonna drop it if you don’t do an update in 90 days” email. So I had to bring it up to Swift 5 and Xcode 15 pronto. Some tamasha with “we don’t know if you’re allowed to stream this, prove it” from Apple Review was actually the hard part of getting it up, but I managed with a couple of weeks to spare. (A lot of that was needing to noodge Mike to get me a “yes, I run the station, yes this is official, yes, we have permission” letter to upload. Requesting help from Apple Review after repeated rejections helped a ton, because the written rejections couldn’t tell me exactly what the problem was, and revising the code blindly wasn’t going to fix it. I got a phone call, a clarification, and we were back in business.) Now looking at a new version using SwiftUI sometime soon.
    • Started working on replacing our old broadcast setup with Azuracast. We’ll probably switch over before the end of the year. Azuracast has a ton of stuff that we really want and will let us simplify operations significantly. The APIs will also let me pull in more info in the RadioSpiral app (notably the real current DJ and play history…up to a year!). We’re almost there.
    • Started working on several other Swift projects, details still under wraps until I’m done. At least one of the projects is a brand-new thing that I needed badly; I’m hoping that other people doing that same thing will realize they needed it too, but just didn’t think of it, and will buy a copy. Another is a niche thing which I think will be convenient to online writer’s critique groups, and one other is a special tide-clock app just for me that maybe others will enjoy too.
    • Because I’ve mostly forgone income this year, I’ll be able to roll over a chunk of money from the 401k to my Roth IRA. I’ll still need to pay taxes on it, but at least it will be now while my income is effectively zero and I can minimize the tax hit.

    Next year? Well, we’ll have to see.

    I did need some rest, badly; I was still fighting the combined MRSA/Eikenella corrodens infection (as featured on House; never have a disease featured on House) last year until about 3 months after my layoff, and wasn’t clear of it until then. Spending the sabbatical learning things and seeing about options other than coding was useful, but I certainly wouldn’t mind a real income again.

    I’m planning to look at new things in the new year, but for now, I’m trying to finish off this year’s projects, get our retirement money on a good footing…and then we’ll see. I think I’ll need to pick up something with a dependable, above-poverty-level paycheck, but what that will be I don’t know.

  • OCLP experience update: back to Ventura

    I’ve been running OCLP (the Open Core Legacy Patcher) on my 2012 MacBook Pro; recently I ran softwareupdate from the command line and accidentally upgraded to Sonoma from Ventura. The experience was definitely mixed.

    Sonoma handled day-to-day work mostly okay; Xcode 15.4 ran fine. Where I hit a problem, though, was when I tried running Azuracast under Docker: the machine ran insanely hot, so hot that it started throwing screen glitches. Rather than burn out my GPU, I elected to downgrade to Ventura. Here’s how that went. (Spoiler: a lot of toil.)

    Getting Ventura back

    The first step was to get Ventura back on the machine. This wasn’t particularly hard; I just needed to follow the standard OCLP procedure, but install to a new partition on my internal SSD. This cost me about another 200GB of space, but went well. I was able to install and have Ventura in good shape in a couple of hours.

    Retrieving the data from Sonoma

    Here’s where we started having problems.

    I had hoped that I’d be able to use Migration Assistant to bring the data back from Sonoma to Ventura, but no dice. Migration Assistant looked at the Sonoma disk, said, “nuh-uh, I ain’t downgrading” and refused to even consider mounting the disk. This meant I’d have to port everything back from that disk to the new one manually.

    My first try was to rsync it over. This failed because I no longer had enough space for two copies of the data. I deleted the data from the Ventura install and tried again. This time I created a service account with admin privileges and copied ~/Library over from Sonoma. That didn’t seem to work either; most notably, iCloud login was broken.

    Fixing the broken copy

    After thinking about it a while, I decided that the problem was probably permissions. From the service account, I wiped the Ventura copy of my account again, and copied in two steps. First, I copied ~/Library over, then chown‘ed it to my user on the Ventura disk. I logged in as myself, set up iCloud, and all was good. Now came the question of moving the data without filling the disk.

    I was able to use rsync (from the service account again), but this time I added --remove-source-files and --ignore-existing to the command. This copied only files I didn’t already have on Ventura from Sonoma, deleting them as they transferred. After this finished, I logged in to my Ventura account, iCloud was okay, and all my files were back.

    I then rebooted into the installer again, removed the Sonoma partitions, and was ready to go.

    I’m now currently running Azuracast under Docker, and having it ingest the RadioSpiral tracks from my iTunes library. It’s running warm, but not hot, and no more screen glitching. I’ll probably leave it on Ventura unless someone else running the same machine gets good performance from Sequoia.

    And I can always run Linux if all else fails.

  • Keep OCLP up to date, or recovering from an overenthusiastic software update

    I had the misfortune to have to learn this (and how to fix it), so I’m documenting it here for the next person who does this to themselves.

    We open on a Macbook Air 2012, updated to Sonoma 14.0 with OCLP. All is well. The machine runs…okay. It would probably be happier on Ventura, or Monterey, but because of reasons, I had updated it all the way to Sonoma so that there was no question of compatibility with the primary machine it was replacing while I was on travel.

    I used the machine on travel, and definitely found that it’s not quite up to the task on Sonoma; that’s something to deal with at a different time. The real issue was that I did not update OCLP to 1.5.0 as soon as that release was out. This meant that when Sonoma 14.5 became available and the machine auto-updated…it broke.

    Symptoms were that the trackpad and keyboard worked right up until login completed, and then did not work at all. Couldn’t run the browser to download the OCLP update, nada.

    Normally, I’d shrug, erase the disk, and reinstall, but this was a bit of an issue because I had files that I wanted to get off this device. (Yes, I know, I should have had backups, but I worked on the plane while I had no internet, and I hadn’t had the machine up long enough for Backblaze to finish a new incremental before the software update ran.)

    I tried a number of things: safe mode, doing a reinstall from Internet Recovery (we’d like to install El Capitan! sorry, your disk isn’t usable because I don’t recognize this filesystem)…and got nowhere. This was beginning to look bad.

    Then I remembered I had a Carbon Copy Cloner backup on one of my externals. Hm. I thought this was the Air’s Sonoma, but it seemed to be Mojave from my Macbook Pro 2012. Trying to boot it couldn’t be worse than what was going on, so I booted, held down Option, and there was “Mojave” in the picker list. I chose it, crossed my fingers…and it booted!

    I was able to download the latest OCLP (1.5.0), install it, run it, reinstall OCLP to the Air’s disk, and most importantly, reinstall the root patches. After that it was clear sailing: I shut down, restarted from the Air’s internal disk, and I was back in business on Sonoma 14.5.

    The primary, most important lesson: run OCLP periodically and make sure it’s up to date! If I had done that as soon as I got home, the 14.6 upgrade would have Just Happened and everything would have been fine.

    The secondary, also important lesson: disable automatic updates on your OCLP machines, and don’t update until you’ve verified that the most recent OCLP is installed and handles the version of the OS that you’ll be installing manually when you’re ready.

    The third lesson: after you have a working install of whatever OS with OCLP, make a bootable backup immediately. If I’d had that to hand, it would have taken 15-20 minutes to fix the issue. As it was, I spent almost a full afternoon trying to fix the installation before trying the Mojave backup that wasn’t even for the affected machine. (I think I used up my luck for a couple of months on that one.)

  • OCLP experiences on a 2012 MacBook Pro

    TL;DR

    OCLP works fine, if you don’t forget your damn firmware password. If you did, persist even if the Apple Store tells you your machine is “obsolete”. Mobile Kangaroo San Jose rules. Oakridge Apple store, not so much.

    The history

    We’ve owned a 2012 MacBook Pro 15″ since about 2014, when Shymala finally outgrew her 2009 MacBook Air and needed a faster, bigger, and better machine. We chose the 2012 MBP because everything was still upgradable (memory and disk). She used it for a good six or seven years before she wanted to upgrade to something lighter and (most importantly) faster — OS and application upgrades had vastly slowed it down, and it ran hot most of the time.

    First upgrade

    The machine sat around for a couple years until I got let go from WhiteHat and I realized I had no personal computer at all. (Resulting in the loss of a lot of my personal files, sadly, because I did not learn the canonical lesson: a work computer is not “yours”.) So I got the machine out of storage, and yeah, it was slow, and not up to what I wanted to do with it. I upgraded the memory to the theoretical max (16GB, which it supports, just not officially), and swapped out the drive for a Crucial 2 TB SSD.

    It was like a brand new machine! It ran the then-current OS perfectly. It did run a bit hot sometimes, but it was fast enough to compete with her old machine, and nearly as good an experience as the new laptop from ZipRecruiter (also an Intel machine in the early days of my tenure).

    We had added a firmware lock to the machine because there was some concern about it getting stolen while Shymala was living in Brooklyn, and we wrote it down. Or so we thought. OS updates were installing, everything was fine up to Catalina. The machine was left behind on updates past that, but generally this wasn’t an issue, as it was still doing what was needed.

    The first stirrings of trouble were when Big Sur came out, and the new version of Xcode required it. This made the machine less useful by quite a bit. I could still use it for streaming and music production, and it ran Second Life fine; Photos worked, Acorn worked, so basically it was still great for everything but iOS development. I didn’t really need to do any development at the time, as the RadioSpiral app was working and stable, so I left it.

    Come 2023, I was laid off from ZipRecruiter. They were nice enough to let us all keep our laptops, and in the interim I’d gotten an M1 upgrade, so I was okay for staying up to date with the OS and Xcode.

    The scramble and the block

    This came in handy in October 2023, when I got a note from Apple that said, essentially, “dude, you’re not updating your app, and if you don’t do it now, we’re going to remove it. You have 90 days.”

    And I hadn’t updated the app since Swift 3. Oops. I spent a couple of weeks catching the app up to date, and in the process I realized that I now had only one machine I could do the work on. I needed to use Xcode 15, whose minimum OS was Ventura, well past Catalina. I was okay, because I had one machine that could run Xcode 15, but I thought I’d better see if I could come up with a backup; if something happened to the main machine, I was going to be SOL.

    Fortunately, Open Core Legacy Patcher was now available. We’d used it once successfully to update a 2015 Air all the way to Sonoma — it ran Word and OBS beautifully, and that’s what we needed it for — but I didn’t want to waste the disk space it’d take to run Xcode on that machine (it only has a 256 GB SSD; that is upgradeable on this model, but I wasn’t feeling like doing the delicate surgery necessary, and it was really supposed to be dedicated to Shymala’s work while on travel). I am not a speedy iOS developer, and sharing a laptop is never a great experience.

    So now I needed to unlock the firmware on the 2012 Mac. At this point I discovered that I could not remember the password, and that all the records of what I thought it was were wrong.

    I go to the Apple website, and check with Apple support on what my options are. They tell me I need the original receipt. Well. It’s 12 years later and multiple moves, and I definitely do not have a copy. Fortunately my tier-1 Apple support rep was able to push this up the chain and managed to find the purchase order in the archives. (Side note: Apple level 1 support reps — at least the ones on chat — rule.)

    Good, we’ve crossed that hurdle. I set up an appointment at the Oakridge Apple Store — they’re in the neighborhood, so they’re by far the easiest to work with — and took the machine in. The receipt was fine, and the tech tried a couple of times to run the unlock software, but couldn’t get it to work. He declared that the machine was obsolete, and that Apple couldn’t help.

    Well. That was a bummer. I went home and put the machine aside for a while. A couple of months later, when it was clear I’d be traveling to Malaysia, I came back and said to myself, “okay, level 1 support was sure this would work. I should try again, but somewhere else.” I chatted with level 1 again, and my rep was enthusiastic about getting it unlocked. She scheduled a call for level 2 to call me back…and I missed the call because of another meeting. No problem, I thought, I’ll call back.

    So I call back. Level 1 phone support is not the same as level 1 chat support. I’m sure the rep was doing her job as she was supposed to, but essentially she blocked me from level 2, told me my machine was obsolete, and basically to buzz off and stop wasting her time.

    This seemed like a major setback, but I had another option up my sleeve.

    A little bit previously, we’d had Shymala’s LED Cinema Display fail to come back on after a power surge, despite it being post the surge protector. We’d taken that to Oakridge, and they declared it dead, and that it’d have to be replaced. We decided to try an indie shop just to see if they could do something the Apple Store couldn’t. San Jose Mobile Kangaroo was the closest non-Apple store, and we figured that if they could fix it it’d definitely be better than spending $4K to replace the monitor, or take a chance on someone else’s used one. Their techs were able to get it reset and working again just fine in less than a day, and it didn’t seem like they’d had any trouble at all.

    So the firmware reset seemed like something to try them for. Worst case they couldn’t do it either, and I wouldn’t be in any worse shape. Took it in, and by golly, they were able to reset it right after Apple gave them the OK. (I suspect it was because they used Ethernet directly instead of via a USB dongle, which was how the Oakridge store tried it.) At any rate, I had a fixed machine. It did run me $125, but that’s a ton cheaper than buying another machine that could run newer OSes.

    OCLP experience

    OCLP was not seamless on the 2012 machine. On the 2015 machine, it was dead easy: download the installer for the OS, run OCLP to build the installer USB, boot from the installer, install, machine reboots itself a few times, done.

    On the 2012 Mac, it was…bumpy.

    The USB stick built fine, but when I booted, I ended up at the recovery screen. Tried in safe mode. Recovery screen. I tried a couple of other things and ended up crashing my Catalina install to the point that I’d broken the boot record on the HD and had to use Internet Recovery to reinstall Catalina.

    Okay, well. Not great. Got the machine back up and tried again, this time with Big Sur, as I thought maybe I’d tried to go too far too fast…still back at the recovery screen. Well, what the hell. Let’s try recovery. Pick an account, password…and “Install Big Sur from USB”. Well, shit. I could have tried this before! Okay. Chose that option — and Big Sur starts installing, and succeeds! Woo hoo!

    Conclusion

    I’ve now rebuilt the Ventura installer and followed the instructions, going through recovery again, and Ventura is now installing on the 2012 Mac. I’m going to finish up, port everything from the M1 Mac over to the 2012 one, verify it’s all working, and then I can delete the old Catalina partitions and just use Ventura on the new machine. [Note: while writing this, we’re on the third reboot after the initial install, all seems to be going okay. Fourth boot while writing that sentence, but I’m pretty optimistic]

    I probably could have gone all the way to Sonoma, but I’m going to stay backlevel for now. My strategy on the 2012 Mac is going to be “update as little as possible other than security fixes” unless something pushes me forward (most likely Xcode).

    I’ll have my backup machine, and I’ll feel safe taking the M1 with me on travel — and if at some later point I can’t upgrade the Intel Mac further, it’ll work fine as a Linux or BSD machine now that it’s unlocked.

    Also: if I do a firmware lock again, that goes straight into 1Password, which would have prevented 90% of all these gyrations in the first place. $125 is a bit expensive to learn that lesson!

  • Leveraging an outage to build community and consensus

    We had our first extended outage at RadioSpiral this weekend, and I’m writing about it here to point out how a production incident can help bring a team together not only technically, but as a group.

    The timeline

    On Sunday evening, about an hour before Tony Gerber’s Sunday show, RadioSpiral went offline. Normally, the AirTime installation handles playing tracks from the station library when there’s no show, and it played a track…and then stopped. Dead air.

    The station has been growing; we’ve added two new DJs, doubling the number of folks who are familiar with servers, Linux, etc. Everyone who was available (pretty much everyone but our primary sysadmin, who set everything up and who is in the UK) jumped in to try to see what was up. We were able to log in to AirTime and see that it was offline, but not why; we tried restarting the streaming service, and the server itself, but couldn’t get back online.

    We did figure out that we could connect to the master streaming port so that Tony could do his show, but after that, we were off the air for almost 12 hours, until our primary sysadmin was up, awake, and had finished his work day.

    A couple of hours of investigation on his part finally determined that LetsEncrypt had added a RewriteRule to the Airtime configuration that forced all URLs to HTTPS; unfortunately, Airtime needs plain HTTP for its internal APIs, and that switchover broke it. Removing the rule and restarting the server got us back online, and our very patient and faithful listeners trickled back in over the day.

    Now what?

    While we’d not been able to diagnose and fix the problem, we had been chatting in the staff channel on the RadioSpiral Discord server, and considering the larger issues.

    RadioSpiral is expected to be up 24/7, but we’re really running it more like a hobby than a business. This is reasonable, because it’s definitely not making any of us money, at least not directly. (Things like sales of albums by our DJs, etc., are their business and not part of the station’s remit.) This means that we can have situations like this one, where the station could be offline for an extended amount of time without recourse.

    Secondarily, RadioSpiral is small. We have three folks who are the core of actual station operations, and their contributions are very much siloed. If something should happen to any one of the three of us, it would currently be a scramble to replace that person, and could possibly end up with an extended loss of that function, whether broadcast operations, the website, or community outreach and the app.

    So we started looking at this situation, and figuring out who currently owned what, and how we could start fixing the single points of failure:

    • Station operations are on an ancient Linux release
    • We’re running an unsupported and unmaintained version of Airtime. It can’t even properly reset passwords, a major problem in an outage if someone can’t get in.
    • The MacOS/iOS app is handled by one developer; if that person becomes unavailable, the app could end up deleted from the store if it’s not maintained.
    • The website is being managed by one person, and if that person becomes unavailable…well, the site will probably be fine until the next time the hosting bill isn’t paid, but if there were any issues, we’d be SOL.
    • We do have documentation, but we don’t have playbooks or process for problem solving.
    • We don’t have anywhere that is a gathering point when there’s a problem.
    • We don’t have project tracking so we can know who’s doing what, who their backup or backups are, and where things are in process.
    • We don’t have an easily-maintained central repository of documentation.

    What we’re doing

    I took point on starting to get this all organized. Fixing all of the things above is going to take time and some sustained effort to accomplish, and we’re going to want to make sure that we have everything properly set up so that we minimize the number of failure points. Having everyone onboard is critical.

    • We’re going to move operations to a newer, faster, and cheaper server running a current LTS Ubuntu.
    • We’re going to upgrade from the old unsupported AirTime to the community-supported LibreTime.
    • We’re figuring out who could get up to speed on MacOS/iOS development and be ready to take over the app if something should happen that I couldn’t continue maintaining it. At the moment, we’re looking at setting up a process to bump the build number, rebuild with the most current Xcode, and re-release every six months or so to keep the app refreshed. Long-term we’ll need a second developer (at least) who can build and release the app, and hopefully maintain it.
    • We haven’t yet discussed what to do about the website; it is already a managed WordPress installation, so it should be possible to add one or more additional maintainers.
    • We are going to need to collect the docs we have somewhere that they can be maintained more easily. This could be in a shared set of Google docs, or a wiki; we’re currently leaning toward a wiki.
    • We need project tracking; there’s no need for a full-up ticketing process, at least yet. We think that Trello should do well enough for us.

    We have set up some new Discord channels to keep this conversation open: #production-incidents, to make tracking any new problems easier, and #the-great-migration, to keep channels open as we move forward in the migration to our new setup.

    Everyone is on board and enthusiastic about getting things in better shape, which is the best one could want. It looks good for RadioSpiral’s future. Admittedly we should have done this before a failure, but we’re getting it in gear, and that’s better than ignoring it!

  • Re-upping WebWebXNG

    So it’s been a minute since I did any serious work on WebWebXNG.

    Initially, I decided that the easiest way forward was “translate this old CGI code into modern web code”. And up to a point, that was a good way to go. But I got to the point where I was trying to make the rubber meet the road, and the intertwining of templating and code in the old version was making me stall out.

    I’ve had a breather, working on other projects, and the world has moved on and brought me some new things. One in particular is htmx.

    The htmx library works a whole lot more like the old CGI world did, just better. Any element can trigger a request, all of the HTTP verbs are available, and interaction happens by exchanging chunks of HTML; you don’t convert to JSON and then convert back to HTML. This kind of logic definitely fits better with the concept of WebWebX as-it-was.

    Also, Perl project management has definitely changed — and improved. I did like Dist::Zilla, but it’s definitely a heavyweight solution. In the meantime, Minilla has appeared, and it fits incredibly well into the model I want to use to manage the code:

    • Your module is Pure Perl, and files are stored in lib.
    • Your executable file is in script directory, if there is one.
    • Your dist sharedirs are in share, if you have any.
    • Your module is maintained with Git and git ls-files matches with what you will release.
    • Your module has a static list of prerequisites that can be described in a cpanfile.
    • Your module has a Changes file.
    • You want to install via cpanm.

    I do have a working page storage engine, which is good, but the interaction engine is definitely nowhere. I’m coming back to the project with fresh eyes, and I’m going to redesign it top-to-bottom to use htmx for all the backend interaction.

    Looking forward to this, and the next iteration of WebWebXNG starts now.

  • “Projects in Flight”

    First a confession. I tend to have enthusiasms, work hard on them for a while, and then have something else interesting come across my radar, which will then become my new enthusiasm. This tends to lead to a lot of half-completed things, which I then feel bad about and avoid, causing me to not get anything done, making me feel even worse.

    I’ve decided that I’m going to try a different strategy: “projects in flight”. I’m embracing the fact that I have enthusiasms, and lots of them. I contain multitudes. And this is good.

    So instead of feeling bad that I have a dozen projects that aren’t getting anywhere, I’m going to acknowledge that I have a lot of interests, and more of them than I have time to do. So some of them don’t pan out. Some of them get partway through, and then I discover that the problem is better solved a different way, or that the thing I want to do isn’t actually as good as I thought, or whatever. I am allowed to fail.

    Think about it this way: for every Google or Facebook, there are a hundred startups that try to do something, get partway in, and fail. Maybe the idea wasn’t so great. Maybe the resources to do the thing they wanted to do just aren’t feasible, or available, or affordable. Maybe they just can’t get someone to give them the seed money to try.

    All these projects fail. And the entrepreneurs don’t feel bad about themselves if they do. They gave it the shot they could give it, with the effort and resources they had at hand, and it didn’t work out – and they move on to their next project.

    So I’ve decided to embrace the entrepreneurial mindset for my personal projects. I’m keeping a list of everything I’m doing, from the trivial to the complex, and allowing myself to be happy that I am creative and multifaceted; if something doesn’t get done, it stays on the list as something to come back to, unless I decide it’s not worth coming back to…and then it goes into the “idea pool”. Maybe it’ll trigger something else later. Maybe it won’t. It’s fine.

    It hasn’t failed. I haven’t failed. I’ve just discovered that the way I approached it this time didn’t succeed. It was my AltaVista, or Ask Jeeves, or Yahoo! Search instead of my Google. Maybe on another look later, with more information, more experience, more time, and more energy, it will succeed.

    But I don’t have to feel bad about it anymore. I can be proud and happy that I’m trying things and doing things. Yes, I do want to finish things too, but I can stop looking at the unfinished things and thinking that I’m failing because they’re not all done and perfect.

    So: I have a dozen or so projects in flight, at various levels of done, and I’m happy that I have interesting things to do!

  • JSON, Codable, and an illustration of ChatGPT’s shortcomings

    A little context: I’m updating the RadioSpiral app to use the (very nice) Radio Station Pro API that gives me access to useful stuff like the station calendar, the current show, etc. Like any modern API, it returns its data in JSON, so to use it in Swift, I need to write the appropriate Codable structs for it; essentially, every field has to be a type Swift can natively decode, or itself a Codable struct.

    I spent some time trying to get the structs right (the API delivers something that makes this rough, see below), and after a few tries that weren’t working, I said, “this is dumb, stupid rote work – obviously a job for ChatGPT.”

    So I told it “I have some JSON, and I need the Codable Swift structs to parse it.” The first pass was pretty good; it gave me the structs it thought were right and some code to parse with – and it didn’t work. The structs looked like they matched: the fields were all there, and the types were right, but the parse just failed.

    keyNotFound(CodingKeys(stringValue: "currentShow", intValue: nil), Swift.DecodingError.Context(codingPath: [CodingKeys(stringValue: "broadcast", intValue: nil)], debugDescription: "No value associated with key CodingKeys(stringValue: \"currentShow\", intValue: nil) (\"currentShow\").", underlyingError: nil))

    Just so you can be on the same page, here’s how that JSON looks, at least the start of it:

    {
    	"broadcast": {
    		"current_show": {
    			"ID": 30961,
    			"day": "Wednesday",
    			"date": "2023-12-27",
    			"start": "10:00",
    			"end": "12:00",
    			"encore": false,
    			"split": false,
    			"override": false,
    			"id": "11DuWtTE",
    			"show": {...

    I finally figured out that Swift, unlike Go, expects field names that exactly match the keys in the incoming JSON, at least unless you add explicit mapping code (more on that below). So if the JSON looks like {broadcast: {current_show... then the struct modeling the contents of the broadcast field had better have a field named current_show, exactly matching the JSON. (Go’s JSON parser uses struct tags to map JSON keys onto struct fields, so naming the field CurrentShow is fine, as long as the tag says its value comes from current_show. That would look something like this:

    type Broadcast struct {
        CurrentShow CurrentShow `json:"current_show"` // exported field; the tag maps the JSON key onto it
        ...
    }
    
    type CurrentShow struct {
       ... 
    There’s no ambiguity or translation needed, because the code explicitly tells you what field in the struct maps to what field in the JSON. (I suppose you could completely rename everything to arbitrary unrelated names in a Go JSON parse, but from a software engineering POV, that’s just asking for trouble.)

    Fascinatingly, ChatGPT sort of knows what’s wrong, but it can’t use that information to fix the mistake! “I apologize for the oversight. It seems that the actual key in your JSON is “current_show” instead of “currentShow”. Let me provide you with the corrected Swift code:”. It then provides the exact same wrong code again!

    struct Broadcast: Codable {
        let currentShow: BroadcastShow
        let nextShow: BroadcastShow
        let currentPlaylist: Bool
        let nowPlaying: NowPlaying
        let instance: Int
    }

    The right code is

    struct Broadcast: Codable {
        let current_show: BroadcastShow // exact match to the JSON key
        let next_show: BroadcastShow    // and so on...
        let current_playlist: Bool
        let now_playing: NowPlaying
        let instance: Int
    }

    When I went through manually and changed all the camel-case names to snake-case, it parsed just fine. (I suppose I could have just asked ChatGPT to make that correction, but after it gets something wrong that it “should” get right, I tend to make the changes myself to be sure I understood it better than the LLM.)
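
    (For completeness: Swift does have an escape hatch here, which I didn’t use at the time. A CodingKeys enum, or JSONDecoder’s .convertFromSnakeCase strategy, lets you keep camel-case property names and map them to the snake_case JSON keys. A sketch:)

    // Keep Swift-style names and map them explicitly to the JSON keys.
    struct Broadcast: Codable {
        let currentShow: BroadcastShow
        let nextShow: BroadcastShow
        let currentPlaylist: Bool
        let nowPlaying: NowPlaying
        let instance: Int

        enum CodingKeys: String, CodingKey {
            case currentShow = "current_show"
            case nextShow = "next_show"
            case currentPlaylist = "current_playlist"
            case nowPlaying = "now_playing"
            case instance
        }
    }

    // Or, without touching the structs at all:
    // let decoder = JSONDecoder()
    // decoder.keyDecodingStrategy = .convertFromSnakeCase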

    Yet another illustration that ChatGPT really does not know anything. It’s just spitting out the most likely-looking answer, and a lot of the time it’s close enough. This time it wasn’t.

    On the rough stuff from the API: some fields are either boolean false (“nothing here”) or a struct. Because Swift is a strongly-typed language, this has to be dealt with via an enum and more complex parsing. At the moment, I can get away with failing the parse and using a default value if this happens, but longer-term, the parsing code should use enums for this. If there are multiple fields that do this it may end up being a bit of a combinatorial explosion to try to handle all the cases, but I’ll burn that bridge when I come to it.
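
    The enum version would look something like this (a sketch of the pattern, not shipped code; the type name is made up):

    // A field that arrives either as boolean false ("nothing here") or as a
    // full object. Try the Bool first, then fall back to the struct.
    enum ShowOrNothing: Codable {
        case nothing
        case show(BroadcastShow)

        init(from decoder: Decoder) throws {
            let container = try decoder.singleValueContainer()
            if let flag = try? container.decode(Bool.self), flag == false {
                self = .nothing
            } else {
                self = .show(try container.decode(BroadcastShow.self))
            }
        }

        func encode(to encoder: Encoder) throws {
            var container = encoder.singleValueContainer()
            switch self {
            case .nothing:
                try container.encode(false)
            case .show(let show):
                try container.encode(show)
            }
        }
    }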