Category: Azuracast

  • How not to repair your Azuracast

    Recently, we’ve been working on trying to build some tooling to make our Azuracast experience for our DJs and listeners a little better.

    Shooting myself in the foot: background

    We’ve been trying to work around a longstanding bug: when a new streamer connects to Azuracast, Azuracast’s Liquidsoap processing picks up the last thing the previous streamer sent as now-playing metadata, and sets it as the metadata for the new streamer.

    This makes a lot of sense if you’re coding for the situation where a streamer loses connectivity and then resumes; generally this will be short, so preserving the now-playing metadata makes the best sense.

    However, we have a rotating set of DJs who each stream for a relatively short time – our standard show is 2 hours long. So this means that if DJ One signs off, and DJ Two starts streaming without sending new metadata after they’ve connected, then DJ Two’s set seems to be a continuation of DJ One’s signoff. This is confusing, and for streamers who prefer to simply connect and stream, means that their metadata will be “wrong” for a considerable part of the show.

    Azuracast’s now-playing APIs say that we should be able to send the stream metadata any time with a call to the API:

    curl -X POST \
         --location 'https:///api/station/1/nowplaying/update' \
         --header 'Content-Type: application/json' \
         --header 'X-API-Key: xxxx:xxxx' \
         --data '{ "title" : "Live Broadcast", "artist" : ""}'

    The only problem is that on our installation running Azuracast 0.22.1, this returns a 200 and does absolutely nothing. Looking at the logs inside Azuracast, the request is being rejected because a streamer is active. I opened a bug for this, and the recommended solution was to upgrade to the current stable release, 0.23.1.

    Round 1: Upgrading Azuracast

    2025-10-19, 9 pm: I’d upgraded Azuracast before and it had been pretty much completely seamless: put up a notice, run the Azuracast updater, broadcasting stops a second, and then the new version resumes right where it left off.

    Super easy, barely an inconvenience.

    After our 7 pm show on Sunday, I noted we’d taken a nightly automated backup of our current 0.22.1 installation, and then went ahead and upgraded: broadcasting stopped a second, the UI reloaded. I had to log back in, and we were still playing the same track. Fantastic! All according to plan. I had not taken a full backup of my installation because we all know Azuracast always updates just fine.

    This was critical error #1.

    2025-10-20, 7:15 pm: The next evening, however, I tried to stream my show. All went well until about an hour and a half in, and suddenly the audio started to stutter and glitch. Badly. I took a look at the Liquidsoap logs on Azuracast and they were not pretty.

    2025/10/21 19:18:55 [clock.local_1:2] Latency is too high: we must catchup 54.91 seconds! Check if your system can process your stream fast enough (CPU usage, disk access, etc) or if your stream should be self-sync (can happen when using `input.ffmpeg`). Refer to the latency control section of the documentation for more info.
    
    ...
    
    2025/10/21 19:18:56 [clock.local_1:2] Latency is too high: we must catchup 54.97 seconds! Check if your system can process your stream fast enough (CPU usage, disk access, etc) or if your stream should be self-sync (can happen when using `input.ffmpeg`). Refer to the latency control section of the documentation for more info.
    
    ...
    
    2025/10/21 19:18:57 [clock.local_1:2] Latency is too high: we must catchup 55.03 seconds! Check if your system can process your stream fast enough (CPU usage, disk access, etc) or if your stream should be self-sync (can happen when using `input.ffmpeg`). Refer to the latency control section of the documentation for more info.
    2025/10/21 19:18:57 [input_streamer:2] Generator max buffered length exceeded (441000 < 441180)! Dropping content..

    And so on. You can see that Liquidsoap is having a worse and worse time trying to consume my stream and send it on. I eventually stopped my show early; Liquidsoap did not recover as I expected it to, so I restarted Azuracast, and watched as the AutoDJ happily streamed away, and resolved to look at it the next day.

    No reports of problems, so I assumed it was a fluke.

    2025-10-21, 7:30 pm: At the next show, the following evening, it happened again, and it was just as bad. The Tuesday DJ also cut his show short.

    We had a very, very broken Azuracast, and there was an all-day streaming concert planned for Saturday, four days away.

    Round 2: rollback did not roll

    2025-10-21, 8pm: I started working right after the cancelled show, reasoning that we were very much under time pressure, and that multiple restarts/crashes/reinstalls during our station's primary listener hours would be a bad idea.

    I decided to try a rollback to 0.22.1, where we’d been streaming just fine. Unfortunately, I was lacking a critical piece of information.

    When you run Azuracast’s ./docker.sh install, you must “pin” the release level you want in azuracast.env if you don’t want the most recent version. This is not documented in big bold DO THIS OR YOU WILL BE COMPLETELY SCREWED letters in the Azuracast install docs, because of course you always want the most recent stable version, why wouldn’t you?

    So I embarked on getting the server fixed with the help of ChatGPT, my faithful (but unfortunately clueless about pinning versions) companion. This was critical error #2: I had picked the wrong tool for the job because it gave me more answers for free.

    I went through multiple iterations of “I’ve reinstalled the server and it’s upgraded itself to 0.23.1 again”. I tried multiple ways to just install 0.22.1 and leave it there.

    2025-10-21, 10:02 pm: I downloaded the code at the 0.22.1 tag and tried to run it in development mode and reinstall my automatic backup. It upgraded itself to 0.23.1.

    2025-10-21, 10:40pm: I tried building all the Docker images myself at 0.22.1 and restoring the backup. It upgraded itself.

    I tried downloading the Docker images, restoring, and just running them. It upgraded itself.

    2025-10-21, 11:55 pm: I managed to dig up a full backup of our 0.22.0 install, which was around a year old. This wasn't ideal, but it was better than nothing at all. I restored it, then tried to install 0.22.1 from source. It chugged for a long time doing the restore…and upgraded itself to 0.23.1.

    2025-10-22, 12:24 am: I then made critical error #3: I concluded that the 0.23.1 database on the database Docker volume was the problem, and that I needed to deinstall Azuracast and retry the 0.22.1 install, following the documented deinstall/reinstall process. This was a bad idea, because it deleted the Docker volumes from my Azuracast install, and everything on them. So now I'd lost all my station media, all my podcasts, and all my playlists. I was very hosed. [If I had not made critical error #1 (skipping the full backup), critical error #3 would not have been a problem.]

    2025-10-22, 1:34 am: A painstaking reload of the data from the old backup. It upgraded itself again.

    2025-10-22, 3:22 am: Tried again, more carefully. Restore. Wait. Watch it upgrade itself again.

    2025-10-22, 4:41 am: Nothing I could think of, or that ChatGPT could think of, could fix it. We were down, hard.

    2025-10-22, 5:21 am: The rest of the team was starting to come online. Everything was broken and I was exhausted. They chased me off to bed, and I tried to sleep.

    The rest of the team comes through

    2025-10-22, 6 am: The rest of the team is up and online. ʞu¡0ɹʞS posts a neutral “we’re down for maintenance” banner on radiospiral.net. Southwind Niehaus suggests that she can provide an alternate Azuracast server for Saturday at 0.21.0, and the team pitches in to get that server set up to be a backup.

    2025-10-22, 10 am: Mr. Spiral approves the switchover to Southwind’s server, and offers to send Gypsy Witch the tracks she needs to do her show. (She uses downloads from Azuracast to fill out her playlists.)

    2025-10-22, 10:19 am: We start passing out the alternate server URL to Second Life denizens. We decide not to change DNS to Southwind's server because of propagation times.

    2025-10-22, 10:27 am: plans to populate the substitute server proceed apace.

    2025-10-22, 12:16 pm: Radiospiral.net web player repointed to the substitute server, but metadata is not working. Phone alerts woke me up enough that I was able to supply the right now-playing metadata URL to ʞu¡0ɹʞS.

    2025-10-22, 12:29 pm: Radiospiral.net is switched over. I update the iOS radio app’s config data on GitHub and confirm we have music but no metadata in the app; the metadata server URL was hardcoded in the released version of the app. I make a note to push out a new version with the metadata in the config file.

    2025-10-22, 1:09 pm: I am able to find the version on the App Store and make the fix.

    2025-10-22, 1:36 pm: Test version of the app up and available to beta testers.

    2025-10-22, 2:08 pm: The substitute stream is working in all the correct places in Second Life as well. We close the PI.

    I continued work on the iOS app; the real blocker was getting the screenshots right! Once that was done, I submitted the new version of the app on 10-25 and had an approval and the new version on the App Store by 10-26. Everything was working well with the substitute server, the Saturday show was successful, Southwind’s server handled the load perfectly, and kept going just fine, streaming shows and AutoDJing, while I resumed work on restoring 0.22.1.

    Actually fixing it, day 1

    I had used Claude to help verify the fixes I made to the iOS app, and it worked so much better than ChatGPT on code generation that I went ahead and subscribed at the $20/month level.

    I brought up Claude on the Azuracast server, showed it the checked-out source code repo, and asked for help solving the problem of getting to and staying on 0.22.1.

    Claude immediately told me about AZURACAST_VERSION, version pinning, and azuracast.env. [Looking back over the timeline, I wasted somewhere around 14.5 hours not knowing about that.]

    We set the AZURACAST_VERSION=0.22.1 in azuracast.env.
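    For the record, the pin is a one-line entry in azuracast.env (the value here is the 0.22.1 target from this saga; the comment is mine):

```shell
# azuracast.env
# Pin the release so updates don't silently move you to the newest version
AZURACAST_VERSION=0.22.1
```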

    Claude suggested a two-stage strategy to restore the nightly from just before the failed upgrade, and the old full backup.

    First, I checked out Azuracast again at the 0.22.1 tag and let it install itself. Claude found and fixed a couple issues that were keeping it from building.

    Once that was up and I had somewhere I could restore the files to, I first restored the old full backup. This got me back the media files, but not the playlists, stations, or podcasts. (It would turn out that the podcasts weren't in that backup at all because we hadn't started hosting them on Azuracast yet when it was taken.) That took about two hours.

    We then restored the nightly over the old backup to get the station settings back. That took only a minute, and restored the current configs and database (including playlists). I had to reset my Azuracast login password (the azuracast:account:reset-password CLI command did that).

    Because the database and the media library were not in sync, I had a lot of unassigned tracks in the library that I was going to need to get into proper playlists.

    Claude helped me build SQL queries and a small PHP program to categorize the tracks by duration

    • < 2 minutes, which are often noisy and/or disruptive
    • 2 minutes to 30 minutes (our standard AutoDJ tracks)
    • > 30 minutes, which get played on "long-play Sundays"

    and sort them into the existing playlists where they were supposed to go. The few remaining < 2-minute tracks were listened to and filed appropriately. This in total took about an hour, and the server was back in good shape.
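    The duration test itself is tiny; as a sketch (the bucket names and boundary handling are my choices, assuming standard rotation runs up to the 30-minute long-play line):

```python
def playlist_bucket(duration_secs: float) -> str:
    """Sort a track into one of the three duration buckets above."""
    minutes = duration_secs / 60
    if minutes < 2:
        return "short"     # often noisy/disruptive: review by hand
    if minutes <= 30:
        return "autodj"    # standard rotation
    return "longplay"      # "long-play Sundays"
```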

    Day 2: Future-Proofing (~2 hours)

    We discussed what we could do to stop testing in prod. Claude suggested a blue-green deployment strategy — one known-good server at all times, so we could flip from one to the other after doing testing.

    We created /var/azuracast-staging as a place to build the second server, and configured it to use ports 8000/8443 for its web interface, with the station ports moved up into the 10xxx range.

    The media storage is shared between prod and staging; staging has read-only access. (This is only sort of useful: it doesn't let us move media around on the staging server, and I may switch to giving each instance its own volumes that I can swap to whichever one is currently "production".)

    There's now a DISASTER-RECOVERY.md, a complete disaster recovery guide covering all the scenarios, and an azuracast-upgrade-strategies.md that documents the blue-green deployment.

    Lessons learned

    If you're dealing with a PI in an area where you're not 100% a subject-matter expert, it's critical to have one available, whether human or LLM. I chose the wrong LLM: as soon as I had Claude look at the configuration and told it I wanted to be running at 0.22.1 and stay there, it told me about pinning the version in azuracast.env.

    Testing in production, which is what I ended up doing with the upgrade to 0.23.1, was a bad idea. I worked with Claude to come up with a setup allowing me to run a staging Azuracast server in parallel with the production one. This lets me try things on a server that's okay to break. It's probably a good idea to have a dev one too, but I'll come back to that later.

    Carefully integrating full backups into the upgrade process at the correct points is critical to being able to roll back as quickly as possible. (This is carefully documented in the disaster recovery document. The recommended number of backups uses around half a terabyte of storage, but it carefully checkpoints everything along the way.)

    It’s still possible to be down for an hour or more, but not for the multiple days that resolving this took this time.

  • Azuracast stream monitoring: the Greedy Shark

    My ear is open like a greedy shark,
    To catch the tunings of a voice divine.

    • John Keats, Woman! when I behold thee flippant, vain

    Why the Shark?

    As one of the people managing RadioSpiral, I’m the one who’s in charge of the actual audio streaming server. One of our goals is to be up and running with an active audio stream 24/7. Most of the time, this is handled by an AutoDJ bot run by the Azuracast server.

    We also have live DJs/performers who stream music to the station; when they connect, this pre-empts the AutoDJ.

    Most of the time this all works well: the AutoDJ keeps tunes spinning, the DJs cut in to do their shows, and the AutoDJ takes over again when they disconnect.

    Every once in a while, though, things don’t go as planned: network outages, DJ tech troubles, and the like. And when those happen, we go off the air.

    Perhaps unsurprisingly, there are loads of tools for monitoring computer processes — is the server up, does it serve web pages, are there any errors — but almost nothing for monitoring audio.

    So it was necessary to create one.

    What the Shark does

    The Shark exists to avoid the worst problem a radio station can have: dead air.

    It’s a Python-based monitoring system that:

    1. Actually listens to your stream, using ffmpeg to capture audio samples and analyze them for silence
    2. Knows the difference between "the AutoDJ has stopped" and "a DJ is connected but not sending audio", using the AzuraCast APIs to check server status and determine what's going on
    3. Alerts our tech team via notifications in a private channel if the AutoDJ drops
    4. Takes action itself, forcing streamers off if they stop sending audio by suspending them via the AzuraCast API
    5. Has an associated Discord bot that lets the team check status and bring suspended streamers back without requiring a login to the AzuraCast server

    How It Works

    To keep all this straight, I decided to use a state machine with three modes:

    • NO_STREAMER (no DJ connected): after 2 minutes of silence, send a "Silence detected!" message to a private Discord channel
    • STREAMER_ACTIVE (DJ connected but silent): after 8 minutes of dead air, send a "suspension imminent" warning to the private channel; after two more minutes, auto-suspend the streamer
    • GRACE_PERIOD (DJ is silent, but has indicated they know it): monitoring paused for 15 minutes, preventing the DJ from getting booted while they're sweating to fix their tech issue

    This gives us some basic logic with room to expand it if we have new cases later.
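    A minimal Python sketch of the per-check decision logic those states imply (the names, the minutes-as-consecutive-checks accounting, and the action strings are mine; the real Shark also drives the AzuraCast and Discord APIs):

```python
from enum import Enum, auto
from typing import Optional

class State(Enum):
    NO_STREAMER = auto()      # AutoDJ should be playing
    STREAMER_ACTIVE = auto()  # a live DJ is connected
    GRACE_PERIOD = auto()     # DJ told us they're working on it

# Thresholds from the post, counted in consecutive 60-second checks
NO_STREAMER_ALERT = 2    # 2 minutes of silence: alert the channel
STREAMER_WARN = 8        # 8 minutes: "suspension imminent"
STREAMER_SUSPEND = 10    # 2 more minutes: auto-suspend the streamer

def decide(state: State, silent_checks: int) -> Optional[str]:
    """Action for the current check, or None. (The 15-minute grace
    expiry and the API calls themselves aren't modeled here.)"""
    if state is State.GRACE_PERIOD:
        return None  # monitoring is paused
    if state is State.NO_STREAMER and silent_checks == NO_STREAMER_ALERT:
        return "alert"       # "Silence detected!" to the private channel
    if state is State.STREAMER_ACTIVE:
        if silent_checks == STREAMER_WARN:
            return "warn"    # "suspension imminent" warning
        if silent_checks >= STREAMER_SUSPEND:
            return "suspend" # force the streamer off via the API
    return None
```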

    Every 60 seconds, the Shark:

    1. Captures a 10-second audio sample via FFmpeg
    2. Analyzes RMS (volume) and variance (is it just silence or a stuck tone?)
    3. Updates a consecutive silences counter
    4. Checks whether the DJ has acknowledged a silence
    5. Takes action based on current state

    This gives us reasonable granularity and keeps the amount of data crunching down.
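    Steps 1 through 3 of that loop can be sketched like this; the ffmpeg flags and the silence thresholds are illustrative guesses, not the Shark's actual values:

```python
import subprocess

def capture_sample(stream_url: str, seconds: int = 10) -> bytes:
    # Hypothetical ffmpeg invocation: grab `seconds` of the stream as
    # raw signed 16-bit mono PCM on stdout
    cmd = ["ffmpeg", "-loglevel", "quiet", "-t", str(seconds),
           "-i", stream_url, "-f", "s16le", "-ac", "1", "-ar", "44100", "-"]
    return subprocess.run(cmd, capture_output=True, check=True).stdout

def rms_and_variance(pcm: bytes):
    # Decode 16-bit little-endian samples; no numpy needed at this scale
    samples = [int.from_bytes(pcm[i:i + 2], "little", signed=True)
               for i in range(0, len(pcm) - 1, 2)]
    n = len(samples) or 1
    mean = sum(samples) / n
    rms = (sum(s * s for s in samples) / n) ** 0.5
    variance = sum((s - mean) ** 2 for s in samples) / n
    return rms, variance

def is_dead_air(pcm: bytes, rms_floor=100.0, var_floor=50.0) -> bool:
    # Low RMS means quiet; low variance also catches a signal stuck at a
    # constant value, which has nonzero RMS but isn't really audio
    rms, variance = rms_and_variance(pcm)
    return rms < rms_floor or variance < var_floor
```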

    A paired Discord bot lets us query the state of the Shark, inform the monitor that we’re “working on it”, and unsuspend streamers who’ve been unable to or forgot to get streaming again.

    • !shark-status – Current monitoring state and suspended users
    • !working-on-it (or !woi) – Activate a 15-minute grace period
    • !sharked – List all auto-suspended DJs
    • !letin – Re-enable a suspended DJ
    • !status – Check overall Shark status

    The !woi command was the thing that turned the Shark from a somewhat annoying monitor into a useful DJ tool: if you get hung up, you know you can tell the Shark to leave you be for a bit while you fix stuff. And !letin keeps our less-technical DJs happy if they do happen to accidentally suspend themselves.

    Open Source

    The whole thing is on GitHub: https://github.com/joemcmahon/greedy-shark

    MIT licensed, fully documented, with Docker Compose setup and comprehensive tests. Got an AzuraCast server? Try it yourself!

    What’s Next

    Other stuff we could do:

    • Multi-stream support — we won't need this, but a station set up like SomaFM, with a zillion different streams, might. If we did that, a Web dashboard for monitoring history and status would be a better primary interface than a Discord channel
    • SMS alerts for critical issues — most of us leave our Discord alerts on, so a ping from the Shark will get through even late, and we're sufficiently spread out geographically that someone will see it. But serious "we are really, really down, like hours" alerts should be sent in a "dude, you need to see this" way.

    Try It Yourself!

    If you run an internet radio station with AzuraCast, you can deploy this in about 10 minutes:

    1. Clone the repo
    2. Copy .env.example to .env
    3. Fill in your AzuraCast API credentials and Discord target channel
    4. Invite the bot to your server
    5. docker compose up -d

    And the Shark will be catching the tunings of your station.

    Please let me know of any problems via the Issues on GitHub, have fun, and send patches if you do something interesting with it!

  • Azuracast metadata redux

    Summary: all for naught, back to the original implementation, but with some guardrails

    Where we last left off, I was trying to get the LDSwiftEventSource library to play nice with iOS, and it just would not. Every way I tried to convince iOS to please let this thing run failed. Even the “cancel and restart” version was a failure.

    So I started looking at the option of a central server that would push the updates using notifications, and being completely honest, it seemed like an awful lot of work that I wasn’t all that interested in doing, and which would push the release date even further out.

    On reflection, I seemed to remember that despite it being fragile as far as staying connected, the websocket implementation was rock-solid (when it was connected). I went back to that version (thank heavens for git!) and relaunched…yeah, it’s fine. It’s fine in the background. All right, how can I make this work?

    Thinking about it for a while, I also remembered that there was a ping parameter in the connect message from Azuracast, which gave the maximum interval between messages (I’ve found in practice that this is what it means; the messages usually arrive every 15 seconds or so with a ping of 25). Since I’d already written the timer code once to force reboots of the SSE code, it seemed reasonable to leverage it like this:

    • When the server connects, we get the initial ping value when we process the first message successfully.
    • I double that value, and set a Timer that will call a method that just executes connect() again if it pops.
    • In the message processing, as soon as I get a new message, I therefore have evidence that I’m connected, so I kill the extant timer, process the message, and then set a new one.

    This loops, so each time I get a message, I tell the timer I’m fine, and then set a new one; if I ever do lose connectivity, then the timer goes off and I try reconnecting.
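    The app itself is in Swift, but the watchdog pattern is language-agnostic; here's a sketch of the same loop in Python (class and method names are mine):

```python
import threading

class ConnectionWatchdog:
    """Watchdog sketch of the timer logic above: arm a timer for twice
    the server's ping interval; every message re-arms it; if it ever
    fires, we assume the connection died and call the reconnect hook."""

    def __init__(self, ping_seconds: float, reconnect):
        self.timeout = ping_seconds * 2   # double the advertised ping
        self.reconnect = reconnect        # e.g. the client's connect()
        self.timer = None

    def message_received(self):
        # A message is evidence we're still connected: cancel and re-arm
        if self.timer is not None:
            self.timer.cancel()
        self.timer = threading.Timer(self.timeout, self.reconnect)
        self.timer.daemon = True
        self.timer.start()

    def stop(self):
        if self.timer is not None:
            self.timer.cancel()
```

    Calling message_received() on every incoming message keeps the timer from ever firing; a dropped connection means no messages, so the timer pops and the reconnect gets attempted.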

    This still needs a couple things:

    • The retries should be limited, and do an exponential backoff.
    • I’m of two minds as to whether I throw up an indicator that I can’t reconnect to the metadata server. On one hand, the metadata going out of sync is something I am going to all these lengths to avoid, so if I’m absolutely forced to do without it, I should probably mention that it’s no longer in sync. On the other hand, if we’ve completely lost connectivity, the music will stop, and that’s a pretty significant signal in itself. It strikes me as unlikely that I’ll be able to stream from the server but not contact Azuracast, so for now I’ll just say nothing. Right now, I fall back to showing the channel metadata, so we still see we’re on RadioSpiral, but not what’s actually playing — just like when I didn’t have a working metadata implementation at all.
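    For the retry item, a limited, capped exponential backoff is usually shaped something like this (a sketch; the base, cap, and retry count are placeholder values, not anything the app currently uses):

```python
import random

def backoff_delays(base: float = 2.0, cap: float = 300.0, retries: int = 8):
    """Yield a limited run of capped, jittered exponential delays."""
    for attempt in range(retries):
        delay = min(cap, base * (2 ** attempt))
        # 50-100% jitter so reconnecting clients don't stampede together
        yield delay * (0.5 + random.random() / 2)
```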

    I’m running it longer-term to see how well it performs. Last night I got 4 hours without a drop on the no-timer version; I think this means that drops will be relatively infrequent, and we’ll mostly just schedule Timers and cancel them.

    Lockscreen follies

    I have also been trying to get the lock screen filled out so it looks nicer. Before I started, I had a generic lockscreen that had the station logo, name and slug line with a play/pause button and two empty “–:–” timestamps. I now have an empty image (boo) but have managed to set the track name and artist name and the play time. So some progress, some regress.

    The lockscreen setup is peculiar: you set as many of the pieces of data as you know in a struct supplied by iOS, and then call a method to commit it.

    I spent a lot of time trying to get the cover to appear and couldn’t, so I left it as the channel/station logo. [Update August 2025: I’ve managed to get Cursor to work through the mess and show all the metadata! Yay.]

  • Azuracast high-frequency updates, SSE, and iOS background processes

    A big set of learning since the last update.

    I’ve been working on getting the RadioSpiral infrastructure back up to snuff after our Azuracast streaming server upgrade. We really, really did need to do that — it just provides 90% of everything we need to run the station easily right out of the box.

    Not having to regenerate the playlists every few weeks is definitely a win, and we’re now able to easily do stuff like “long-play Sunday”, where all of the tracks are long-players of a half-hour or more.

    But there were some hitches, mostly in my stuff: the iOS app and the now-playing Discord bot. Because of reasons (read: I’m not sure why), the Icecast metadata isn’t available from the streaming server on Azuracast, especially when you’re using TLS. This breaks the display of artist and track on the iOS app, and partially breaks the icecast-monitor Node library I was using to do the now-playing bot in Discord.

    (Side note: this was all my bright idea, and I should have tested the app and bot against Azuracast before I proposed cutting over in production, but I didn’t. I’ll run any new thing in Docker first and test it better next time.)

    Azuracast to the rescue

    Fortunately, Azuracast provides excellent now-playing APIs: a straight-up GET endpoint that returns the data, two event-driven ones (websockets and SSE), and even a "look, just read this file, it's there" version.

    The GET option depends on you polling the server for updates, and I didn’t like that on principle; the server is quite powerful, but I don’t want multiple copies of the app hammering it frequently to get updates, and it was inherently not going to be close to a real-time update unless I really did hammer the server.

    So that was off the table, leaving websockets and SSE, neither of which I had ever used. Woo, learning experience. I initially tried SSE in Node and didn’t have a lot of success with it, so I decided to go with websockets and see how that went.

    Pretty well, actually! I was able to get a websocket client running pretty easily. After some conferring with ChatGPT, I put together a library that would let me start up a websocket client and run happily, waiting for updates to come in and updating the UI as I went. (I'll talk about the adventures of parsing Azuracast metadata JSON in another post.)

    I chose to use a technique that I found in the FRadioPlayer source code, of declaring a public static variable containing an instance of the class; this let me do

    import Kingfisher
    import ACWebSocketClient
    
    client = ACWebSocketClient.shared
    ...
    tracklabel.text = client.status.track
    artistlabel.text = client.status.artist
    coverImageView.kf.setImage(with: client.status.artURL)

    (Kingfisher is fantastic! Coupled with Azuracast automatically extracting the artwork from tracks and providing a URL to it, showing the right covers was trivial. FRadioPlayer uses the Apple Music cover art API to get covers, and given the, shall we say, obscure artists we play, some of the cover guesses it made were pretty funny. And sometimes really inappropriate.)

    Right. So we have metadata! Fantastic. Unfortunately, the websocket client uses URLSessionWebSocketTask to manage the connection, and that class has extremely poor error handling. It's next to impossible to detect that you've lost the connection or re-establish it. So it would work for a while, and then a disconnect would happen, and the metadata would stop updating.

    Back to the drawing board. Maybe SSE will work better in Swift? I’ve written one client, maybe I can leverage the code. And yes, I could. After some searching on GitHub and trying a couple of different things, I created a new library that could do Azuracast SSE. (Thank you to LaunchDarkly and LDSwiftEventSource for making the basic implementation dead easy.)

    So close, but so far

    Unfortunately, I now hit iOS architecture issues.

    iOS really, really does not want you to run long-term background tasks, especially with the screen locked. When the screen was unlocked, the metadata updates went okay, but as soon as the screen locked, iOS started a 30-second “and what do you think you’re doing” timer, and killed the metadata monitor process.

    I tried a number of gyrations to keep it running and schedule and reschedule a background thread, but if I let it run continuously, even with all the “please just let this run, I swear I know what I need here” code, iOS would axe it within a minute or so.

    So I’ve fallen back to a solution not a lot better than polling the endpoint: when the audio starts, I start up the SSE client, and then shut it down in 3 seconds, wait 15 seconds, and then run it again. When audio stops, I shut it off and leave it off. This has so far kept iOS from nuking the app, but again, I’m polling. Yuck.

    However, we now do have metadata, and that’s better than none.

    [From the future: this just was awful. I abandoned it and went back to the websockets. New update coming soon about some optimizations to save battery.]

    On the other hand…

    On the Discord front, however, I was much more successful. I tried SSE in Node, and found the libraries wanting, so I switched over to Python and was able to use sseclient to do the heavy lifting for the SSE connection. It essentially takes an SSE URL, hooks up to the server, and then calls a callback whenever an event arrives. That was straightforward enough, and I boned up on my Python for traversing arbitrary structures — json.loads() did a nice job for me of turning the complicated JSON into nested Python data structures.

    The only hard bit was persuading Python to turn the JSON struct I needed to send into a proper query parameter. Eventually this worked:

    subs = {
            "subs": {
                f"station:{shortcode}": {"recover": True}
            }
         }
    
    json_subs = json.dumps(subs, separators=(',', ':'))
    # (json.dumps already emits lowercase true/false, so this replace is
    # effectively a no-op, but it's harmless)
    json_subs = json_subs.replace("True", "true").replace("False", "false")
    encoded_query = urllib.parse.quote(json_subs)

    I pretty quickly got the events arriving and parsed, and I was able to dump out the metadata in a print. Fab! I must almost be done!

    But no. I did have to learn yet another new thing: nonlocal in Python.

    Once I’d gotten the event and parsed it and stashed the data in an object, I needed to be able to do something with it, and the easiest way to do that was set up another callback mechanism. That looked something like this:

    client = build_sse_client(server, shortcode)
    run(client, send_embed_with_image)

    The send_embed_with_image callback puts together a Discord embed (a fancy message) and posts it to our Discord via a webhook, so I don’t have to write any async code. The SSE client updates every fifteen seconds or so, but I don’t want to just spam the channel with the updates; I want to compare the new update to the last one, and not post if the track hasn’t changed.
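    send_webhook itself isn't shown in this post; a minimal stand-in consistent with the embed_data fields used here might look like this (the webhook URL is a placeholder, and the payload shape is Discord's standard webhook-execute body):

```python
import json
import urllib.request

# Placeholder: the real URL comes from the channel's Integrations settings
WEBHOOK_URL = "https://discord.com/api/webhooks/..."

def build_payload(embed_data: dict) -> dict:
    # Discord's webhook-execute body takes a list of embed objects;
    # "timestamp" must be ISO 8601 and "thumbnail" is a nested {"url": ...}
    return {
        "embeds": [{
            "title": embed_data["title"],
            "description": embed_data["description"],
            "timestamp": embed_data["timestamp"].isoformat(),
            "thumbnail": {"url": embed_data["thumbnail_url"]},
        }]
    }

def send_webhook(embed_data: dict) -> None:
    body = json.dumps(build_payload(embed_data)).encode()
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # plain blocking POST; no async needed
```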

    I added a method to the metadata object to compare two objects:

    def __eq__(self, other) -> bool:
        # isinstance() is False for None, so no separate None check is needed
        if not isinstance(other, NowPlayingResponse):
            return False
        return (self.dj == other.dj and
                self.artist == other.artist and
                self.track == other.track and
                self.album == other.album)

    …but I ran into a difficulty trying to store the old object: the async callback from my sseclient callback couldn’t see the variables in the main script. I knew I’d need a closure to put them in the function’s scope, and I was able to write that fairly easily after a little poking about, but even with them there, the inner function I was returning still couldn’t see the closed-over variables.

    The fix was something I’d never heard of before in Python: nonlocal.

    def wrapper(startup, last_response):
        def sender(response: NowPlayingResponse):
            nonlocal startup, last_response
            if response == last_response:
                return
    
            # Prepare the embed data
            local_tz = get_localzone()
            start = response.start.replace(tzinfo=local_tz)
            embed_data = {
                "title": f"{response.track}",
                "description": f"from _{response.album}_ by {response.artist} ({response.duration})",
                "timestamp": start,
                "thumbnail_url": response.artURL,
            }
    
            # Send to webhook
            send_webhook(embed_data)
    
            startup = False
            last_response = response
    
        return sender

    Normally, all I’d need to do would be have startup and last_response in the outer function’s argument list to have them visible to the inner function’s namespace, but I didn’t want them to just be visible: I wanted them to be mutable. Adding the nonlocal declaration of those variables does that. (If you want to learn more about nonlocal, this is a good tutorial.)
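    A stripped-down illustration of what nonlocal buys you:

```python
def make_counter():
    count = 0
    def bump():
        # Without this declaration, `count += 1` would raise
        # UnboundLocalError: assignment makes `count` local to bump()
        nonlocal count
        count += 1
        return count
    return bump
```

    Each call to make_counter() gets its own count cell, and bump rebinds it in place: the same trick wrapper uses for startup and last_response.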

    The Discord monitor main code now looks like this:

    startup = True
    last_response = None
    
    # Build the SSE client
    client = build_sse_client(server, shortcode)
    
    # Create the sender function and start listening
    send_embed_with_image = wrapper(startup, last_response)
    run(client, send_embed_with_image)

    Now send_embed_with_image will successfully be able to check for changes and only send a new embed when there is one.

    One last notable thing here: Discord sets the timestamp of the embed relative to the timezone of the Discord user. If a timezone is supplied, Discord does the necessary computations to figure out what the local time is for the supplied timestamp. If no zone info is there, it assumes UTC, which can lead to funny-looking timestamps. This code finds the timezone where the monitor code is running and sets the timestamp to that.

    from tzlocal import get_localzone
    
    local_tz = get_localzone()
    start = response.start.replace(tzinfo=local_tz)

    And now we get nice-looking now-playing info in Discord:

    [Image: two entries in a Discord channel, listing track title in bold, album name in italics, and artist name, with a start-time timestamp and a thumbnail of the album cover.]

    Building on this

    Now that we have a working Python monitor, we can now come up with a better solution to (close to) real-time updates for the iOS app.

    Instead of running the monitor itself, the app will register with the Python monitor for silent push updates. This lets us offload the CPU (and battery) intensive operations to the Python code, and only do something when the notification is pushed to the app.

    [Note: no, it’s not doing that.]

    But that’s code for next week; this week I need to get the iOS stopgap app out, and get the Python server dockerized.