Author: Joe McMahon

  • Building ‘use English;’ into the Perl core

    Perl has a list of “stuff we really want to add to the language that needs someone to code it in C,” called PPCs. PPC 14 adds English aliases to Perl’s controversial “punctuation” variables (like $", $?, $., etc.), and I’ve decided to try taking this one on.

    I know some of the internals stuff from a long-ago class at a Perl conference, and from Jarkko’s chapter on the internals in Advanced Perl Programming, but this is the first time I’ve actually dived into serious C programming other than a fling with Objective-C back in the day…and I kind of like it.

    C absolutely is just enough tool to get the job done, and I’ve actually kind of missed that. Most of the work I was doing at Zip toward the end of my time there was all Scala, and Scala is a nice language but it’s…heavyweight. Takes an age to build and test. Even with a fairly big recompile of the whole interpreter, an edit-build-test cycle is pretty fast in C.

    The working experience is a lot like Go, just with way fewer guardrails.

    The coding experience, however, is very Zen: a series of enlightenments is necessary to proceed. I have looked at the Perl code a little before, but this project is much more complex than anything I’ve tried before in C. It’s really a process of reading the code, reasoning about it, taking a shot at something, discovering it was more complex than you thought, and continuing until the light dawns.

    First enlightenment

    I started off looking at the XS code in English::Name to see if I could out-and-out steal it. Unfortunately not, but it did start giving me some hints as to what I could do.

    (At this point, I’m going to start talking about Perl internals, and this may become a lot less clear. Sorry about that.)

    Each variable in Perl is represented by a “glob” held in a symbol table hash. A glob is a data structure that can hold all the different types of thing a given name can be — this is why, in Perl, you can have $foo and @foo and %foo all at once, because one glob (also known when working on the internals as a GV – “glob value”, I believe) can hold pointers to each kind of variable.

    I started out wondering if I could just alias the names directly when I saw them in Perl. For some of the read-only special scalar variables, you can do this by overwriting the SV (scalar variable) slot in the GV with a pointer to the aliased variable’s SV.

    The gv.c file contains the code that works with global variables, and the function S_gv_magicalize contains a big switch statement that parses the incoming variable names and then uses the sv_magic function to install hooks that are called when the variable is accessed (read or written). So the easiest, dumbest option is to try just sharing the SV that is created for the variable I want to alias with the new name.

    The code in S_gv_magicalize is essentially one big old switch statement; it uses a function called memEQs to check the incoming name against variable name strings to see if we should process the variables. The new variables I want to add all look like ${^SOMETHING}; this keeps the names English-like but visibly different, so we remember these are special variables. The code that parses the names converts the letter prefixed with a caret into a control character, so (say) ^C becomes \003; ^SOMETHING would be \023OMETHING, so that’s the string we plug in to memEQs:

    if (memEQs(name, len, "\023OMETHING")) ...

    Good, so we have a way to match the variables we’re interested in; now we just need to figure out how to alias the SVs. Poking around, I figured out that if I could find the target variable in the main:: symbol table, I could use a few of the macros that the Perl source provides to find the SV pointer in the old variable, and then assign it to the SV slot in the new GV I was creating. I realized I’d be doing this a lot, so I wrote a preprocessor macro of my own to do this. This was a bit tricky, because I needed to not just substitute a string into the get_sv call, but actually concatenate it into the string. Some Googling found me the preprocessor’s # stringizing operator, which does the trick. Here’s that macro:

    #define SValias(vname) GvSV(gv) = newSVsv(get_sv("main::"#vname, 0))

    This tells Perl to look in the main:: symbol table for the variable whose name I’ve concatenated into the fully-qualified name and extract the contents of the SV slot for that variable. I then call newSVsv (build a new SV I can use out of this SV) and then assign it to the SV slot in the brand new GV that I’m building.

    Easy-peasy, add the aliases for all the variables…and this worked for a certain portion of the variables, but didn’t work at all for others. There also didn’t seem to be rhyme or reason why this should work for some but not others.

    Second enlightenment

    I dived back into the code, and read it all through again. There were a lot of goto magicalize statements; almost (of course, almost, why make it easy?) every special variable ends up jumping to this label, which calls

     sv_magic(GvSVn(gv), MUTABLE_SV(gv), PERL_MAGIC_sv, name, len);

    Well. What does that do? Going over to mg.c, where this function is defined: it takes an SV from the GV and the GV itself, both of which will be modified, while the remaining parameters define the kind of magic to add (PERL_MAGIC_sv) and the name being passed (name and len). Those are already set when we get here in gv.c, so my understanding at this point (yes, another enlightenment is needed!) was, “okay, we have a GV, and we’re passing a name and length, so this must be keying off the name to assign the right magic. Obviously if I can pass the GV I have but a different name and len, then the Right Thing will happen in mg.c and this will work perfectly.”

    So I tried a couple other variations to try to get remagicking the variable to work.

    1. Adding a block of code right below the sv_magic call to try to reassign the magic. This didn’t work; the call got made, but the variable did not have any magic.
    2. Passing a hardcoded alternate name and length to sv_magic. This also had no detectable effect.
    3. Refactoring the code in mg.c to create a new function that would allow me to pass a second name and len, so that I could do the reassignment inside mg.c instead. This also didn’t work, but not because the concept was wrong; I simply could not get the code to compile, because something in the macros was convinced that I should pass one more argument to the call to the refactored code, even though I wasn’t changing the calling sequence at all.

    I spent about a half-hour trying different variations of function calls and naming, and decided that was long enough; I needed to look again and see what was going on deeper down…and maybe find a way that was more compatible with the code already there.

    (Note: I did not want to change the calling sequence for sv_magic, or change its return value, because this would have been a change to the Perl API, potentially breaking lots of XS code, and potentially propagating lots of changes all over the Perl codebase itself.)

    Third enlightenment

    I went back to mg.c again and instead of looking at the code that applied the magic, I went to look at the code that implemented it instead. Reading through all of mg.c, and rereading gv.c, I found that the magic was implemented two different ways.

    • Some variables were set up directly in gv.c, in S_gv_magicalize. These were the variables that I’d been successful in aliasing with the SValias macro; they were read-only, and hard-linked to unchanging data.
    • The rest were set up in mg.c; they were detected as magic in gv.c, in S_gv_magicalize, which then jumped to the sv_magic call to pass the actual assignment of the magic to the SV.

    In mg.c, there are two different functions, Perl_magic_get and Perl_magic_set, which handle the magic for getting and setting the SV. (There are a bunch more Perl_magic functions, and it’s definitely possible I’ll need to learn more about those, but my current knowledge seems to indicate that these two are enough to do the implementation of the English variables.) We do the same kind of matching against names to decide what magic applies to the variable, and then execute the appropriate code to make the magic happen. This made sense based on what I knew already, and confirmed that the attempts to set a different name for the sv_magic call were not wrong; I just didn’t manage to implement something that did it properly.

    Given this, I decided to try implementing the English variations on two different variables: one a simple fixed read-only one implemented only in gv.c, and a second read-write one implemented in the Perl_magic_get and Perl_magic_set functions in mg.c to see if I’d actually understood the code.

    I also chose to go with the paradigm I’d seen throughout these big case statements: do the cases in alphabetical order, and use goto to jump to existing code that already implemented the feature. These gotos are always forward jumps, so they’re not quite so bad, but writing hard branches in code again certainly took me back a ways.

    Magic variable in gv.c alone: $] aliasing to ${^OLD_PERL_VERSION}

    $] provides the older floating-point representation of the Perl interpreter’s version. Looking at gv.c, there’s a block of code that looks like this:

             case ']':               /* $] */
             {
    
                 SV * const sv = GvSV(gv);
                 if (!sv_derived_from(PL_patchlevel, "version"))
                     upg_version(PL_patchlevel, TRUE);
                 GvSV(gv) = vnumify(PL_patchlevel);
                 SvREADONLY_on(GvSV(gv));
                 SvREFCNT_dec(sv);
             }
             break;

    We fetch the SV already in the variable; if it’s not already a version object, we make it one, turn it into a number, stash it in the GV, and make it read-only; then we decrement the refcount of the SV we originally fetched, releasing the old value so this data isn’t mishandled during global destruction at the end of the program.

    To implement ${^OLD_PERL_VERSION}, we need to catch it, and then do a goto to this code. Here’s the patch:

    | diff --git a/gv.c b/gv.c
    | index 93fc37da63..6c00b050db 100644
    | --- a/gv.c
    | +++ b/gv.c
    | @@ -2231,7 +2231,9 @@ S_gv_magicalize(pTHX_ GV *gv, HV *stash, const char *name, STRLEN len,
    |                      goto storeparen;
    |                  }
    |                  break;
    | -            case '\017':        /* ${^OPEN} */
    | +            case '\017':        /* ${^OPEN}, ${^OLD_PERL_VERSION} */
    | +                if(memEQs(name, len, "\017LD_PERL_VERSION"))
    | +                    goto old_perl_version;
    |                  if (memEQs(name, len, "\017PEN"))
    |                      goto magicalize;
    |                  break;
    | @@ -2430,7 +2432,9 @@ S_gv_magicalize(pTHX_ GV *gv, HV *stash, const char *name, STRLEN len,
    |              sv_setpvs(GvSVn(gv),"\034");
    |              break;
    |          case ']':            /* $] */
    | +          old_perl_version:
    |          {
    | +
    |              SV * const sv = GvSV(gv);
    |              if (!sv_derived_from(PL_patchlevel, "version"))
    |                  upg_version(PL_patchlevel, TRUE);

    It’s very straightforward; just reuse the code we have for $] for ${^OLD_PERL_VERSION} with a goto to that code. Tests show it works as expected.




  • Email handling: a rant

    Okay, this is probably preaching to the choir for anyone who reads my blog, but I’ve just gone through a supremely frustrating experience with Hilton and I’m going to vent, because I can.

    This is also partially humor, and an excuse to repeatedly mention the name of the person who triggered all this. Enjoy, Lisa Neumann of Spearfish, SD.

    The triggering incident

    Back in the day, specifically 2004, when GMail was new – so new that you had to know someone who could invite you to it – I got my name as my GMail address, because, hey, I could! 20 years on in hindsight, I should have constructed an alias and used that, because people are idiots and companies are as bad.

    So why am I ranting today in particular? Because Lisa Neumann, of Spearfish, SD (yes, I am SEOing the hell out of good ol’ Lisa here) decided that she wanted to open a Hilton Honors account. And like any sane person, she picked a random email out of the air, in this case mine, and used that. I know I always want to enter things like my home address and name, and send those to some random person on the internet who I don’t know who can then sign me up for all kinds of mailing lists or do any number of other nefarious things based on knowing my actual physical address.

    Pardon me, my sarcasm sequencer is overloading.

    Specifically, she used a variation on my GMail address. I use a version with a dot in it; she used one without. GMail allows you to add periods to your address in any combination you like, so if your GMail address is firstmiddlelast@gmail.com, then you can use first.middlelast@gmail.com, firstmiddle.last@gmail.com, first.middle.last@gmail.com, etc. etc.

    All of these are the same email address as far as Google is concerned, and this is not news. GMail has implemented addresses this way since 2004. However, large segments of the software engineer population do not seem to have figured this out, twenty years later. The Hilton engineers in particular have not, or have said, “not our problem, we just have to push signups”.

    (I pause to note that I have no idea who Lisa Neumann is, that I have never been to Spearfish, South Dakota, and that she absolutely had no reason to think using my email address was a good idea. I will also note that if I ever am in Spearfish, I know whose address to go to, and which apartment to go knock on the door of, to ask, “What exactly was going through your mind, Lisa, when you made your home address known to some random person on the internet?”.)

    In my current case, Hilton committed not one but two sins:

    • They allowed a dot-variation of a GMail address to create a new account. (I personally already had a Hilton account.)
    • They did not validate email access. So Lisa Neumann (and yes, I really hope this ends up high in the Google hits for good ol’ Lisa Neumann of Spearfish, SD) uses a random-ass email and Hilton’s software says “hyuk, okee-dokee!” and creates an account.

    Why am I ranting about this?

    Because it is stunningly common practice. People use email addresses they don’t own all the time, and companies who supposedly want valid data don’t care.

    It’s nuts. I have mentioned before on this blog that most of the different Joe McMahons who use my email are idiots, because they know damn well that they don’t own my GMail account and will never see the mails. Apparently they don’t care that the password-reset emails go to the email that they entered, and don’t control. And I do use them.

    (Have I reset the password on multiple dating sites, and uploaded a bio that says, “In addition to all stuff about that, I am not very bright, because I used someone else’s email, and he has locked me out of this account. No sweet, sweet love for me!”? Yes, yes I have. Did I enjoy it? Oh, very much so.)

    The mails I get tend to be one of the following:

    • Someone has typed “joe mcmahon” (not the email address, but the name) into the “To” field, and GMail has happily filled in the most likely email, i.e., mine. If it wasn’t someone actually writing me, it’s a genuine mistake, and I don’t count that in the “what are these idiots doing” category. This most often happens when folks in Ireland are trying to send mail to a construction company (It’s Patrick there, BTW, in case someone stumbles on this while trying to figure out why he’s not getting their mail — though I do usually send a “you probably have the wrong email” to those folks, as this is only marginally their fault. Google, if they’ve never written to this person, do you think you should really do that? Maybe mention that an address was assumed, and maybe they should verify it’s right? Naaaaaah.)
    • On the other hand, we have the Joe McMahons who sign up for things. Gym memberships. Dating sites. Porn sites. Ashley Madison (a particular favorite, Joe McMahon in Australia. Don’t think I forgot.) I don’t know exactly how to judge these, though my hunch is that these are people who think Google is Magic and just putting their name and google.com will somehow get the email fairies to deliver stuff to them. Or they’re just really freaking lazy and are counting on the email not being validated. Or just don’t think about it, and when the account never gets approved because I delete the verification mail, they just assume “computers don’t work”.
    • Last we have the outright “I’m using this email and I know it’s not mine” folks, like dear old Lisa Neumann. Did I mention she’s from Spearfish, SD? It can’t be that they’re completely computer illiterate, else how would they know to use a random person’s name as an email address and expect it to work? Maybe Lisa Neumann knows/lives with/is married to a Joe McMahon in Spearfish? Can’t find one though. I’m grasping at straws here.

    But honestly, the people are not the issue here. It’s the software engineers and product managers who could keep this from happening.

    KPIs and “conversion” as a scourge on humanity

    So why would anyone implement a system guaranteed to make people hate them? Why would you implement a signup process that doesn’t care if you can send email to the person who’s signing up, when ostensibly, you want that address so you can send them email? Why would you implement a signup system that would tell me, some random dude on the internet, exactly where Lisa Neumann of Spearfish SD lives — street address and apartment number, with no recourse or warning?

    Because someone in the software development pipeline – almost certainly the product manager – has made the number of signups and/or the number of “conversions” (guest account -> permanent account) a success metric.

    It is a truism that if you make some metric critical to a system being judged as successful, people will manipulate the system and its implementation to maximize the value of that metric to the detriment of the actual goal.

    If you reward the team that closes the most bugs, teams will spam the issue tracker with trivial bugs and close them – and they’ll even add bugs to be fixed and closed.

    If you measure the success of the “conversion” page by the number of signups, then the engineers will be incentivized to “remove friction”. And the absolute easiest way to remove friction is to remove validation.

    In the case of email addresses, the dead easiest option is simply to not validate the email at all. Most engineers will not actually go so far as to allow obvious garbage to be entered as an email, but dropping the confirmation flow, or never implementing it, is a great way to get those numbers up. If any email at all, as long as it looks basically valid, is accepted, then the conversions go way up! Look, another account added! Even though the person will never be able to reset their password, or receive any notifications via email! Hey, that’s what app notifications are for anyway, and they push up our engagement KPIs! User support will figure out how to deal with the passwords!

    Sorry, need to reset the sarcasm sequencer again.

    So what is good practice?

    • If you need an email, then you validate that the person signing up can access that email. You send them an account validation link, and until they click that link, the account is not usable.
    • You follow the real world and not what the RFC says. Yes, technically, Google was incorrect to treat foobar@gmail and foo.bar@gmail as the same address, but I think their technical decision was “do we allow every combinatorial version of johnsmith to be a different account? Absolutely not, it’ll be an identity-collision nightmare.” (And when you, the implementer, allow all the combinations? Identity collision nightmare, and no one should be surprised.) So if john.smith@gmail.com has an account at your site, then someone trying to add johnsmith@gmail (Lisa) should fail.
    • Allow people to close accounts without massive manual intervention. I still have to call Hilton on the phone and try to talk someone through fixing this issue. Chat support absolutely cannot help me. Their security policy is that two accounts with different personal names can’t be merged, so I can’t merge the two accounts that use variations on the same email. And I can’t edit the name in the account that Lisa opened, so I can’t do anything to fix it myself!
    • Do not make it impossible to ever fix a bad account. I’ve had several banking accounts opened using my me.com account, and those simply cannot ever be fixed. They are set up, rightly, to require a second factor to reset the password, usually a phone number, and if it’s some dude in Vietnam who’s opened the account, I have no way to come up with his phone number, and I get to just keep marking all the bank notifications as spam, because the bank has linked his whole online identity to that email address. Even if I get hold of the bank (and good luck doing that), they can’t help me because removing the email would effectively cause the user to not exist anymore.

    I honestly think that given the unfortunate trend toward greater and greater enshittification, we’re not going to see a massive come-to-Jesus moment on not pissing off innocent bystanders, mostly because it doesn’t impact the bottom line in any significant way. I like staying at Hilton properties in general, so me boycotting them over their account handling does little to impact them, and takes something away from me.

    Unless somehow someone manages a massive fraud based on email account variations, we’re not going to see a change, and I’ll continue to block accounts for other Joes and the random Lisa Neumann (of Spearfish, SD, let’s not forget!) for the foreseeable future.

    Questions you may be asking

    • But aren’t you by implication exposing your email by saying how the dot thing works in GMail?
      • That horse is out of the barn, down the street, and out on the prairie living its best life at this point. There have been so many breaches where my email has been stolen or leaked that it doesn’t matter anymore. (I can’t think of any other way that Lisa in Spearfish (I can’t be bothered anymore) could have found it.) And GMail seems to fill it in when you type my name in the “to” field, so I’m being shafted automatically anyway.
    • Wow, shouldn’t you go touch grass or something?
      • Yes, and I totally do. It’s just that I come back to my inbox full of “WELCOME TO YOUR ACCOUNT” and “YOUR RESERVATION IS CONFIRMED” and “SexyBabe69420 sent you a wink!” messages and I might as well have not bothered.
    • Have you never done anything to people who do this?
      • Actually, beyond locking them out of the accounts they’ve opened with my identity? No. I have never cancelled a reservation, rerouted a package, or catfished someone on a dating site. I absolutely could have, but I wouldn’t respect myself for doing actual financial damage or hurting an innocent person. Messing with someone on a sex dating site? I’m only disappointing the bots.
  • Azuracast metadata redux

    Summary: all for naught, back to the original implementation, but with some guardrails

    Where we last left off, I was trying to get the LDSwiftEventSource library to play nice with iOS, and it just would not. Every way I tried to convince iOS to please let this thing run failed. Even the “cancel and restart” version was a failure.

    So I started looking at the option of a central server that would push the updates using notifications, and being completely honest, it seemed like an awful lot of work that I wasn’t all that interested in doing, and which would push the release date even further out.

    On reflection, I seemed to remember that despite it being fragile as far as staying connected, the websocket implementation was rock-solid (when it was connected). I went back to that version (thank heavens for git!) and relaunched…yeah, it’s fine. It’s fine in the background. All right, how can I make this work?

    Thinking about it for a while, I also remembered that there was a ping parameter in the connect message from Azuracast, which gave the maximum interval between messages (I’ve found in practice that this is what it means; the messages usually arrive every 15 seconds or so with a ping of 25). Since I’d already written the timer code once for force reboots of the SSE code, it seemed reasonable to leverage it like this:

    • When the server connects, we get the initial ping value when we process the first message successfully.
    • I double that value, and set a Timer that will call a method that just executes connect() again if it pops.
    • In the message processing, as soon as I get a new message, I therefore have evidence that I’m connected, so I kill the extant timer, process the message, and then set a new one.

    This loops, so each time I get a message, I tell the timer I’m fine, and then set a new one; if I ever do lose connectivity, then the timer goes off and I try reconnecting.

    This still needs a couple things:

    • The retries should be limited, and do an exponential backoff.
    • I’m of two minds as to whether I throw up an indicator that I can’t reconnect to the metadata server. On one hand, the metadata going out of sync is something I am going to all these lengths to avoid, so if I’m absolutely forced to do without it, I should probably mention that it’s no longer in sync. On the other hand, if we’ve completely lost connectivity, the music will stop, and that’s a pretty significant signal in itself. It strikes me as unlikely that I’ll be able to stream from the server but not contact Azuracast, so for now I’ll just say nothing.

    I’m running it longer-term to see how well it performs. Last night I got 4 hours without a drop on the no-timer version; I think this means that drops will be relatively infrequent, and we’ll mostly just schedule Timers and cancel them.

    Lockscreen follies

    I have also been trying to get the lock screen filled out so it looks nicer. Before I started, I had a generic lockscreen that had the station logo, name and slug line with a play/pause button and two empty “–:–” timestamps. I now have an empty image (boo) but have managed to set the track name and artist name and the play time. So some progress, some regress.

    The lockscreen setup is peculiar: you set as many of the pieces of data that you know in a

  • More adventures in metadata

    Despite the last set of changes, I still had problems with the iOS app losing its connection to the Azuracast websocket with no way for the code to easily see that had happened, so I dove into the code again, looking for alternatives. I think I’ve got a good solution.

    I’ve added Reachability to the websocket monitor; if I detect a network disconnect, then I force the websocket monitor to disconnect as well so that it is in a known state. When Reachability gets a reconnection signal

  • Spring Elegy: RadioSpiral Spring Equinox Performance

    TL;DR: Giving myself a C on setup, an A on visuals, and an A- on the overall performance.

    As usual with a complicated setup, even though I worked hard for it to be less so this time, I had a major glitch which forced me to lose about 15 minutes of performance time. This performance’s setup was intentionally less complex, but still bit me. I have figured out some things that will keep me from losing the tools to repeat this performance, so that’s something. Anyway. Onward to the rest of this post.

    The setup

    I decided to minimize the possible sources of problems by doing everything on the computer this time. I had problems last time with the interface (mostly because it is TOO BLOODY COMPLICATED) and I decided to eliminate it, so no hardware synths. I also had problems with the iPad staying connected, so I pre-performed one part of the set (erroneously, it turned out, more later) so that I could trigger playback exactly when I wanted it so that the timing of the set would leave me five minutes to hand off to the next artist.

    So first, I recorded the audio from the iPad portion of the performance. I rushed a bit on this, and didn’t realize that I’d set up Garageband to record it in mono. It’s not terrible, just not as good as the stereo original. I’ve made a note to re-record that later in stereo, but I’ll record it as a separate track in GB so I don’t lose the original.

    I then moved it forward in time so that it would end at 55:00; this lets me simply hit play in GB when I start and have the recording start and stop exactly when I want it…if all goes well.

    The rest of the performance was in three parts:

    • A Live session with the base thunderstorm I was using as a continuum through the piece, with added birdsong, bells, and gongs played back as clips.
    • A miRack session (more on why that instead of VCVRack in a sec) that let me fade in and out continuously-running harmonizing lines
    • A second Live set continuing the thunderstorm, but using shortwave radio samples, and bringing back one birdsong sample from the other set.

    Everything used the same harmonic basis (this was accidental, not on purpose, but I’ll take it), which let me establish a mood with the first set, fade in the miRack performance, build it up, and then gradually fade it in and out while I perform the clips in the third set. Partway through the miRack session, the pre-recorded GB track starts, also in the same key, allowing it all to stitch together as a coherent whole.

    Visuals

    I decided to use my standard OBS setup for this performance, and it mostly went okay. As a matter of fact, it streamed the audio even though the stream to the station did not work initially (see below). The greenscreen plugin, with black tweaked to transparent, allowed me to overlay the visualizer on the various apps and combinations of them — this worked really well! — and switch things around as I performed.

    I used Ferromagnetic for the visuals during my set; a composite audio device (apparently created dynamically by Audio Hijack) was visible, and I tried that; it seemed to work way better in terms of Ferromagnetic “listening” to the music.

    After my set, I was able to hook up an Audio Hijack setup that just took the streaming audio from the station (via Music.app) and ran it to the standard output, which allowed me to use both the standard Music.app visualizer (the old-school one; it’s much more visually appealing to me) and Ferromagnetic. I set up “studio mode” so I could watch both sources and crossfade when the visuals were particularly striking in one or the other.

    This worked really well, and I will probably do this again (or Rebekkah will) so that we always have Twitch visuals during all of the performances.

    Issues

    First, my incorrect recording of the iPad performance meant that the soundstage was overly dense, but it still sounded okay, just not as good as it could have.

    Second, I set up ahead of time, and Audio Hijack, which I was using as my funnel for the sound, stopped passing the audio down the path! I struggled with trying to pull blocks out of the path to get it working, but in the end I was forced to reboot the machine in the hope that it would resume working again. Luckily, it did, but this meant that OBS went offline, the music went offline, and I got logged out of Second Life. It took me a significant amount of time to realize I hadn’t gotten back to the concert venue in Second Life after I got the music and OBS running again.

Third, I didn’t watch my levels, and the overall signal was very hot. I think the final recording doesn’t quite clip, but it’s a close thing. Next time I’ll add a limiter in Live and check the levels more closely, with everything running in a test Audio Hijack session beforehand, so I can crank the sliders while playing without needing to monitor the overall levels.

    For next time

    • Set the level limits ahead of time so I don’t go quite so loud.
• Use at least one more machine to offload some of the work. This does move me back toward a more complex setup, but it removes the single point of failure I had this time. This will need some experimentation, but I think the visualizers, OBS, Audio Hijack, and probably the performance software have to be on one machine, with Discord and Second Life on another.
    • Have a better checklist. The one I had worked to get me through the performance, but it didn’t have a disaster recovery path. That needs to be thought out and ready as well.
    • Have something ready that can take over if the whole shebang is screwed. No ideas on this yet, but I want to have a “panic button” to switch to a dependable stream from somewhere else if my local setup goes south. I think I can set up a “just for this performance” playlist on Azuracast that I can have ready to trigger if the performance setup dies.
    • Set an alarm to reboot and verify the setup half an hour or 45 minutes (do it and time it) prior to showtime, so that I arrive at my slot with everything ready to go and configured to hit “stream” and have it work.

    Things I did figure out to fix problems from previous sets

I’ve saved the Live sets this time with all their clips and setup. I still wish I had the setup for The Tree, 1964, but it got lost completely. This time, I’ve definitely got all the samples, all the patches, all the clips, and all the setup, so that I won’t misplace any of it and I can re-perform this piece.

    I’ve also saved the Garageband session and the miRack patch in the same folder, along with my performance notes, so that I can easily re-run everything straight from that folder without a hitch.

    This is all saved on the external disk which is backed up by Backblaze, so it’s as safe as I can make it. I plan to keep doing this for future work so that I am always able to pull up a previous performance and do it again if I want to.

  • Flutter experiences

    TL;DR: Flutter builds are as much fun as Java and Scala ones, and you spend more time screwing with the tools than you do getting anything done. I don’t think I’m going to switch, at least not now.

As I’ve mentioned before on the blog, I maintain an iOS application for RadioSpiral’s online radio station. The app has worked well; the original codebase was Swift-Radio-Pro, which works as an iOS app and a MacOS one as well. (I have been doing some infrastructure changes to support Azuracast, as previously documented on the blog.)

    We do have several, very polite, Android users who inquire from time to time if I’ve ported the radio station app to Android yet, and I have had to keep saying no, as the work to duplicate the app on Android looked daunting, and nobody is paying me for this. So I’ve been putting it off, knowing that I would have to learn something that runs on Android sooner or later if I wanted to do it at all.

    Randal Schwartz has been telling me for more than a year that I really should look at Dart and Flutter if I want to maintain something that works the same on both platforms, and I just didn’t have the spare time to learn it.

Come the end of May 2023, I found myself laid off, so I really had nothing but time. I was going to need to update the app for iOS 16 anyway at that point (the last time I recompiled it, Xcode still accepted iOS 8 as a target!), and I figured now was as good a time as any to see if I could get it working multi-platform.

I started looking around for a sample Flutter radio app, and found RadioSai. From the README, it basically does what I want, but has a bunch of other features that I don’t need. I figured an app I could strip down was at least a reasonable place to start, so I checked it out from GitHub and started to work.

    Gearing up

Setting up the infrastructure: installing Dart and Flutter was pretty easy. Good old Homebrew let me brew install flutter to get those in place, and per instructions, I ran flutter doctor to check my installation. It let me know that I was missing the Android toolchain (no surprise there, since I hadn’t installed anything there yet). I downloaded the current Android Studio (Flamingo in my case), opened the .dmg, and copied it into /Applications as directed.

Rerunning flutter doctor, it now told me that I didn’t have the most recent version of the command-line tools. I then fell into a bit of a rabbit hole. Some quick Googling told me that the command-line tools should live inside Android Studio. I ferreted around in the application bundle and they were just Not There. I went back to the Android Studio site and downloaded them, and spent a fair amount of time trying to get sdkmanager into my PATH correctly. When I finally did, it cheerfully informed me that I had no Java SDK. So off to the OpenJDK site to download JDK 20. (I tried a direct install via brew install, but strangely Java was still /usr/bin/java, and I decided that rather than tracking down where the Homebrew Java went, I’d install my own where I could keep an eye on it.)

I downloaded the bin.tar.gz file and followed the installation instructions, adding the specified path to my PATH… and still didn’t have a working Java. Hm. Looking in the OpenJDK directory, the path was Contents, not jdk-18.0.1.jdk/Contents. I created the jdk-18.0.1 directory, moved Contents into it, and had a working Java! Hurray! But even with dorking around further with the PATH, I still couldn’t get sdkmanager to update the command-line tools properly.

    Not that way, this way

    A little more Googling turned up this Stack Overflow post that told me to forget about installing the command-line tools myself, and to get Android Studio to do it. Following those instructions and checking all the right boxes, flutter doctor told me I had the command-line tools, but that I needed to accept some licenses. I ran the command to do that, and finally I had a working Flutter install!


    Almost.

    When I launched Android Studio and loaded my project, it failed with flutter.sdk not defined. This turned out to mean that I needed to add

flutter.sdk=/opt/homebrew/Caskroom/flutter/3.10.5/flutter

(the location where Homebrew had unpacked Flutter — thank you, find) to local.properties. After that, Gradle twiddled its fingers a while, and declared that the app was ready. (It did want to upgrade the build, and I let it do that.)

    Build, and…

The option 'android.enableR8' is deprecated.
It was removed in version 7.0 of the
Android Gradle plugin.
Please remove it from 'gradle.properties'.

    Okay, I remove it.

    /Users/joemcmahon/Code/radiosai/.dart_tool/ does not exist.

    More Googling, Stack Overflow says Run Tools > Flutter > Pub Get. Doesn’t exist. Okaaaaaay.

    There’s a command line version:

    flutter clean; flutter pub get

That deleted .dart_tool, then recreated it with package_config.json inside. Right!

    Back to Android Studio, still confused about the missing menu entry, and build again. Gradle runs, downloads a ton of POMs and

    Couldn't resolve the package 'radiosai' in 'package:radiosai/audio_service/service_locator.dart'.

Looking one level up, in :app:compileFlutterBuildDebug: Invalid depfile: /Users/joemcmahon/Code/radiosai/.dart_tool/flutter_build/bff84666834b820d28a58a702f2c8321/kernel_snapshot.d.

Let’s delete those and see if that helps…yes, but still can’t resolve radiosai. Okay, time for a break.

    Finally, a build!

    Another Google: I wasn’t able to resolve the package because I needed to pub get again.

Module was compiled with an incompatible version of Kotlin.
The binary version of its metadata is 1.8.0, expected version is 1.6.0.

Another Google. One of the build Gradle files is specifying Kotlin 1.6…it’s in /android/build.gradle. Update that to 1.8.10, build…Kotlin plugin is being loaded, good. Couple of warnings, still going, good.

    BUILD SUCCESSFUL

    Nice! Now, how do I test this thing? Well, there’s Device Manager over on the right, that looks promising. There’s a “Pixel 3a” entry and a “run” button. What’s the worst that could happen?

    Starts up, I have a “running device” that’s a couple inches tall, on its home screen. Hm. Ah, float AND zoom. Cool. Now I realize I have no idea how to run an Android phone, and I don’t see the app.

    https://developer.android.com/studio/run/emulator…nope. Beginning to remember why I didn’t like working in Scala… Gradle upgrade recommended, okay, and now

    Namespace not specified. Please specify a namespace in the module's build.gradle. 

    Specified, still broken…googling…This is a known issue –
    https://github.com/ionic-team/capacitor/issues/6504

    If you are using Capacitor 4, do not upgrade to Gradle 8.


    Yeah, I remember why I stopped liking Scala. git reset to put everything back…

Execution failed for task ':gallery_saver:compileDebugKotlin'.
> 'compileDebugJavaWithJavac' task (current target is 1.8) and 'compileDebugKotlin' task
(current target is 17) jvm target compatibility should be set to the same Java version.
Consider using JVM toolchain: https://kotl.in/gradle/jvm/toolchain

    Fix android/app/build.gradle so everyone thinks we’re using Java 17, which uses a different syntax, ugh.

    Fix it again. Same for the Kotlin target too.

    'compileDebugJavaWithJavac' task (current target is 1.8) and 'compileDebugKotlin' task (current target is 17) jvm target compatibility should be set to the same Java version.

This is apparently actually Gradle 8 still lying around after the (incorrectly) recommended upgrade. Removing ~/.gradle to nuke it from orbit. Also killing android/.gradle.


    [Aside: I am used to using git grep to find things, and it is just not finding them in this repo!]

    Cannot read the array length because "" is null

    WHAT.

    Apparently this means that Gradle 8 is still lurking. Yep, the rm ~/.gradle/* didn’t remove everything because of permissions. Yougoddabefuckingkiddingme. Sudo’ed it, relaunched with the fixes I made above. App runs!


However, it stops working after a bit, with nothing to indicate why. Let’s stop it and restart. The stop button did not stop it; I had to quit Android Studio.

Well. Okay. This is not promising, but let’s see the benefit of using Flutter and check whether the iOS side works. Seems a lot more straightforward, though I’m not doing much in Xcode. cd ios, launch the simulator (important!), flutter run…and we get the Flutter demo project. Looks like the iOS version wasn’t brought over from the Android side. Why did you even do this.

    Do we all remember that I wanted something that worked on both platforms? Gah.

    So I’m putting Flutter aside, cleaning up the ton of disk space all this extra infrastructure took up, and will maybe come back to it another time.

    But for right now, the amount of work involved is absolutely not worth it, and I’d have to write the damn thing from scratch anyway.

    Maybe I’ll run this through one of the LLMs and see if it can get me a common codebase as a starting point, but I am not sanguine.

  • So long, pemungkah@me.com

    That email address is now officially defunct.

    I created it back when I bought my iPhone 5.

    Years ago, it got leaked, and it has since been used for everything from someone in Canada’s VISA card to a bank account in Vietnam to some bozo’s Marriott account. (Hey David: ppppppppbt.)

    It got so bad that when Apple opened up creating Apple IDs with your own email, I did that, and essentially abandoned the me.com address.

I used it for political mail for a while, but I’ve become disillusioned with the idea that letting every random person running for office send me begging letters does any good. (They’re never “I did this because that’s what you sent me to Congress to do”, but “MY OPPONENT HAS MONEY! SEND ME MORE!” — and most of the time it’s futile anyway.) Mostly, though, it got spam, and people using it as a test email address. (An especial screw-you to those people: use MailHog or something. How are you going to know whether the mail you’re sending looks right otherwise?)

    I closed it today. Only had a couple things still left from the downloads I used it for, and I can reinstall them from my primary if I want them.

    A grand experiment, but I’m not sad to never have to deal with it again.

  • Azuracast high-frequency updates, SSE, and iOS background processes

    A big set of learning since the last update.

    I’ve been working on getting the RadioSpiral infrastructure back up to snuff after our Azuracast streaming server upgrade. We really, really did need to do that — it just provides 90% of everything we need to run the station easily. Not having to regenerate the playlists every few weeks is definitely a win, and we’re now able to do stuff like “long-play Sunday”, where all of the tracks are long-players of a half-hour or more.

    But there were some hitches, mostly in my stuff: the iOS app and the now-playing Discord bot. Because of reasons (read: I’m not sure why), the Icecast metadata isn’t available from the streaming server on Azuracast, especially when you’re using TLS. This breaks the display of artist and track on the iOS app, and partially breaks the icecast-monitor Node library I was using to do the now-playing bot in Discord.

    (Side note: this was all my bright idea, and I should have tested the app and bot against Azuracast before I proposed cutting over in production, but I didn’t. I’ll run any new thing in Docker first and test it better next time.)

    Azuracast to the rescue

Fortunately, Azuracast provides excellent now-playing APIs. There’s a straight-up GET endpoint that returns the data, and two event-driven ones (websockets and SSE). The GET option depends on you polling the server for updates, and I didn’t like that on principle: the server is quite powerful, but I don’t want multiple copies of the app hammering it to get updates, and it was inherently not going to be close to a real-time update unless I really did hammer the server.
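For the curious, the GET flavor is about as simple as it sounds. Here’s a Python sketch of what polling it looks like; the endpoint path and JSON field names are my best reading of the Azuracast docs, so double-check them against your install:

```python
import json
import urllib.request

def fetch_now_playing(server: str, shortcode: str) -> dict:
    """One poll of Azuracast's now-playing GET endpoint."""
    url = f"https://{server}/api/nowplaying/{shortcode}"
    with urllib.request.urlopen(url) as response:
        return json.load(response)

def extract_track(payload: dict) -> tuple:
    """Pull (artist, title) out of a now-playing payload."""
    song = payload["now_playing"]["song"]
    return (song["artist"], song["title"])
```

Simple, but every copy of the app running this on a timer is exactly the hammering I wanted to avoid.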

    So that was off the table, leaving websockets and SSE, neither of which I had ever used. Woo, learning experience. I initially tried SSE in Node and didn’t have a lot of success with it, so I decided to go with websockets and see how that went.

    Pretty well actually! I was able to get a websocket client running pretty easily, so I decided to try it that way. After some conferring with ChatGPT, I put together a library that would let me start up a websocket client and run happily, waiting for updates to come in and updating the UI as I went. (I’ll talk about the adventures of parsing Azuracast metadata JSON in another post.)

    I chose to use a technique that I found in the FRadioPlayer source code, of declaring a public static variable containing an instance of the class; this let me do

import Kingfisher
import ACWebSocketClient

client = ACWebSocketClient.shared
...
tracklabel.text = client.status.track
artistlabel.text = client.status.artist
coverImageView.kf.setImage(with: client.status.artURL)

    (Kingfisher is fantastic! Coupled with Azuracast automatically extracting the artwork from tracks and providing a URL to it, showing the right covers was trivial. FRadioPlayer uses the Apple Music cover art API to get covers, and given the, shall we say, obscure artists we play, some of the cover guesses it made were pretty funny.)

Right. So we have metadata! Fantastic. Unfortunately, the websocket client uses URLSessionWebSocketTask to manage the connection, and that class has extremely poor error handling. It’s next to impossible to detect that you’ve lost the connection, or to re-establish it. So it would work for a while, then a disconnect would happen, and the metadata would stop updating.

    Back to the drawing board. Maybe SSE will work better in Swift? I’ve written one client, maybe I can leverage the code. And yes, I could. After some searching on GitHub and trying a couple of different things, I created a new library that could do Azuracast SSE. (Thank you to LaunchDarkly and LDSwiftEventSource for making the basic implementation dead easy.)

    So close, but so far

    Unfortunately, I now hit iOS architecture issues.

    iOS really, really does not want you to run long-term background tasks, especially with the screen locked. When the screen was unlocked, the metadata updates went okay, but as soon as the screen locked, iOS started a 30-second “and what do you think you’re doing” timer, and killed the metadata monitor process.

    I tried a number of gyrations to keep it running and schedule and reschedule a background thread, but if I let it run continuously, even with all the “please just let this run, I swear I know what I need here” code, iOS would axe it within a minute or so.

    So I’ve fallen back to a solution not a lot better than polling the endpoint: when the audio starts, I start up the SSE client, and then shut it down in 3 seconds, wait 15 seconds, and then run it again. When audio stops, I shut it off and leave it off. This has so far kept iOS from nuking the app, but again, I’m polling. Yuck.

    However, we now do have metadata, and that’s better than none.

    On the other hand…

    On the Discord front, however, I was much more successful. I tried SSE in Node, and found the libraries wanting, so I switched over to Python and was able to use sseclient to do the heavy lifting for the SSE connection. It essentially takes an SSE URL, hooks up to the server, and then calls a callback whenever an event arrives. That was straightforward enough, and I boned up on my Python for traversing arbitrary structures — json.loads() did a nice job for me of turning the complicated JSON into nested Python data structures.

    The only hard bit was persuading Python to turn the JSON struct I needed to send into a proper query parameter. Eventually this worked:

subs = {
    "subs": {
        f"station:{shortcode}": {"recover": True}
    }
}

json_subs = json.dumps(subs, separators=(',', ':'))
json_subs = json_subs.replace("True", "true").replace("False", "false")
encoded_query = urllib.parse.quote(json_subs)
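For context, here’s that URL-building step gathered into one function. The /api/live/nowplaying/sse path and the cf_connect parameter are what worked against our server, so treat them as assumptions if your Azuracast version differs. (It also turns out json.dumps already emits lowercase true/false, per the JSON spec, so the string replacement above is belt-and-suspenders.)

```python
import json
import urllib.parse

def build_sse_url(server: str, shortcode: str) -> str:
    """Assemble the Azuracast SSE URL with its cf_connect subscription parameter."""
    subs = {"subs": {f"station:{shortcode}": {"recover": True}}}
    # json.dumps writes booleans as lowercase true/false already
    json_subs = json.dumps(subs, separators=(',', ':'))
    encoded_query = urllib.parse.quote(json_subs)
    return f"https://{server}/api/live/nowplaying/sse?cf_connect={encoded_query}"
```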

    I pretty quickly got the events arriving and parsed, and I was able to dump out the metadata in a print. Fab! I must almost be done!

    But no. I did have to learn yet another new thing: nonlocal in Python.

    Once I’d gotten the event and parsed it and stashed the data in an object, I needed to be able to do something with it, and the easiest way to do that was set up another callback mechanism. That looked something like this:

    client = build_sse_client(server, shortcode)
    run(client, send_embed_with_image)

    The send_embed_with_image callback puts together a Discord embed (a fancy message) and posts it to our Discord via a webhook, so I don’t have to write any async code. The SSE client updates every fifteen seconds or so, but I don’t want to just spam the channel with the updates; I want to compare the new update to the last one, and not post if the track hasn’t changed.
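The webhook part is pleasantly boring: Discord just wants a POST with an embeds array in the body. Here’s a sketch of roughly what my send_webhook does; the field names follow Discord’s webhook API, and the URL and timestamp format are placeholders:

```python
import json
import urllib.request

def build_embed_payload(embed_data: dict) -> dict:
    """Shape our now-playing metadata into Discord's webhook embed format."""
    return {
        "embeds": [{
            "title": embed_data["title"],
            "description": embed_data["description"],
            "timestamp": embed_data["timestamp"],  # ISO 8601 string
            "thumbnail": {"url": embed_data["thumbnail_url"]},
        }]
    }

def send_webhook(webhook_url: str, embed_data: dict) -> None:
    """POST the embed to the Discord webhook; no async code needed."""
    body = json.dumps(build_embed_payload(embed_data)).encode()
    req = urllib.request.Request(
        webhook_url, data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```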

    I added a method to the metadata object to compare two objects:

def __eq__(self, other) -> bool:
    if other is None or not isinstance(other, NowPlayingResponse):
        return False
    return (self.dj == other.dj and
            self.artist == other.artist and
            self.track == other.track and
            self.album == other.album)
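If you’re reconstructing this at home, the class itself is just a little data holder. A stripped-down sketch, with field names assumed from the comparison above; note that start, duration, and artURL deliberately stay out of the equality test, so two polls of the same track compare equal:

```python
from dataclasses import dataclass

@dataclass
class NowPlayingResponse:
    dj: str
    artist: str
    track: str
    album: str
    start: str = ""     # display-only fields: excluded from equality
    duration: str = ""
    artURL: str = ""

    def __eq__(self, other) -> bool:
        # dataclass would compare every field; we only want "same track"
        if not isinstance(other, NowPlayingResponse):
            return False
        return (self.dj == other.dj and
                self.artist == other.artist and
                self.track == other.track and
                self.album == other.album)
```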

    …but I ran into a difficulty trying to store the old object: the async callback from my sseclient callback couldn’t see the variables in the main script. I knew I’d need a closure to put them in the function’s scope, and I was able to write that fairly easily after a little poking about, but even with them there, the inner function I was returning still couldn’t see the closed-over variables.

    The fix was something I’d never heard of before in Python: nonlocal.

    def wrapper(startup, last_response):
        def sender(response: NowPlayingResponse):
            nonlocal startup, last_response
            if response == last_response:
                return
    
            # Prepare the embed data
            local_tz = get_localzone()
            start = response.start.replace(tzinfo=local_tz)
            embed_data = {
                "title": f"{response.track}",
                "description": f"from _{response.album}_ by {response.artist} ({response.duration})",
                "timestamp": start,
                "thumbnail_url": response.artURL,
            }
    
            # Send to webhook
            send_webhook(embed_data)
    
            startup = False
            last_response = response
    
        return sender

    Normally, all I’d need to do would be have startup and last_response in the outer function’s argument list to have them visible to the inner function’s namespace, but I didn’t want them to just be visible: I wanted them to be mutable. Adding the nonlocal declaration of those variables does that. (If you want to learn more about nonlocal, this is a good tutorial.)
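If you haven’t run into nonlocal before, here’s the whole idea in miniature; nothing in this example is specific to the bot:

```python
def make_counter():
    """Closure demo: the inner function mutates the enclosing scope."""
    count = 0

    def bump():
        # Without this declaration, "count += 1" would create a new
        # local variable and raise UnboundLocalError on the read.
        nonlocal count
        count += 1
        return count

    return bump
```

Each closure gets its own count: calling the function returned by make_counter() repeatedly yields 1, 2, 3…, because every call rebinds the same closed-over variable.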

    The Discord monitor main code now looks like this:

    startup = True
    last_response = None
    
    # Build the SSE client
    client = build_sse_client(server, shortcode)
    
    # Create the sender function and start listening
    send_embed_with_image = wrapper(startup, last_response)
    run(client, send_embed_with_image)

    Now send_embed_with_image will successfully be able to check for changes and only send a new embed when there is one.

One last notable thing here: Discord sets the timestamp of the embed relative to the timezone of the Discord user. If a timezone is supplied, Discord does the necessary computations to figure out what the local time is for the supplied timestamp. If no zone info is there, it assumes UTC, which can lead to funny-looking timestamps. This code finds the timezone where the monitor code is running, and sets the timestamp to that.

    from tzlocal import get_localzone
    
    local_tz = get_localzone()
    start = response.start.replace(tzinfo=local_tz)
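One subtlety worth calling out if you borrow this: replace(tzinfo=...) attaches a zone to a naive timestamp without shifting the wall-clock time, which is what we want here because response.start is naive local time. Converting an already-aware time between zones is astimezone()’s job. A quick demonstration, using the standard library’s zoneinfo instead of tzlocal:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

naive = datetime(2024, 6, 1, 20, 30)  # a naive "wall clock" reading

# replace() just labels the time with a zone; the clock still reads 20:30
labeled = naive.replace(tzinfo=ZoneInfo("America/Los_Angeles"))

# astimezone() converts between zones: 20:30 PDT is 03:30 UTC the next day
as_utc = labeled.astimezone(timezone.utc)
```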

    And now we get nice-looking now-playing info in Discord:

    Shows two entries in a Discord channel, listing track title in bold, album name in italics, and artist name, with a start time timestamp and a thumbnail of the album cover.

    Building on this

    Now that we have a working Python monitor, we can now come up with a better solution to (close to) real-time updates for the iOS app.

    Instead of running the monitor itself, the app will register with the Python monitor for silent push updates. This lets us offload the CPU (and battery) intensive operations to the Python code, and only do something when the notification is pushed to the app.

    But that’s code for next week; this week I need to get the iOS stopgap app out, and get the Python server dockerized.

  • Swift Dependency Management Adventures

    I’m in the process of (somewhat belatedly) upgrading the RadioSpiral app to work properly with Azuracast.

    The Apple-recommended way of accessing the stream metadata just does not work with Azuracast’s Icecast server – the stream works fine, but the metadata never updates, so the app streams the music but never updates the UI with anything.

    Because it could still stream (heh, StillStream) the music, we decided to go ahead and deploy. There were so many other things that Azuracast fixed for us that there was no question that decreasing the toil for everyone (especially our admin!) was going to make a huge difference.

    Addressing the problem

    Azuracast supplies an excellent now-playing API in four different flavors:

    • A file on the server that has now-playing data, accessible by simply getting the contents of the URL. This is only updated every 30 seconds or so, which isn’t really good enough resolution, and requires the endpoint be polled.
    • An API that returns the now-playing data as of the time of the request via a plain old GET to the endpoint. This is better but still requires polling to stay up to date, and will still not necessarily catch a track change unless the app polls aggressively, which doesn’t scale well.
    • Real-time push updates, either via SSE over https or websocket connection. The push updates are less load on the server, as we don’t have to go through session establishment every time; we can just use the open connection and write to it. Bonus, the pushes can happen at the time the events occur on the server, so updates are sent exactly when the track change occurs.

I decided that the websocket API was a little easier to implement. With a little help from ChatGPT to get me an initial chunk of code (and a fair amount of struggling to figure out the proper parameters to send for the connection request), I used a super low-rent SwiftUI app to wrap AVAudioSession and start up a websocket client separately to manage the metadata; that basically worked, and let me verify that the code to monitor the websocket was working.

    I was able to copy that code inside of FRadioPlayer, the engine that RadioSpiral uses to do the streaming, but then I started running into complications.

    Xcode, Xcode, whatcha gonna do?

    I didn’t want to create an incompatible fork of FRadioPlayer, and I felt that the code was special-purpose enough that it wasn’t a reasonable PR to make. In addition, it was the holidays, and I didn’t want to force folks to have to work just because I was.

    So I decided to go a step further and create a whole new version of the FRadioPlayer library, ACRadioPlayer, that would be specifically designed to be used only with Azuracast stations.

    Initially, this went pretty well. The rename took a little extra effort to get all the FRadio references switched over to ACRadio ones, but it was fairly easy to get to a version of the library that worked just like FRadioPlayer, but renamed.

    Then my troubles began

    I decided that I was going to just include the code directly in ACRadioPlayer and then switch RadioSpiral to the new engine, so I did that, and then started trying to integrate the new code into ACRadioPlayer. Xcode started getting weird. I kept trying to go forward a bit at a time — add the library, start trying to include it into the app, get the fetch working…and every time, I’d get to a certain point (one sample app working, or two) and then I’d start getting strange errors: the class definition I had right there would no longer be found. The build process suddenly couldn’t write to the DerivedData directory anymore. I’d git reset back one commit, another, until I’d undone everything. Sometimes that didn’t work, and I had to throw away the checkout and start over. The capper was “Unexpected error”, with absolutely nothing to go on to fix it.

    Backing off and trying a different path

    So I backed all the way out, and started trying to build up step-by-step. I decided to try building the streaming part of the code as a separate library to be integrated with ACRadioPlayer, so I created a new project, ACWebSocketClient, and pulled the code in. I could easily get that to build, no surprise, it had been building, and I could get the tests of the JSON parse to pass, but when I tried to integrate it into ACRadioPlayer using Swift Package Manager, I was back to the weird errors again. I tried for most of a day to sort that out, and had zero success.

    The next day, I decided that maybe I should follow Fatih’s example for FRadioPlayer and use Cocoapods to handle it. This went much better.

    Because of the way Cocoapods is put together, just building the project skeleton actually gave me some place to put a test app, which was much better, and gave me a stepping stone along the way to building out the library. I added the code, and the process of building the demo showed me that I needed to do a few things: be more explicit about what was public and what was private, and be a little more thoughtful about the public class names.

    A couple hours work got me a working demo app that could connect to the Azuracast test station and monitor the metadata in real time. I elected to just show the URL for the artwork as text because actually fetching the image wasn’t a key part of the API.

    I did then hit the problem that the demo app was iOS only. I could run it on MacOS in emulation mode, but I didn’t have a fully-fledged Mac app to test with. (Nor did I have a tvOS one.) I tried a couple variations on adding a new target to build the Mac app, but mostly I ended up breaking the work I had working, so I eventually abandoned that.

I then started working step by step to include the library in ACRadioPlayer. FRadioPlayer came with iOS apps (UIKit and SwiftUI), a native Mac app, and a tvOS app. I carefully worked through getting the required OS versions to match in the ACWebSocketClient podspec, the ACRadioPlayer Podfile, and the ACRadioPlayer Xcode project. That was tedious but eventually successful.

    Current status

    I’ve now got the code properly pulled in, compatible with the apps, and visible to each of the apps. I’ll now need to pull in the actual code that uses it from the broken repo (the code was fine, it was just the support structures around it that weren’t) and get all the apps working. At that point I can get both of the libraries out on Cocoapods, and then start integrating with RadioSpiral.

    In general, this has been similar to a lot of projects I’ve worked on in languages complex enough to need an IDE (Java, Scala, and now Swift): the infrastructure involved in just getting the code to build was far more trouble to work with and maintain, and consumed far more time, than writing the code itself.

Writing code in Perl or Python was perhaps less flashy, but it was a lot simpler: you wrote the code and ran it, and it ran or it didn’t, and if it didn’t, you ran it under the debugger (or used the tests, or worst case, added print statements) and fixed it. You didn’t have to worry about whether the package management system was working, or if something in the mysterious infrastructure underlying the application was misconfigured or broken. Either you’d installed a module and told your code to include it, or you hadn’t. Even Go was a bit of a problem in this way; you had to be very careful about how you got all the code in place.

Overall, though, I’m pretty happy with Cocoapods and the support it has built in. Because FRadioPlayer was built using Cocoapods as its package management, I’m hoping that the process of integrating it into RadioSpiral won’t be too tough.

  • So what am I doing now? 2024 edition

After my sudden layoff from ZipRecruiter in 2023, I decided that I needed to step back and think about things. The job market was (and at the end of 2024, remains) abysmal. I did a couple of interviews, but me and Leetcode don’t get along, and honestly, watching me attempt to code under utterly unrealistic time constraints is a really goofy way to see if I can write good, maintainable code on a schedule.

    So after about 3 months of that, I decided that I would look at my options and see what I could do that wasn’t necessarily just another programming job.

    I’m currently doing a number of things, some of which are bringing in income, though not lots of it, and others which are moving other parts of my life ahead.

    • I auditioned for, and got, a job as one of the editors for the Miskatonic University Podcast. I’ve certainly been doing audio editing for a long time; it seemed only reasonable to get paid for it. Podcast editing is a detail-oriented task, and those are the kind of tasks I enjoy. It’s a real pleasure to take the raw audio and produce a professional result. Dave and Bridgett are, of course, very professional themselves and make the job considerably easier than it could be, but the audio still needs the attention that cleans up the dead space, removes the pauses and um’s and er’s, tidily clips out those small flubs, and turns out something that is a pleasure to listen to. And I get to use my cartoon sound effects library!
    • I’ve edited a Call of Cthulhu scenario and from that have a repeat customer for whom I’m now editing a full game manual. This is exceptionally pleasant though intense work. I’ve been able to help with making the prose sing, clarifying, and prompting for how the author can make the product better. I think this is developmental editing plus line edits and maybe collaboration, and honestly I think I may be undercharging significantly, but I want to get a few successful edits into my portfolio before I start asking for more money.
    • I’m learning Swift 5 and SwiftUI. I had an all-hands-on-deck (okay, all-me-on-deck, I’m the only one working on it) moment last year with the RadioSpiral app – it had been working beautifully, and I had benignly neglected it for about 3 years…only to have Apple drop me a “hey, you quit updating this, so we’re gonna drop it if you don’t do an update in 90 days” email. So I had to bring it up to Swift 5 and Xcode 15 pronto. Some tamasha with “we don’t know if you’re allowed to stream this, prove it” from Apple Review was actually the hard part of getting it approved, but I managed with a couple of weeks to spare. (A lot of that was needing to noodge Mike to get me a “yes, I run the station, yes, this is official, yes, we have permission” letter to upload. Requesting help from Apple Review after repeated rejections helped a ton, because the rejection notices couldn’t tell me exactly what the problem was, and revising the code blindly wasn’t going to fix it. I got a phone call, a clarification, and we were back in business.) Now I’m looking at a new version using SwiftUI sometime soon.
    • Started working on replacing our old broadcast setup with Azuracast. We’ll probably switch over before the end of the year. Azuracast has a ton of stuff that we really want, and it will let us simplify operations significantly. Its APIs will let me pull in more info in the RadioSpiral app (notably the real current DJ and the play history…up to a year!). We’re almost there.
    • Started working on several other Swift projects, details still under wraps until I’m done. At least one of the projects is a brand-new thing that I needed badly; I’m hoping that other people doing the same thing will realize they needed it too, but just didn’t think of it, and will buy a copy. Another is a niche thing which I think will be convenient for online writers’ critique groups, and one other is a special tide-clock app just for me that maybe others will enjoy too.
    • Because I’ve mostly forgone income this year, I’ll be able to roll over a chunk of money from the 401k to my Roth IRA. I’ll still need to pay taxes on it, but at least it will be now while my income is effectively zero and I can minimize the tax hit.
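    To make the Azuracast bullet above concrete: the station info I want in the app (the real current DJ and the play history) comes back as one JSON payload from Azuracast’s now-playing API. Here’s a minimal sketch of picking it apart, with the caveat that the field names follow Azuracast’s documented response shape as I understand it, the sample payload is invented for illustration, and the helper names (`current_dj`, `recent_tracks`) are mine, not part of any API.

    ```python
    import json

    # Invented sample of the shape Azuracast's now-playing endpoint returns
    # (assumed field names: live.is_live, live.streamer_name, song_history).
    sample = json.loads("""
    {
      "live": {"is_live": true, "streamer_name": "DJ Example"},
      "now_playing": {"song": {"artist": "Some Artist", "title": "Some Track"}},
      "song_history": [
        {"song": {"artist": "Earlier Artist", "title": "Earlier Track"}}
      ]
    }
    """)

    def current_dj(payload):
        """Return the live streamer's name, or a default when automation is playing."""
        live = payload.get("live", {})
        return live.get("streamer_name") if live.get("is_live") else "RadioSpiral AutoDJ"

    def recent_tracks(payload):
        """Flatten the play history into 'Artist - Title' strings."""
        return [f'{e["song"]["artist"]} - {e["song"]["title"]}'
                for e in payload.get("song_history", [])]

    print(current_dj(sample))      # DJ Example
    print(recent_tracks(sample))   # ['Earlier Artist - Earlier Track']
    ```

    The same two lookups translate pretty directly into `Decodable` structs on the Swift side, which is where they’ll actually live in the app.
    
    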

    Next year? Well, we’ll have to see.

    I did need some rest, badly; I was still fighting the combined MRSA/Eikenella corrodens infection (as featured on House; never have a disease featured on House) last year, and wasn’t clear of it until three months after my layoff. Spending the sabbatical learning things and looking at options other than coding was useful, but I certainly wouldn’t mind a real income again.

    I’m planning to look at new things in the new year, but for now, I’m trying to finish off this year’s projects, get our retirement money on a good footing…and then we’ll see. I think I’ll need to pick up something with a dependable, above-poverty-level paycheck, but what that will be I don’t know.