Category: Uncategorized

  • So what am I doing now? 2024 edition

    After my sudden layoff from ZipRecruiter in 2023, I decided that I needed to step back and think about things. The job market was (and, at the end of 2024, remains) abysmal. I did a couple of interviews, but me and Leetcode don’t get along, and I’m honestly convinced that watching me attempt to code under utterly unrealistic time constraints is a really goofy way to see if I can write good, maintainable code on a schedule.

    So after about 3 months of that, I decided that I would look at my options and see what I could do that wasn’t necessarily just another programming job.

    I’m currently doing a number of things, some of which are bringing in income, though not lots of it, and others which are moving other parts of my life ahead.

    • I auditioned for, and got, a job as one of the editors for the Miskatonic University Podcast. I’ve certainly been doing audio editing for a long time; seemed only reasonable to get paid for it. Podcast editing is a detail-oriented task, and those are the kind I enjoy. It’s a real pleasure to take the raw audio and produce a professional result. Dave and Bridgett are, of course, very professional themselves and make the job considerably easier than it could be, but the audio still needs that attention that cleans up the dead space, removes the pauses and um’s and er’s, tidily clips out those small flubs, and turns out something that is a pleasure to listen to. And I get to use my cartoon sound effects library!
    • I’ve edited a Call of Cthulhu scenario and from that have a repeat customer for whom I’m now editing a full game manual. This is exceptionally pleasant though intense work. I’ve been able to help with making the prose sing, clarifying, and prompting for how the author can make the product better. I think this is developmental editing plus line edits and maybe collaboration, and honestly I think I may be undercharging significantly, but I want to get a few successful edits into my portfolio before I start asking for more money.
    • I’m learning Swift 5 and SwiftUI. I had an all-hands-on-deck (okay, all-me-on-deck, I’m the only one working on it) moment last year with the RadioSpiral app – it had been working beautifully, and I had benignly neglected it for about 3 years…only to have Apple drop me a “hey, you quit updating this, so we’re gonna drop it if you don’t do an update in 90 days” email. So I had to bring it up to Swift 5 and Xcode 15 pronto. Some tamasha with “we don’t know if you’re allowed to stream this, prove it” from Apple Review was actually the hard part of getting it up, but I managed with a couple weeks to spare. (A lot of that was needing to noodge Mike to get me a “yes, I run the station, yes this is official, yes, we have permission” letter to upload. Requesting help from Apple Review after repeated rejections also helped a ton, because the rejections couldn’t tell me exactly what the problem was, and revising the code blindly wasn’t going to fix it. I got a phone call, a clarification, and we were back in business.) Now looking at a new version using SwiftUI sometime soon.
    • Started working on replacing our old broadcast setup with Azuracast. We’ll probably switch over before the end of the year. Azuracast has a ton of stuff that we really want and will let us simplify operations significantly. The APIs will let me pull in more info in the RadioSpiral app (notably the real current DJ and play history…up to a year!). We’re almost there.
    • Started working on several other Swift projects, details still under wraps until I’m done. At least one of the projects is a brand-new thing that I needed badly; I’m hoping that other people doing that same thing will realize they needed it too, but just didn’t think of it, and will buy a copy. Another is a niche thing which I think will be convenient for online writers’ critique groups, and one other is a special tide-clock app just for me that maybe others will enjoy too.
    • Because I’ve mostly forgone income this year, I’ll be able to roll over a chunk of money from the 401k to my Roth IRA. I’ll still need to pay taxes on it, but at least it will be now while my income is effectively zero and I can minimize the tax hit.

    Next year? Well, we’ll have to see.

    I did need some rest, badly; I was still fighting the combined MRSA/Eikenella corrodens infection (as featured on House; never have a disease featured on House) last year until 3 months after my layoff, and wasn’t clean until then. Spending the sabbatical learning things and seeing about options other than coding was useful, but I certainly wouldn’t mind a real income again.

    I’m planning to look at new things in the new year, but for now, I’m trying to finish off this year’s projects, get our retirement money on a good footing…and then we’ll see. I think I’ll need to pick up something with a dependable, above-poverty-level paycheck, but what that will be I don’t know.

  • OCLP experience update: back to Ventura

    I’ve been running OCLP (the Open Core Legacy Patcher) on my 2012 MacBook Pro; recently I ran softwareupdate from the command line and accidentally upgraded to Sonoma from Ventura. The experience was definitely mixed.

    Sonoma handled day-to-day work mostly okay. Xcode 15.4 ran fine. Where I hit a problem, though, was when I tried running Azuracast under Docker. The machine ran insanely hot, so hot that it started throwing screen glitches. Rather than burn out my GPU, I elected to downgrade to Ventura. Here’s how that went. (Spoiler: a lot of toil.)

    Getting Ventura back

    The first step was to get Ventura back on the machine. This wasn’t particularly hard; I just needed to follow the standard OCLP procedure, but install to a new partition on my internal SSD. This cost me about another 200GB of space, but went well. I was able to install and have Ventura in good shape in a couple of hours.

    Retrieving the data from Sonoma

    Here’s where we started having problems.

    I had hoped that I’d be able to use Migration Assistant to bring the data back from Sonoma to Ventura, but no dice. Migration Assistant looked at the Sonoma disk, said, “nuh-uh, I ain’t downgrading” and refused to even consider mounting the disk. This meant I’d have to port everything back from that disk to the new one manually.

    My first try was to rsync it over. This failed because I didn’t have enough space for two copies of the data. I deleted the data from the Ventura install and tried again. This time I created a service account with admin privileges and copied ~/Library over from Sonoma. This didn’t seem to work either; in particular, iCloud login was broken.

    Fixing the broken copy

    After thinking about it a while, I decided that the problem was probably permissions. From the service account, I wiped the Ventura copy of my account again, and copied in two steps. First, I copied ~/Library over, then chown‘ed it to my user on the Ventura disk. I logged in as myself, set up iCloud, and all was good. Now came the question of moving the data without filling the disk.

    I was able to use rsync (from the service account again), but this time I added --remove-source-files and --ignore-existing to the command. This copied over only the files from Sonoma that I didn’t already have on Ventura, deleting them from the source as they transferred. After this finished, I logged in to my Ventura account, iCloud was okay, and all my files were back.

    I then rebooted into the installer again, removed the Sonoma partitions, and was ready to go.

    I’m currently running Azuracast under Docker, and having it ingest the RadioSpiral tracks from my iTunes library. It’s running warm, but not hot, and there’s no more screen glitching. I’ll probably leave it on Ventura unless someone else running the same machine gets good performance from Sequoia.

    And I can always run Linux if all else fails.

  • Keep OCLP up to date, or recovering from an overenthusiastic software update

    I had the misfortune to have to learn this (and how to fix it), so I’m documenting it here for the next person who does this to themselves.

    We open on a MacBook Air 2012, updated to Sonoma 14.0 with OCLP. All is well. The machine runs…okay. It would probably be happier on Ventura, or Monterey, but because of reasons, I had updated it all the way to Sonoma so that there was no question of compatibility with the primary machine it was replacing while I was on travel.

    I used the machine on travel, and definitely found that it’s not quite up to the task of running Sonoma; something to be dealt with at a different time. The real issue was that I did not update OCLP to 1.5.0 as soon as that release was out. This meant that when Sonoma 14.5 became available and the machine auto-updated…it broke.

    Symptoms were that the trackpad and keyboard worked right up until login completed, and then did not work at all. Couldn’t run the browser to download the OCLP update, nada.

    Normally, I’d shrug, erase the disk, and reinstall, but this was a bit of an issue because I had files that I wanted to get off this device. (Yes, I know, I should have had backups, but I worked on the plane while I had no internet, and I hadn’t had the machine up long enough for Backblaze to finish a new incremental before the software update ran.)

    I tried a number of things: safe mode, doing a reinstall from Internet Recovery (we’d like to install El Capitan! sorry, your disk isn’t usable because I don’t recognize this filesystem)…and got nowhere. This was beginning to look bad.

    Then I remembered I had a Carbon Copy Cloner backup on one of my externals. Hm. Thought this was the Air’s Sonoma, but it seemed to be Mojave from my MacBook Pro 2012. Trying to boot it couldn’t be worse than what was going on, so I booted, held down Option, and there was “Mojave” in the picker list. Chose it, crossed my fingers…and it booted!

    I was able to download the latest OCLP (1.5.0), install it, run it, reinstall OCLP to the Air’s disk, and most importantly, reinstall the root patches. After that it was clear sailing: I shut down, restarted from the Air’s internal disk, and I was back in business on Sonoma 14.5.

    The primary, most important lesson: run OCLP periodically and make sure it’s up to date! If I had done that as soon as I got home, the 14.5 upgrade would have Just Happened and everything would have been fine.

    The secondary, also important lesson: disable automatic updates on your OCLP machines, and don’t update until you’ve verified that the most recent OCLP is installed and handles the version of the OS that you’ll be installing manually when you’re ready.

    The third lesson: after you have a working install of whatever OS with OCLP, make a bootable backup immediately. If I’d had that to hand, it would have taken 15-20 minutes to fix the issue. As it was, I spent almost a full afternoon trying to fix the installation before trying the Mojave backup that wasn’t even for the affected machine. (I think I used up my luck for a couple of months on that one.)

  • OCLP experiences on a 2012 MacBook Pro

    TL;DR

    OCLP works fine, if you don’t forget your damn firmware password. If you did, persist even if the Apple Store tells you your machine is “obsolete”. Mobile Kangaroo San Jose rules. Oakridge Apple store, not so much.

    The history

    We’ve owned a 2012 MacBook Pro 15″ since about 2014, when Shymala finally outgrew her 2009 MacBook Air and needed a faster, bigger, and better machine. We chose the 2012 MBP because everything was still upgradable (memory and disk). She used it for a good six or seven years before she wanted to upgrade to something lighter and (most importantly) faster — OS and application upgrades had vastly slowed it down, and it ran hot most of the time.

    First upgrade

    The machine sat around for a couple years until I got let go from WhiteHat and I realized I had no personal computer at all. (Resulting in the loss of a lot of my personal files, sadly, because I did not learn the canonical lesson: a work computer is not “yours”.) So I got the machine out of storage, and yeah, it was slow, and not up to what I wanted to do with it. I upgraded the memory to the theoretical max (16GB, which it supports, just not officially), and swapped out the drive for a Crucial 2 TB SSD.

    It was like a brand new machine! It ran the then-current OS perfectly. It did run a bit hot sometimes, but it was fast enough to compete with her old machine, and nearly as good an experience as the new laptop from ZipRecruiter (also an Intel machine in the early days of my tenure).

    We had added a firmware lock to the machine because there was some concern about it getting stolen while Shymala was living in Brooklyn, and we wrote it down. Or so we thought. OS updates were installing, everything was fine up to Catalina. The machine was left behind on updates past that, but generally this wasn’t an issue, as it was still doing what was needed.

    The first stirrings of trouble were when Big Sur came out, and the new version of Xcode required it. This made the machine less useful by quite a bit. I could still use it for streaming and music production, and it ran Second Life fine; Photos worked, Acorn worked, so basically it was still great for everything but iOS development. I didn’t really need to do any development at the time, as the RadioSpiral app was working and stable, so I left it.

    Come 2023, I was laid off from ZipRecruiter. They were nice enough to let us all keep our laptops, and in the interim I’d gotten an M1 upgrade, so I was okay for staying up to date with the OS and Xcode.

    The scramble and the block

    This came in handy in October 2023, when I got a note from Apple that said, essentially, “dude, you’re not updating your app, and if you don’t do it now, we’re going to remove it. You have 90 days.”

    And I hadn’t updated the app since Swift 3. Oops. I spent a couple of weeks catching the app up to date, and in the process I realized that I now had only one machine that I could do the work on. I needed to use Xcode 15, and the minimum OS was Ventura, several versions past Catalina. I was okay, because I had one machine that could run Xcode 15, but I thought I’d better see if I could come up with a backup. If something happened to the main machine, I was going to be SOL.

    Fortunately, Open Core Legacy Patcher was now available. We’d used it once successfully to update a 2015 Air all the way to Sonoma — ran Word and OBS beautifully, and that’s what we needed it for — but I didn’t want to waste the disk space it’d take to run Xcode on it (it only has a 256 GB SSD; on this machine that is upgradeable, but I wasn’t feeling like doing the delicate surgery necessary), and it was really supposed to be dedicated to Shymala’s work while on travel. I am not a speedy iOS developer, and sharing a laptop is never a great experience.

    So now I needed to unlock the firmware on the 2012 Mac. At this point I discovered that I couldn’t remember the password and that all the records of what I thought it was were wrong.

    I go to the Apple website, and check with Apple support on what my options are. They tell me I need the original receipt. Well. It’s 12 years later and multiple moves, and I definitely do not have a copy. Fortunately my tier-1 Apple support rep was able to push this up the chain and managed to find the purchase order in the archives. (Side note: Apple level 1 support reps — at least the ones on chat — rule.)

    Good, we’ve crossed that hurdle. I set up an appointment at the Oakridge Apple Store — they’re in the neighborhood, so they’re by far the easiest to work with — and took the machine in. The receipt was fine, but the tech tried a couple of times to run the unlock software and couldn’t get it to work. He declared that the machine was obsolete, and that Apple couldn’t help.

    Well. That was a bummer. I went home and put the machine aside for a while. A couple of months later, when it was clear I’d be traveling to Malaysia, I came back and said to myself, “okay, level 1 support was sure this would work. I should try again, but somewhere else.” I chatted with level 1 again, and my rep was enthusiastic about getting it unlocked. She scheduled a call for level 2 to call me back…and I missed the call because of another meeting. No problem, I thought, I’ll call back.

    So I call back. Level 1 phone support is not the same as level 1 chat support. I’m sure the rep was doing her job as she was supposed to, but essentially she blocked me from level 2, told me my machine was obsolete, and basically to buzz off and stop wasting her time.

    This seemed like a major setback, but I had another option up my sleeve.

    A little bit previously, we’d had Shymala’s LED Cinema Display fail to come back on after a power surge, despite it being behind the surge protector. We’d taken that to Oakridge, and they declared it dead and said it’d have to be replaced. We decided to try an indie shop just to see if they could do something the Apple Store couldn’t. San Jose Mobile Kangaroo was the closest non-Apple store, and we figured that if they could fix it, it’d definitely be better than spending $4K to replace the monitor or taking a chance on someone else’s used one. Their techs were able to get it reset and working again in less than a day, and it didn’t seem like they’d had any trouble at all.

    So the firmware reset seemed like something to try them for. Worst case they couldn’t do it either, and I wouldn’t be in any worse shape. Took it in, and by golly, they were able to reset it right after Apple gave them the OK. (I suspect it was because they used Ethernet directly instead of via a USB dongle, which was how the Oakridge store tried it.) At any rate, I had a fixed machine. It did run me $125, but that’s a ton cheaper than buying another machine that could run newer OSes.

    OCLP experience

    OCLP was not seamless on the 2012 machine. On the 2015 machine, it was dead easy: download the installer for the OS, run OCLP to build the installer USB, boot from the installer, install, machine reboots itself a few times, done.

    On the 2012 Mac, it was…bumpy.

    The USB stick built fine, but when I booted, I ended up at the recovery screen. Tried in safe mode. Recovery screen. I tried a couple of other things and ended up crashing my Catalina install to the point that I’d broken the boot record on the HD and had to use Internet Recovery to reinstall Catalina.

    Okay, well. Not great. Got the machine back up and tried again, this time with Big Sur, as I thought maybe I’d tried to go too far too fast…still back at the recovery screen. Well, what the hell. Let’s try recovery. Pick an account, password…and “Install Big Sur from USB”. Well, shit. I could have tried this before! Okay. Chose that option — and Big Sur starts installing, and succeeds! Woo hoo!

    Conclusion

    I’ve now rebuilt the Ventura installer and followed the instructions, going through recovery again, and Ventura is now installing on the 2012 Mac. I’m going to finish up, port everything from the M1 Mac over to the 2012 one, verify it’s all working, and then I can delete the old Catalina partitions and just use Ventura on the new machine. [Note: while writing this, we’re on the third reboot after the initial install, all seems to be going okay. Fourth boot while writing that sentence, but I’m pretty optimistic]

    I probably could have gone all the way to Sonoma, but I’m going to stay backlevel for now. My strategy on the 2012 Mac is going to be “update as little as possible other than security fixes” unless something pushes me forward (most likely Xcode).

    I’ll have my backup machine, and I’ll feel safe taking the M1 with me on travel — and if at some later point I can’t upgrade the Intel Mac further, it’ll work fine as a Linux or BSD machine now that it’s unlocked.

    Also: if I do a firmware lock again, that goes straight into 1Password, which would have prevented 90% of all these gyrations in the first place. $125 is a bit expensive to learn that lesson!

  • Leveraging an outage to build community and consensus

    We had our first extended outage at RadioSpiral this weekend, and I’m writing about it here to point out how a production incident can help bring a team together not only technically, but as a group.

    The timeline

    On Sunday evening, about an hour before Tony Gerber’s Sunday show, RadioSpiral went offline. Normally, the AirTime installation handles playing tracks from the station library when there’s no show, and it played a track…and then stopped. Dead air.

    The station has been growing; we’ve added two new DJs, doubling the number of folks who are familiar with servers, Linux, etc. Everyone who was available (pretty much everyone but our primary sysadmin, who set everything up and who is in the UK) jumped in to try to see what was up. We were able to log in to AirTime and see that it was offline, but not why; we tried restarting the streaming service, and the server itself, but couldn’t get back online.

    We did figure out that we could connect to the master streaming port so that Tony could do his show, but after that, we were off the air for almost 12 hours, until our primary sysadmin was up, awake, and had finished his work day.

    A couple of hours of investigation on his part finally determined that Let’s Encrypt had added a RewriteRule to the Airtime configuration that forced all URLs to HTTPS; unfortunately, Airtime needs HTTP for its internal APIs, and that switchover broke it. Removing the rule and restarting the server got us back online, and our very patient and faithful listeners trickled back in over the day.

    Now what?

    While we’d not been able to diagnose and fix the problem, we had been chatting in the staff channel on the RadioSpiral Discord server, and considering the larger issues.

    RadioSpiral is expected to be up 24/7, but we’re really running it more like a hobby than a business. This is reasonable, because it’s definitely not making any of us money, at least not directly. (Things like sales of albums by our DJs, etc., are their business and not part of the station’s remit.) This means that we can have situations like this one, where the station could be offline for an extended amount of time without recourse.

    Secondarily, RadioSpiral is small. We have three folks who are the core of actual station operations, and their contributions are very much siloed. If something should happen to any one of the three of us, it would currently be a scramble to replace that person, and it could possibly end up with an extended loss of that function, whether broadcast operations, the website, or community outreach and the app.

    So we started looking at this situation, and figuring out who currently owned what, and how we could start fixing the single points of failure:

    • Station operations are on an ancient Linux release
    • We’re running an unsupported and unmaintained version of Airtime. It can’t even properly reset passwords, a major problem in an outage if someone can’t get in.
    • The MacOS/iOS app is handled by one developer; if that person becomes unavailable, the app could end up deleted from the store if it’s not maintained.
    • The website is being managed by one person, and if that person becomes unavailable…well, the site will probably be fine until the next time the hosting bill isn’t paid, but if there were any issues, we’d be SOL.
    • We do have documentation, but we don’t have playbooks or process for problem solving.
    • We don’t have anywhere that is a gathering point when there’s a problem.
    • We don’t have project tracking so we can know who’s doing what, who their backup or backups are, and where things are in process.
    • We don’t have an easily-maintained central repository of documentation.

    What we’re doing

    I took point on starting to get this all organized. Fixing all of the things above is going to take time and some sustained effort to accomplish, and we’re going to want to make sure that we have everything properly set up so that we minimize the number of failure points. Having everyone onboard is critical.

    • We’re going to move operations to a newer, faster, and cheaper server running a current LTS Ubuntu.
    • We’re going to upgrade from the old unsupported AirTime to the community-supported LibreTime.
    • We’re figuring out who could get up to speed on MacOS/iOS development and be ready to take over the app if something should happen that means I can’t continue maintaining it. At the moment, we’re looking at setting up a process to bump the build number, rebuild with the most current Xcode, and re-release every six months or so to keep the app refreshed. Long-term we’ll need a second developer (at least) who can build and release the app, and hopefully maintain it.
    • We haven’t yet discussed what to do about the website; it is already a managed WordPress installation, so it should be possible to add one or more additional maintainers.
    • We are going to need to collect the docs we have somewhere that they can be maintained more easily. This could be in a shared set of Google docs, or a wiki; we’re currently leaning toward a wiki.
    • We need project tracking; there’s no need for a full-up ticketing process, at least yet. We think that Trello should do well enough for us.

    We have set up some new Discord channels to keep this conversation open: #production-incidents, to make tracking any new problems easier, and #the-great-migration, to keep channels open as we move forward in the migration to our new setup.

    Everyone is on board and enthusiastic about getting things in better shape, which is the best one could want. It looks good for RadioSpiral’s future. Admittedly we should have done this before a failure, but we’re getting it in gear, and that’s better than ignoring it!

  • “Projects in Flight”

    First a confession. I tend to have enthusiasms, work hard on them for a while, and then have something else interesting come across my radar, which will then become my new enthusiasm. This tends to lead to a lot of half-completed things, which I then feel bad about and avoid, causing me to not get anything done, making me feel even worse.

    I’ve decided that I’m going to try a different strategy: “projects in flight”. I’m embracing the fact that I have enthusiasms, and lots of them. I contain multitudes. And this is good.

    So instead of feeling bad that I have a dozen projects that aren’t getting anywhere, I’m going to acknowledge that I have a lot of interests, and more of them than I have time to do. So some of them don’t pan out. Some of them get partway through, and then I discover that the problem is better solved a different way, or that the thing I want to do isn’t actually as good as I thought, or whatever. I am allowed to fail.

    Think about it this way: for every Google or Facebook, there are a hundred startups that try to do something, get partway in, and fail. Maybe the idea wasn’t so great. Maybe the resources to do the thing they wanted to do just aren’t feasible, or available, or affordable. Maybe they just can’t get someone to give them the seed money to try.

    All these projects fail. And the entrepreneurs don’t feel bad about themselves if they do. They gave it the shot they could give it, with the effort and resources they had at hand, and it didn’t work out – and they move on to their next project.

    So I’ve decided to embrace the entrepreneurial mindset for my personal projects. I’m keeping a list of everything I’m doing, from the trivial to the complex, and allowing myself to be happy that I am creative and multifaceted; if something doesn’t get done, it stays on the list as something to come back to, unless I decide it’s not worth coming back to…and then it goes into the “idea pool”. Maybe it’ll trigger something else later. Maybe it won’t. It’s fine.

    It hasn’t failed. I haven’t failed. I’ve just discovered that the way I approached it this time didn’t succeed. It was my AltaVista, or Ask Jeeves, or Yahoo! Search instead of my Google. Maybe on another look later, with more information, more experience, more time, and more energy, it will succeed.

    But I don’t have to feel bad about it anymore. I can be proud and happy that I’m trying things and doing things. Yes, I do want to finish things too, but I can stop looking at the unfinished things and thinking that I’m failing because they’re not all done and perfect.

    So: I have a dozen or so projects in flight, at various levels of done, and I’m happy that I have interesting things to do!

  • JSON, Codable, and an illustration of ChatGPT’s shortcomings

    A little context: I’m updating the RadioSpiral app to use the (very nice) Radio Station Pro API that gives me access to useful stuff like the station calendar, the current show, etc. Like any modern API, it returns its data in JSON, so to use this in Swift, I need to write the appropriate Codable structs for it — essentially, every field has to be either a type Swift can natively decode or another Codable struct.

    I spent some time trying to get the structs right (the API delivers something that makes this rough, see below), and after a few tries that weren’t working, I said, “this is dumb, stupid rote work – obviously a job for ChatGPT.”

    So I told it “I have some JSON, and I need the Codable Swift structs to parse it.” The first pass was pretty good; it gave me the structs it thought were right and some code to parse with – and it didn’t work. The structs looked like they matched: the fields were all there, and the types were right, but the parse just failed.

    keyNotFound(CodingKeys(stringValue: "currentShow", intValue: nil), Swift.DecodingError.Context(codingPath: [CodingKeys(stringValue: "broadcast", intValue: nil)], debugDescription: "No value associated with key CodingKeys(stringValue: \"currentShow\", intValue: nil) (\"currentShow\").", underlyingError: nil))

    Just so you can be on the same page, here’s how that JSON looks, at least the start of it:

    {
    	"broadcast": {
    		"current_show": {
    			"ID": 30961,
    			"day": "Wednesday",
    			"date": "2023-12-27",
    			"start": "10:00",
    			"end": "12:00",
    			"encore": false,
    			"split": false,
    			"override": false,
    			"id": "11DuWtTE",
    			"show": {...

    I finally figured out that Swift, unlike Go, must have field names that exactly match the keys in the incoming JSON (at least with the default synthesized decoding). So if the JSON looks like {broadcast: {current_show... then the struct modeling the contents of the broadcast field had better have a field named current_show, exactly matching the JSON. (Go’s JSON parser uses struct tags to map JSON keys to struct fields, so having a field named CurrentShow is fine, as long as the tag says its value comes from current_show.) That would look something like this:

    type Broadcast struct {
        CurrentShow CurrentShow `json:"current_show"`
        ...
    }
    
    type CurrentShow struct {
       ... 

    There’s no ambiguity or translation needed, because the code explicitly tells you what field in the struct maps to what field in the JSON. (I suppose you could completely rename everything to arbitrary unrelated names in a Go JSON parse, but from a software engineering POV, that’s just asking for trouble.)

    Fascinatingly, ChatGPT sort of knows what’s wrong, but it can’t use that information to fix the mistake! “I apologize for the oversight. It seems that the actual key in your JSON is “current_show” instead of “currentShow”. Let me provide you with the corrected Swift code:”. It then provides the exact same wrong code again!

    struct Broadcast: Codable {
        let currentShow: BroadcastShow
        let nextShow: BroadcastShow
        let currentPlaylist: Bool
        let nowPlaying: NowPlaying
        let instance: Int
    }

    The right code is

    struct Broadcast: Codable {
        let current_show: BroadcastShow // exact match to the field name
        let next_show: BroadcastShow  // and so on...
        let current_playlist: Bool
        let now_playing: NowPlaying
        let instance: Int
    }

    When I went through manually and changed all the camel-case names to snake-case, it parsed just fine. (I suppose I could have just asked ChatGPT to make that correction, but after it gets something wrong that it “should” get right, I tend to make the changes myself to be sure I understood it better than the LLM.)

    Yet another illustration that ChatGPT really does not know anything. It’s just spitting out the most likely-looking answer, and a lot of the time it’s close enough. This time it wasn’t.
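
    For the record, Swift does have ways to keep the camelCase property names and still match snake_case JSON keys; I just didn’t reach for them during the scramble. Here’s a minimal sketch of the two usual approaches, reusing the Broadcast and BroadcastShow names from the snippets above (the remaining fields would follow the same pattern, and the BroadcastShow stub is just a placeholder so the sketch stands on its own):

    import Foundation

    // Placeholder for the struct from the earlier snippets; real fields omitted.
    struct BroadcastShow: Codable { }

    // Option 1: keep camelCase properties and spell out the key mapping.
    struct Broadcast: Codable {
        let currentShow: BroadcastShow
        let nextShow: BroadcastShow
        // ...the remaining fields follow the same pattern...

        enum CodingKeys: String, CodingKey {
            case currentShow = "current_show"
            case nextShow = "next_show"
        }
    }

    // Option 2 (instead of CodingKeys, not in addition to them): have the
    // decoder translate every snake_case key to camelCase automatically.
    let decoder = JSONDecoder()
    decoder.keyDecodingStrategy = .convertFromSnakeCase

    Either way avoids the hand-renaming: the CodingKeys version is explicit per struct, and the decoding-strategy version handles every key in the payload at once.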

    On the rough stuff from the API: some fields are either boolean false (“nothing here”) or a struct. Because Swift is a strongly-typed language, this has to be dealt with via an enum and more complex parsing. At the moment, I can get away with failing the parse and using a default value if this happens, but longer-term, the parsing code should use enums for this. If there are multiple fields that do this it may end up being a bit of a combinatorial explosion to try to handle all the cases, but I’ll burn that bridge when I come to it.
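
    When I do get to it, the shape of the fix will be something like the sketch below: a wrapper enum (MaybeShow is a hypothetical name) whose decoder accepts either a bare false or a full show object. Any field with this either/or behavior can then be declared with the wrapper type, so the combinatorial explosion mostly goes away; a generic version of the same idea would cover all of them with one type.

    // A sketch of the enum-based approach: a value that is either `false`
    // ("nothing here") or an actual show. MaybeShow is a hypothetical name;
    // BroadcastShow is the same struct used in the earlier snippets.
    enum MaybeShow: Codable {
        case absent                // the API sent `false`
        case show(BroadcastShow)   // the API sent a real show object

        init(from decoder: Decoder) throws {
            let container = try decoder.singleValueContainer()
            if let show = try? container.decode(BroadcastShow.self) {
                self = .show(show)
            } else if let flag = try? container.decode(Bool.self), flag == false {
                self = .absent
            } else {
                throw DecodingError.dataCorruptedError(
                    in: container,
                    debugDescription: "expected false or a show object")
            }
        }

        func encode(to encoder: Encoder) throws {
            var container = encoder.singleValueContainer()
            switch self {
            case .absent:
                try container.encode(false)
            case .show(let show):
                try container.encode(show)
            }
        }
    }

    Trying the struct first and falling back to the boolean keeps the common case cheap, and makes the “nothing here” case explicit instead of silently failing the whole parse.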

  • Archiving papers: a strategy

    I’m helping a friend archive a lot of notebooks and papers that they’ve accumulated over several years of writing. They’d like to be able to travel, but are a little worried that not having any backup for all this work is risky; fires, floods, and theft do happen, so even a fireproof box isn’t a guaranteed backup.

    We’ve therefore been photographing the papers, page by page, and creating a 3-2-1 backup of all of the digital photos. After some experimentation, we’ve come up with a workflow that works very well:

    • Create a Photos library that is not the primary. (She has an art business and needs to be able to use her iCloud-synced Photos library without it getting cluttered up with hundreds of photographs of pages.) This is most easily done by holding down Option and launching Photos. When the “select the library” dialog comes up, create a new one.
    • Photograph the items on a second iCloud account’s primary Photos library. This automatically syncs them to that account’s iCloud Photos.
    • On the machine where the secondary Photos library lives, log into iCloud.com with the second account.
    • On that same machine, open Photos with the non-primary library. (Hold down the option key and open Photos to allow Photos to select the non-primary Photos library.)
    • As batches of photos are taken, wait for them to sync to iCloud, then on the iCloud.com page for the second account, download the batch to the machine where the secondary library lives.
    • Create a new album in that secondary library, and drag the new batch of photos into it.
    • Put a sticker on the notebook/folder, and write in an ID (A, B, C, etc.) and the date it was photographed last. This allows active notebooks to be archived safely. (You should also add a note on the last page scanned with the date and album ID so you can cross-check.)

    Photographing the cover of the notebook (or of the file folder the pages are in) helps make sure you keep different batches of photos separate. If you do this, it’s much easier to keep track of which pages belong in which album, and it gives you a better way to trace back which things are done and which aren’t.

  • word break: now where have I seen this before?

    I’m now doing some paired leetcode study sessions with a former ZipRecruiter co-worker, Marcellus Pelcher (LinkedIn). As he says, “I’ve been doing these [leetcode exercises] for a long time!” and it’s been great both to get to spend some time with him and to work together on some problems.

    I’m generally doing OK on easy problems, so after getting our video call working, we looked at the word break problem. This essentially boils down to a pattern match: given a list of words and a string of characters, figure out if the string can be broken into pieces that all match a word in the list.

    Examples?

    • leetcode and ["leet", "code"] succeeds (by inspection!)
    • catsandog and ["cat", "cats", "sand", "dog"] fails (no combo lets us match the final og).

    We agreed that this was probably meant to be a dynamic programming problem (way to saddle myself with a hard one right out of the gate!). I put together a recursive greedy algorithm (similar to the good old SNOBOL4 pattern matcher) in ten minutes or so, with a certain amount of flailing at accessing the strings, but it took too long to get to a solution, and it was a huge memory hog. Marcellus did a dynamic programming solution in Java that used a bit array to track the matches, and had it done and passing in about 15-20 minutes. So one for him and zero for me! 🙂

    I let the solution rattle around in my head and after taking a shower, it suddenly dawned on me that this was very much like the coin change problem, but instead of reducing the count, we’re trimming off substrings. I went back to work from there and finally realized that the bit array was tracking the ends of successful matches, which made the dynamic programming recurrence go like this:

    • Start at the beginning of the string (position 0) and find all words that match at that position. Record where they end. That’s our starting configuration.
    • Continue moving forward through the string, comparing all the words at the current position.
    • If the word matches at the current position, it successfully extends a match if and only if a previous match ended right before where this one starts. If this is true, then we record the end of the newly-matched word. Otherwise we just continue to the next word.
    • If a match ends at the end of the string, we’ve successfully found some match or another; for this problem, that’s all that matters.

    Let’s do an example. Let’s say our string is “catsupondog” and our words are “cat”, “cats”, “catsup”, “upon”, “on”, and “dog”. At position zero, “cat”, “cats”, and “catsup” all matched, so we’ll record the ending indexes in a bit array. After our startup matches, things look like this:

    catsupondog
    0123456789A
    00110100000
      AA A
      || |
      || +-- catsup matched up to here
      |+------ cats matched up to here
      +-------- cat matched up to here

    As the index moves up through the string, we keep checking all the words against it. If we match another word, we then look to see if the position just before the start of this new match marks the end of a successful match.

    When we get to position 4, “upon” matches, and position 3 is true, so this extends the match, and we mark position 7 as the end of a successful match.

    catsupondog
    0123456789A
    00110101000
       A   A
       |   |
       |   +--- "upon" extends the "cats" match
       +------- "cats" match ended here

    When we get to position 6, “on” matches, and we mark position 7 as end of a match again. (Note that marking it twice is fine; we just have two different successful matches that end there, and for this problem, which one got us to 7 doesn’t matter.)

    catsupondog
    0123456789A
    00110101000
         A A
         | |
         | +--- "on" extends the "catsup" match
         +----- "catsup" match ended here

    When we finally get to position 8, “dog” matches, and position 7 is true, so this extends the match again. We’re at the end, so we’re done, and successful. We don’t have to check any more past position 8.

    catsupondog
    0123456789A
    00110101001
           A  A
           |  |
           |  +--- "dog" matches, reaches the end, and succeeds
           +------ end of "on" and "upon" matches; which one doesn't matter
            

    Enough chatter, here’s some code.

    func wordBreak(s string, wordDict []string) bool {
        
        // Records where matches ended successfully.
        matched := make([]bool, len(s))
    
        for i := 0; i < len(s); i++ {
            // Check every word at the current position.
            for _, word := range wordDict {
                if i + len(word)-1 < len(s) {
                    // Word can fit in remaining space
                    if s[i:i+len(word)] == word {
                        // matches at the current index
                        if i == 0 || matched[i-1] {
                            // If we're at zero, we always succeed.
                            // Otherwise, check if a successful match ended right
                            // before this one. If it did, mark the end of this
                            // match as successful too.
                            matched[i+len(word)-1] = true
                            if i+len(word)-1 == len(s)-1 {
                                // Short-circuit if we just successfully
                                // reached the end with this match
                                break
                            }
                        }
                    }
                }
            }
        }
        return matched[len(s)-1]
    }

    Bottom 7% in performance, but I’m giving myself a win because I actually figured out a dynamic programming problem by getting what the recurrence was. Marcellus told me, I just didn’t follow his explanation! [EDIT: I removed a fmt.Println of the bit array inside the loop and now it’s top 80%! Calling that a pass for an interview too.]

  • Making change via dynamic programming: easy when you see it

    I failed to see that this is an induction problem, and did not think of an elegant way to represent “we have no solution” for a given amount.

    The problem description is “given a set of coins of n denominations, figure out the minimum number of coins needed to make change for a given amount. If you can’t make change, return -1.”

    So here’s the induction:

    • Assume it’s impossible to make change for any amount up to the target amount by setting the number of coins needed for each amount to something greater than the maximum possible. (For this problem that’s 2**32, which I’ll refer to as “impossible”.)
    • For an amount of 0, it takes 0 coins to make change.
    • For each amount after 0, the number of coins needed to make change for this amount is 1 (the current coin), plus however many coins it took to make change for amount - current coin value.
    • If we couldn’t make change for amount - current coin value (value for that is impossible), then try the next coin.
    • If we run out of coins, we can’t make change for that amount; we leave it set to impossible and move on to the next amount up.
    • When we reach the final amount, we’ve either found a chain of sub-solutions that reach 0 (change is made with n coins, the sum of all the sub-solutions) or it’s impossible.

    Because this builds a table of solutions from the bottom up, we’re always guaranteed that the solution for any amount < the current amount has already been solved, and we always try all the coins for every amount, so we’re guaranteed to find a solution if there is one, even if the coins are not in sorted order.

    I chose to use int64‘s and set the FAILURE amount to an amount specified in the problem description as definitely larger than any amount that is possible. You could do it with a map[int]int, checking for “no entry”, but using FAILURE allows the lookback to always work with just a less-than test, so it’s probably faster. I’ve updated the variable names to make it clearer how this works.

    
    func coinChange(coins []int, amount int) int {
        const FAILURE int64 = 5000000000
    
        // Assume everything is impossible...
        changeFor := make([]int64, amount+1)
        for i := 0; i < len(changeFor); i++ {
            // effectively Inf
            changeFor[i] = FAILURE
        }
    
        // ...except 0. Change for 0 is 0 (it takes no coins to make change
        // for 0.
        changeFor[0] = 0
    
        // For every amount up to the target amount (i.e., solve every
        // subproblem):
        for currentAmount := 1; currentAmount <= amount; currentAmount++ {
    
            // Try every coin we have to solve the problem.
            for _, coin := range(coins) {
    
                // If this coin might work...
                if coin <= currentAmount {
    
                    // Get the answer for the subproblem.
                    lookBack := changeFor[currentAmount - coin]
                    
                     // This if statement is doing a lot of elegant
                     // heavy lifting.
                     //
                     // If the subproblem had no solution, the lookback is
                     // FAILURE, and FAILURE + 1 is greater than the current
                     // value (FAILURE or less). This guarantees that we don't
                     // overwrite any solution that was already found, and that
                     // we leave it as FAILURE for this coin!
                     //
                     // If the subproblem DID have a solution, and it's better
                     // (less) than what we have (some number of coins or
                     // FAILURE), then we change the current solution for this
                     // amount to the new count.
                     //
                     // This is why the order of the coins in the coin array
                     // doesn't matter: we will always try ALL the coins for
                     // a given amount, and if a different coin has a better
                     // (smaller) solution, we'll change the solution for the
                     // current amount for the better one!
                     //
                     // This means we're guaranteed to have the best solution
                     // for every amount < the current amount (including 0),
                     // so composing a solution for the current amount from
                     // previous solutions always gets the best one for this
                     // amount.
                     //
                     // The fact that it takes a comment THIS LONG to accurately
                     // explain TWO LINES OF CODE...!
                    if lookBack + 1 < changeFor[currentAmount] {
                        // Previous number of coins solving this, plus one more
                        // for the current coin solving this amount better than
                        // we previously did. We never get here if we don't find
                        // a solution for any of the coins, leaving this set to
                        // FAILURE.
                        changeFor[currentAmount] = lookBack + 1
                    }
                }
            }
        }
        // If we never found a solution, the caller wants -1.
        if changeFor[amount] == FAILURE {
            return -1
        }
        // Retrieve the solution from the solution cache. If we were
        // going to use this a lot, we'd save the cache.
        return int(changeFor[amount])
    }
    

    I hope this commented code shows how disgustingly elegant this is. I didn’t find this solution, but I feel like working through why it works has helped me get a better grasp on dynamic programming.

    Lessons learned:

    • What can you solve by inspection?
    • How do you mark things as “not solved” or “insoluble”?
    • Don’t be afraid to solve a lot of seemingly unneeded subproblems if you can use them to break down the problem you have.

    I initially tried a greedy solution for this problem. Greedy actually is optimal for some sets of coins: with US coins (1 cent, 5 cents, 10 cents, 25 cents, and 50 cents), always taking the largest coin that still fits gives the minimum count. But that’s a special property of those denominations, not something you can count on in general; with coins of 1, 3, and 4, greedy makes 6 as 4+1+1 instead of 3+3. If your set of coins doesn’t guarantee it, you have to use the dynamic programming approach to solve it.

    The second, recursive try might have worked if I’d cached solutions found along the way, effectively turning the iterative loop here into a recursive one, but I didn’t, and it DNF’ed.

    Right, stats. 79th percentile on runtime, 80th on memory. Pretty good.