We had our first extended outage at RadioSpiral this weekend, and I’m writing about it here to point out how a production incident can help bring a team together not only technically, but as a group.

The timeline

On Sunday evening, about an hour before Tony Gerber’s Sunday show, RadioSpiral went offline. Normally, the Airtime installation handles playing tracks from the station library when there’s no show, and it played a track…and then stopped. Dead air.

The station has been growing; we’ve added two new DJs, doubling the number of folks who are familiar with servers, Linux, etc. Everyone who was available (pretty much everyone but our primary sysadmin, who set everything up and who is in the UK) jumped in to see what was up. We were able to log in to Airtime and see that it was offline, but not why; we tried restarting the streaming service, and then the server itself, but couldn’t get back online.

We did figure out that we could connect to the master streaming port so that Tony could do his show, but after that, we were off the air for almost 12 hours, until our primary sysadmin was up, awake, and had finished his work day.

A couple of hours of investigation on his part finally determined that Let’s Encrypt had added a RewriteRule to the Airtime configuration that forced all URLs to HTTPS; unfortunately, Airtime needs plain HTTP for its internal APIs, and that switchover broke it. Removing the rule and restarting the server got us back online, and our very patient and faithful listeners trickled back in over the day.
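For reference, the usual Let’s Encrypt (certbot) redirect block looks something like this; this is a reconstruction rather than our actual config, and the domain is a placeholder:

# Typical certbot-added redirect in the HTTP virtual host (reconstructed).
# Every plain-HTTP request gets 301-redirected to HTTPS -- including the
# internal API calls Airtime makes to itself over HTTP.
RewriteEngine on
RewriteCond %{SERVER_NAME} =example.com
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]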

Now what?

While we’d not been able to diagnose and fix the problem, we had been chatting in the staff channel on the RadioSpiral Discord server, and considering the larger issues.

RadioSpiral is expected to be up 24/7, but we’re really running it more like a hobby than a business. This is reasonable, because it’s definitely not making any of us money, at least not directly. (Things like sales of albums by our DJs, etc., are their business and not part of the station’s remit.) This means that we can have situations like this one, where the station could be offline for an extended amount of time without recourse.

Secondarily, RadioSpiral is small. Three folks form the core of actual station operations, and their contributions are very much siloed. If something should happen to any one of the three of us, it would currently be a scramble to replace that person, possibly with an extended loss of that function, whether broadcast operations, the website, or community outreach and the app.

So we started looking at this situation, and figuring out who currently owned what, and how we could start fixing the single points of failure:

  • Station operations are on an ancient Linux release
  • We’re running an unsupported and unmaintained version of Airtime. It can’t even properly reset passwords, a major problem in an outage if someone can’t get in.
  • The macOS/iOS app is handled by one developer; if that person becomes unavailable, the app could end up deleted from the store if it’s not maintained.
  • The website is being managed by one person, and if that person becomes unavailable…well, the site will probably be fine until the next time the hosting bill isn’t paid, but if there were any issues, we’d be SOL.
  • We do have documentation, but we don’t have playbooks or process for problem solving.
  • We don’t have anywhere that is a gathering point when there’s a problem.
  • We don’t have project tracking so we can know who’s doing what, who their backup or backups are, and where things are in process.
  • We don’t have an easily-maintained central repository of documentation.

What we’re doing

I took point on starting to get this all organized. Fixing all of the things above is going to take time and sustained effort, and we’re going to want to make sure that we have everything properly set up so that we minimize the number of failure points. Having everyone on board is critical.

  • We’re going to move operations to a newer, faster, and cheaper server running a current LTS Ubuntu.
  • We’re going to upgrade from the old unsupported Airtime to the community-supported LibreTime.
  • We’re figuring out who could get up to speed on macOS/iOS development and be ready to take over the app if something should happen that keeps me from maintaining it. At the moment, we’re looking at setting up a process to bump the build number, rebuild with the most current Xcode, and re-release every six months or so to keep the app refreshed. Long-term we’ll need a second developer (at least) who can build and release the app, and hopefully maintain it.
  • We haven’t yet discussed what to do about the website; it is already a managed WordPress installation, so it should be possible to add one or more additional maintainers.
  • We are going to need to collect the docs we have somewhere that they can be maintained more easily. This could be in a shared set of Google docs, or a wiki; we’re currently leaning toward a wiki.
  • We need project tracking; there’s no need for a full-up ticketing process, at least yet. We think that Trello should do well enough for us.

We have set up some new Discord channels to keep this conversation open: #production-incidents, to make tracking any new problems easier, and #the-great-migration, to keep channels open as we move forward in the migration to our new setup.

Everyone is on board and enthusiastic about getting things in better shape, which is the best one could want. It looks good for RadioSpiral’s future. Admittedly we should have done this before a failure, but we’re getting it in gear, and that’s better than ignoring it!

So it’s been a minute since I did any serious work on WebWebXNG.

Initially, I decided that the easiest way forward was “translate this old CGI code into modern web code”. And up to a point, that was a good way to go. But I got to the point where I was trying to make the rubber meet the road, and the intertwining of templating and code in the old version was making me stall out.

I’ve had a breather, working on other projects, and the world has moved on and brought me some new things. One in particular is htmx.

The htmx library works a whole lot more like the old CGI world did, just better. Any element can initiate an interaction with the server, all of the HTTP verbs are available, and interaction happens by exchanging chunks of HTML: you don’t convert to JSON and then convert back to HTML. This kind of logic definitely fits better with the concept of WebWebX as-it-was.
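As a tiny illustration of the model (a hypothetical endpoint and element IDs, not WebWebXNG code): the attributes go right on the markup, and the server replies with an HTML fragment that htmx swaps into the page.

<!-- Hypothetical example: POST the form, then swap the server's HTML
     fragment into the #page-title element. No JSON anywhere. -->
<form hx-post="/rename" hx-target="#page-title" hx-swap="innerHTML">
  <input type="text" name="title">
  <button type="submit">Rename</button>
</form>
<h1 id="page-title">Old title</h1>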

Also, Perl project management has definitely changed — and improved. I did like Dist::Zilla, but it’s definitely a heavyweight solution. In the meantime, Minilla has appeared, and it fits incredibly well into the model I want to use to manage the code:

  • Your module is Pure Perl, and files are stored in lib.
  • Your executable file is in the script directory, if there is one.
  • Your dist sharedirs are in share, if you have any.
  • Your module is maintained with Git and git ls-files matches with what you will release.
  • Your module has a static list of prerequisites that can be described in a cpanfile (example below).
  • Your module has a Changes file.
  • You want to install via cpanm.
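A cpanfile is just a declarative list of dependencies. A minimal hypothetical example (the module names are placeholders, not WebWebXNG’s actual prerequisites):

requires 'Mojolicious', '9.0';
requires 'Text::Markdown';

on 'test' => sub {
    requires 'Test::More', '0.98';
};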

I do have a working page storage engine, which is good, but the interaction engine is definitely nowhere. I’m coming back to the project with fresh eyes, and I’m going to redesign it top-to-bottom to use htmx for all the backend interaction.

Looking forward to this, and the next iteration of WebWebXNG starts now.

First a confession. I tend to have enthusiasms, work hard on them for a while, and then have something else interesting come across my radar, which will then become my new enthusiasm. This tends to lead to a lot of half-completed things, which I then feel bad about and avoid, causing me to not get anything done, making me feel even worse.

I’ve decided that I’m going to try a different strategy: “projects in flight”. I’m embracing the fact that I have enthusiasms, and lots of them. I contain multitudes. And this is good.

So instead of feeling bad that I have a dozen projects that aren’t getting anywhere, I’m going to acknowledge that I have a lot of interests, and more of them than I have time to do. So some of them don’t pan out. Some of them get partway through, and then I discover that the problem is better solved a different way, or that the thing I want to do isn’t actually as good as I thought, or whatever. I am allowed to fail.

Think about it this way: for every Google or Facebook, there are a hundred startups that try to do something, get partway in, and fail. Maybe the idea wasn’t so great. Maybe the resources to do the thing they wanted to do just aren’t feasible, or available, or affordable. Maybe they just can’t get someone to give them the seed money to try.

All these projects fail. And the entrepreneurs don’t feel bad about themselves if they do. They gave it the shot they could give it, with the effort and resources they had at hand, and it didn’t work out – and they move on to their next project.

So I’ve decided to embrace the entrepreneurial mindset for my personal projects. I’m keeping a list of everything I’m doing, from the trivial to the complex, and allowing myself to be happy that I am creative and multifaceted; if something doesn’t get done, it stays on the list as something to come back to, unless I decide it’s not worth coming back to…and then it goes into the “idea pool”. Maybe it’ll trigger something else later. Maybe it won’t. It’s fine.

It hasn’t failed. I haven’t failed. I’ve just discovered something that as I approached it this time, it didn’t succeed. It was my AltaVista, or Ask Jeeves, or Yahoo! Search instead of my Google. Maybe on another look later, with more information, more experience, more time, more energy it will succeed.

But I don’t have to feel bad about it anymore. I can be proud and happy that I’m trying things and doing things. Yes, I do want to finish things too, but I can stop looking at the unfinished things and thinking that I’m failing because they’re not all done and perfect.

So: I have a dozen or so projects in flight, at various levels of done, and I’m happy that I have interesting things to do!

A little context: I’m updating the RadioSpiral app to use the (very nice) Radio Station Pro API that gives me access to useful stuff like the station calendar, the current show, etc. Like any modern API, it returns its data in JSON, so to use this in Swift, I need to write the appropriate Codable structs for it — essentially, every field must be a type that Swift can natively decode, or itself another Codable struct.

I spent some time trying to get the structs right (the API delivers something that makes this rough, see below), and after a few tries that weren’t working, I said, “this is dumb, stupid rote work – obviously a job for ChatGPT.”

So I told it “I have some JSON, and I need the Codable Swift structs to parse it.” The first pass was pretty good; it gave me the structs it thought were right and some code to parse with – and it didn’t work. The structs looked like they matched: the fields were all there, and the types were right, but the parse just failed.

keyNotFound(CodingKeys(stringValue: "currentShow", intValue: nil), Swift.DecodingError.Context(codingPath: [CodingKeys(stringValue: "broadcast", intValue: nil)], debugDescription: "No value associated with key CodingKeys(stringValue: \"currentShow\", intValue: nil) (\"currentShow\").", underlyingError: nil))

Just so you can be on the same page, here’s how that JSON looks, at least the start of it:

{
	"broadcast": {
		"current_show": {
			"ID": 30961,
			"day": "Wednesday",
			"date": "2023-12-27",
			"start": "10:00",
			"end": "12:00",
			"encore": false,
			"split": false,
			"override": false,
			"id": "11DuWtTE",
			"show": {...

I finally figured out that Swift’s synthesized Codable conformance, unlike Go’s JSON parser, requires field names that exactly match the keys in the incoming JSON (unless you write a custom CodingKeys enum or set the decoder’s keyDecodingStrategy). So if the JSON looks like {broadcast: {current_show... then the struct modeling the contents of the broadcast field had better have a field named current_show, exactly matching the JSON. (Go’s JSON parser uses struct tags to map incoming keys to fields, so an exported field named CurrentShow is fine, as long as the tag says its value comes from current_show. That would look something like this:

type Broadcast struct {
    CurrentShow CurrentShow `json:"current_show"`
    ...
}

type CurrentShow struct {
   ... 

There’s no ambiguity or translation needed, because the code explicitly tells you what field in the struct maps to what field in the JSON. (I suppose you could completely rename everything to arbitrary unrelated names in a Go JSON parse, but from a software engineering POV, that’s just asking for trouble.)

Fascinatingly, ChatGPT sort of knows what’s wrong, but it can’t use that information to fix the mistake! “I apologize for the oversight. It seems that the actual key in your JSON is “current_show” instead of “currentShow”. Let me provide you with the corrected Swift code:”. It then provides the exact same wrong code again!

struct Broadcast: Codable {
    let currentShow: BroadcastShow
    let nextShow: BroadcastShow
    let currentPlaylist: Bool
    let nowPlaying: NowPlaying
    let instance: Int
}

The right code is

struct Broadcast: Codable {
    let current_show: BroadcastShow // exact match to the field name
    let next_show: BroadcastShow  // and so on...
    let current_playlist: Bool
    let now_playing: NowPlaying
    let instance: Int
}

When I went through manually and changed all the camel-case names to snake-case, it parsed just fine. (I suppose I could have just asked ChatGPT to make that correction, but after it gets something wrong that it “should” get right, I tend to make the changes myself to be sure I understood it better than the LLM.)

Yet another illustration that ChatGPT really does not know anything. It’s just spitting out the most likely-looking answer, and a lot of the time it’s close enough. This time it wasn’t.

On the rough stuff from the API: some fields are either boolean false (“nothing here”) or a struct. Because Swift is a strongly-typed language, this has to be dealt with via an enum and more complex parsing. At the moment, I can get away with failing the parse and using a default value if this happens, but longer-term, the parsing code should use enums for this. If there are multiple fields that do this it may end up being a bit of a combinatorial explosion to try to handle all the cases, but I’ll burn that bridge when I come to it.
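For comparison, Go absorbs this kind of “false or object” field with a custom UnmarshalJSON on a wrapper type. Here’s a minimal sketch with hypothetical type and field names (this isn’t the app’s actual code):

import (
    "bytes"
    "encoding/json"
)

// Hypothetical stand-in for the real show struct.
type BroadcastShow struct {
    ID string `json:"id"`
}

// MaybeShow wraps a field that is either the JSON literal false
// ("nothing here") or a show object.
type MaybeShow struct {
    Show *BroadcastShow // nil when the API sent false
}

func (m *MaybeShow) UnmarshalJSON(data []byte) error {
    // The API signals "no show" with a literal false.
    if bytes.Equal(bytes.TrimSpace(data), []byte("false")) {
        m.Show = nil
        return nil
    }
    var s BroadcastShow
    if err := json.Unmarshal(data, &s); err != nil {
        return err
    }
    m.Show = &s
    return nil
}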

I’m helping a friend archive a lot of notebooks and papers that they’ve accumulated over several years of writing. They’d like to be able to travel, but are a little worried that not having any backup for all this work is risky; fires, floods, and theft do happen, so even a fireproof box isn’t a guaranteed backup.

We’ve therefore been photographing the papers, page by page, and creating a 3-2-1 backup of all of the digital photos. After some experimentation, we’ve come up with a workflow that works very well:

  • Create a Photos library that is not the primary one. (They have an art business and need to be able to use their iCloud-synced Photos library without it getting cluttered up with hundreds of photographs of pages.) This is most easily done by holding down Option and launching Photos; when the “select the library” dialog comes up, create a new one.
  • Photograph the items on a second iCloud account’s primary Photos library. This automatically syncs them to that account’s iCloud Photos.
  • On the machine where the secondary Photos library lives, log into iCloud.com with the second account.
  • On that same machine, open Photos with the non-primary library. (Again, hold down Option while launching Photos so you can select the non-primary library.)
  • As batches of photos are taken, wait for them to sync to iCloud, then on the iCloud.com page for the second account, download the batch to the machine where the secondary library lives.
  • Create a new album in that secondary library, and drag the new batch of photos into it.
  • Put a sticker on the notebook/folder, and write in an ID (A, B, C, etc.) and the date it was photographed last. This allows active notebooks to be archived safely. (You should also add a note on the last page scanned with the date and album ID so you can cross-check.)

Photographing the cover of the notebook or file folder the pages are in helps keep different batches of photos separate. It makes it much easier to keep track of which pages belong in which album, and gives you a better way to track which things are done and which aren’t.

I’m now doing some paired leetcode study sessions with a former ZipRecruiter co-worker, Marcellus Pelcher (LinkedIn). As he says, “I’ve been doing these [leetcode exercises] for a long time!” and it’s been great both to get to spend some time with him and to work together on some problems.

I’m generally doing OK on easy problems so after getting our video call working, we looked at the word break problem. This essentially boils down to a pattern match: given a list of words, and a string of characters, figure out if the string can be broken into pieces that all match a word in the list of words.

Examples?

  • leetcode and ["leet", "code"] succeeds (by inspection!)
  • catsandog and ["cat", "cats", "sand", "dog"] fails (no combo lets us match the final og).

We agreed that this was probably meant to be a dynamic programming problem (way to saddle myself with a hard one right out of the gate!). I put together a recursive greedy algorithm (similar to the good old SNOBOL4 pattern matcher) in ten minutes or so, with a certain amount of flailing at accessing the strings, but it took too long to get to a solution, and it was a huge memory hog. Marcellus did a dynamic programming solution in Java that used a bit array to track the matches, and had it done and passing in about 15-20 minutes. So one for him and zero for me! 🙂

I let the solution rattle around in my head and after taking a shower, it suddenly dawned on me that this was very much like the coin change problem, but instead of reducing the count, we’re trimming off substrings. I went back to work from there and finally realized that the bit array was tracking the ends of successful matches, which made the dynamic programming recurrence go like this:

  • Start at the beginning of the string (position 0) and find all words that match at that position. Record where they end. That’s our starting configuration.
  • Continue moving forward through the string, comparing all the words at the current position.
  • If the word matches at the current position, it successfully extends a match if and only if a previous match ended right before where this one starts. If this is true, then we record the end of the newly-matched word. Otherwise we just continue to the next word.
  • If a match ends at the end of the string, we’ve successfully found some match or another; for this problem, that’s all that matters.

Let’s do an example. Let’s say our string is “catsupondog” and our words are “cat”, “cats”, “catsup”, “upon”, “on”, and “dog”. At position zero, “cat”, “cats”, and “catsup” all matched, so we’ll record the ending indexes in a bit array. After our startup matches, things look like this:

catsupondog
0123456789A
00110100000
  AA A
  || |
  || +---- catsup matched up to here
  |+------ cats matched up to here
  +------- cat matched up to here

As the index moves up through the string, we keep checking all the words against it. If we match another word, we then look to see if the position just before the start of this new match marks the end of a successful match.

When we get to position 4, “upon” matches, and position 3 is true, so this extends the match, and we mark position 7 as the end of a successful match.

catsupondog
0123456789A
00110101000
   A   A
   |   |
   |   +--- "upon" extends the "cats" match
   +------- "cats" match ended here

When we get to position 6, “on” matches, and we mark position 7 as end of a match again. (Note that marking it twice is fine; we just have two different successful matches that end there, and for this problem, which one got us to 7 doesn’t matter.)

catsupondog
0123456789A
00110101000
     A A
     | |
     | +--- "on" extends the "catsup" match
     +----- "catsup" match ended here

When we finally get to position 8, “dog” matches, and position 7 is true, so this extends the match again. We’re at the end, so we’re done, and successful. We don’t have to check any more past position 8.

catsupondog
0123456789A
00110101001
       A  A
       |  |
       |  +--- "dog" matches, reaches the end, and succeeds
       +------ end of "on" and "upon" matches; which one doesn't matter
        

Enough chatter, here’s some code.

func wordBreak(s string, wordDict []string) bool {
    
    // Records where matches ended successfully.
    matched := make([]bool, len(s))

    for i := 0; i < len(s); i++ {
        // Check every word at the current position.
        for _,word := range(wordDict) {
            if i + len(word)-1 < len(s) {
                // Word can fit in remaining space
                if s[i:i+len(word)] == word {
                    // matches at the current index
                    if i == 0 || matched[i-1] {
                        // If we're at zero, we always succeed.
                        // Otherwise, check if a successful match ended right
                        // before this one. If it did, mark the end of this
                        // match as successful too.
                        matched[i+len(word)-1] = true
                        if i+len(word)-1 == len(s)-1 {
                            // Short-circuit if we just successfully
                            // reached the end with this match
                            break
                        }
                    }
                }
            }
        }
    }
    return matched[len(s)-1]
}
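As a quick sanity check, here are the two examples from earlier in the post, assuming the function is dropped into a runnable main package:

package main

import "fmt"

func main() {
    // The worked example above: cats + upon + dog covers the string.
    fmt.Println(wordBreak("catsupondog",
        []string{"cat", "cats", "catsup", "upon", "on", "dog"})) // true
    // No combination covers the trailing "og".
    fmt.Println(wordBreak("catsandog",
        []string{"cat", "cats", "sand", "dog"})) // false
}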

Bottom 7% in performance, but I’m giving myself a win because I actually figured out a dynamic programming problem by getting what the recurrence was. Marcellus told me, I just didn’t follow his explanation! [EDIT: I removed a fmt.Println of the bit array inside the loop and now it’s top 80%! Calling that a pass for an interview too.]

I failed to see that this is an induction problem, and didn’t think of an elegant way to represent “we have no solution” for a given amount.

The problem description is “given a set of coins of n denominations, figure out the minimum number of coins needed to make change for a given amount. If you can’t make change, return -1.”

So here’s the induction:

  • Assume it’s impossible to make change for any amount up to the target amount by initializing the coin count for each amount to a value larger than any possible answer. (For this problem that’s 2**32, which I’ll refer to as “impossible”.)
  • For an amount of 0, it takes 0 coins to make change.
  • For each amount after 0, the number of coins needed to make change for this amount is 1 of the current coin, plus however many coins it took to make change for amount - current coin value.
  • If we couldn’t make change for amount - current coin value (value for that is impossible), then try the next coin.
  • If we run out of coins, we can’t make change for that amount, leave it set to impossible, and we move on to the next amount up.
  • When we reach the final amount, we’ve either found a chain of sub-solutions that reach 0 (change is made with n coins, the sum of all the sub-solutions) or it’s impossible.

Because this builds a table of solutions from the bottom up, we’re always guaranteed that the solution for any amount < the current amount has already been solved, and we always try all the coins for every amount, so we’re guaranteed to find a solution if there is one, even if the coins are not in sorted order.

I chose to use int64s and set the FAILURE sentinel to a value the problem description specifies as definitely larger than any possible answer. You could do it with a map[int]int, checking for “no entry”, but using FAILURE lets the lookback always work with a simple less-than test, so it’s probably faster. I’ve updated the variable names to make it clearer how this works.


func coinChange(coins []int, amount int) int {
    const FAILURE int64 = 5000000000

    // Assume everything is impossible...
    changeFor := make([]int64, amount+1)
    for i := 0; i < len(changeFor); i++ {
        // effectively Inf
        changeFor[i] = FAILURE
    }

    // ...except 0. Change for 0 is 0 (it takes no coins to make change
    // for 0.
    changeFor[0] = 0

    // For every amount up to the target amount (i.e., solve every
    // subproblem):
    for currentAmount := 1; currentAmount <= amount; currentAmount++ {

        // Try every coin we have to solve the problem.
        for _, coin := range(coins) {

            // If this coin might work...
            if coin <= currentAmount {

                // Get the answer for the subproblem.
                lookBack := changeFor[currentAmount - coin]
                
                 // This if statement is doing a lot of elegant
                 // heavy lifting.
                 //
                 // If the subproblem had no solution, the lookback is
                 // FAILURE, and FAILURE + 1 is greater than the current
                 // value (FAILURE or less). This guarantees that we don't
                 // overwrite any solution that was already found, and that
                 // we leave it as FAILURE for this coin!
                 //
                 // If the subproblem DID have a solution, and it's better
                 // (less) than what we have (some number of coins or
                 // FAILURE), then we change the current solution for this
                 // amount to the new count.
                 //
                 // This is why the order of the coins in the coin array
                 // doesn't matter: we will always try ALL the coins for
                 // a given amount, and if a different coin has a better
                 // (smaller) solution, we'll change the solution for the
                 // current amount for the better one!
                 //
                 // This means we're guaranteed to have the best solution
                 // for every amount < the current amount (including 0),
                 // so composing a solution for the current amount from
                 // previous solutions always gets the best one for this
                 // amount.
                 //
                 // The fact that it takes a comment THIS LONG to accurately
                 // explain TWO LINES OF CODE...!
                if lookBack + 1 < changeFor[currentAmount] {
                    // Previous number of coins solving this, plus one more
                    // for the current coin solving this amount better than
                    // we previously did. We never get here if we don't find
                    // a solution for any of the coins, leaving this set to
                    // FAILURE.
                    changeFor[currentAmount] = lookBack + 1
                }
            }
        }
    }
    // If we never found a solution, the caller wants -1.
    if changeFor[amount] == FAILURE {
        return -1
    }
    // Retrieve the solution from the solution cache. If we were
    // going to use this a lot, we'd save the cache.
    return int(changeFor[amount])
}

I hope this commented code shows how disgustingly elegant this is. I didn’t find this solution, but I feel like working through why it works has helped me get a better grasp on dynamic programming.
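For a concrete check, the classic test cases behave as expected when the function is wrapped in a main package:

package main

import "fmt"

func main() {
    fmt.Println(coinChange([]int{1, 2, 5}, 11)) // 3 (5 + 5 + 1)
    fmt.Println(coinChange([]int{2}, 3))        // -1 (3 can't be made from 2s)
    fmt.Println(coinChange([]int{1}, 0))        // 0 (no coins needed for 0)
}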

Lessons learned:

  • What can you solve by inspection?
  • How do you mark things as “not solved” or “insoluble”?
  • Don’t be afraid to solve a lot of seemingly unneeded subproblems if you can use them to break down the problem you have.

I initially tried a greedy solution for this problem; greedy is optimal only for “canonical” coin systems like US coinage (1, 5, 10, 25, and 50 cents). Being able to make every amount isn’t enough: with coins of 1, 3, and 4, greedy makes 6 as 4+1+1, but the optimum is 3+3. In the general case, you have to use the dynamic programming approach.

The second, recursive try might have worked if I’d cached solutions found along the way (effectively a memoized, top-down version of the iterative loop here), but I didn’t, and it DNF’ed.

Right, stats. 79th percentile on runtime, 80th on memory. Pretty good.

Summary: I am going to have to do a lot more practice, and brush up on some things, especially dynamic programming, if I’m going to do well on medium to hard questions.

The good:

  • The practice in the past few weeks has got the idiosyncrasies of Go slices and arrays mostly under my belt. Stuff I was struggling with in the first few exercises is now pretty reflexive. I’m reasonably certain that I can manipulate slices as I need to.
  • I really do remember pointer algorithms from my systems programming days. I was, in some cases, finding it easier to implement algorithms using linked lists for variable linear storage than slices. (Slices have improved a lot though, so I’m not reaching for linked lists first.)
  • I’m doing okay with recursion now.
  • Tree algorithms are not as bad. The medium level-order traversal problem was easy for me to conceptualize; I just had trouble getting the slice of slices to work.
  • If the code can use a queue-based algorithm, I have no problem writing it. 01 matrix was a breeze once I committed to trying the queue, and the same kind of solution worked well for the “clone a graph” problem. I can also see how I could have used it for flood fill.
  • I knocked out the k points closest to the origin problem well under the time limit with a very fast solution; it was pure “how do I fill and then sort a custom data structure?”.
  • sort.Ints, sort.Slice, and sort.Search are now my very good friends and help immensely.

The bad:

  • I am pretty bad at numerical algorithms, like 2sum/3sum, or the inserted interval. I am not intuiting the algorithms at all well.
  • I am still not properly catching myself falling down rabbit holes or noticing when I'm not on the right track. I need some heuristics. Maybe “if the if statements are nested more than 2 deep, you may be off course”.
  • If I get a dynamic programming problem, I’m sunk. I might be able to code something if it ends up having a Fibonacci relation, but anything more complex is currently beyond me. The DP solution to the length of the longest substring without repeats is just completely black magic. It works, but I cannot see how it possibly could.

leetcode feels like programming’s highs and lows all condensed into a single highly compressed experience: sometimes it’s absolutely crystal clear, and you just need to get it all typed in (all of the copying of the graph into an adjacency array and recloning the nodes worked the first time; the only thing that didn’t was the link builder, which I had left one line out of).

So sometimes I’m enjoying it, and sometimes I am very much not. I sorely miss a real debugger, but I’m getting along with fmt.Println. I would give a lot to be able to build things the way I prefer, bottom up with tests, but leetcode’s not gonna work that way.

Take an array of strings in RPN and evaluate it.

This is all about managing the stack. I started trying to code a Stack class, but dumped that for simply using append and slicing to manage the stack. Performance was okay: 73% runtime, 41% memory; probably good enough for a pass.

func evalRPN(tokens []string) int {
    stack := []int{}
    for _,token := range(tokens) {
        switch token {
            case "+":
            op2 := stack[len(stack)-1]
            op1 := stack[len(stack)-2]
            stack = stack[:len(stack)-1]
            stack[len(stack)-1] = op1 + op2

            case "-":
            op2 := stack[len(stack)-1]
            op1 := stack[len(stack)-2]
            stack = stack[:len(stack)-1]
            stack[len(stack)-1] = op1 - op2

            case "/":
            op2 := stack[len(stack)-1]
            op1 := stack[len(stack)-2]
            stack = stack[:len(stack)-1]
            stack[len(stack)-1] = op1 / op2
    
            case "*":
            op2 := stack[len(stack)-1]
            op1 := stack[len(stack)-2]
            stack = stack[:len(stack)-1]
            stack[len(stack)-1] = op1 * op2

            default:
              i, err := strconv.Atoi(token)
              if err != nil {
                  panic(fmt.Sprintf("unparseable token %s", token))
              }
              stack = append(stack, i)
        }
    }
    return stack[len(stack)-1]
}
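A quick sanity check with the classic RPN example, (2 + 1) * 3:

package main

import "fmt"

func main() {
    fmt.Println(evalRPN([]string{"2", "1", "+", "3", "*"})) // 9
}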

My guess is that if I used a real stack pointer to manage the stack, it’d be faster.

func evalRPN(tokens []string) int {
    stack := make([]int, 10000)
    sp := -1
    for _,token := range(tokens) {
        switch token {
            case "+":
            op2 := stack[sp]
            op1 := stack[sp-1]
            sp--
            stack[sp] = op1 + op2

            case "-":
            op2 := stack[sp]
            op1 := stack[sp-1]
            sp--
            stack[sp] = op1 - op2

            case "/":
            op2 := stack[sp]
            op1 := stack[sp-1]
            sp--
            stack[sp] = op1 / op2

            case "*":
            op2 := stack[sp]
            op1 := stack[sp-1]
            sp--
            stack[sp] = op1 * op2

            default:
              i, err := strconv.Atoi(token)
              if err != nil {
                  panic(fmt.Sprintf("unparseable token %s", token))
              }
              sp++
              stack[sp] = i
        }
    }
    return stack[sp]
}

And no, it is not! Exact same runtime percentile, and a slightly worse memory score (35th percentile).

There were a couple of quite elegant ideas in the solutions; I really liked this one:

func evalRPN(tokens []string) int {
    stack := make([]int, 10000)
    sp := -1
    ops := map[string]func(int,int) int{
        "+": func(i, j int) int { return i + j},
        "-": func(i, j int) int { return i - j},
        "/": func(i, j int) int { return i / j},
        "*": func(i, j int) int { return i * j},
    }
    for _,token := range(tokens) {
        i, err := strconv.Atoi(token)
        if err != nil {
            // operator
            stack[sp-1] = ops[token](stack[sp-1], stack[sp])
            sp--
        } else {
            sp++
            stack[sp] = i
        }
    }
    return stack[sp]
}

We get rid of the switch altogether using the fact that the tokens are either valid numbers or operators, and we make the operators into anonymous functions in a map. If the token fails to convert to a number, then we look it up and call the appropriate function. This is far more fragile, though: looking up a bad token returns a nil function from the ops map, and calling it panics…and we only gain 3% in the runtime. I’ll settle for the first version as good enough for a pass.

Given a graph, clone it and return it with all links intact.

I chose to use a caching crawl of the graph, and then reconstruct the new graph from the cache.

/**
 * Definition for a Node.
 * type Node struct {
 *     Val int
 *     Neighbors []*Node
 * }
 */


func cloneGraph(node *Node) *Node {
    // need to crawl the graph and make sure we've visited all
    // the nodes, saving the connections at each node. Once we
    // have all the nodes and connections, we can rebuild the
    // graph.

    // start at the first node. add it to the queue.
    // while the queue is not empty:
    //  - pull the next item
    //  - if this node is not in the map, add it, and add all the
    //    values from the nodes it links to as the adjacency list.
    //  - for each node in the adjacency list:
    //      if the node is not in the map, add it to the queue
    //         to be explored
    if node == nil {
        return nil
    }

    rootNodeVal := node.Val  // will be used to find and return new root

    queue := []*Node{node}
    // adjacency list for scanned nodes:
    //   key is node pointer
    //   value is list of adjacent nodes
    adj := map[*Node][]*Node{}

    for len(queue) != 0 {
        item := queue[0]
        if len(queue) == 1 {
            queue = []*Node{} // empty it
        } else {
            queue = queue[1:]
        }
        for _,neighbor := range(item.Neighbors) {
            if _, ok := adj[neighbor]; !ok {
                // not visited, add to queue
                queue = append(queue, neighbor)
            }
        }
        adj[item] = item.Neighbors
    }

    // all nodes crawled. each entry in the map is a node address ->
    // list of neighbors.
    
    type task struct {
        from int
        to int
    }

    valToNode := map[int]*Node{}
    linkTasks := []task{}
    for nodeptr, neighbors := range(adj) {
        // we have a pointer to the old node, and all the things it points to.
        // - allocate a new node and record it.
        n := &Node{
            Val: nodeptr.Val,
            Neighbors: []*Node{},
        }
        valToNode[nodeptr.Val] = n
        for _,linkedNode := range(neighbors) {
            if target, ok := valToNode[linkedNode.Val]; ok {
                // Node has been reallocated, link it
                n.Neighbors = append(n.Neighbors, target)
            } else {
                // not allocated yet, add link task for it
                linkTasks = append(linkTasks, task{from:n.Val, to:linkedNode.Val})
            }
        }
    }
    // At this point we have allocated all the nodes, but have links
    // left to handle in the queue. for each item in the queue:
    //    - look up the from and to nodes in the map
    //    - append a link to the 'to' node to the 'from' node's link list.
    // when the queue is empty, the links have been built, and we're done.
    for _,task := range(linkTasks) {
        fromNode := valToNode[task.from]
        toNode := valToNode[task.to]
        fromNode.Neighbors = append(fromNode.Neighbors, toNode)
    }
    // return the root node.
    return valToNode[rootNodeVal]
}

Works, but slow: 32nd percentile runtime, 8th percentile memory. So there must be some tricks here. (Betting on arrays instead of maps.) Yes, I’ve overengineered it: there are 100 nodes at max, and node.Val will be 1-100. So I can allocate an array instead of a map for my valToNode map. Let’s try that first and see how much speed we get…absolutely none! Exactly the same performance!

Well, what else can I do to speed this up? All right, let’s make the initial adjacency computation store just the integer node values. Since these are unique per node, and there are at most 100 of them, we can make that a [][]int and see what happens.

Had a little trouble that I tracked down to the queue drop getting lost in the adjacency computation. Once that was back in, it worked fine, and now it’s perfectly acceptable: 78th percentile CPU. Memory is still sucky, but I’ll take it.
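Since the post doesn’t show the final version, here’s a sketch of roughly what that change looks like (my reconstruction, leaning on the problem’s stated constraints of at most 100 nodes with Vals 1-100): crawl once, recording only integer Vals in a [][]int table, then rebuild in two passes, which also eliminates the deferred link tasks.

// Node matches the leetcode definition from the comment above.
type Node struct {
    Val       int
    Neighbors []*Node
}

func cloneGraphCompact(node *Node) *Node {
    if node == nil {
        return nil
    }
    const maxVal = 101 // Vals are 1-100 per the problem constraints
    adj := make([][]int, maxVal)
    seen := make([]bool, maxVal)
    queue := []*Node{node}
    seen[node.Val] = true
    // Crawl: record each node's neighbors as plain ints.
    for len(queue) > 0 {
        item := queue[0]
        queue = queue[1:]
        for _, n := range item.Neighbors {
            adj[item.Val] = append(adj[item.Val], n.Val)
            if !seen[n.Val] {
                seen[n.Val] = true
                queue = append(queue, n)
            }
        }
    }
    // Rebuild: allocate all the new nodes first, then wire up the
    // links; the two-pass split means no link-task bookkeeping.
    nodes := make([]*Node, maxVal)
    for v := 1; v < maxVal; v++ {
        if seen[v] {
            nodes[v] = &Node{Val: v}
        }
    }
    for v := 1; v < maxVal; v++ {
        for _, w := range adj[v] {
            nodes[v].Neighbors = append(nodes[v].Neighbors, nodes[w])
        }
    }
    return nodes[node.Val]
}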
