Visit the nodes of a binary tree in level order: all nodes at the root level, left-to-right, all nodes at level 1, left-to-right, etc.

I did try to solve this with slices and boy did I have trouble. I eventually stopped trying to use them and went to a list-oriented solution with a double closure.

/**
 * Definition for a binary tree node.
 * type TreeNode struct {
 *     Val int
 *     Left *TreeNode
 *     Right *TreeNode
 * }
 */

type MyList struct {
    head *MyNode
    tail *MyNode
}

type MyNode struct {
    Val int
    Next *MyNode
}

func levelOrder(root *TreeNode) [][]int {
    // Ideally, I want to end up with an array
    // of arrays of nodes, added in left-to-right order.
    getStash, helper := createClosures()
    helper(root, 0)
    stash := getStash()

    out := [][]int{}
    for i := 0; i < len(stash); i++ {
        out = append(out, []int{})
        thisList := stash[i]
        for scan := thisList.head; scan != nil; scan = scan.Next {
            out[i] = append(out[i], scan.Val)
        }
    }
    return out
}

func createClosures() (func() []MyList, func(*TreeNode, int)) {
    stash := []MyList{}

    var helper func(*TreeNode, int)
    helper = func(root *TreeNode, level int) {
            if root == nil {
                // Nothing to do at this level
                return
            }
            // Current node gets stashed at the end of the list for this level.
            // stash[level] is the list to append to.
            // Add a new list at this level if needed.
            if len(stash) <= level {
                stash = append(stash, MyList{})
            }
            
            n := &MyNode{Val: root.Val}
            if stash[level].head == nil {
                stash[level].head = n
                stash[level].tail = n
            } else {
                stash[level].tail.Next = n
                stash[level].tail = n
            }
 
            // add the left and right subtrees at the next level down
            helper(root.Left, level + 1)
            helper(root.Right, level + 1)
        }
    return func() []MyList { return stash }, helper
}

This solves the problem, but it’s poor on memory (8th percentile) and runtime (26%). On the other hand, the list manipulation worked correctly the first time. Let’s try to speed it up. First let’s try to remove the lists.

/**
 * Definition for a binary tree node.
 * type TreeNode struct {
 *     Val int
 *     Left *TreeNode
 *     Right *TreeNode
 * }
 */

func levelOrder(root *TreeNode) [][]int {
    // Ideally, I want to end up with an array
    // of arrays of nodes, added in left-to-right order.
    getStash, helper := createClosures()
    helper(root, 0)
    return getStash()
}

func createClosures() (func() [][]int, func(*TreeNode, int)) {
    // This will be our output structure. As we go down the
    // levels, we add more []int slices to hold the next level
    // of nodes. Because we work left to right, we always add
    // new nodes at a level to the correct slice, and they always
    // are added to the right end of the slice, giving us the
    // output data structure we want.
    stash := [][]int{}

    // We have to declare a named variable to be closed over for
    // a Go anonymous function to be able to call itself.
    var helper func(*TreeNode, int)

    // The real meat of the process.
    helper = func(root *TreeNode, level int) {
            if root == nil {
                // Nothing to do at this level
                return
            }
            // Current node gets stashed at the end of the slice
            //  for this level.
            // stash[level] is the slice to append to.
            if len(stash) <= level {
                // We have never accessed this level. Add a slice
                // so appends will work. You CANNOT just assign to
                // stash[level], because it doesn't exist yet and
                // you'll get an out of bounds error.
                stash = append(stash, []int{})
            }
            // Add this node's value to the end of the list at this
            // level.
            stash[level] = append(stash[level], root.Val)

            // add the left and right subtrees at the next level down.
            // Because we're tracking the level, we always append to the
            // right slice in the stash, which lives outside the recursion.
            helper(root.Left, level + 1)
            helper(root.Right, level + 1)
        }
    // The two functions the main function needs to call. Stash will be
    // exactly the needed data structure when helper finishes.
    return func() [][]int { return stash }, helper
}

That’s a significant improvement; 71st percentile runtime, 24th percentile memory. The problem I had the first time around was not properly managing the slices. To add an element to a slice after the current end, you must use append(), and I needed to do this for each level as I got there. The code to build the output structure from the lists was the learning experience I needed to get that part right in the rewrite.
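The rule I tripped over can be shown in isolation. A quick sketch, assuming nothing but the standard library; buildLevels is just a name I made up for the illustration:

```go
package main

import "fmt"

// buildLevels grows the outer slice with append until `level` exists,
// then appends v at that level. Assigning to stash[level] directly
// before it exists would panic with an index-out-of-range error.
func buildLevels(stash [][]int, level, v int) [][]int {
    for len(stash) <= level {
        stash = append(stash, []int{})
    }
    stash[level] = append(stash[level], v)
    return stash
}

func main() {
    stash := buildLevels([][]int{}, 2, 42)
    fmt.Println(len(stash), stash[2]) // 3 [42]
}
```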

Still definitely more comfortable manipulating data structures. Let’s try removing the closure and see how that goes; we’ll just let scoping make the stash visible to the helper.

/**
 * Definition for a binary tree node.
 * type TreeNode struct {
 *     Val int
 *     Left *TreeNode
 *     Right *TreeNode
 * }
 */

func levelOrder(root *TreeNode) [][]int {
    // Ideally, I want to end up with an array
    // of arrays of nodes, added in left-to-right order.
    stash := [][]int{}

    var helper func(*TreeNode, int)
    helper = func(root *TreeNode, level int) {
            if root == nil {
                // Nothing to do at this level
                return
            }
            // Current node gets stashed at the end of the list for this level.
            // stash[level] is the slice to append to.
            // Add new node to list at this level
            if len(stash) <= level {
                stash = append(stash, []int{})
            }
            stash[level] = append(stash[level], root.Val)

            // add the left and right subtrees at the next level down
            helper(root.Left, level + 1)
            helper(root.Right, level + 1)
        }

    helper(root, 0)
    return stash
}

Interestingly, this is actually worse according to leetcode; 68th percentile now (3ms instead of 1ms). Honestly I think we’re in the noise at this point. No change in the memory usage.

I think if I’d gotten this in an interview I’d call the second or third solution good, and the first marginal. The rewrite time from the list version to the working slice version was only a couple minutes, and I never felt like I didn’t grasp the necessities of the problem.

3 sum: given a list of numbers, find all distinct triples that sum to 0. The hint was to transform this to a 2 sum problem, and look for the 2 sum that solves the problem for the current index.

My initial try spent a lot of effort trying to track the used values in maps, and I did not learn my lesson: when the solution feels too hard, you’re probably doing it wrong.

After I ran out of time, I started looking at solutions; I had started trying to sort the answers and see if there were duplicates, and it sort of worked, but failed some test cases in a non-obvious way.

The real attack for this kind of problem starts out by sorting the numbers first. You can then start depending on math identities to help winnow the solutions, eliminate duplicates, and shrink the search space all at once.

Here’s the solution that was clearest, with my comments and variable name adjustments to make the code clearer yet.

func threeSum(nums []int) [][]int {
    // Output slice.
    out := [][]int{}
    // Sort the numbers ascending first.
    sort.Ints(nums)

    var second, third, sum int

    // We're going to search for three terms. We're going to just move
    // left to right for the first one.
    for first, target := range nums {
        // Skip duplicate terms at the start. We always look at nums[0]
        // and start deduping after that.
        if first > 0 && target == nums[first-1] {
            // Same as previous so any solutions using this would be dups.
            // Skip to the next item.
            continue
        }
        // Start with second item just after the first and third at the end.
        second, third = first + 1, len(nums) - 1
        // As long as the second pointer hasn't hit the third, keep looking.
        for second < third {
            // calculate the sum
            sum = target + nums[second] + nums[third]
            if sum == 0 {
                // This is a solution, save it.
                out = append(out, []int{target, nums[second], nums[third]})
                // Skip duplicates of the second term. We increment once to
                // move to the item after the one we're deduping.
                second++
                for second < third && nums[second] == nums[second-1] {
                    second++
                }
            } else if sum < 0 {
                // second number is too small to work; third is the largest
                // as-yet-unused value, so the second has to be larger to
                // move the sum closer to zero.
                second++
            } else { // sum is > 0
                // third number is too large to work. The second number is
                // already the smallest possible for the next solution, so
                // we have to reduce the third to move closer to zero.
                third--
            }
        }
    }
    return out
}

This is good clever as opposed to bad clever, as it’s clear how this actually works. Notice how we keep shrinking the search space as we move the first pointer upward, and how the duplicate elimination at the first and second pointers speeds this up even further. We’re still O(N^2), but we’re discarding as much possible duplicate work as we can.

This is the kind of problem that I find quite difficult; I am not very friendly with numbers, despite a lot of programming over the years. I have spent more time chasing control block chains and securing inputs than I have shuffling numbers, and it shows: I have a lot less trouble with linked lists than these numerical problems.

Not feeling great about my chances to pass one of these live at the moment. Maybe it’s just “more practice needed”. I hope it is, anyway.

Longest substring without repeating characters: given a string, find the length of the longest substring containing no repeated characters. My first solution worked, but was slow and used a lot of memory (5th percentile for both):

func lengthOfLongestSubstring(s string) int {
    if s == "" {
        return 0
    }
    
    // This looks like a sliding window.
    longest := []byte{}
    current := []byte{}
    m := make(map[byte]int)
    bs := []byte(s)
    
    i := 0
    scan := 0

    for {
        // scan from current point.
        m[bs[scan]]++
        if m[bs[scan]] < 2 {
            // still no repeat, now add it
            current = append(current, bs[scan])
            scan++
            if scan == len(bs) {
                // no more chars to add, current finished
                // and string finished
                break
            }
        } else {
            // repeat; restarting scan at next point
            // for a new possible substring, see if 
            // the new substring is the new longest.
            if len(current) > len(longest) {
                longest = current
            }
            // in either case, we have to try for a new
            // non-repeater in the next window
            i++
            if i == len(bs) {
                // no new window left, finish up
                break
            }
            scan = i
            current = []byte{}
            m = make(map[byte]int)
        }
    }
    if len(current) > len(longest) {
        longest = current
    }
    return len(longest)
}

So this is the right track — the map to manage the characters we’ve seen is definitely right, but there has to be a trick here — and that is to use the index of the character as the hash value, and use that to help us manage the sliding window.

func lengthOfLongestSubstring(s string) int {
    if s == "" {
        return 0
    }

    m := make(map[byte]int)
    bs := []byte(s)
    
    start := 0
    maxLen := 0
    currentLen := 0

    // Iterate through the string and slide the window each
    // time you find a duplicate. end is the end of the current
    // substring.
    for end := 0; end < len(bs); end++ {
        // Check if the current character is a duplicate.
        if dupIndex, ok := m[bs[end]]; ok {
            // New max?
            if maxLen < end - start {
                maxLen = end - start
            }
            // Forget the indexes for all characters before the dup.
            m = make(map[byte]int)

            // Slide the window to just after the dup.
            start = dupIndex + 1
            currentLen = 0
            end = start
        }
        // Add the current character to the hashmap.
        m[bs[end]] = end
        currentLen++
    }
    if currentLen > maxLen {
        maxLen = currentLen
    }
    return maxLen
}

That’s a little better, but it’s still pretty slow: 13th percentile runtime, 20th percentile memory. (If you use strings instead of []byte, the percentiles drop to 5 and 5. No better than the dumb solution!) Looking at the solutions, it is possible to do the whole thing with indexes…but I definitely do not grok it.

func lengthOfLongestSubstring(s string) int {
    if s == "" {
        return 0
    }
    if len(s) == 1 {
        return 1
    }

    // brackets current longest found
    startMax := 0
    endMax := 0

    // start of window
    start := 0
    dupIndex := 0

    // Scan end pointer of window forward
    for end := 1; end < len(s); end++ {
        // The "I didn't find it" value
        dupIndex = -1
        // looking for duplicate inside window
        for j := start; j < end; j++ {
            if s[j] == s[end] {
                // Found it
                dupIndex = j
                break
            }
        }
        // Dup found inside window
        if dupIndex >= 0 {
            if (end - start - dupIndex) > (endMax - startMax) {
              startMax = dupIndex
              endMax = end
            }
            start = dupIndex + 1
        } else {
            // no dup in window
            if (end - start + 1) > (endMax - startMax) {
              startMax = start
              endMax = end + 1
            }
        }
    }
    return endMax - startMax
}

71st percentile CPU, 96th percentile memory (memory makes sense, we barely use anything other than indexes into the string). However, the big winner is dynamic programming:

func lengthOfLongestSubstring(s string) int {
    rv := 0
    // array as hash for extra speed and no dynamic memory.
    // create it, mark it all invalid.
    last := [256]int{}
    for i := range last {
        last[i] = -1
    }

    // define min inline.
    min := func(a, b int) int {
        if a < b {
            return a
        }
        return b
    }

    size := 0
    for i, b := range []byte(s) {
        size = min(size + 1, i - last[b])
        last[b] = i
        rv = rv + size - min(rv, size) // max(rv, size)
    }

    return rv
}

This is black magic and I do not understand it yet!

I was going to go to bed after the 01 matrix, but I couldn’t resist peeking at the next one.

Given a list of [x,y] coordinates, return the k points closest to the origin, rated “medium”.

I read it over and thought, you just need to compute the distances for each point, sort by distance, and then return the first k items. Am I reading this wrong? Is it harder than this? Am I missing something?

So I read it over again. No, no weird input or output format. No tricky limits or ranges.

Shrug, okay.

// Convenience structure to make it stupid easy.
type point struct {
    x int
    y int
    dist float64
}

func kClosest(points [][]int, k int) [][]int {
    // this seems like I'd sort the points by distance and return the
    // slice containing the right number of items...and yeah, that was it.

    // Create a slice to put the points and distances in.
    distancedPoints := []point{}

    for i := 0; i < len(points); i++ {
        // stash coordinates
        p := point{
            x: points[i][0],
            y: points[i][1],
        }
        // Go's sqrt wants float64, so convert the coords to float64
        // and do the calculation that's even spelled out in the description.
        X := float64(p.x)
        Y := float64(p.y)
        p.dist = math.Sqrt(X * X + Y * Y)
        // All saved together.
        distancedPoints = append(distancedPoints, p)
    }

    // sorted in increasing distance...
    sort.Slice(distancedPoints, 
        func(i, j int) bool {
            return distancedPoints[i].dist < distancedPoints[j].dist
        })

    // Take the first k points. We create and return the new slice
    // because we have to get rid of the distance.
    out := [][]int{}
    for i := 0; i < k; i++ {
        out = append(out, []int{distancedPoints[i].x, distancedPoints[i].y})
    }

    // That was it. Really.
    return out
}

I really don’t get why this is a medium. I did have to check the return type of math.Sqrt, and double-check the sort.Slice syntax, but in terms of what to do, this is not a medium.

97th percentile on speed, 28th on memory. I suppose you could sort parallel arrays for memory savings, but why bother? It just makes the code more complex without adding a lot of value.

Transform an arbitrarily-shaped rectangular matrix of zeroes and ones into a map of distance from the nearest zero, using a NSEW neighborhood.

My first impulse was, “oh, find all the zeroes, and then work out from each one to fix the ones”, followed by “that sounds hard to do, I’ll try the recurrence relation and caching”. The recurrence relation and caching simply does not work because the problem space is too big. Even with caching the intermediate results I still ran out of memory because the neighborhood processing recursed 4x for every cell.

So I went back to the original thought. I didn’t get much of anywhere until I got the hint that what I needed to do was mark all the 1’s as 999999 (an impossible value, marking them as “not done”). After that, I pulled up my memories of linked list processing and it was actually kind of easy! We start by creating a QueueItem struct:

type QueueItem struct {
    i    int
    j    int
    next *QueueItem
}

Now we need to find all the zeros and add them to the queue, and mark all the ones as “not done yet”. In hindsight, I should have made the queue operations at least a separate sub — mostly because there was a test case with a single zero, which messed up my enqueue/dequeue logic! — but they’re inline:

    m := len(mat)
    n := len(mat[0])

    // Head and tail of the work queue.
    var queue, tail *QueueItem

    for i := 0; i < m; i++ {
        for j := 0; j < n; j++ {
            if mat[i][j] == 0 {
                item := &QueueItem{
                    i: i,
                    j: j,
                    next: queue,
                }
                if queue == nil {
                    // tail stays here while we add to the head
                    tail = item
                }
                queue = item
            } else {
                mat[i][j] = 999999
            }
        }
    }

Pretty straightforward — now we have all the zeroes on the queue, and can work our way out from each one. The 999999 is a value guaranteed greater than the largest possible real value in the map. Now all we have to do is process the neighborhoods around each zero; if we find a cell we need to update, we know from the definition of the problem that these are one away from the cell we took off the queue, so for the zero cells, these will all be ones. We set the value in the matrix, and then we add the cell we just processed to the tail of the queue. This way we process all the zeroes, then all the ones, and so on until we’ve processed every cell that needs it. When that finishes, we’re done and can return the updated matrix.

    neighborhood := [][]int{{1, 0}, {-1, 0}, {0, -1}, {0, 1}}

    for queue != nil {
        // next cell in priority popped off the queue
        cell := queue
        queue = queue.next
        for _, dir := range neighborhood {
            // Figure out the next cell in the neighborhood of the one we
            // pulled off the queue.
            offsetI, offsetJ := cell.i + dir[0], cell.j + dir[1]

            // If we're in bounds...
            if offsetI >= 0 && offsetI < m && offsetJ >= 0 && offsetJ < n {
                // ...and the cell we're looking at has a value > the current
                // cell + 1, update it to the value of the current cell + 1
                // and queue it. Note that neighbors closer to a zero or 
                // zeroes themselves will be fail the test and will be
                // ignored; they're already done.
                if mat[offsetI][offsetJ] > mat[cell.i][cell.j] + 1 {
                    newCell := &QueueItem{
                        i: offsetI,
                        j: offsetJ,
                    }
                    // This is the special case that bit me: it's possible
                    // that the queue emptied. Reset the head and tail 
                    // pointers to the cell we're adding.
                    if queue == nil {
                        queue = newCell
                        tail = newCell
                    } else {
                        // Didn't empty, so add to the tail. This guarantees
                        // that we process all items closer to a zero first.
                        tail.next = newCell 
                        tail = tail.next
                    }
                    // This updates the cell to the right distance and
                    // removes the "unprocessed" marker.                    
                    mat[offsetI][offsetJ] = mat[cell.i][cell.j] + 1
                }
            }
        }
    }

    // when we arrive here there are no more queue items to process,
    // we've marked all the cells and are done.
    return mat

So that’s it! The priority queue is stupid easy to do after the other list exercises, and I should have trusted my initial idea and tried it first. The problem I got at Bloomberg was actually a variation on this, and I could solve it fairly fast now.
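For the record, the “separate sub” version of the queue operations I wish I’d written, with the empty-queue special case folded into exactly one place. A sketch; fifo and the method names are mine:

```go
package main

import "fmt"

type QueueItem struct {
    i, j int
    next *QueueItem
}

// fifo wraps the head and tail pointers so the empty-queue special
// case lives in one place instead of at every call site.
type fifo struct {
    head, tail *QueueItem
}

func (q *fifo) enqueue(i, j int) {
    item := &QueueItem{i: i, j: j}
    if q.head == nil {
        // Queue was empty (or drained): both pointers start here.
        q.head, q.tail = item, item
        return
    }
    q.tail.next = item
    q.tail = item
}

func (q *fifo) dequeue() (int, int, bool) {
    if q.head == nil {
        return 0, 0, false
    }
    item := q.head
    q.head = item.next
    if q.head == nil {
        q.tail = nil // queue drained; keep tail honest
    }
    return item.i, item.j, true
}

func main() {
    q := &fifo{}
    q.enqueue(0, 0)
    q.enqueue(1, 2)
    for {
        i, j, ok := q.dequeue()
        if !ok {
            break
        }
        fmt.Println(i, j)
    }
}
```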

I will have to figure out dynamic programming at some point, but this problem didn’t need it.

I chose completely the wrong way to solve this problem.

Instead of eliminating all the elements that could not contain the inserted interval, I tried to inspect each interval while simultaneously retaining the old ones and composing a new one. This led to deeply nested conditionals, and numerous flags trying to track whether I’d finished composing the new interval, whether I’d already inserted it, and so on.

This problem is really about figuring out the elements that cannot contribute to the merged interval, and skipping over them as fast as possible. So we have two assertions:

  • The element strictly occurs before the new interval (end of the interval is < start of new interval)
  • The element strictly occurs after the new interval (beginning of the interval is > end of the new interval)

With those in mind, the only thing we really have to do is collapse the intervals that overlap the new one: every remaining interval that starts at or before the end of the new interval. To collapse an old interval into the new one, we take the minimum of the two starts as the new start, and the maximum of the two ends as the new end, stretching the new interval lower and higher as needed to cover both.

Once the new stretched interval is computed, every remaining interval is strictly after it (that is the condition that ended the collapse loop: the next interval, if there is one, starts after the end of the stretched interval), so we can simply append the stretched interval to the output.

The finish appends the rest of the remaining items to the output, and then returns it.

func insert(intervals [][]int, newInterval []int) [][]int {
    // Interval belongs somewhere in the list.
    // Look for insert point for new interval
    output := [][]int{}
    
    // Assertion: if interval[i][1] < newInterval[0], this interval 
    //            occurs before the new interval.
    // i is declared outside the for so its value hangs around after the loop ends.
    i := 0
    for ; i < len(intervals) && intervals[i][1] < newInterval[0]; i++ {
        output = append(output, intervals[i])
    }

    // At this point, every interval in output ends before the
    // start of newInterval. Collapse intervals until we find one
    // that starts after the end of newInterval. i now points to
    // the first interval we haven't consumed.
    for ; i < len(intervals) && intervals[i][0] <= newInterval[1]; i++ {
        // adjust the start of the new interval to the earliest point
        if intervals[i][0] < newInterval[0] {
            newInterval[0] = intervals[i][0]
        }
        if newInterval[1] < intervals[i][1] {
            newInterval[1] = intervals[i][1]
        }
    }

    // add the stretched interval
    output = append(output, newInterval)

    // Copy anything left.
    for ; i  < len(intervals); i++ {
        output = append(output, intervals[i])
    }
    return output
}

I checked the solutions and found that there’s a function in the sort package named sort.Search that lets you construct an anonymous boolean function that is run over a sorted array to find the first item that meets the criterion specified by the function. So we can eliminate all of the explicit loops and write it like this:

func insert(intervals [][]int, newInterval []int) [][]int {
    n := len(intervals)

    // Find the first interval starting strictly after the start of
    // the new one. i-1, if it exists, is then the last interval that
    // starts at or before the start of the new interval.
    i := sort.Search(n, func(i int) bool {
       return intervals[i][0] > newInterval[0] 
    })

    // Find all intervals ending strictly after this one, same thing.
    // j will be the first one definitely ending after the end of the
    // new interval.
    j := sort.Search(n, func(j int) bool {
       return intervals[j][1] > newInterval[1]
    })

    // Collapse intervals. If we're inside the array, and
    // the LAST interval with a start <= the start of the
    // new interval has a start that is <= the end of that
    // interval, they overlap. Stretch the bottom of the new
    // interval down and delete the overlapped interval.
    if i-1 >= 0 && newInterval[0] <= intervals[i-1][1] {
        newInterval[0] = intervals[i-1][0]
        i--
    }

    // If we're inside the array, and the FIRST interval with
    // an end AFTER the end of the new interval has a start <=
    // the end of the new interval, then they overlap. Stretch the
    // top of the new interval up and discard the overlapped interval.
    if j < n && newInterval[1] >= intervals[j][0] {
        newInterval[1] = intervals[j][1]
        j++
    }

    return append(
        // the slice up to the last interval strictly ending before
        // the start of the new interval
        intervals[:i],            
        append(
            // the new, possibly stretched interval
            [][]int{newInterval}, 
            // the slice from the first interval strictly starting
            // after the end of the new interval
            intervals[j:]...)...)
}

Internally, sort.Search does a binary search for the criterion function, so it’s very efficient. (And I should go back to the binary search questions and see if I can solve them more easily with sort.Search!) However, thinking through the conditions is a bit of a beast. Unless you have very large arrays to work on, the triple for loop may be easier to code on the spot and get right. Given the major improvement in speed, it’s worth practicing with!
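A minimal illustration of the sort.Search contract (it returns the smallest index in [0, n) for which the function reports true, or n if it never does); firstAtLeast is a name I made up:

```go
package main

import (
    "fmt"
    "sort"
)

// firstAtLeast returns the smallest index in xs (sorted ascending)
// whose value is >= target, or len(xs) if there is none. That is the
// whole sort.Search contract: smallest index where the function is true.
func firstAtLeast(xs []int, target int) int {
    return sort.Search(len(xs), func(i int) bool { return xs[i] >= target })
}

func main() {
    xs := []int{1, 3, 3, 5, 8}
    fmt.Println(firstAtLeast(xs, 3))   // 1: the leftmost 3
    fmt.Println(firstAtLeast(xs, 4))   // 3: just past the run of 3s
    fmt.Println(firstAtLeast(xs, 100)) // 5: never true, so len(xs)
}
```

The function you hand it must be false for a prefix of the range and true for the rest, which is exactly what sorted intervals give us above.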

The sort.Search solution is very good: 94th percentile on speed, 92nd percentile on memory. Not surprising, as we’ve swapped the linear iteration over the intervals to a binary search, and we only compose the output array at the end with two appends.

Interestingly, the linear recomposition three-loop version is quite fast too! 95th percentile, though the memory usage is only about the 50th percentile. I’d say stick with that solution unless there’s a definite case for the harder-to-code one.

Lessons for me on this first “medium” problem, that I didn’t finish at all quickly? Spend more time thinking about how to eliminate data from the process. What is the fastest way to do that?

My “sort it into place and track everything” solution was too complicated by far. If the logic to iterate over an array starts getting deeply nested, you are doing it wrong.

First medium problem, and another IYKNK solution.

Given an array of numbers, search it for the subarray that has the maximum sum, and return the sum.

leetcode rule of thumb: any problem that has a solution with someone’s name on it is not an obvious solution. In this case, it’s Kadane’s method, which is simple but not intuitive enough that one would immediately stumble on it while trying to solve the problem.

A naive solution brute-force sums the subarrays, looking for the best answer; this is O(N^2). Kadane’s algorithm is O(n), which is pretty impressive, and hearkens back to the tree diameter solution: while working on the subproblems, you maintain a global maximum and compare further work to it.

func maxSubArray(nums []int) int {
    localMax := 0
    globalMax := -99999
    for i := 0; i < len(nums); i++ {
        if nums[i] > nums[i] + localMax {
            localMax = nums[i]
        } else {
            localMax = nums[i] + localMax
        }
        if localMax > globalMax {
            globalMax = localMax
        }
    }
    return globalMax
}

We start with a guaranteed-to-be-wrong answer for the global maximum, then start at the beginning of the array. If the number at the current index is greater than itself plus the local maximum so far, it becomes the new local maximum (the old running sum was only dragging it down). Otherwise, add it to the local maximum. If the local maximum is now greater than the global one, we’ve found a new maximum subarray sum, and we save it.

My original version came in disturbingly low in the runtime list, like 5%. I looked at it and said, “this is too simple for me to have gotten wrong, what’s the deal?”, and then I realized that this is another hyperoptimization hack probably unique to Go: Go has no builtin max() function, so I had coded one, adding another function call to each iteration.

Since the implementation is so minimal as it is, that function call suddenly becomes very significant. Just swapping it over to an if-else immediately put me at 100%.

Which goes to show you that the runtime as a percentage of the “best” runtime is, at best, kind of bullshit. However, I learned another trick to game leetcode, so I suppose that’s something.

First: other projects popped their heads in: setting up a friend’s Etsy store, an announcement from Apple that I had 30 days to update an app before it was pulled, and the holidays. So I really haven’t touched the challenge since the 16th, or about ten days ago, and finishing all 75 problems in a month was already quite the challenge.

The store is up and has had some orders already, and the app has been brought forward from Swift 3 to Swift 5, and updated to Xcode 15 (which ChatGPT cheerily insists doesn’t exist unless you pay money for its dubious advice at the ChatGPT 4 level). So let’s get back into the challenges.

Another tree problem; this one requires us to find the longest leaf-to-leaf path in the tree, not necessarily going through the root. I struggled with this one a while, but did not hit on the need to have the current largest diameter outside of crawl; I eventually gave up and Googled a solution, which passed a pointer to the diameter into the crawler, so we always were looking at the absolute maximum across the whole tree.

/**
 * Definition for a binary tree node.
 * type TreeNode struct {
 *     Val int
 *     Left *TreeNode
 *     Right *TreeNode
 * }
 */

func diameterOfBinaryTree(root *TreeNode) int {
    diameter := 0
    _ = crawl(root, &diameter)
    return diameter
}

func crawl(root *TreeNode, diameter *int) int {
    if root == nil {
        return 0
    }
    leftSubtreeDepth := crawl(root.Left, diameter)
    rightSubtreeDepth := crawl(root.Right, diameter)
    // The longest path through this node is the sum of the two subtree
    // depths; keep the best value seen anywhere in the tree.
    *diameter = max(*diameter, leftSubtreeDepth+rightSubtreeDepth)
    // Return this subtree's depth to the caller.
    return max(leftSubtreeDepth, rightSubtreeDepth) + 1
}

func max(x, y int) int {
    if x > y {
        return x
    }
    return y
}

This was 100% on speed, and 20% on memory.

I was close; I had the nil root case, and I’d figured out I needed the depths of the trees, but I didn’t put together that I’d need to cache the diameter completely outside the crawler to accurately find the maximum – if you just pass it back recursively, a larger value can end up discarded.

So let’s take a breath; we’re just past the midpoint, and I don’t think I’ll finish the whole set of exercises this month, even though I’ve ripped through 11 today.

I do feel like I’m learning things; some of them are generally useful, and others are “okay, you could do that if you really had to squeeze every microsecond out”.

  • The old Perl rule of thumb “if someone’s asking you to count things, start with a hash” has generally paid off well. There are a lot of tricky choices in this, but the following rules of thumb seem to be true:
    • If you are coding a for loop, putting the condition in the for statement seems to run fastest. It’s definitely preferable to a bare for { ... } with a conditional break inside.
    • Using range will often give you better CPU time at the expense of more memory.
    • Making two extra function calls on the nil subtrees of a leaf to a function that immediately drops out on the null case is better than having a leaf case inline.
  • I need to brush up on my tree-crawling algorithms. I know them well enough to code them and not break things, but the upcoming “diameter of a binary tree” problem is going to be one I have trouble with, similar to the trouble I had with lowest common ancestor and balanced tree: I simply am not comfortable enough thinking about trees to see the solutions easily; sometimes I’m having trouble getting the problem statement clear enough to solve the problem.
  • Also need more practice with binary searches. I struggled a lot with the first binary search, but did a lot better with the second one in minimum bad version.
  • I have learned some interesting new things:
    • The Euler tour for transforming the lowest common ancestor problem into a maximum-in-range problem.
    • The tortoise-and-hare-pointer algorithm for detecting loops in a list turned out to be applicable to finding the middle of a linear list.
    • If a Go problem is mostly linear searching and adding and removing stuff from the front of a list, using a real linked list will often be significantly faster, though more memory-intensive.
    • Dipping a toe into dynamic programming. I definitely need to spend more time with this, as I definitely got a 2-D dynamic programming question at Bloomberg and wasn’t able to solve it, even in Perl.
    • The Boyer-Moore majority vote algorithm for the majority element. Ridiculously simple…once you’ve seen it.
    • If you just need a thing to indicate that something is there in a map, a map[type]struct{} is much, much faster than any other type.
    • If your problem space is small enough, using an array instead of a hash will get you big speedups and save a ton of memory while still letting you count things by a key; it’s just that the key is now an integer (e.g., anagrams, if the letters are guaranteed to only be ASCII a-z).
    • Converting a byte array to a string is way faster than progressively building up a string by concatenation. If you can precalculate your maximum space requirement for the byte array, you can make it even faster by preallocating with make(), which lets you dynamically choose the initial size for the array.
    • string() on a slice is just as fast as string() on a full array.
    • It’s actually pretty easy to work with byte arrays, and they let you nicely mix numeric and character values for characters. (Though you should limit this to ASCII.)
  • Things I’m pleased about
    • I had less trouble than I expected with 2D arrays (flood fill). Getting the basic stuff right was the hard part, and after that it was straightforward.
    • I’ve gotten the hang (mostly) of passing pointers into Go functions, allowing me to do things like build persistent caches recursively.
    • I had my very own insight on the middle-of-a-linked-list problem that solved it with a fast and slow pointer.
    • I’ve used closures as iterators twice, and both times they significantly simplified the code.
    • I’m feeling better and better about my skill with Go. I’m definitely a very solid intermediate programmer, and I think I will be able to say I’m an advanced one after getting through all these problems. I’m much more comfortable with pointers in Go than I was when I started. The syntax is still a little gnarly.
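As a concrete example of the array-instead-of-hash trick in the list above, counting letters for the anagram problem can use a [26]int indexed by each letter’s offset from 'a' (a sketch, assuming lowercase ASCII input; the code here is illustrative, not a submission from above):

```go
package main

import "fmt"

// isAnagram sketches the array-instead-of-map counting trick: the "key"
// is the letter's offset from 'a', so a fixed [26]int replaces a
// map[byte]int. Assumes both strings are lowercase ASCII a-z.
func isAnagram(s, t string) bool {
    if len(s) != len(t) {
        return false
    }
    var counts [26]int
    for i := 0; i < len(s); i++ {
        counts[s[i]-'a']++ // count letters of s up...
        counts[t[i]-'a']-- // ...and letters of t down
    }
    // Anagrams if and only if every count cancels to zero.
    for _, c := range counts {
        if c != 0 {
            return false
        }
    }
    return true
}

func main() {
    fmt.Println(isAnagram("listen", "silent")) // true
    fmt.Println(isAnagram("rat", "car"))       // false
}
```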

I have more stuff to do for Shymala’s art business tomorrow, but I hope to get back to things in a few days. I’ve finished week one and most of week two; I’m two exercises short and two days behind, but there were two weeks of pretty much continuous work on the business in there, so maybe I will finish it off this month. We’ll see.

At any rate, I think I’ll feel a lot less anxious doing coding interviews once I’ve gone through this, and that was the point of this project.


Given a linked list, find the middle node. For an odd number of nodes, we want node [n/2]+1 (counting from 1); for an even number, we also want node [n/2]+1, the second of the two middle nodes.

The brute force method would run the length of the list, count up the number of nodes, and then start over from the beginning, counting off the appropriate number.
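That two-pass version would look something like this (a sketch; the function name is mine, and ListNode matches LeetCode’s singly-linked list definition):

```go
package main

import "fmt"

// ListNode matches LeetCode's singly-linked list definition.
type ListNode struct {
    Val  int
    Next *ListNode
}

// middleNodeTwoPass is a sketch of the brute-force method described
// above: one pass to count the nodes, then a second pass walking n/2
// nodes from the head to land on the middle (the second middle when
// the count is even).
func middleNodeTwoPass(head *ListNode) *ListNode {
    n := 0
    for p := head; p != nil; p = p.Next {
        n++
    }
    p := head
    for i := 0; i < n/2; i++ {
        p = p.Next
    }
    return p
}

func main() {
    // Build 1 -> 2 -> 3 -> 4 -> 5; the middle is 3.
    var head *ListNode
    for v := 5; v >= 1; v-- {
        head = &ListNode{Val: v, Next: head}
    }
    fmt.Println(middleNodeTwoPass(head).Val) // 3
}
```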

However, if we think about what the pointers do (one moves all the way, one moves halfway), we can do both at the same time, and use the “I ran off the end” condition to know when to check the pointer trying to get to halfway.

To do this, we run a fast pointer and a slow one, just like the loop detector tortoise and hare (and I’m patting myself on the back for having that insight), but in this case we use the fact that the lagging pointer moves exactly half as fast to ensure that it’s only gone halfway when the fast pointer runs off the end.

I had a little trouble getting the conditions right, but the final version that hits the right spots is this one:

/**
 * Definition for singly-linked list.
 * type ListNode struct {
 *     Val int
 *     Next *ListNode
 * }
 */
func middleNode(head *ListNode) *ListNode {
    // tortoise and hare solution
    if head == nil {
        return nil
    }

    p := head
    q := head
    // we're not at the end, and there's at least one node past
    // this one, so q.Next.Next to jump forward 2 is valid.
    for q != nil && q.Next != nil {
        q = q.Next.Next
        // p moves forward one so that it will have moved half
        // as far. When q runs off the end, p will be at the halfway
        // point.
        p = p.Next
    }
    return p
}

This is clever in that it comes up with a nice way to combine the two actions, but not “clever” in the sense of “how does this even work”. This feels like a good strong choice rather than a “look how clever I am with character values” fragile one.

