Visit the nodes of a binary tree in level order: all nodes at the root (level 0), left to right, then all nodes at level 1, left to right, and so on.

I did try to solve this with slices and boy did I have trouble. I eventually stopped trying to use them and went to a list-oriented solution with a double closure.

/**
 * Definition for a binary tree node.
 * type TreeNode struct {
 *     Val int
 *     Left *TreeNode
 *     Right *TreeNode
 * }
 */

type MyList struct {
    head *MyNode
    tail *MyNode
}

type MyNode struct {
    Val int
    Next *MyNode
}

func levelOrder(root *TreeNode) [][]int {
    // Ideally, I want to end up with an array
    // of arrays of nodes, added in left-to-right order.
    getStash, helper := createClosures()
    helper(root, 0)
    stash := getStash()

    out := [][]int{}
    for i := 0; i < len(stash); i++ {
        out = append(out, []int{})
        thisList := stash[i]
        for scan := thisList.head; scan != nil; scan = scan.Next {
            out[i] = append(out[i], scan.Val)
        }
    }
    return out
}

func createClosures() (func() []MyList, func(*TreeNode, int)) {
    stash := []MyList{}

    var helper func(*TreeNode, int)
    helper = func(root *TreeNode, level int) {
            if root == nil {
                // Nothing to do at this level
                return
            }
            // Current node gets stashed at the end of the list for this level.
            // stash[level] is the list to append to.
            // Add a new list if this is the first time we've reached this level.
            if len(stash) <= level {
                stash = append(stash, MyList{})
            }
            
            n := &MyNode{Val: root.Val}
            if stash[level].head == nil {
                stash[level].head = n
                stash[level].tail = n
            } else {
                stash[level].tail.Next = n
                stash[level].tail = n
            }
 
            // add the left and right subtrees at the next level down
            helper(root.Left, level + 1)
            helper(root.Right, level + 1)
        }
    return func() []MyList { return stash }, helper
}

This solves the problem, but it’s poor on memory (8th percentile) and runtime (26%). On the other hand, the list manipulation worked correctly the first time. Let’s try to speed it up. First let’s try to remove the lists.

/**
 * Definition for a binary tree node.
 * type TreeNode struct {
 *     Val int
 *     Left *TreeNode
 *     Right *TreeNode
 * }
 */

func levelOrder(root *TreeNode) [][]int {
    // Ideally, I want to end up with an array
    // of arrays of nodes, added in left-to-right order.
    getStash, helper := createClosures()
    helper(root, 0)
    return getStash()
}

func createClosures() (func() [][]int, func(*TreeNode, int)) {
    // This will be our output structure. As we go down the
    // levels, we add more []int slices to hold the next level
    // of nodes. Because we work left to right, we always add
    // new nodes at a level to the correct slice, and they always
    // are added to the right end of the slice, giving us the
    // output data structure we want.
    stash := [][]int{}

    // We have to declare a named variable to be closed over for
    // a Go anonymous function to be able to call itself.
    var helper func(*TreeNode, int)

    // The real meat of the process.
    helper = func(root *TreeNode, level int) {
            if root == nil {
                // Nothing to do at this level
                return
            }
            // Current node gets stashed at the end of the slice
            //  for this level.
            // stash[level] is the slice to append to.
            if len(stash) <= level {
                // We have never accessed this level. Add a slice
                // so appends will work. You CANNOT just assign to
                // stash[level], because it doesn't exist yet and
                // you'll get an out of bounds error.
                stash = append(stash, []int{})
            }
            // Add this node's value to the end of the list at this
            // level.
            stash[level] = append(stash[level], root.Val)

            // add the left and right subtrees at the next level down.
            // Because we're tracking the level, we always append to the
            // right slice in the stash, which lives outside the recursion.
            helper(root.Left, level + 1)
            helper(root.Right, level + 1)
        }
    // The two functions the main function needs to call. Stash will be
    // exactly the needed data structure when helper finishes.
    return func() [][]int { return stash }, helper
}

That’s a significant improvement; 71st percentile runtime, 24th percentile memory. The problem I had the first time around was not properly managing the slices. To add an element to a slice after the current end, you must use append(), and I needed to do this for each level as I got there. The code to build the output structure from the lists was the learning experience I needed to get that part right in the rewrite.
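The trap is easy to demonstrate in isolation. Here's a tiny sketch (my own illustration, with a hypothetical `ensureLevel` helper, not code from the solution): indexing past the end of a slice panics, while append grows it.

```go
package main

import "fmt"

// ensureLevel grows stash with append until index level exists.
// Assigning stash[level] directly before it exists would panic
// with an index-out-of-range error; append is how a slice grows.
func ensureLevel(stash [][]int, level int) [][]int {
	for len(stash) <= level {
		stash = append(stash, []int{})
	}
	return stash
}

func main() {
	stash := [][]int{}
	stash = ensureLevel(stash, 2)
	stash[2] = append(stash[2], 42)
	fmt.Println(len(stash), stash[2]) // 3 [42]
}
```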

Still definitely more comfortable manipulating data structures. Let’s try removing the closure and see how that goes; we’ll just let scoping make the stash visible to the helper.

/**
 * Definition for a binary tree node.
 * type TreeNode struct {
 *     Val int
 *     Left *TreeNode
 *     Right *TreeNode
 * }
 */

func levelOrder(root *TreeNode) [][]int {
    // Ideally, I want to end up with an array
    // of arrays of nodes, added in left-to-right order.
    stash := [][]int{}

var helper func(*TreeNode, int)
    helper = func(root *TreeNode, level int) {
            if root == nil {
                // Nothing to do at this level
                return
            }
            // Current node gets stashed at the end of the list for this level.
            // stash[level] is the slice to append to.
            // Add new node to list at this level
            if len(stash) <= level {
                stash = append(stash, []int{})
            }
            stash[level] = append(stash[level], root.Val)

            // add the left and right subtrees at the next level down
            helper(root.Left, level + 1)
            helper(root.Right, level + 1)
        }

    helper(root, 0)
    return stash
}

Interestingly, this is actually worse according to leetcode; 68th percentile now (3ms instead of 1ms). Honestly I think we’re in the noise at this point. No change in the memory usage.

I think if I’d gotten this in an interview I’d call the second or third solution good, and the first marginal. The rewrite time from the list version to the working slice version was only a couple minutes, and I never felt like I didn’t grasp the necessities of the problem.

3 sum: given a list of numbers, find all distinct triples that sum to 0. The hint was to transform this to a 2 sum problem, and look for the 2 sum that solves the problem for the current index.

My initial try spent a lot of effort trying to track the used values in maps, and I did not learn my lesson: when the solution feels too hard, you’re probably doing it wrong.

After I ran out of time, I started looking at solutions; I had started trying to sort the answers and see if there were duplicates, and it sort of worked, but failed some test cases in a non-obvious way.

The real attack for this kind of problem starts out by sorting the numbers first. You can then start depending on math identities to help winnow the solutions, eliminate duplicates, and shrink the search space all at once.

Here’s the solution that was clearest, with my comments and variable name adjustments to make the code clearer yet.

func threeSum(nums []int) [][]int {
    // Output slice.
    out := [][]int{}
    // Sort the numbers ascending first.
    sort.Ints(nums)

    var second, third, sum int

    // We're going to search for three terms. We're going to just move
    // left to right for the first one.
    for first, target := range nums {
        // Skip duplicate first terms. We always look at nums[0]
        // and start deduping after that.
        if first > 0 && target == nums[first-1] {
            // Same as previous, so any solutions using this would be dups.
            // Skip to the next item.
            continue
        }
        // Start with second item just after the first and third at the end.
        second, third = first + 1, len(nums) - 1
        // As long as the second pointer hasn't hit the third, keep looking.
        for second < third {
            // calculate the sum
            sum = target + nums[second] + nums[third]
            if sum == 0 {
                // This is a solution, save it.
                out = append(out, []int{target, nums[second], nums[third]})
                // Skip duplicates of the second term. We increment once to
                // move to the item after the one we're deduping.
                second++
                for second < third && nums[second] == nums[second-1] {
                    second++
                }
            } else if sum < 0 {
                // second number is too small to work; third is the largest
                // as-yet-unused value, so the second has to be larger to
                // move the sum closer to zero.
                second++
            } else { // sum is > 0
                // third number is too large to work. The second number is
                // already the smallest possible for the next solution, so
                // we have to reduce the third to move closer to zero.
                third--
            }
        }
    }
    return out
}

This is good clever as opposed to bad clever, as it’s clear how this actually works. Notice how we keep shrinking the search space as we move the first pointer upward, and how the duplicate elimination at the first and second pointers speeds this up even further. We’re still O(N^2), but we’re discarding as much potentially duplicate work as we can.

This is the kind of problem that I find quite difficult; I am not very friendly with numbers, despite a lot of programming over the years. I have spent more time chasing control block chains and securing inputs than I have shuffling numbers, and it shows: I have a lot less trouble with linked lists than these numerical problems.

Not feeling great about my chances to pass one of these live at the moment. Maybe it’s just “more practice needed”. I hope it is, anyway.

First solution worked, but was slow and used a lot of memory (5th percentile for both):

func lengthOfLongestSubstring(s string) int {
    if s == "" {
        return 0
    }
    
    // This looks like a sliding window.
    longest := []byte{}
    current := []byte{}
    m := make(map[byte]int)
    bs := []byte(s)
    
    i := 0
    scan := 0

    for {
        // scan from current point.
        m[bs[scan]]++
        if m[bs[scan]] < 2 {
            // still no repeat, now add it
            current = append(current, bs[scan])
            scan++
            if scan == len(bs) {
                // no more chars to add, current finished
                // and string finished
                break
            }
        } else {
            // repeat; restarting scan at next point
            // for a new possible substring, see if 
            // the new substring is the new longest.
            if len(current) > len(longest) {
                longest = current
            }
            // in either case, we have to try for a new
            // non-repeater in the next window
            i++
            if i == len(bs) {
                // no new window left, finish up
                break
            }
            scan = i
            current = []byte{}
            m = make(map[byte]int)
        }
    }
    if len(current) > len(longest) {
        longest = current
    }
    return len(longest)
}

So this is the right track — the map to manage the characters we’ve seen is definitely right, but there has to be a trick here — and that is to store each character’s index as its value in the map, and use that to help us manage the sliding window.

func lengthOfLongestSubstring(s string) int {
    if s == "" {
        return 0
    }

    m := make(map[byte]int)
    bs := []byte(s)
    
    start := 0
    maxLen := 0
    currentLen := 0

    // Iterate through the string and slide the window each
    // time you find a duplicate.
    // end is the end of the current substring.
    for end := 0; end < len(bs); end++ {
        // Check if the current character is a duplicate.
        if dupIndex, ok := m[bs[end]]; ok {
            // New max?
            if maxLen < end - start {
                maxLen = end - start
            }
            // Forget the indexes of all characters before the dup.
            m = make(map[byte]int)

            // Slide the window to just after the dup.
            start = dupIndex + 1
            currentLen = 0
            end = start
        }
        // Add the current character to the hashmap.
        m[bs[end]] = end
        currentLen++
    }
    if currentLen > maxLen {
        maxLen = currentLen
    }
    return maxLen
}

That’s a little better, but it’s still pretty slow: 13th percentile runtime, 20th percentile memory. (If you use strings, the percentiles drop to 5 and 5. No better than the dumb solution!) Looking at the solutions, it is possible to do the whole thing with indexes…but I definitely do not grok it.

func lengthOfLongestSubstring(s string) int {
    if s == "" {
        return 0
    }
    if len(s) == 1 {
        return 1
    }

    // brackets current longest found
    startMax := 0
    endMax := 0

    // start of window
    start := 0
    dupIndex := 0

    // Scan end pointer of window forward
    for end := 1; end < len(s); end++ {
        // The "I didn't find it" value
        dupIndex = -1
        // looking for duplicate inside window
        for j := start; j < end; j++ {
            if s[j] == s[end] {
                // Found it
                dupIndex = j
                break
            }
        }
        // Dup found inside window
        if dupIndex >= 0 {
            if (end - dupIndex) > (endMax - startMax) {
              startMax = dupIndex
              endMax = end
            }
            start = dupIndex + 1
        } else {
            // no dup in window
            if (end - start + 1) > (endMax - startMax) {
              startMax = start
              endMax = end + 1
            }
        }
    }
    return endMax - startMax
}

71st percentile CPU, 96th percentile memory (memory makes sense, we barely use anything other than indexes into the string). However, the big winner is dynamic programming:

func lengthOfLongestSubstring(s string) int {
    rv := 0
    // array as hash for extra speed and no dynamic memory.
    // create it, mark it all invalid.
    last := [256]int{}
    for i := range last {
        last[i] = -1
    }

    // define min inline.
    min := func(a, b int) int {
        if a < b {
            return a
        }
        return b
    }

    size := 0
    for i, b := range []byte(s) {
        size = min(size + 1, i - last[b])
        last[b] = i
        rv = rv + size - min(rv, size) // max(rv, size)
    }

    return rv
}

This is black magic and I do not understand it yet!
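The least I can do is watch it run. Here's the same recurrence instrumented to print its state (my own trace harness, not a solution): `size` turns out to be the length of the longest clean substring ending at the current index, and `i - last[b]` caps it at the distance back to the previous occurrence of the current byte.

```go
package main

import "fmt"

// longest returns the answer while printing the recurrence state,
// so the "black magic" can be watched step by step.
func longest(s string) int {
	last := [256]int{}
	for i := range last {
		last[i] = -1
	}
	min := func(a, b int) int {
		if a < b {
			return a
		}
		return b
	}
	rv, size := 0, 0
	for i, b := range []byte(s) {
		// size is the length of the longest clean substring ending
		// at i-1. Extending it adds 1; but if b occurred at last[b],
		// the substring can reach back at most i-last[b] characters.
		size = min(size+1, i-last[b])
		last[b] = i
		if size > rv {
			rv = size
		}
		fmt.Printf("i=%d b=%c size=%d rv=%d\n", i, b, size, rv)
	}
	return rv
}

func main() {
	longest("abba") // sizes go 1, 2, 1, 2; final answer 2
}
```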

I was going to go to bed after the 01 matrix, but I couldn’t resist peeking at the next one.

Given a list of [x,y] coordinates, return the k points closest to the origin, rated “medium”.

I read it over and thought, you just need to compute the distances for each point, sort by distance, and then return the first k items. Am I reading this wrong? Is it harder than this? Am I missing something?

So I read it over again. No, no weird input or output format. No tricky limits or ranges.

Shrug, okay.

// Convenience structure to make it stupid easy.
type point struct {
    x int
    y int
    dist float64
}

func kClosest(points [][]int, k int) [][]int {
    // this seems like I'd sort the points by distance and return the
    // slice containing the right number of items...and yeah, that was it.

    // Create a slice to put the points and distances in.
    distancedPoints := []point{}

    for i := 0; i < len(points); i++ {
        // stash coordinates
        p := point{
            x: points[i][0],
            y: points[i][1],
        }
        // Go's sqrt wants float64, so convert the coords to float64
        // and do the calculation that's even spelled out in the description.
        X := float64(p.x)
        Y := float64(p.y)
        p.dist = math.Sqrt(X * X + Y * Y)
        // All saved together.
        distancedPoints = append(distancedPoints, p)
    }

    // sorted in increasing distance...
    sort.Slice(distancedPoints, 
        func(i, j int) bool {
            return distancedPoints[i].dist < distancedPoints[j].dist
        })

    // Take the first k points. We create and return the new slice
    // because we have to get rid of the distance.
    out := [][]int{}
    for i := 0; i <= k-1; i++ {
        out = append(out, []int{distancedPoints[i].x, distancedPoints[i].y})
    }

    // That was it. Really.
    return out
}

I really don’t get why this is a medium. I did have to check the argument type for math.Sqrt, and double-check the sort.Slice syntax, but in terms of what to do, this is not a medium.

97th percentile on speed, 28th on memory. I suppose you could sort parallel arrays for memory savings, but why bother? It just makes the code more complex without adding a lot of value.
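If you did want the memory back, here's a sketch of one alternative (mine, not the submitted code, with a hypothetical `kClosestInPlace` name): since sqrt is monotonic, squared integer distances sort in the same order, so you can sort the input slice in place and reslice, skipping both the struct and the float math.

```go
package main

import (
	"fmt"
	"sort"
)

// kClosestInPlace sorts the input slice itself by squared distance
// (sqrt is monotonic, so the ordering is identical) and reslices.
// No extra struct, no float conversion.
func kClosestInPlace(points [][]int, k int) [][]int {
	sort.Slice(points, func(i, j int) bool {
		di := points[i][0]*points[i][0] + points[i][1]*points[i][1]
		dj := points[j][0]*points[j][0] + points[j][1]*points[j][1]
		return di < dj
	})
	return points[:k]
}

func main() {
	pts := [][]int{{3, 3}, {5, -1}, {-2, 4}}
	fmt.Println(kClosestInPlace(pts, 1)) // [[3 3]]
}
```

Note that this reorders the caller's slice; fine on leetcode, maybe not elsewhere.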

Transform an arbitrarily-shaped rectangular matrix of zeroes and ones into a map of distance from the nearest zero, using a NSEW neighborhood.

My first impulse was, “oh, find all the zeroes, and then work out from each one to fix the ones”, followed by “that sounds hard to do, I’ll try the recurrence relation and caching”. The recurrence-relation-and-caching approach simply does not work because the problem space is too big. Even with caching the intermediate results, I still ran out of memory because the neighborhood processing recursed 4x for every cell.

So I went back to the original thought. I didn’t get much of anywhere until I got the hint that what I needed to do was mark all the 1’s with a huge sentinel value (an impossible distance, marking them as “not done”). After that, I pulled up my memories of linked list processing and it was actually kind of easy! We start by creating a queueItem struct:

type queueItem struct {
    i    int
    j    int
    next *queueItem
}

Now we need to find all the zeros and add them to the queue, and mark all the ones as “not done yet”. In hindsight, I should have made the queue operations at least a separate sub — mostly because there was a test case with a single zero, which messed up my enqueue/dequeue logic! — but they’re inline:

    m := len(mat)
    n := len(mat[0])

    var queue, tail *queueItem

    for i := 0; i < m; i++ {
        for j := 0; j < n; j++ {
            if mat[i][j] == 0 {
                item := &queueItem{
                    i: i,
                    j: j,
                    next: queue,
                }
                if queue == nil {
                    // tail stays here while we add to the head
                    tail = item
                }
                queue = item
            } else {
                mat[i][j] = 9999999
            }
        }
    }

Pretty straightforward — now we have all the zeroes on the queue, and can work our way out from each one. The 9999999 is a value guaranteed greater than the largest possible real value in the map. Now all we have to do is process the neighborhoods around each cell on the queue; if we find a cell we need to update, we know from the definition of the problem that it is one away from the cell we took off the queue, so for the zero cells, these will all be ones. We set the value in the matrix, and then we add the cell we just processed to the tail of the queue. This way we process all the zeroes, then all the ones, and so on until we’ve processed every cell that needs it. When that finishes, we’re done and can return the updated matrix.

    neighborhood := [][]int{ {1,0}, {-1,0}, {0, -1}, {0, 1}}

    for queue != nil {
        // next cell in priority popped off the queue
        cell := queue
        queue = queue.next
        for _, dir := range neighborhood {
            // Figure out the next cell in the neighborhood of the one we
            // pulled off the queue.
            offsetI, offsetJ := cell.i + dir[0], cell.j + dir[1]

            // If we're in bounds...
            if offsetI >= 0 && offsetI < m && offsetJ >= 0 && offsetJ < n {
                // ...and the cell we're looking at has a value > the current
                // cell + 1, update it to the value of the current cell + 1
                // and queue it. Note that neighbors closer to a zero, or
                // zeroes themselves, will fail the test and be
                // ignored; they're already done.
                if mat[offsetI][offsetJ] > mat[cell.i][cell.j] + 1 {
                    newCell := &queueItem{
                        i: offsetI,
                        j: offsetJ,
                    }
                    // This is the special case that bit me: it's possible
                    // that the queue emptied. Reset the head and tail 
                    // pointers to the cell we're adding.
                    if queue == nil {
                        queue = newCell
                        tail = newCell
                    } else {
                        // Didn't empty, so add to the tail. This guarantees
                        // that we process all items closer to a zero first.
                        tail.next = newCell 
                        tail = tail.next
                    }
                    // This updates the cell to the right distance and
                    // removes the "unprocessed" marker.                    
                    mat[offsetI][offsetJ] = mat[cell.i][cell.j] + 1
                }
            }
        }
    }

    // when we arrive here there are no more queue items to process,
    // we've marked all the cells and are done.
    return mat

So that’s it! The priority queue is stupid easy to do after the other list exercises, and I should have trusted my initial idea and tried it first. The problem I got at Bloomberg was actually a variation on this, and I could solve it fairly fast now.

I will have to figure out dynamic programming at some point, but this problem didn’t need it.
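For the record, here's what pulling the queue operations out into a separate sub might look like (an after-the-fact sketch, with hypothetical `fifo`, `enqueue`, and `dequeue` names, using the same queueItem shape): the empty-queue special case that bit me lives in exactly one place.

```go
package main

import "fmt"

type queueItem struct {
	i    int
	j    int
	next *queueItem
}

// fifo wraps head and tail so enqueue handles the empty-queue
// case (the one that bit me) exactly once.
type fifo struct {
	head *queueItem
	tail *queueItem
}

func (q *fifo) enqueue(i, j int) {
	item := &queueItem{i: i, j: j}
	if q.head == nil {
		// Queue was empty (or drained): reset both pointers.
		q.head, q.tail = item, item
		return
	}
	q.tail.next = item
	q.tail = item
}

func (q *fifo) dequeue() *queueItem {
	item := q.head
	if item != nil {
		q.head = item.next
		if q.head == nil {
			q.tail = nil
		}
	}
	return item
}

func main() {
	q := &fifo{}
	q.enqueue(0, 0)
	q.enqueue(1, 2)
	for c := q.dequeue(); c != nil; c = q.dequeue() {
		fmt.Println(c.i, c.j)
	}
}
```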

I chose completely the wrong way to solve this problem.

Instead of eliminating all the elements that could not contribute to the inserted interval, I attempted to simultaneously inspect each interval, retain the old ones, and compose the new one, all in a single pass. This led to deeply nested conditionals, and numerous flags attempting to track whether I’d finished composing the new interval, whether I’d inserted it, and so on.

This problem is really about figuring out the elements that cannot contribute to the merged interval, and skipping over them as fast as possible. So we have two assertions:

  • The element strictly occurs before the new interval (end of the interval is < start of new interval)
  • The element strictly occurs after the new interval (beginning of the interval is > end of the new interval)

With those in mind, the only thing we really have to do is collapse any intervals that overlap the new one. To collapse an old interval into the new one, we take the minimum of the two starts as the start of the new interval, and the maximum of the two ends as its end, stretching the new interval lower and higher as needed to fit.

Once the stretched interval is computed, we can simply append it. (Remember, we continue collapsing until the next interval, if there is one, is strictly after the new interval; that is true exactly when the end of the stretched interval is < the start of the next one.)

The finish appends the rest of the remaining items to the output, and then returns it.

func insert(intervals [][]int, newInterval []int) [][]int {
    // Interval belongs somewhere in the list.
    // Look for insert point for new interval
    output := [][]int{}
    
    // Assertion: if intervals[i][1] < newInterval[0], this interval 
    //            occurs strictly before the new interval.
    // i is declared outside the for so its value hangs around after the loop ends.
    i := 0
    for ; i < len(intervals) && intervals[i][1] < newInterval[0]; i++ {
        output = append(output, intervals[i])
    }

    // At this point, every interval copied to output ends before the
    // start of newInterval. Collapse intervals until we find one that
    // starts after the end of newInterval. i points to the first
    // interval not yet handled.
    for ; i < len(intervals) && intervals[i][0] <= newInterval[1]; i++ {
        // adjust the start of the new interval to the earliest point
        if intervals[i][0] < newInterval[0] {
            newInterval[0] = intervals[i][0]
        }
        if newInterval[1] < intervals[i][1] {
            newInterval[1] = intervals[i][1]
        }
    }

    // add the stretched interval
    output = append(output, newInterval)

    // Copy anything left.
    for ; i  < len(intervals); i++ {
        output = append(output, intervals[i])
    }
    return output
}

I checked the solutions and found that there’s a function in the sort package named sort.Search that lets you construct an anonymous boolean function that is run over a sorted array to find the first item that meets the criterion specified by the function. So we can eliminate all of the explicit loops and write it like this:

func insert(intervals [][]int, newInterval []int) [][]int {
    n := len(intervals)

    // Find the first interval starting strictly after the start of
    // the new one. i-1, if it exists, is the last interval whose
    // start is <= the start of the new interval.
    i := sort.Search(n, func(i int) bool {
       return intervals[i][0] > newInterval[0] 
    })

    // Same thing for the other end: j will be the first interval
    // ending strictly after the end of the new interval.
    j := sort.Search(n, func(j int) bool {
       return intervals[j][1] > newInterval[1]
    })

    // Collapse intervals. If we're inside the array, and
    // the LAST interval with a start <= the start of the
    // new interval has an end >= the start of the new
    // interval, they overlap. Stretch the bottom of the new
    // interval down and drop the overlapped interval.
    if i-1 >= 0 && newInterval[0] <= intervals[i-1][1] {
        newInterval[0] = intervals[i-1][0]
        i--
    }

    // If we're inside the array, and the FIRST interval with
    // an end AFTER the end of the new interval has a start <=
    // the end of the new interval, then they overlap. Stretch the
    // top of the new interval up and discard the overlapped interval.
    if j < n && newInterval[1] >= intervals[j][0] {
        newInterval[1] = intervals[j][1]
        j++
    }

    return append(
        // the slice up to the last interval strictly ending before
        // the start of the new interval
        intervals[:i],            
        append(
            // the new, possibly stretched interval
            [][]int{newInterval}, 
            // the slice from the first interval strictly starting
            // after the end of the new interval
            intervals[j:]...)...)
}

Internally, sort.Search does a binary search driven by the criterion function, so it’s very efficient. (And I should go back to the binary search questions and see if I can solve them more easily with sort.Search!) However, thinking through the conditions is a bit of a beast. Unless you have very large arrays to work on, the three-loop linear version may be easier to code on the spot and get right. Still, it’s worth practicing with!
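As a reminder to future me, the basic sort.Search shape for a plain binary search looks like this (a generic sketch with a hypothetical `find` wrapper, not tied to this problem):

```go
package main

import (
	"fmt"
	"sort"
)

// find is the classic binary-search-for-a-target shape: sort.Search
// returns the smallest index where the predicate is true (or n if
// it never is), so test for >= target and then confirm equality.
func find(nums []int, target int) int {
	i := sort.Search(len(nums), func(i int) bool {
		return nums[i] >= target
	})
	if i < len(nums) && nums[i] == target {
		return i
	}
	return -1
}

func main() {
	nums := []int{1, 3, 5, 7, 9}
	fmt.Println(find(nums, 7)) // 3
	fmt.Println(find(nums, 4)) // -1
}
```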

The sort.Search solution is very good: 94th percentile on speed, 92nd percentile on memory. Not surprising, as we’ve swapped the linear iteration over the intervals to a binary search, and we only compose the output array at the end with two appends.

Interestingly, the linear recomposition three-loop version is quite fast too! 95th percentile, though the memory usage is only about the 50th percentile. I’d say stick with that solution unless there’s a definite case for the harder-to-code one.

Lessons for me on this first “medium” problem, that I didn’t finish at all quickly? Spend more time thinking about how to eliminate data from the process. What is the fastest way to do that?

My “sort it into place and track everything” solution was too complicated by far. If the logic to iterate over an array starts getting deeply nested, you are doing it wrong.

First medium problem, and another IYKYK solution.

Given an array of numbers, search it for the subarray that has the maximum sum, and return the sum.

leetcode rule of thumb: any problem that has a solution with someone’s name on it is not an obvious solution. In this case, it’s Kadane’s method, which is simple but not intuitive enough that one would immediately stumble on it while trying to solve the problem.

A naive solution brute-force sums the subarrays, looking for the best answer; this is O(N^2). Kadane’s algorithm is O(n), which is pretty impressive, and hearkens back to the tree diameter solution: while working on the subproblems, you maintain a global maximum and compare further work to it.

func maxSubArray(nums []int) int {
    localMax := 0
    globalMax := -99999
    for i := 0; i < len(nums); i++ {
        if nums[i] > nums[i] + localMax {
            localMax = nums[i]
        } else {
            localMax = nums[i] + localMax
        }
        if localMax > globalMax {
            globalMax = localMax
        }
    }
    return globalMax
}

We start with a guaranteed-to-be-wrong answer for the global maximum, then start at the beginning of the array. If the next number by itself is greater than the local maximum sum extended with it, it becomes the new local maximum; otherwise, we extend the local maximum with it. If the local maximum is now greater than the global one, we’ve found a new max subarray sum, and we save it.

My original version came in disturbingly low in the runtime list, like 5%. I looked at it and said, “this is too simple for me to have gotten wrong, what’s the deal?”, and then I realized that this is another hyperoptimization hack probably unique to Go: Go has no builtin max() function, so I had coded one, adding another function call to each iteration.

Since the implementation is so minimal as it is, that function call suddenly becomes very significant. Just swapping it over to an if-else immediately put me at 100%.
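Reconstructed from memory (not the exact submission), the slow version was just this shape: a helper called twice per iteration. Inlining the two comparisons as if-else, as in the solution above, was the whole fix.

```go
package main

import "fmt"

// maxFn is the helper I'd coded; two calls per loop iteration.
func maxFn(a, b int) int {
	if a > b {
		return a
	}
	return b
}

// maxSubArrayCall is Kadane's algorithm with the function-call
// overhead that tanked my runtime percentile.
func maxSubArrayCall(nums []int) int {
	localMax, globalMax := 0, -99999
	for i := 0; i < len(nums); i++ {
		localMax = maxFn(nums[i], nums[i]+localMax)
		globalMax = maxFn(globalMax, localMax)
	}
	return globalMax
}

func main() {
	nums := []int{-2, 1, -3, 4, -1, 2, 1, -5, 4}
	fmt.Println(maxSubArrayCall(nums)) // 6, from [4, -1, 2, 1]
}
```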

Which goes to show you that the runtime as a percentage of the “best” runtime is, at best, kind of bullshit. However, I learned another trick to game leetcode, so I suppose that’s something.

First: other projects popped their heads in: setting up a friend’s Etsy store, an announcement from Apple that I had 30 days to update an app before it was pulled, and the holidays. So I really haven’t touched the challenge since the 16th, or about ten days ago, and finishing all 75 problems in a month was already quite the challenge.

The store is up and has had some orders already, and the app has been brought forward from Swift 3 to Swift 5, and updated to Xcode 15 (which ChatGPT cheerily insists doesn’t exist unless you pay money for its dubious advice at the ChatGPT 4 level). So let’s get back into the challenges.

Another tree problem; this one requires us to find the longest leaf-to-leaf path in the tree, not necessarily going through the root. I struggled with this one a while, but did not hit on the need to have the current largest diameter outside of crawl; I eventually gave up and Googled a solution, which passed a pointer to the diameter into the crawler, so we always were looking at the absolute maximum across the whole tree.

/**
 * Definition for a binary tree node.
 * type TreeNode struct {
 *     Val int
 *     Left *TreeNode
 *     Right *TreeNode
 * }
 */

func diameterOfBinaryTree(root *TreeNode) int {
    diameter := 0
    _ = crawl(root, &diameter)
    return diameter
}

func crawl(root *TreeNode, diameter *int) int {
    if root == nil {
        return 0
    }
    leftSubtreeDepth := crawl(root.Left, diameter)
    rightSubtreeDepth := crawl(root.Right, diameter)
    *diameter = max(*diameter, leftSubtreeDepth+rightSubtreeDepth)
    return max(leftSubtreeDepth, rightSubtreeDepth) + 1
}

func max(x, y int) int {
    if x > y {
        return x
    }
    return y
}

This was 100% on speed, and 20% on memory.

I was close; I had the nil root case, and I’d figured out I needed the depths of the trees, but I didn’t put together that I’d need to cache the diameter completely outside the crawler to accurately find the maximum – if you just pass it back recursively, a larger value can end up discarded.

This is a tougher one, and honestly, you would need to have solved this yourself before to get to a reasonable algorithm in the usual half-hour. It’s generally a telling sign when the problem description has to refer to Wikipedia for an accurate statement of the problem.

This is one that is very simple to do by inspection, but difficult to do simply by algorithm, let alone an efficient one.

My first idea was that I’d need to know the paths to the nodes, and that I’d *handwave* figure out the common ancestor from those. The good algorithm for this kind of problem does use the idea of paths to the nodes, but instead it does an in-order trace of the path through the tree – an Euler tour, dude comes in everywhere – and that lets us transform the problem into a range-minimum query (i.e., when I have node p and node q, and the path I need to get from one to the other, somewhere in that path there will be a minimum-depth node, which will either be one of the two, or a third, higher node).

The Wikipedia example shows this as an illustration:

To spell this out more explicitly: you can see we did the in-order tour, transforming the tree into a linear walk through it: A, down to B, down to C, back up to B, down to D, down to E, up to D, and so on. We record the node value and the depth at each step. To get the range query, we want the first appearance of each of two nodes we’re searching for (and we can record that as we’re crawling), and then to look for the highest node in the range of nodes including both of them. For the example in the diagram, step 6 is where we find the first node, E, and step 9 is where we find the second node, F. Iterating from step 6 to step 9, the node with the lowest depth in that range is B, and we can see from inspecting the tree that this is correct.
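To make the range-minimum step concrete, here is a toy sketch (the names and the hard-coded tour are mine, not part of the solution code) that flattens an Euler tour into parallel slices of labels and depths, then scans the range between the first occurrences of the two query nodes:

```go
package main

import "fmt"

// rangeMinLCA illustrates the range-minimum step. It assumes the Euler
// tour has already been flattened into parallel slices of node labels
// and depths; first maps each label to its first tour index. The LCA is
// the shallowest entry between the two first occurrences.
func rangeMinLCA(labels []string, depths []int, first map[string]int, p, q string) string {
    lo, hi := first[p], first[q]
    if lo > hi {
        lo, hi = hi, lo // the query nodes may appear in either order
    }
    best := lo
    for i := lo + 1; i <= hi; i++ {
        if depths[i] < depths[best] {
            best = i // new shallowest node in the range
        }
    }
    return labels[best]
}

func main() {
    // Euler tour of a small tree: A has children B and G; B has children
    // C and D; D has children E and F. Each node is recorded on the way
    // down and again each time we come back up to it.
    labels := []string{"A", "B", "C", "B", "D", "E", "D", "F", "D", "B", "A", "G", "A"}
    depths := []int{0, 1, 2, 1, 2, 3, 2, 3, 2, 1, 0, 1, 0}
    first := map[string]int{}
    for i, l := range labels {
        if _, seen := first[l]; !seen {
            first[l] = i
        }
    }
    fmt.Println(rangeMinLCA(labels, depths, first, "C", "E")) // B
    fmt.Println(rangeMinLCA(labels, depths, first, "E", "F")) // D
}
```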

So.

At this point I’ve explained the solution, but have not written any code for it at all yet. Better go get that done.

Note: I’ve gotten the Euler crawl to mostly work, but I’m having trouble with the returned data from the crawl. Debugging…and yes, I am too cheap to pay for leetcode’s debugger. fmt.Println FTW.

The crawler crawls, but the output is wrong. I’ve tracked this down to a fundamental error on my part: Go arrays are not built for this kind of thing. I am switching the crawler over to building a linked list, which I think I only have to traverse one direction anyway, so it’s just as good.

This definitely was a failure to time-block. I will keep plugging on this until I actually finish solving it, but this would have been a DQ in an interview.

I’ve now discarded two different Euler crawls, and have a working one…but the range-minimum isn’t working as I expect it to. Will have to come back to it again. I begin to wonder if trying to track the start node is, in fact, a bad idea, and if I should simply convert the linked list I’m generating in the crawl to an array for the range minimum once it’s returned.

The crawl now returns the head and tail of a list that contains the visit order for the Euler crawl: node, left subtree in order, node again, right subtree in order, node again. This breaks down like this:

If the current head is nil, return nil, nil.

Generate a crawl node for just this node at the current level.
Set head and tail to this node.

If the current head points to a leaf node:
  Return head and tail.

Otherwise:
  Crawl the left side at level+1, getting a leftside head and tail.
  Crawl the right side at level+1, getting a rightside head and tail.
  If the leftside crawl is non-null:
     Set tail.next to the leftside head.
     Create a new crawl node for the current node at the current level.
     Set the leftside tail's next to this new node.
     Set tail to this new node.
  If the rightside crawl is non-null:
     Set tail.next to the rightside head.
     Create a new crawl node for the current node at the current level.
     Set the rightside tail's next to this new node.
     Set tail to this new node.
Return head and tail.

This returns a singly-linked list with an in-order traversal, recording every node visited and the tree level at which it was seen. We should now be able to find the start and end nodes, and scan the list between the two inclusive, looking for the highest node, and that should find the LCA.

The scan is a bit of a bastard as well. The nodes may appear in any order, and we have to find the start and end no matter which order they show up in. I initially tried swapping them if they were out of order, but this didn’t work as I expected – that may have been other bugs, but I coded around it in the final scan code.

The final scan starts out having found neither node, and skips entries until it hits one of them. Once it does, it starts tracking the shallowest (minimum-depth) node seen. The tricky bit is that we need to detect that we’ve found the first node, start checking from there, and only stop checking after we’ve processed the second node (as it may itself be the shallowest, since a node counts as its own ancestor).

Here’s the final code. It did not do well in the rankings: bottom 5% in speed, and bottom 25% in memory usage. I suspect I could improve it by merging the “find the local high node” code into the Euler crawl, and by not actually recording the nodes in the linked list at all but passing the state around in the recursion.

Lessons learned:

  • Do not try to do complicated array machinations in Go. Better to record the data in a linked list if you’re doing something that needs to recursively append sublists.
  • Think more clearly about recursion. What is the nil case? What is the one-element case? What is the more-than-one-element case? What is the termination criterion for that case?
  • Don’t be afraid to discard an old try and do something different. Comment out the failed code (and note to the interviewer that you’d check the broken one in on a local branch in case you wanted to go back to it).
  • Continuing to practice with Go pointer code is necessary. I’m a bit weak with pointers.

This took way too long to do, and way too many shots at getting to something that would work at all. I grade myself a fail on this one. I might come back at a later date and try to make it faster and smaller.

Addendum: removed the debugging and the dump of the crawl output. Even without doing anything else, that bumped my stats to the top 73% in runtime and top 99% in memory usage, and that is a pass with flying colors!

/**
 * Definition for a binary tree node.
 * type TreeNode struct {
 *     Val   int
 *     Left  *TreeNode
 *     Right *TreeNode
 * }
 */
type eulerNode struct {
    node *TreeNode
    depth int
    next *eulerNode
}

func lowestCommonAncestor(root, p, q *TreeNode) *TreeNode {
    // Use the transformation of this problem into a range-minimum one.
    // Crawl the tree in-order, creating an Euler path. Each step records
    // the node value and the depth.
    head, tail := eulerCrawl2(root, p, q, 0)

    // dump crawl output - note that removing this should make the code
    // significantly faster
    for scan := head; scan != nil; scan = scan.next {
        fmt.Println(scan.node.Val, "depth", scan.depth)
    }
    
    fmt.Println("crawl complete", head, tail)
    // Start is the beginning of the range, and the end of the path is the end.
    // Find the path minimum in the range (the lowest depth value).
    var ancestor *TreeNode 
    minDepth := 9999999999
    fmt.Println("Start at", head.node.Val, head.depth)
    
    foundP := false
    foundQ := false
    
    for scan := head; scan != nil; scan = scan.next {
        fmt.Println("node", scan.node.Val, scan.depth)
        if scan.node.Val == p.Val {
            fmt.Println("found p")
            foundP = true
        }
        if scan.node.Val == q.Val {
            fmt.Println("found q")
            foundQ = true
        }
        if !foundP && !foundQ {
            fmt.Println("skip", scan.node.Val)
            continue
        }
        fmt.Println("check", scan.node.Val, scan.depth)
        if scan.depth < minDepth {
            // New range minimum
            minDepth = scan.depth
            ancestor = scan.node
            fmt.Println("new candidate min at", scan.node.Val, scan.depth)
        }
        if foundP && foundQ {
            fmt.Println("found and processed complete range")
            break
        }
    }
    fmt.Println("ancestor is", ancestor.Val)

    return ancestor
}

func eulerCrawl2(root, p, q *TreeNode, depth int) (*eulerNode, *eulerNode) {
    // Nil tree returns nil pointer
    if root == nil {
        fmt.Println("skipping nil pointer")
        return nil, nil
    }
    
    // Non-nil next node. Generate a crawl node for this tree node.
    fmt.Println("adding crawl node for", root.Val)
    head := &eulerNode{
        node: root,
        next: nil,
        depth: depth,
    }
    tail := head
        
    // If this is a leaf, the crawl of this node is just
    // this node. Head and tail point to the same node.
    if root.Left == nil && root.Right == nil {
        fmt.Println("returning at leaf")
        return head, tail
    }
    
    // Non-leaf. Crawl will be this node, the left subtree at depth+1,
    // another copy of this node at this depth, the right subtree at depth+1,
    // and one more copy of this node.
    fmt.Println("crawling down")
    leftSubtreeHead, leftSubtreeTail := 
        eulerCrawl2(root.Left, p, q, depth+1)
    rightSubtreeHead, rightSubtreeTail :=
        eulerCrawl2(root.Right, p, q, depth+1)
    
    // Chain the results together:
    // - this tree node
    // - the left crawl below it
    // - another copy of this node
    // - the right crawl below it
    // - a final copy of this node
    
    // Link this node to the start of the list from the left crawl
    if leftSubtreeHead != nil {
        // If there is a crawl below, link it in, and add another
        // copy of the current node for coming back up
        tail.next = leftSubtreeHead
        leftSubtreeTail.next = &eulerNode{
            node: root,
            next: nil,
            depth: depth,                
        }
        tail = leftSubtreeTail.next
    }
    
    if rightSubtreeHead != nil {
        // right crawl exists. Add it at the current tail, reset tail.
        tail.next = rightSubtreeHead
        rightSubtreeTail.next = &eulerNode{
            node: root,
            next: nil,
            depth: depth,         
        }
        // reset tail to new last node
        tail = rightSubtreeTail.next
    }
    
    // Return head and tail of the newly-generated list of crawled
    // nodes.
    fmt.Println("returning results at depth", depth)
    return head, tail
}

A bit of a break working on a friend’s art business launch; still working on that, but made some time to come back to leetcode month today.

Today’s problem is flood fill; another easy-rated one. This is again a recursion problem, and the tricky bit is as usual preventing extra recursion that doesn’t need to happen.

Doing this one is made a lot easier by unifying the calling sequence with a helper function that adds an extra parameter: the original color of the starting cell, which marks which cells are eligible to be filled.

Go-specific issues for me were properly extracting the dimensions of a two-D array and having to do that through a pointer. I passed a pointer to the array we’re flood filling so the helper can modify the source array in place; strictly speaking, a Go slice header already refers to its backing storage, so element writes through the slice are visible to the caller anyway, and the pointer would only be required if the helper reassigned the slice itself.

I got tripped up by the termination condition. In addition to “are we outside the bounds of the array”, we also need to check if this cell needs filling at all (is it the fill color already? Is it the color we want to fill?).

I did need a couple of runs with debug prints in to spot where my recursion was going wrong; the first cut didn’t have the “is this already the right color” test, and I was able to catch that one with the supplied test cases. It failed on the submit because I wasn’t checking for the “is the fill already done” case.

How did I do relative to other cases? Pretty good! Runtime in the top 92%, memory usage a quarter of a percent short of the 100th percentile. Should be a pass in an interview. (I did get this one at Thumbtack, and came out with a solution similar to this one, except I used an 8-cell neighborhood when a 4-cell will work. Too many Game-of-Life programs.)

func floodFill(image [][]int, sr int, sc int, color int) [][]int {
    eligibleColor := image[sr][sc]
    return *floodFillHelper(&image, sr, sc, color, eligibleColor)
}

func floodFillHelper(image *[][]int, 
                     sr int, sc int, 
                     fillColor int, 
                     eligibleForFill int) *[][]int {
    if sr < 0 || 
       sc < 0 || 
       sr > len(*image) - 1 || 
       sc > len((*image)[0]) - 1 ||
       (*image)[sr][sc] != eligibleForFill ||
       (*image)[sr][sc] == fillColor {
       return image
    }
    
    (*image)[sr][sc] = fillColor
    
    floodFillHelper(image, sr-1, sc, fillColor, eligibleForFill)
    floodFillHelper(image, sr+1, sc, fillColor, eligibleForFill)
    floodFillHelper(image, sr, sc-1, fillColor, eligibleForFill)
    floodFillHelper(image, sr, sc+1, fillColor, eligibleForFill)
    
    return image
}
